I currently have three windows open on my computer: this web page, a terminal, and Rick and Morty playing in the background.
I absolutely love Rick and Morty (thanks, Kyle, for introducing it to me). The tech industry takes itself very seriously sometimes, and this TV show manages to lighten the mood. One of my favorite snippets is in an episode from season 2 – Rick and Morty want Summer to stay in the car while they go take care of something. Rick proceeds to instruct the car to ‘keep Summer safe’. The parameters ‘keep Summer safe’ then proceed to break all convention in an attempt to fulfill that one goal. Watch the clip and you will know what I’m talking about. The short scene is almost a synopsis of the movie ‘2001: A Space Odyssey’ – adhering to these three rules (in this order)… 1. do not harm humans, 2. obey humans, 3. protect yourself. Put in the hands of unconscious beings, these parameters do not seem to work in a world of conscious ones.
Let’s start by establishing that we are ALL very well acquainted with deep learning. YES – YOU – you have a lot of experience with deep learning. Here are some everyday, concrete examples… Google allows us to find information based on algorithms, Netflix recommends shows and movies based on what you have enjoyed in the past, and Amazon suggests products based on what is currently in your shopping cart. These are all examples of machine learning, or more specifically, deep learning. Right now a lot of us are entranced by the idea of deep learning. Whether it is body augmentation or the dream of owning a self-driving car, we all have ideas about how deep learning could improve our lives – make us more productive while at the same time making our lives easier.
This past week Louis Monier and Gregory Renard gave a seminar on deep learning at Holberton School. They covered everything from the ethics of deep learning to hands-on projects that let us see firsthand how machine learning happens. At the very end we had a fireside chat about some of the ethical implications, both good and bad, of deep learning. There was mention of the possibility of improving care for the elderly, or for those with disabilities, but we kept coming back to this one issue… what will happen when deep learning has to make moral decisions?
Let me give you a specific example. In about fifteen years, when self-driving cars have become the norm (and yes, that is inevitable at this point), what happens when the car is in a situation where it needs to make a moral decision? Let’s say the car is driving down a busy city side street and, out of nowhere, a small child runs in front of the car chasing after a ball. The car does not have enough time to come to a complete stop without hurting the child. To the right of the car is a cement barrier, and to the left is someone on a bicycle. The car needs to decide how to react: if it goes straight, the child is harmed; if it goes right, the passenger is harmed; and if it goes left, the bicyclist is harmed. How should it proceed?

Now, this might be a hypothetical situation – maybe a little unrealistic… but humor me. How do you program a car to make a decision like that? One might argue that people find themselves in situations like this all the time, and that we have to make moral decisions like this every day. If self-driving cars did not exist, the passenger would be the driver, and she would have to make that decision in a split second – harm the child, harm the bicyclist, or harm herself. Moral decisions are made by every single one of us every single day. The big question is, ‘How do you program morals?’ This isn’t a new question, and the discussion is going to be around for the foreseeable future. Deep learning integrating into our lives is inevitable, but it brings along some very difficult questions. Get your thinking caps ready, because we are in for some tough conversations.
(Greg getting our fireside chat going)
If you are behind on your A.I. media, watch these to catch up:
- Rick and Morty (pretty much any episode)
- 2001: A Space Odyssey (one of the most classic examples of artificial intelligence)
- Terminator (deep learning gone wrong)
- Eureka (season 1, episode 2 – the next steps in home automation, or the Internet of Things)
- Her (but be prepared to be depressed about the future of our relationships with technology)
- Transcendence (Johnny Depp uploads his brain to a computer – can you take human intelligence and transfer it to another object?)
- Ex Machina (who can we trust – humans, or technology?)