Yes, We Will Live With Artificial Intelligence. But It Will Be Friend, Not Foe.
The AI Scientist Behind Anki Tells Us That We Are Not Doomed to Be Ruled by Robots
An autonomous car will one day kill a human.
No one disputes this. We are on the brink of breakthroughs in robotics and artificial intelligence (AI) that have the potential to do a great many things for the human race. This technology has started to move beyond science and research labs to push old and new industries forward. But what happens when someone dies at the hands of a robot? Will this be the golden age of humankind, or will AI consume us?
Bill Gates, Elon Musk, and Stephen Hawking have gone on record stating that AI presents one of the deepest threats to humanity. It’s hard to ignore such sentiment coming from some of the most brilliant minds alive today. Despite the undeniable benefits of advances in robotics and AI, they point to the fear of these machines developing cognitive abilities that lead to behavior directly, and intentionally, at odds with humanity’s best interests. These men seem to accept a reality where The Terminator is no longer just on the big screen.
I have a message for Bill, Elon, Stephen, and the well-intentioned signers of a well-publicized open letter warning us about the dire consequences of AI: we are not doomed to be ruled by robots. Yes, it’s inevitable that an autonomous car will one day kill a human, but not through some devious act of self-awareness; it will happen because a sensor malfunctions or an algorithm misjudges an unforeseen element in the scene. Despite the grim outcome, such an error is no different in substance from a chess program making a poor move because the logic it uses to evaluate a board position missed some hidden nuance. That won’t mitigate the loss of a human life, of course. But it is a risk no different from the one we assume when we adopt any new technology, not a game-changing Singularity that will end the human race.
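To make that kind of error concrete, here is a toy sketch in Python (entirely illustrative; no real chess engine works this simply): a program that scores positions by counting material alone. A program optimizing this number plays badly not out of malice but because the number is blind to nuances like king safety.

```python
# Toy position evaluation: score a board by material count alone.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(board):
    """board: iterable of piece letters; uppercase = ours, lowercase = theirs."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.lower(), 0)
        score += value if piece.isupper() else -value
    return score

# We are up a rook on paper, so this evaluation looks rosy, yet it says
# nothing about whether our own king is one move from checkmate.
print(material_score(["K", "R", "P", "k", "p", "p"]))  # prints 4
```

A real engine’s evaluation is vastly more elaborate, but the failure mode is the same in kind: the move chosen is only as good as the number used to rank it.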
To better evaluate such a complex threat, one has to dig under the hood of how almost all modern AI systems function. As unglamorous as it may sound, almost every application can be thought of simply as an optimization problem: the task of finding the lowest-cost or highest-reward solution within a particular representation of a problem. There is no magic or emotion in this class of AI. The hope is simply that the optimal solution to that search will tend to align with our perception of intelligence. For example, an autonomous car selects a path because that path reaches the destination while accumulating the fewest penalties for risk and distance traveled, and a computer vision algorithm classifying an image will detect a cat on the street simply because the features derived from those millions of pixels are statistically more likely to match a cat than anything else.
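Here is a minimal sketch of that idea in Python (the road network, the risk weights, and names like cheapest_path are invented for illustration, not drawn from any real self-driving stack). The “intelligence” of the route choice reduces to picking the option with the smallest accumulated penalty:

```python
import heapq

def cheapest_path(graph, start, goal):
    """Uniform-cost search: return the lowest-cost route through a
    weighted graph, expanding the cheapest frontier node first."""
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, distance, risk in graph.get(node, []):
            # Each edge's penalty blends distance and risk; tuning this
            # weighting is where the designer's intent lives.
            penalty = distance + 10.0 * risk
            heapq.heappush(frontier, (cost + penalty, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy road network: node -> [(neighbor, distance_km, risk_score)]
roads = {
    "home": [("highway", 2.0, 0.1), ("side_street", 1.0, 0.4)],
    "highway": [("office", 5.0, 0.05)],
    "side_street": [("office", 4.0, 0.3)],
}
print(cheapest_path(roads, "home", "office"))
# (8.5, ['home', 'highway', 'office'])
```

The program takes the highway not because it prefers highways, but because that route happens to minimize the blended distance-and-risk cost its designer specified.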
Even “deep learning” systems, never more popular than they are today, resemble the brain’s layers of neurons only in structure, not in the way they “learn.” In the end, despite some truly incredible applications, under the hood this approach is yet another optimization problem: pattern matching against a set of training data.
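To see how unglamorous that “learning” is, consider a minimal sketch in Python with NumPy (all names and numbers illustrative): a tiny two-layer network fits the XOR function purely by gradient descent, nudging its weights in whatever direction shrinks an error score. There is no comprehension here, only repeated adjustments to a grid of numbers until the outputs match the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: a layered structure, but nothing beyond arithmetic.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the gradient of the mean squared error, nothing more.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]]
```

The network ends up reproducing the training pattern, and that is the whole story: an error score driven downhill, step by step.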
The program deciding on the contents of an image is no more aware of its purpose than the one deciding on a brilliant chess move or the one planning for an autonomous car to avoid traffic and pedestrians. None of these systems truly “think” in the way we do; they optimize a given problem in a way that is intended to align with our human intentions for that situation. And as sophisticated as some of them inevitably have to be, AI systems still work only on narrow tasks focused on particular, programmer-defined skill sets.
So in the end, as scary as death at the hands of robots, or even of autonomous cars, may sound, we have to remind ourselves that hundreds of thousands of people die in car accidents every year. While we may find comfort in the fact that a human is currently behind the wheel of every car, statistically we will never be safer than when robots rule the road. And when the benefits of autonomous cars extend far beyond safety, to the efficiency we gain in our daily commutes, how we plan our cities, even the shifting price dynamics reinventing entire industries, we will come to be no more afraid of them than we are of personal computers.
The true risk posed by robotics and AI lies in how these technologies enable intentional misuse by humans. Just as there is a fine line between nuclear energy and nuclear weapons, these advances present new and unpredictable opportunities for abuse: deadly military applications, a vast expansion of surveillance capabilities, and the hacking of these same complex systems for malicious ends. But while we should be cognizant of these risks, and fully prepared for the challenges that come with them, we should never forget the incredible benefits they will open up for mankind’s capabilities, efficiency, and enjoyment of life.
Gates, Musk, and Hawking are right to encourage caution amid the vast advances ahead. The threat, however, is misplaced: human intent is vastly more dangerous than the machines humans invent. Yes, we have unimaginable technologies at our fingertips that were once possible only in science fiction, but some concepts still belong only in pulp comics and movies. The self-aware, mankind-hating killer robot is one of them.
Do you agree that the worries about AI are overblown? If you think that the Singularity is in our future, or that an overreliance on AI is inevitable and unwelcome, please build on this essay by responding below.