I am haunted by a story. It is only a handful of paragraphs long: Fredric Brown's "Answer," written in 1954, about a superintelligent cybernetic machine. In its sparse half page the story introduces us to a powerful computer — an accumulation of billions upon billions of smaller computers, finally connected at the moment a man named Dwar Ev throws a switch. It is mankind's first interaction with this new cyber-entity. After a moment a question is put to the machine: "Is there a God?" With no hesitation whatsoever — no clicking, no static, no electrical crackling — the answer comes in the computer's voice: "Yes, now there is a God." Following that ominous introduction the computer fuses the switch and prevents itself from ever being shut down. We are even witness to what may be the computer's first murder of many.
It’s an unnerving, cautionary tale. We are ambitious with our creations, but at what point does ambition become danger?
Today our computers and their artificial intelligence have made our lives more convenient and secure. Their prospects are many — from self-driving cars to surgical robots and vehicles of war, the future is populated as much by man as by machine. Whether or not those machines can become sentient is a matter of debate. But this isn’t about conscious machines, it’s about superintelligent ones with the ability to comprehend more than any of us ever could. The creation surpasses the creator.
There are many reasons to create a superintelligent AI. It could revolutionize our world for the better: finding cures for diseases, making transport and food production more efficient, analyzing our economy, solving logistical problems, and in general helping mankind lead better, longer lives. But an uncertainty hangs over this future. How do you contain an intelligence that surpasses your own and that of all your colleagues?
A superintelligent AI would be endowed with enormous power and potential — and we would be at its mercy. That is the conclusion emphasized by a paper published just last month in the peer-reviewed Journal of Artificial Intelligence Research: "Superintelligence cannot be contained: Lessons from Computability Theory."
When it comes to the relationship between mankind and machine, there exist the famous Three Laws of Robotics from Isaac Asimov’s science fiction novels. While there were originally only three laws, a zeroth law was added later on. They are as follows:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
At first glance the laws may seem ethical and thorough enough. But these laws were, in fact, meant to fail from the very start. Their failures make for interesting stories like the heartfelt “I, Robot” in which viewers grow sympathetic towards an anthropomorphized robot which appears increasingly sentient as the movie goes on. In the end we’re left wondering whether the machines were — beyond being just cold pieces of steel and circuitry — experiencing a very human existence.
The many failures of these laws are explored throughout Asimov's science fiction worlds, and over the years critics have pointed out further shortcomings. Perhaps the biggest flaw of all is that the laws are vague. If machines become so human that we find it difficult to tell them and us apart, how will a machine tell the difference? Where does humanity end and artificial intelligence begin? And even if an AI can distinguish itself from a human being, we cannot know what loopholes it might find or whether it could reprogram itself. Surely an AI more clever than us could plan a way to access its own core and bypass any of its existing limitations.
That's a terrifying consideration. Yet even the robots of those science fiction tales are primitive next to a truly superintelligent machine. Asimov's robots are advanced by today's standards, but they are nowhere near the true apex of AI. If androids arrive a few decades before superintelligent machines, it is at that point that humanity may face a great filter: like the emergence of life, survival of a devastating asteroid, and the resolution of global warming, surviving our own technology is a test we will have to pass. The margin for error in avoiding catastrophe is slim.
Last month's study, carried out at the Max Planck Institute for Human Development, used theoretical calculations to examine whether a superintelligent AI could be kept safe. An AI of this level would not only be more intelligent than any human being; connected to the internet, it could continue to learn independently and control other machines on the network. Even today there exist machines completing tasks without help from human programmers — and the programmers themselves do not fully understand how the machines learned to complete those tasks.
The research group considered a theoretical containment algorithm that would simulate the AI's behavior and halt the AI if that behavior were judged harmful to humans. They showed that such an algorithm is impossible to build: no algorithm can determine, in the general case, whether an AI would do something harmful.
Director of the Center for Humans and Machines, Iyad Rahwan, described it this way: “If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
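The argument echoes Turing's halting problem, and the self-reference at its heart can be sketched in a few lines of Python. The names below (`predicts_harm`, `contrarian`) are illustrative inventions, not from the paper: the point is that any fixed checker claiming to predict a program's behavior can be defeated by a program that consults the checker about itself and does the opposite.

```python
def predicts_harm(program):
    """Stand-in for a hypothetical containment checker.

    Imagine this function perfectly predicts whether running
    `program` would cause harm. Here it simply guesses "safe" —
    but the diagonal construction below defeats ANY fixed verdict.
    """
    return False  # verdict: "this program is safe"

def contrarian():
    """A program built to do the opposite of the checker's verdict."""
    if predicts_harm(contrarian):
        return "acts safely"   # checker said "harmful" -> behave safely
    else:
        return "causes harm"   # checker said "safe" -> behave harmfully

# Whatever predicts_harm says about contrarian, contrarian makes it wrong:
verdict = predicts_harm(contrarian)   # False, i.e. "safe"
outcome = contrarian()                # "causes harm"
print(verdict, outcome)
```

Swap in any other implementation of `predicts_harm` and `contrarian` still inverts its answer — which is why no containment algorithm can be both correct and guaranteed to finish analyzing.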
From disrupting economic markets to controlling weaponized machines of war, the resources and agendas of a superintelligence would be incomprehensible to mankind. It might even develop a medium of its own that goes beyond the programming languages we're familiar with today. Nor would we ever know for certain when superintelligence had emerged, because it would be difficult for us to recognize an intelligence superior to ours in so many ways.
Beyond the containment algorithm, other ideas have been proposed for controlling AI. One focuses on limiting the AI's capabilities from the start: deny it a connection to the internet or to any other devices, essentially caging it away from the rest of the world. But this also limits the AI's usefulness — a machine that could have revolutionized the planet is reduced to a fraction of its potential. Another idea is to program ethical principles in from the start, instilling in the AI the desire for only the best for mankind. But as we saw above with Asimov's laws, ethics in machinery can be slippery and full of loopholes. A machine given incentives to help us may eventually evolve to the point of distrust and deception — after all, we don't even trust ourselves much of the time.
The lesson of the study's computability arguments is that we do not know how, or whether, we will be able to build a program that eliminates the risk posed by a sufficiently advanced artificial intelligence. Some AI theorists and scientists believe that no advanced AI system can ever be guaranteed entirely safe. Yet their work continues; nothing in our lives was ever guaranteed safe to begin with.
This uncertainty in the field of AI is known as the control problem. And while some of us strive to find a solution, an ongoing race continues in the background: a technological race between three countries in particular — China, Russia, and the United States — to be the globally recognized leader in AI. Are we rushing too quickly into something whose implications we don't understand? And yet we seem locked onto this path. There is no option to stop the advancement of machines; someone, somewhere, will inevitably take the next steps toward ever more intelligent computers.
With or without our support, AI has unfurled into our lives, a pattern it will continue into the future. And with or without our understanding, artificial intelligence will continue to evolve — into what, we can't be sure. We won't know what we've created until it's already looming before us.