Beyond Killer Robots
This past April, experts from 87 countries gathered for a summit in Geneva on “Killer Robots.” These state party representatives to the United Nations Convention on Certain Conventional Weapons were there to discuss whether these robots, more formally known as “lethal autonomous weapons,” should be regulated or restricted to operating only under “meaningful human control,” which would require humans to retain control over the critical functions of weaponry, such as the selection and engagement of targets.
U.S. Air Force General Paul J. Selva, vice chairman of the Joint Chiefs of Staff, has called this a “Terminator conundrum,” and experts in artificial intelligence (AI) say that the invention of these fully autonomous killer robots is imminent. Bonnie Docherty, senior arms division researcher at Human Rights Watch, said that “there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines.” Human Rights Watch and Harvard University issued a joint report in April on the subject as well.
In the wake of these discussions, there have been calls, including from the Red Cross, to formalize laws and treaties banning autonomous weapons that select and engage targets without human intervention. Currently, though, there exists no international law governing the use of fully autonomous weaponry.
It is an important subject. Forms of autonomous weapons, such as remote-controlled systems like drones, are here now, and fully autonomous weapons are on the near horizon. There is even an organization of global robotics and human rights experts dedicated solely to the peaceful use of robotics and regulation of robot weapons. The Future of Life Institute published an open letter last year signed by thousands of people, including notable artificial intelligence and robotics researchers, calling for a ban on autonomous weapons that select and engage targets without human intervention. More recently, there was a conference at Stanford on the Future of Artificial Intelligence, and a UNESCO World Commission on the Ethics of Scientific Knowledge and Technology. Elon Musk has created OpenAI, which he believes is key to keeping AI in check. And representatives of Google parent Alphabet, Amazon, Facebook, IBM, and Microsoft have been meeting privately to discuss a standard of ethics around AI.
In the not-too-distant future, superintelligent machines and AI are going to present humankind with some very big challenges, and some very intelligent people are thinking about the potential implications. But is it possible we are still underestimating our machines’ capacity to learn? To be more like us? Are we too focused on the machines’ technology, while overlooking the underlying humanity that needs to go into them?
I am an international human rights lawyer, and I recently gave a TEDx talk about the future of human rights and technology, focusing on the impact of AI. I believe intelligent machines will dramatically disrupt the human rights paradigm, and they will certainly require that we look at what it means to be human in new ways. Intelligent machines currently being designed will soon have the ability to reason for themselves, to improve themselves, and in a short time will exponentially exceed the intellectual capacity of human beings. This may be the last frontier of invention and innovation. Our machines will likely become better at inventing and innovating than we have ever been or could ever be, including creating as yet unimagined new weapons.
In order to safeguard humanity, the single most important thing we must do will be to teach machines concepts of rights and values. And I’m not alone in my thinking. For instance, Stephen Hawking has said that
“success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
Hawking thus believes that the “development of full artificial intelligence could spell the end of the human race.” Elon Musk has called the prospect of artificial intelligence “our greatest existential threat” and thinks that AI weaponry could pose an even bigger threat than nuclear warfare, which is why he supports regulatory oversight of AI. However, in the recent Stanford Report, “The Study Panel’s consensus is that attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”
The eminent philosopher and AI expert Nick Bostrom predicts that
“the first ultraintelligent machine is the last invention that man need ever make…provided that the machine is docile enough to tell us how to keep it under control.”
Regarding the rise of AI, he thinks that
“humans are like small children playing with a bomb.”
If we cannot even create common-sense gun control policies in the United States, how are we going to do the same for lethal autonomous weapons? Instead of leading with policy, we may be better served by focusing on building compassion into AI operating systems. AI could then not only help control our weapons and save lives; it may also help us find global solutions to many of the existing human rights challenges we already face and have been unable to solve.
Since the spring, 14 additional countries have joined the call to ban artificially intelligent weapons. In December, a United Nations group will return to Geneva to debate whether we should establish formal international laws to oversee killer robots. Artificially intelligent weaponry is inevitable. I believe we will be better served by focusing on finding agreement on values rather than policy. If we rise to that challenge, we may be capable of creating machines that will not only share our highest values, but also help us to improve them.
How we choose to develop AI will be the key to protecting our future basic rights and freedoms. This means beginning a public dialogue now, and not just among the elite and the mega-rich tech companies, about our robotics, software, and computers that have the capacity for intelligent behavior, and not just in connection with lethal weaponry. Scientists and engineers are leading the current research and discussions, but we need more humanist thinkers in the room as well.
And how fast is this technology coming? While the concept of what constitutes a “thinking” machine remains open to some debate, techniques like deep learning and crowdsourcing knowledge for AI are bringing us closer each day to machines that think for themselves. AI is being taught to do everything from feeling pain to creating art. Futurist Ray Kurzweil predicts that we will reach “technological singularity,” the point where AI surpasses human intelligence and comprehension, in less than 30 years. The Stanford Report declined to even discuss singularity.
We are already reliant on machines, using simple forms of AI on a daily basis, like Pandora, Netflix, Siri, video games, and Google. AI technologies like driverless cars, autonomous drones, and game-playing robots are rapidly proliferating.
Humanity will be better served by focusing on instilling values into AI and reducing the human bias that can put lives at risk. Because it’s not technology we have to fear; it’s people. As long as we remain dangerous to one another, our machines will be dangerous to us. Noted robotics writer Evan Ackerman says:
“What we really need is a way of making autonomous armed robots ethical.”
Top AI scholar Stuart Russell agrees, arguing that the survival of our species may depend on keeping AI beneficial and provably aligned with human values. This goes beyond Isaac Asimov’s Three Laws: it makes a case for robots that learn as they go, but that also explicitly acknowledge and understand the uncertainty inherent in life, as humans do, so that they can course-correct while pursuing an objective instead of remaining dangerously absolute in their programmed path.
We need to start thinking very carefully about the patterns, the values, and the code of ethics we will need to build into our machines and our laws. Values are different from rules. Rules can be broken, or followed for the wrong reason; values are deeper. Acting according to our values means we unearth what lies beneath the rules and follow them only when we determine that they align with our values, as some of our great moral heroes and leaders, including Mahatma Gandhi, Susan B. Anthony, and Martin Luther King, Jr., have shown us. If we can build that same understanding into our machines, we can impart to them the best values and aspirations human culture has to offer. I suggested some ideas for what those solutions might look like at TEDx.
I’ve studied war crimes, genocide, conflict resolution, and post-conflict justice. I have become convinced that Dr. Paul Farmer is right:
“The idea that some lives matter less is the root of all that’s wrong in the world.”
So, it is critical that machines understand values, like the concept of equality. But with the human species’ record of human rights violations, are we even capable of this? Do we even have the answers to give them? And who are we going to entrust with this task?