Against Weapons of Terror
I am sure readers will have noted with growing alarm the view being expressed by many people of note, from Stephen Hawking and Elon Musk to Mustafa Suleyman — a robotics expert and Head of Applied AI at Google’s DeepMind — concerning the speedy, unfettered development of artificial intelligence within the military. Here is why they are so worried.
In the past, warfare was conducted by human combatants using everything from cudgels, swords and guns, to the ultimate weapon of destruction — the atomic bomb.
Over the centuries only one factor has remained constant: every decision related to when, where and how weapons were deployed, what or whom they were used against, and their aftermath, was made by a human being.
Today, an expectation that humans will continue to retain control over the use of weapons is frankly naive. Indeed it is far easier to envisage a future where killer robots are a cheap and ubiquitous method of maintaining social order and compliance — used as a means to ensure submission to whatever authority happens to be wielding power, whether it is the state, a psychopath with deep pockets, or terrorists.
Currently, lethal autonomous weapons systems operating outside the strictures of human control are neither ethically acceptable nor legally permissible. But once they are available, they will allow armed conflict at a scale far greater than ever before, and at timescales possibly thousands of times faster than humans can comprehend.
The US and a few other nations have used drones and semi-automated systems to carry out attacks for at least the past decade. But, for the moment, fully removing a human from the loop is at odds with international humanitarian and human rights law. That barrier will not hold for much longer if action is not taken to deter the weapons industry. Low-cost sensors, rapid advances in artificial intelligence, and the seemingly irrepressible urge to push innovation beyond what is within our control make it increasingly possible to design weapons systems that, once activated, can target and attack without any additional human intervention.
Despite official denials, plans to combine drone swarm technology and artificial intelligence are already being developed by at least two countries. Unfortunately, there is no international law explicitly requiring human intervention in every strike. This is why we are witnessing so many efforts aimed at preventing the automation of critical functions for selecting targets and applying violent force without human deliberation. The concerns I have heard expressed about taking humans out of the equation range from a dramatic lowering of the threshold for armed conflict, to the cheapening and easing of the taking of human life, to the empowerment of both state and religious terrorists, and the creation of global insecurity.
Make no mistake. These are weapons of terror. It matters not whether you are an activist, a political opponent, an artist who dares challenge the status quo, a member of a religious sect, or even a child with severe autism. Anyone who causes mischief can be eliminated with minimal effort and no messy collateral damage to justify. In that kind of future, where the source of an attack would be utterly unpredictable and untraceable, who or what could prevent any bloody-minded individual, cult leader, military tyrant, or even a duly elected official in a so-called democracy, from using killer robots to permanently intimidate and bully citizens into compliance? The carnage would be unimaginable.