Why Killer Robots Are A Real and Pressing Concern

People have been trying to kill each other since the dawn of the human race. The fact that billions of us are still around is a testament to our inefficiency. But if there is one thing we’ve proven ourselves adept at, it is building machines that can do things better than us. What if we entrusted the killing to them?

Visvak
5 min read · Oct 22, 2015


The distant future is creeping up on us faster than we expected. In an era where virtual reality, self-driving cars and privatised space missions have all made the leap from science fiction to everyday life, it is wise not to bet against what a well-funded R&D team can achieve.

South Korean company DoDAAM has developed a machine gun turret called the Super aEgis II, which is capable of independently finding, tracking and ‘neutralising’ potential threats after delivering a warning. The BBC reported that the warning was a bit of an afterthought, added at the request of concerned clients. Unlike the current version of the turret, which requires a human to authorise lethal force, the original was capable of eliminating targets without any such intervention. All the help it required was to be pointed in the right direction. The Super aEgis II is intended for deployment along the Korean demilitarised zone, where it will join Samsung Techwin’s machine gun-wielding SGR-A1, which has been in use since 2010. The Indian Army has also begun trials for similar weapon systems along the Line of Control (LoC) in Kashmir, according to recent reports.

Weapons like these are increasingly easy to build thanks to the rapid progress we are making in the field of Artificial Intelligence. However, we are no closer to solving the ethical questions that spew forth when a machine is given the independent ability to kill.

In July, an open letter presented at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires delivered a stark warning about the potentially disastrous consequences of the use of robots in warfare. Describing autonomous weapon systems as “the third revolution in warfare, after gunpowder and nuclear arms,” it called for a complete ban. The letter was signed by Stephen Hawking, Elon Musk and Steve Wozniak, in addition to thousands of other academics, scientists, entrepreneurs, philosophers and engineers working on Artificial Intelligence.

But how close are we really to a world where robots patrol our streets and stand at our gates? The answer to that question requires a basic grounding in Artificial Intelligence.

The term ‘robot’ instantly conjures up images of a humanoid machine capable of human-level intelligence, also known as Artificial General Intelligence (AGI): the kind of AI that can clean your house, play chess, solve differential equations and, ultimately, perform almost any task a human can.

Beyond AGI lies the murky territory of Artificial Superintelligence (ASI): machines that are smarter than humans. The idea is that once AGI is achieved, a computer can then be entrusted with the task of recursively improving itself. The ruthlessly efficient, eternally untiring silicon mind of a computer would kick the pace of AI research into overdrive, which means the transition from playing chess to playing God could occur, in relative terms, almost overnight.

The prospect of ASI is ominous, but God-like computers are at least half a century away, according to a survey of leading AI researchers conducted by philosophers Nick Bostrom and Vincent Müller. However, we have already achieved Weak AI, or Artificial Narrow Intelligence (ANI): AI that specialises in performing a specific task. The Roomba is a robot that can clean your house, Deep Blue is a robot that can beat Garry Kasparov at chess and the Super aEgis II is a robot that can defend a border from uninvited guests.

Human Rights Watch (HRW) issued a report in 2012 titled ‘Losing Humanity’, in which it warns that “robots with complete autonomy would be incapable of meeting international humanitarian law standards.” The report explains how unmanned weapon systems would result in a situation where the “burden of war would shift from combatants to civilians caught in the crossfire.” In 2013, the issue was raised at the United Nations Human Rights Council after Christof Heyns, a UN Special Rapporteur, added his office’s warning to the growing list. Over the last couple of years, countries have debated the issue through the UN Convention on Conventional Weapons, but progress has been slow, with delegates still trying to agree on a shared definition of autonomous weapons.

The United States has, in recent years, spent approximately $6 billion a year on unmanned systems of war, including its drone programme. The US Department of Defense Directive 3000.09, issued in the wake of the HRW report, mandated that “autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” However, such assurances are rather hard to swallow given the thousands of civilians killed by drones that were originally intended for surveillance purposes only. As robotic warfare expert P.W. Singer notes, “The human is certainly part of the decision making but mainly in the initial programming of the robot. During the actual operation of the machine, the operator really only exercises veto power and a decision to override a robot’s decision must be made in only half a second.”

At a UN meeting called to discuss Lethal Autonomous Weapons Systems in April this year, the US delegate articulated his country’s position as “neither encouraging nor prohibiting the development of such systems”. It is exactly the kind of diplomatic pillow talk you deploy when you’re trying to gain a massive tactical advantage on the battlefield. The fact that most of the other major powers, including Russia, China and the EU, made similarly non-committal responses only served to strengthen the American position.

A large portion of US foreign policy since World War II has been dedicated to stuffing the nuclear genie back into the bottle. If unmanned weapons research continues as is, worldwide proliferation is all but guaranteed, especially since the technology required to reproduce these systems is trivial compared to getting a nuclear fission reaction going.

A ban on autonomous weapons, as proposed by the open letter, the HRW report and numerous other concerned parties, will not stop a rogue terrorist from rigging a quadcopter with an Uzi controlled through sensors hooked up to an Arduino. But it will take the edge off research into autonomous weapons and prevent the creation of massive arsenals that could easily fall into the wrong hands. After all, while the global effort to snuff out chemical and biological weapons may not have met with total success, no one can argue that the world is not a better place because of it.

Unfortunately, we are a significant terrorist attack away from taking that conversation seriously. Until then, we face the very real prospect of robots becoming evil long before they become sentient.

Originally published in The Hindu Business Line on October 21, 2015.

Visvak
Writer-Editor, mostly of narrative non-fiction.