Arms Control for Killer Robots

As dangerous as nuclear weapons, but better at chess. It’s time for a programmable Geneva Convention

A robot from the delicately titled 2016 movie “Kill Command”

Science fiction got one thing right: ‘killer robots’ (properly called Lethal Autonomous Weapon Systems, or LAWS) are not easy to stop. Ten years ago, military uses for artificial intelligence were sporadic and experimental. Ten years from now, it’s likely that every facet of modern warfare will incorporate autonomous machine thinking.

Where reality departs from science fiction is the cause of this rapid automation. We (humans, that is) are removing ourselves from the nasty business of fighting wars. The world’s major powers are locked in an arms race over military AI, driving a rapid advance of increasingly sophisticated technology.

Out of the Loop (and a few other problems)

Critics warn that humanity is teetering on the edge of a dangerous precedent, with technology on the verge of enabling LAWS to make their own targeting decisions autonomously, without human input (also known as having humans “out of the loop”). The argument goes that letting robots make these decisions crosses a moral threshold by empowering a machine to decide whether a human lives or dies.

Moral thresholds aside, humans being left “out of the loop” presents further issues. A major problem is accountability. If a killer robot makes its own decision to target and fire, who is to blame if something goes wrong? The programmer? The manufacturer? The nearest human in the chain of command? Nobody has a particularly good answer to this accountability gap, which raises a distinct set of concerns about the use of LAWS.

There are plenty of other reasons to be wary of killer robots, even if a human operator does remain ‘in the loop’ for life-or-death decisions. A real risk with LAWS is that they incentivise states to take more aggressive military action. When soldiers die in combat, the fallout and outcry back home are a very real check on nations engaging in armed conflict (particularly in democracies, where governments are relentlessly focused on maintaining their own popularity).

With an accelerating trend towards the automation of militaries, global powers are not far from being able to engage in significant aggressive actions across air, land and sea without risking a single human life. The United States, for instance, could send columns of unmanned tanks into Venezuela to hasten the end of Nicolas Maduro’s regime. When the worst case scenario looks like a pile of broken machines, rather than funerals at Arlington, you can understand why a president might be more likely to leverage America’s military might.

A final problem worth bearing in mind is the destabilising potential of new autonomous weapons. Take, for example, microdrone assassins, chillingly captured in the 2017 stunt “Slaughterbots”. The video is hyperbolic, but the underlying point is not: tiny, intelligent robots built for assassination are a dangerous and likely next step in LAWS development. After all, equipping a small drone with facial recognition technology and a payload is hardly the greatest challenge for the world’s best AI developers.

If the knowledge to build this kind of microdrone assassin proliferates to terrorist groups or rogue states (and previous experience suggests that stopping this kind of information from spreading is almost impossible), the potential for instability is colossal. Beyond locking the president and other politicians in their residences permanently (tempting, but a non-starter), how can bodyguards protect leaders around the clock from tiny, quick robots descending from the sky? This threat multiplies as drones become smaller, smarter and faster; and literally multiplies when assailants release ‘drone swarms’ of hundreds or thousands of tiny killers.

In short: the growth of weaponised AI will throw up a uniquely threatening cocktail of problems for global stability. Unfortunately, despite the risks, the ongoing AI arms race complicates the path to a solution (and, in my view, takes a ‘LAWS ban’ off the table). So how should we be thinking about AI arms control? In a word — creatively.

A Programmable Geneva Convention (and a few other ideas)

The 1864 Geneva Convention was a worthy landmark in the creation of international humanitarian law, with the major European powers (notwithstanding their ongoing wars with each other) agreeing to respect the neutrality of hospitals and medical staff and to care for wounded soldiers regardless of which side they fought for. These powers would never have agreed to a significant restriction on their military capability, but they saw a common interest in averting the worst byproducts of war.

My belief is not just that the same pragmatic logic can underpin efforts at LAWS arms control, but also that modern technology can revolutionise how the regulations are implemented and monitored. If states can agree on some principles and rules to avoid the worst risks of autonomous weapons, these could be programmed directly into the weapons’ AI. A programmable Geneva Convention would allow for direct accountability if those rules were broken, and give states greater confidence that other signatories are complying.

This kind of international convention has a much greater chance of achieving global consensus than a total ban on LAWS development. For all of China and America’s enthusiasm about their growing arsenals of AI weapons (and their wary vigilance of each other), it is not too difficult to come up with a list of Geneva-style rules they might agree on. These might include a ban on assassinating members of other signatories’ governments, and a requirement that drones have the capacity to self-destruct if they fall into unauthorised hands (particularly where those hands belong to terrorists).
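To make the idea concrete, here is a minimal sketch, in Python, of how the two rules above might be compiled into a weapon’s decision loop as a hard veto layer rather than left as words on paper. Every name and data structure here is hypothetical; a real implementation would sit much deeper in the targeting stack.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified model of an engagement request inside a LAWS.
@dataclass
class EngagementRequest:
    target_id: str
    target_is_signatory_official: bool  # e.g. a member of a signatory government
    platform_in_authorised_hands: bool  # False if the platform has been captured

# Convention rules expressed as code: each returns (permitted, reason_if_not).
def rule_no_assassination_of_signatory_officials(req):
    if req.target_is_signatory_official:
        return False, "target is a protected official of a signatory state"
    return True, ""

def rule_self_destruct_if_captured(req):
    if not req.platform_in_authorised_hands:
        return False, "platform is in unauthorised hands: disable and self-destruct"
    return True, ""

CONVENTION_RULES = [
    rule_no_assassination_of_signatory_officials,
    rule_self_destruct_if_captured,
]

def engagement_permitted(req):
    """Hard veto layer: every convention rule must pass before the weapon may fire."""
    violations = []
    for rule in CONVENTION_RULES:
        permitted, reason = rule(req)
        if not permitted:
            violations.append(reason)
    return len(violations) == 0, violations

# Example: a request against a protected official, made from a captured platform.
request = EngagementRequest(target_id="T-042",
                            target_is_signatory_official=True,
                            platform_in_authorised_hands=False)
print(engagement_permitted(request))
# (False, ['target is a protected official of a signatory state',
#          'platform is in unauthorised hands: disable and self-destruct'])
```

Expressing each rule as an independent check is the point of the exercise: a new rule agreed by signatories maps to a new function that inspectors can read, test and audit one by one.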

And if the global community comes to agree that LAWS should keep humans “in the loop” for life-or-death decisions, a programmable Geneva Convention, allowing an independent body to authenticate that all signatories are programming such a rule into their AI, may be the only path forward. It’s true, of course, that some states would try to cheat their obligations (much as the Geneva Conventions have historically been breached), but a high-tech authentication process ensuring most states are complying is a big step in the right direction.
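What might that authentication look like? One heavily simplified, purely illustrative approach: the verification body audits a signatory’s policy build, publishes its cryptographic digest, and inspectors later check that the module actually deployed on a weapon matches a certified build. The sketch below assumes this hash-registry approach; the names and the registry are invented for illustration, and verifying fielded military software in practice would be far harder.

```python
import hashlib

def sha256_hex(module_bytes: bytes) -> str:
    return hashlib.sha256(module_bytes).hexdigest()

# In reality the certified artefact would be a compiled policy module; a stand-in
# byte string keeps this sketch self-contained.
certified_build = b"policy module v1.4: human-in-the-loop rule compiled in"

# Hypothetical registry published by the independent verification body: digests
# of the policy builds it has audited and certified as convention-compliant.
CERTIFIED_POLICY_HASHES = {sha256_hex(certified_build): "signatory A, build 1.4"}

def deployment_is_certified(deployed_module: bytes) -> bool:
    """Inspector's check: does the fielded policy module match a certified build?"""
    return sha256_hex(deployed_module) in CERTIFIED_POLICY_HASHES

print(deployment_is_certified(certified_build))                    # True
print(deployment_is_certified(b"quietly modified policy module"))  # False
```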

If a programmable Geneva Convention is the carrot of AI arms control, let’s consider a contender for the stick: an international technology alliance to develop anti-AI defensive weapons and share cutting-edge defensive AI. The world’s superpowers are developing offensive AI weapons at a breakneck pace. States that are more skeptical of the promise of LAWS (and more conscious of their own security vulnerabilities) might consider it in their common interest to unite to develop AI technology that counters some of those offensive weapons.

It is not a new idea to work on defensive AI to counter other LAWS: America has experimented with a range of automated laser weapons, advanced targeting systems and quick-firing jammers to protect against new drone capabilities. However, America is unlikely to release these cutting-edge technologies to the world, and most states do not have the resources to develop their own defensive technology. It is this asymmetry that makes a global anti-AI defensive alliance viable. By pooling their resources and agreeing to share the fruits of the research, states would stand a much greater chance of resisting LAWS aggression together than they do apart.

If this global anti-AI alliance sounds unlikely, consider that the seeds of such a cosmopolitan group may already have been planted. At least twenty-five states have indicated their opposition to LAWS development, including strong regional powers like Pakistan, Brazil and Egypt. A durable and transparent commitment to developing defensive AI would likely draw in further support from smaller states and from those bordering global superpowers. Such an alliance may also attract top-tier AI engineers eager to support the fight against new lethal autonomous weapons.

It’s obviously an understatement to say that a programmable Geneva Convention, or an anti-AI technology alliance, would be a drastic step for the global community. But if lethal autonomous weapons turn out to be as dangerous as we have serious reason to believe, drastic steps might be the only ones worth considering.