How Likely is a Real-life Terminator?

Terminator: Dark Fate is in theaters, bringing Arnold Schwarzenegger’s beloved/feared cyborg back into our lives. In the world of the Terminator, human survival is threatened by an artificial intelligence (AI) system that sees people as the enemy. Is this scenario entirely fantastical? Maybe not.

Amie Haven
Nov 14, 2019 · 5 min read

Lethal autonomous weapon systems (LAWS) are intelligent machines that make decisions to attack and potentially kill without direct human input. Advocates say they’ll save military and civilian lives. But are we OK with robots killing people?

Either way, AI is definitely going to be part of humanity’s future. There’s excitement about its role in cancer detection, as well as worry about its potential to encode racism and sexism into automated systems. But military applications of AI have received scant attention. Regardless of what we ultimately conclude about it, we should at least understand how this extraordinary technology is being used.

Long before the Terminator, science fiction was preparing us for this day. Author Isaac Asimov famously established the Three Laws of Robotics in the early 1940s. The first and most important law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But how can we prevent our creations from doing us harm when they’ve been developed for just this purpose?

Killer robots: science fact

And they are being developed. In the 2020 budget, the U.S. Department of Defense (DoD) requested $3.7 billion to develop autonomous weapons and $927 million for AI systems and machine learning. The U.S. seems to be serious about securing the lead in the race for autonomous weapon dominance.

Once machines are deemed sufficiently intelligent, it’s hoped that humans (at least the ones on our side) will be removed from direct fighting while maintaining ultimate judgment over lethal decisions. The technology isn’t there yet, so we’re currently in the realm of theory. But the DoD is clear: this is the direction we’re headed. According to a Defense Primer published in March 2019, the DoD is prepared to develop LAWS if it believes U.S. adversaries are doing so. But since states don’t tend to be forthcoming about their military plans, suspicion and fear may drive the development of LAWS, just as they did with nuclear weapons.

Advocates for LAWS argue that they will save military and civilian lives. Troops and air support may be replaced by swarms of armed machines that efficiently process information and respond with speed and accuracy, while unmanned vessels patrol the seas. And there are some safeguards in place for the development of this technology. DoD Directive (DoDD) 3000.09 recognizes the difference between automated weapons systems (which require human input in the decision to kill) and LAWS (which have full control over lethal decisions), and requires that all systems allow for appropriate levels of human judgment over the use of force.

But while people will judge whether deploying LAWS is in accordance with the rules of engagement, humans may not have control once the LAWS have been deployed. Review processes can also be bypassed in extreme cases, which would mean that innovations self-generated by machine learning could go unchecked.

It’s that machine learning element that is crucial to how this will play out. The decision to kill will be in the ‘hands’ of machines that are learning how to be more effective during each deployment. So what happens if machines decide they would be more effective without human intervention?

The anti-killer robot movement

Many are calling for an all-out ban on fully autonomous weapons, arguing that humans must be wholly accountable in decisions over life and death. The late Stephen Hawking and tech mogul Elon Musk have put their names on an open letter that calls for a ban on fully autonomous weapons, arguing that weapons should be under meaningful human control. The discomfort over the implications of killer robots is evident across the world.

A global coalition of 118 non-governmental organizations (NGOs) has spearheaded the Campaign to Stop Killer Robots. Human Rights Watch and Amnesty International are just two of the Campaign’s high-profile names. The campaign is pushing for a preemptive ban on LAWS, arguing that intelligent machines lack the uniquely human capacity for ethics, judgment, accountability, and compassion. It further argues that killer robots will not be confined to the military and could also be deployed in domestic policing and border control.

The UN’s Convention on Certain Conventional Weapons (CCW) seeks to ban or restrict the use of certain lethal weaponry. The CCW has failed to gain support from the U.S., U.K., China, Russia, and Israel in its effort to prohibit the development of LAWS. According to the Campaign to Stop Killer Robots, Russia and the U.S. are arguing for the legitimization of this lethal technology. There are signs of resistance, however. During a CCW meeting in August 2019, Jordan was added to the list of 29 states willing to ban LAWS. And the Belgian parliament passed landmark legislation in July 2019, banning its military from developing killer robots and advocating for an international ban.

Robot ethics

When it comes to ethical constraints, Asimov’s Three Laws have holes. The writer himself noted that robots are likely to interpret their ethical programming differently than humans would. And robots could harm humans even within the laws — curtailing human freedom in the name of security, for example.

Some researchers advocate for ‘the power to pick the best solution’ to be built into robotic programming. If this were implemented, robots would empower humans and themselves, providing assistance to achieve goals and using context to make decisions. But this means robots would face their own internal ethical dilemmas. Rather than following rigid rules, a robot might think, for example, “I don’t normally kill people, but this person is about to kill an innocent child and I must stop them.” A robot would thus make the decision to kill in much the same way that an armed human police officer is supposed to. But humans get this wrong, and so will robots. That is an inevitable truth we would have to adjust to. And if we can’t, then we have to decide whether we really want LAWS at all.

If democracy is true to its ideological word, then we the people get a say in how and if this technology is developed. Automated weapons systems are already being developed and tested across the world, and LAWS are set to be hot on their heels. We’re both blessed and cursed with the responsibility to protect future generations from what could be a cataclysmic decision to lethally exploit AI. Because not all intelligent armed machines are as endearing as good old Arnie.

Amie Haven is a freelance writer specializing in AI, lethal autonomous weapon systems, and planetary protection.
