International Humanitarian Law Without the Humanity: Autonomous Weapons in Ukraine and Beyond

The Problem of Autonomous Weaponry

Synopsis: The ongoing development and deployment of autonomous weapon systems have raised concerns about how the existing international rules of war apply to this emerging technology. This blog post briefly examines some of the potential problems posed by autonomous weapon systems and some of the proposed solutions.

Photographs have emerged of what appear to be damaged and destroyed KUB-BLA drones deployed by Russia in Ukraine. The KUB-BLA, made by the Russian arms manufacturer ZALA Aero, is part of a class of drones known as “loitering munitions” (essentially a kamikaze drone that flies around an area until it locates a target) and is described in promotional material as having “intelligent detection and recognition of objects by class and type in real time.” In short, the KUB-BLA is claimed to be able to operate autonomously.

Source: Twitter user @RALee85.

If confirmed, the Russian use of KUB-BLA drones in Ukraine would not be the first use of autonomous weaponry in warfare. That distinction appears to belong to Libya, where Turkish-made Kargu-2 drones were used in March 2020 to “hunt[] down” retreating members of the Libyan National Army, according to a UN report. Loitering munitions were also reportedly employed by Azerbaijan against Armenia in the Nagorno-Karabakh war in late 2020. These first uses of autonomous weaponry signal a rapidly approaching future of warfare — one that is poorly regulated by existing international law.

As we think about how best to protect equity in a new world remade by AI, there is arguably no more important arena to consider than the application of AI to warfare. An unregulated, uninhibited arms race to amass autonomous weapons poses the greatest risk to world security since the invention of the atom bomb.

Defining Autonomous Weapons

What is an autonomous weapon? In a broad sense, autonomy is simply a machine operating without human control and in response to environmental stimuli. Some degree of autonomy has existed in weaponry for centuries. For example, the modern conception of a land mine — a container full of explosives with a pressure-sensitive trigger — dates back to the US Civil War. This is a crude form of autonomy, since a land mine responds to only a single stimulus — whether sufficient pressure has been applied — but it is autonomy nonetheless.

Modern concerns around autonomous weapons arise from the level of decision-making delegated to the weapon itself. While exact definitions vary, there is broad consensus that such weapons can select targets and choose to engage them without the involvement of a human operator. In modern autonomous weaponry, decisions are no longer triggered by mechanical pressure switches but are instead informed by machine learning, which helps a weapon system distinguish targets from non-targets. Depending on the system, a human may not need to be involved at all — a 2017 analysis found that roughly a third of automated systems could engage targets without human involvement. The extent to which a human is “out of the loop” in the final decision to launch an attack has grave implications for the law of war.

Source: The Economist.

Brief Overview of International Humanitarian Law

International humanitarian law (IHL) — also known as the law of war — imposes certain duties on belligerents in times of conflict. IHL is a broad topic, but most relevant to the question of autonomous weaponry are the principles of distinction, precaution, and proportionality, as well as restrictions on the means and methods of warfare.

The principle of distinction requires all belligerents to “distinguish between the civilian population and combatants and between civilian objects and military objectives and [to] accordingly direct their operations only against military objectives.” Precaution requires “constant care,” both in preparing for an attack and in the course of operations to minimize harm to civilians and civilian objects. The principle of proportionality forbids the launching of attacks which would cause incidental harm to civilians in excess of the “concrete and direct military advantage anticipated.”

Problems with Autonomous Weaponry

One of the biggest problems with ensuring that autonomous weaponry comports with existing international law is the lack of clarity about who is ultimately responsible when things go wrong. As a general rule, IHL is enforceable against the individuals responsible for violations. Examples include the Nuremberg and Tokyo trials following the Second World War, the tribunals for Rwanda and the former Yugoslavia, and more recent trials in the International Criminal Court.

Holding individuals accountable for the actions of autonomous weapon systems is no easy task. Lieutenant Colonel André Haider of the Joint Air Power Competence Centre, a military think tank sponsored by 16 NATO nations, concluded that the individuals who might bear responsibility are the commander, the operator, and the programmer of the automated weapon system. However, if a system is truly autonomous, there is no operator to speak of, and large-scale software projects involve too many contributors to easily hold any one programmer responsible. The most likely defendant, according to Lt. Col. Haider, would be the commander, but only if they were aware of the potential for unlawful action before ordering the deployment of the system. Given this ambiguity, there are concerns that violations of IHL may go unpunished, reducing the law’s deterrent effect and encouraging more frequent use of weaponry that allows an attacker to wash their hands of potential war crimes. It may also encourage the development of systems that keep humans out of the decision-making loop altogether, in order to preserve plausible deniability.

Removing humans from the decision-making chain brings with it a whole host of additional problems. Automated systems are not infallible under the best of circumstances, and warzones are often chaotic and unpredictable. Machine learning systems are only as good as the data on which they are trained, and even well-trained systems have problems. They can be fooled by inputs that differ only slightly from their training data, sometimes with bizarre results. In one experiment, for example, a model turtle was repeatedly identified as a rifle by a Google image-classification algorithm. If self-driving cars have yet to master the relatively ordered traffic systems of peacetime, military applications of such algorithms “can never anticipate all the hazards and mutations of combat.” Confusion of an image-recognition algorithm in a battlefield setting could have disastrous results.

Even if image-recognition technology is perfected, autonomous weaponry seems unlikely to be able to properly interpret the wider context and respond as IHL requires. Modern conflicts, for example, often involve irregular forces who do not wear uniforms. How could an algorithm distinguish a combatant from a civilian, especially in areas of the world where civilians are armed with military-grade weaponry? Trained human combatants must interpret behavior and actions in ways that would be extremely difficult, if not impossible, for an autonomous weapon system to imitate.

Because it is impossible to predict the harm to civilians and civilian objects without first identifying their presence, the principles of proportionality, precaution, and distinction are intertwined. An autonomous weapon system may correctly identify an enemy combatant, but unless it also recognizes the surrounding civilians and the harm an attack would cause, there is a high risk of disproportionate harm to civilians. A weapon system with narrow targeting criteria is at particular risk of carrying out disproportionate attacks. Some autonomous weapon systems, like the Israeli Harpy drone, are designed to target enemy radar installations by locking onto the source of the radiation and attacking it. It is easy to imagine a similar drone correctly identifying a radar source but ignoring a nearby hospital simply because it was never designed to recognize or avoid disproportionate harm.

Such concerns could potentially be addressed by ensuring that proper precautions are taken to protect non-combatants. The greatest focus so far has been on ensuring that there will always be a human “in the loop,” a commitment the US Department of Defense has previously made. Keeping a human in the loop leaves the decision-making process, at least to an extent, under human control. However, a 2018 Congressional Research Service report concluded that “it is possible, if not likely, that the U.S. military could feel compelled to develop. . . fully autonomous weapon systems in response to comparable enemy ground systems or other advanced threat systems that make any sort of ‘man in the loop’ role impractical.” Ominously, a list of principles for the use of artificial intelligence published by the Department of Defense in 2020 makes no mention of humans being “in the loop.”

An automated weapon poses similar problems for the principle of precaution. For a truly autonomous weapon system, precaution requires considering the likelihood that mistakes will be made. Leaving aside how a weapon might be designed to calculate this likelihood, there is always the problem of errors and software bugs. Any software invites bugs, but when the potential consequences are death and destruction, users will have to decide how many errors are acceptable.

Proposed Solutions

Rapid technological advances and the increasing feasibility of autonomous weaponry have led to calls for action on many fronts. Most recently, the 125 states parties to the Convention on Conventional Weapons (CCW) met to discuss solutions. The CCW is an international agreement that restricts the use of weapons considered inhumane. A majority of the parties wanted restrictions on autonomous weapons but were stymied by states currently investing in the field, such as the US and Russia. The US has argued that existing IHL is sufficient to regulate the deployment of autonomous weapon systems and has instead proposed a voluntary “code of conduct” for their use, although no details have yet been offered. In the face of this opposition, about two dozen countries have proposed a separate treaty completely banning fully autonomous weapons. While supported by the Campaign to Stop Killer Robots — an international coalition of 89 non-governmental organizations — this proposal seems unlikely to get off the ground. Many nations with advanced research programs are hesitant to abandon the field, in large part because they are concerned about ceding a potential military advantage. Some countries also wish to avoid parallels to the Treaty on the Non-Proliferation of Nuclear Weapons (the NPT), which limited possession of nuclear weapons to the five states that had them prior to 1967. India in particular does not wish to end up on the wrong side of a treaty limiting military advantages again.

Some advocates have proposed a century-old solution to this decidedly modern problem. The Martens Clause, dating back to 1899, requires all means and methods of warfare to comply with the “principles of humanity” and “dictates of the public conscience.” Bonnie Docherty of Human Rights Watch has argued that “fully autonomous weapons could not appreciate the value of human life and the significance of its loss. . . . They would thus fail to respect human dignity.” However novel this argument may be, it seems unlikely to be decisive. As The Economist wryly notes, the dictates of public conscience “are more flexible than a humanitarian would wish.”

Conclusion

Autonomous weaponry development around the world shows no signs of slowing down. While acknowledging the risks of “unchecked global use” of autonomous weapons, the US National Security Commission on AI has nonetheless lauded their “substantial military and even humanitarian benefit[s].” Russian President Vladimir Putin has gone further, declaring that the world’s AI leader “will become the ruler of the world.” China is reportedly developing a wide range of autonomous systems operating in the air, on the ground, and at sea. In early 2021, India unveiled a swarm of 75 kamikaze drones attacking simulated targets, a small demonstration of the 1,000-strong drone swarms it is planning. The risk of an autonomous arms race has raised concerns about what retired US Marine Corps General John R. Allen and Amir Husain have termed “hyperwar,” in which humans are “almost entirely absent” from the decision-making chain, limited only to “providing broad, high-level inputs while machines do the planning, executing, and adapting to the reality of the mission and take on the burden of thousands of individual decisions with no additional input.” This dystopian future risks becoming real if international action is not taken. Once autonomous weaponry such as drone swarms becomes commonplace, the genie cannot be put back in the bottle, and countries may feel forced to develop their own autonomous weapons in response. As David van Weel, Assistant Secretary General for Emerging Security Challenges at NATO, has said, “you need AI. . . in order to be able to counter AI.”

Rory Hayes is a 3L student at the Santa Clara University School of Law. This blog post was written as part of Professor Colleen Chien’s AI law class.
