“War’s Tragedy Is That It Uses Man’s Best to Do Man’s Worst”

Andy Owen
Published in The Startup · Feb 21, 2021 · 9 min read

With the development of autonomous weapons, we may avoid what Pastor Harry Emerson Fosdick described as war's greatest tragedy, but will this lead to an even greater tragedy?

A summary of this piece appeared in The Spectator on 21/2/2021.

Chauvet–Pont–d’Arc Cave in the Ardèche department of southeastern France

The Chauvet–Pont–d'Arc Cave in France contains some of the earliest known Palaeolithic cave paintings, including those of cave lions, bears, and hyenas. They may be the earliest expressions of human fear. It is hard for those of us living in the mostly urban West to imagine how it must have felt to live alongside creatures that recognised you as prey. This primal fear is buried deep within us. It explains our fascination with the rare stories of shark attacks or big cat maulings. We may, however, find out what that fear is like after the US National Security Commission on Artificial Intelligence (NSCAI) concluded last month that the US should not agree to a proposed global ban on the use or development of autonomous weapons systems (AWS). This could accelerate a new arms race with Russia, China and other powers, and make the science fiction nightmare of killer robots driven by rogue algorithms hunting us down, like the cave lions of our past, scientific fact.

The development of AWS raises important questions about their ability to navigate the complexity of the battlefield, accountability for killings in war, and who makes the decision to take a human life.

When I served on operations in Iraq and Afghanistan, immersed in the fog of war and facing an unpredictable enemy, my decision-making was swayed by my biases and emotions, and limited by my intelligence and the speed with which I could process information. Driverless cars will supposedly be safer by removing human error, the largest single cause of accidents. Could introducing further automation in war reduce deaths? Robert Work, vice-chair of the Commission and former US Deputy Secretary of Defense, claimed: "It is a moral imperative to at least pursue this hypothesis".

Militaries have used AWS for centuries. The Battle of Beth Zechariah in 162 BC saw the Seleucid army use 30 wine-fuelled war elephants. Trampling across the battlefield, these drunk elephants wouldn't have recognised friend from foe, but they may be the first documented use of AWS. Former US Army Ranger Paul Scharre cites the Second World War Falcon torpedo as an early example. It was upgraded after two of the first three U-boats to use it were sunk by their own torpedoes zeroing in on their propellers. Today there are defensive missile systems that can monitor for threats and engage multiple inbound projectiles at a speed no human could match. Many militaries can launch unmanned armed drones to search wide areas and destroy targets they identify. Some are developing autonomous drone swarms. South Korea has an autonomous sentry robot on its northern border. Russia has deployed unmanned armoured vehicles in Syria, and many nations possess software capable of autonomously launching a cyber-counterattack.

The distinction is whether a human is in, on, or out of 'the loop': the process of searching for, detecting, deciding on, and engaging a target. Semi-autonomous weapons may search for and detect the target, but a human decides whether to engage. With supervised AWS, the whole process is autonomous, but a person monitors it and can intervene. With fully autonomous weapons, the entire process is automated with no supervision.
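To make the distinction concrete, here is a minimal sketch in Python of where human authority sits in that loop; the mode names and the function are hypothetical illustrations for this article, not drawn from any real weapon system:

```python
from enum import Enum
from typing import Optional

class Autonomy(Enum):
    SEMI_AUTONOMOUS = "human in the loop"       # human makes the engage decision
    SUPERVISED = "human on the loop"            # system decides, human can veto
    FULLY_AUTONOMOUS = "human out of the loop"  # no human involvement

def engage(mode: Autonomy,
           system_recommends_engage: bool,
           human_authorises: Optional[bool] = None,
           human_vetoes: bool = False) -> bool:
    """Return True if the target is engaged, given who holds decision authority.
    Conceptual sketch only; not modelled on any real system."""
    if mode is Autonomy.SEMI_AUTONOMOUS:
        # The system may search and detect, but only a human can authorise engagement.
        return human_authorises is True
    if mode is Autonomy.SUPERVISED:
        # The system completes the loop itself; a supervising human can intervene.
        return system_recommends_engage and not human_vetoes
    # Fully autonomous: the machine's decision is final, with no supervision.
    return system_recommends_engage
```

The difference between the three modes is simply which human input, if any, can still change the outcome once the system has found a target.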

There are practical problems with fully removing humans from the loop. How do you turn AWS off when the war ends if you have no communications with them? How do you prevent them being hacked and used against you? And how do you avoid systematic errors that could occur rapidly and at scale? If a soldier goes rogue, there are consequences, but what if, due to one error in millions of lines of code, an army of AWS goes rogue? Philosopher Stuart Armstrong argues that, whilst small teams can go rogue, organisations are resistant to big, sudden changes in morality. Even a leader prepared to achieve victory at any cost is limited, in the short term, in how much they can change military ethics at scale. A small team, however, could change the settings on a whole army of AWS and set new and dangerous norms.

Such errors could also escalate an evolving crisis into a war. In 2010 a trading algorithm contributed to a 'flash crash' in the stock market, causing a temporary loss of almost a trillion dollars. Afterwards, regulators updated circuit breakers to halt trading when prices drop too quickly, but how do you regulate against a flash war?

The most difficult technical problem is the inability of machines to navigate the fog of war. How do we program a machine to conform to the International Humanitarian Law (IHL) principles of distinction (recognising legitimate targets), proportionality (what level of force and collateral damage is appropriate), and necessity (whether an action is necessary to secure overall objectives)? In urban environments, how would a machine identify who is a combatant? In Afghanistan, Scharre's team spotted a shepherd with a herd of goats circling their position. They could hear the young Afghan talking; other patrols had been compromised by shepherds radioing in their locations to the Taliban. The rules of engagement allowed him to shoot. He did not, and eventually heard that the young Afghan had been singing to the goats rather than radioing in their position. He believes it is likely an AWS would have engaged. Navigating such ambiguity is a step up from mastering the rules of the strategy games in which AI has had its successes.

Historian Margaret MacMillan notes that the laws soldiers follow in war are not a legal code as recognised in civil society; they are, as political theorist Michael Walzer puts it, 'a compendium of professional codes, legal precepts, religious and philosophic principles, and reciprocal arrangements'. Can you design AWS to understand this compendium and set rules in advance that will handle the infinite number of possibilities battlefields provide? How do we test it? If we break down human intelligence into a series of programmable tasks, what philosophical principles would we program them with? As AI learns, what ethical frameworks will it develop? Some of the most complex neural networks used in AI have become 'black box' systems: we can follow the inputs and outputs, but what goes on inside is too complex for us to understand. Armstrong highlights that the instruction "Keep humans safe and happy" could lead an AI to bury us in lead-lined coffins connected to heroin drips. Philosopher Ludwig Wittgenstein claimed, "If a lion could speak, we could not understand him". As AWS face unanticipated situations, they may act in unintended ways.

Armies have trained soldiers and seen how they react over centuries of ever-changing warfare. Whilst it is impossible to predict with absolute certainty how any one individual will react to the rigours of war, at the unit level modern, well-led, volunteer soldiers adapt and behave in line with commanders' expectations. They may make mistakes and their conventional weapons may malfunction; however, there will be a level of awareness of the error, and such errors do not have the same potential to escalate at the unparalleled speed and scale of AWS. The NSCAI report says that we "must consider factors such as the uncertainty associated with the system's behaviour and potential outcomes, the magnitude of the threat, and the time available for action." This opens the door to unspecified levels of uncertainty and euphemistically termed "potential outcomes", so long as the ends justify the means.

This uncertainty has consequences for accountability. With no identifiable operator, and with AWS acting in ways their designers cannot foresee, will commanders avoid prosecution for potential war crimes? Philosophers Thomas W. Simpson and Vincent C. Müller believe accountability is possible through the introduction of tolerance levels, analogous to a bridge designed to withstand certain weights and environmental conditions. That level would be the result of balancing the costs of system design and construction against the value of the system, the expected costs of its failure, and the distribution of those costs (who might be killed, whether they consent to the risks they are being exposed to, and whether they benefit from the activity creating the risk).

Their argument assumes that we build AWS to the best of our ability and that the technology makes the risks of deploying AWS to non-combatants lower than those of deploying an all-human army. Then, if a system performs outside its tolerance level, no one is blameworthy for the resulting harms; they may be financially liable, but not morally blameworthy. If a system is deployed outside that level, commanders are blameworthy.

In some sparsely populated settings, such as underwater, in space, or in aerial dogfights, the assumptions hold and it will be clear that AWS are being deployed within their tolerance level. However, with the technology currently available, in densely populated environments where non-combatants and combatants are hard to distinguish amongst the fog of war, will commanders be able to assess accurately whether a deployment falls within the tolerance level? If AWS are deployed anyway, will individual moral accountability be possible outside clear cases of gross negligence or deliberate misuse? We want those entrusted with lethal weapons to be held to the highest standards. The responsibility this entails is part of the safety catch on those weapons.

Responsibility is also an important part of post-conflict resolution. It is not uncommon for former adversaries to become friends, bonding over their shared experience. What if only one side in a conflict has AWS? One side will have lost human soldiers, the other merely machines. How do they reconcile the asymmetry of that loss? (This raises a wider question: when a state uses AWS, whom does the state under attack target in response?) For the truth and reconciliation commissions that have been successful in Argentina, Northern Ireland and South Africa, recognising individual moral accountability has been crucial.

Finally, there is the question of who we allow to make life and death decisions. Would we want an algorithm, rather than a doctor, to decide to turn off a ventilator when it assessed a likely negative outcome for a Covid-19 patient? Likewise, we want whoever pulls the trigger to recognise the value of the life they are taking. One weapon that does not is the landmine, and 164 states are currently signed up to the Ottawa Convention, which aims to put an end to their use.

At our most intimate moments we want to be recognised as a person. This involves recognising that my fate is not ultimately separable from yours. We are united by our mortality. This creates what philosopher Judith Butler calls a 'relationality' that to neglect would be to deny something fundamental about us. It is intrinsically linked to the idea that justice requires reciprocity of risk in war. Philosopher Paul W. Kahn argues that combatants possess the right to injure each other 'as long as they stand in a relationship of mutual risk', which has relevance for the semi-autonomous drones we currently use. AI cannot recognise our personhood or reciprocate that vulnerability. As we look to AI to care for our elderly or fulfil our need for sexual relationships, we should consider this. Since we ascended to the apex of our food chain, we have created an anthropomorphic world. IHL is anthropomorphic, as is our morality and our sense of self. Handing our most intimate acts to an 'other', even if we believe we retain a distant control, undermines this world and our place in it.

Horrors like the Holocaust are marked by machine-like processes and the arbitrariness of life and death decisions. In the absence of human judgement, how can we ensure that killing is not arbitrary? When one human attempts to de-humanise another, that person is stained in a way a machine cannot be. That stain represents the significance of the life taken and the dignity stolen, like the mark set upon Cain. It survives the killing in the form of psychological injury to the killer. It follows that a society that values life highly will see a greater incidence of psychological trauma in its soldiers. This trauma shows that the people killed mattered.

With recent developments in the US, a ban on AWS looks unlikely. In its report, the Commission emphasises that the US's strategic competitors are already deploying such weapons, and without ethical frameworks for responsible design or use. The imperative to continue developing AWS seems driven by fear of battlefield disadvantage against an enemy who will not play by our rules. The report recommends that the US continue to develop AWS while encouraging others to deploy them only in a responsible manner. No matter what we tell ourselves in peacetime, once we have these weapons, there is no going back.

A soldier's job is to make judgements. They move through multiple layers of interpretation and context to do so. We should focus more on their ethical education (far less is spent on that education than on developing weapons) and use AI to assist, but not supersede, their decision-making. They will remain imperfect, but so are we all, and our imperfection binds us together as much as our ultimate vulnerability does. The decision to end a life should never be arbitrary. Those who make that decision will carry the burden of it. Anyone who is part of the kill chain, from the plush-carpeted halls of power to the young, scared kids in the desert, must always be aware of this. If we choose to hand over the decision to kill, we will profoundly change the nature of the human world we have created.
