Battle for the future

Magnus Løvold
Published in Points of order
6 min read · May 7, 2024
Dormitory in Dnipro, Ukraine, after a Russian drone attack on the evening of 6 January 2024. Image is used for illustration purposes only (State Emergency Service of Ukraine/Wikimedia Commons)

“Oh god, don’t do that”, the chair of the meeting hissed, perhaps unaware that his microphone was still on.

Andreas Bilgeri, a representative of Austria, had just requested the floor for a second time during a meeting of the Group of Governmental Experts on Lethal Autonomous Weapons in Geneva. His intent: To hit back at Russian attempts to bar non-governmental organisations from “politicising” discussions about autonomous weapons and “making accusations” against states.

Alert to the perils of self-fulfilling prophecies, the impatient chair, Ambassador Robert in den Bosch of The Netherlands, had first tried to tranquillise: “We have to be careful, if we want to avoid politicisation, that we don’t politicise by talking about politicisation”, he attempted. Observers looked somewhat confused. Who was talking about politicisation now?

In the end, an Austrian tautology, of sorts, settled the matter: “We are ready to try a consensus on this understanding under the understanding [that] there will be no misuse of this understanding by … some delegations”, Bilgeri said.

Oddly, that statement allowed the meeting to proceed.

Four months earlier, far from the hushed conference rooms of the United Nations in Geneva, a new military drone was for the first time catapulted into the air somewhere in Russia. The flying killing machine, dubbed “The Lancet kamikaze drone”, may have carried up to five kilograms of high explosives within its tube-shaped body, ready to detonate upon impact with its target.

But which target? Details are scarce, as always in matters of autonomous warfare, forcing speculation: Whizzing through the air at up to 100 kilometres per hour, the drone’s bulb-like camera would scan the landscape, its vacant yet all-attentive eyes looking for targets to strike. From below, it would sound like a lawnmower grazing the clouds.

In rapid succession, the kamikaze drone would lock in its target and launch itself towards the ground. No one but those at the receiving end of the ensuing explosion would know what had just occurred, and they would most likely not live to tell the story. Questions would linger: did the drone work as planned?

In technological terms, the attack would be a small step, barely perceptible. The war in Ukraine has seen widespread use of remotely controlled drones on both sides with various degrees of autonomous capabilities, including a Ukrainian drone powered by artificial intelligence. Programming these drones to not only identify but also attack their targets could, quite probably, be accomplished by a keystroke.

But in moral, social and legal terms, the Russian attack would be a giant leap into the unknown: Possibly for the first time, a country at war had allowed an autonomous weapon to hunt down and attack targets.

More than ten years have passed since the parties to the awkwardly named Convention on Certain Conventional Weapons (CCW) first met to come to grips with “questions related to emerging technologies in the area of lethal autonomous weapons”.

One would have thought that ten years of discussions about something that the UN Secretary-General has labelled “a direct threat to human rights and fundamental freedoms” would lead the world’s countries to, at least, draw a line in the sand; to agree that, whatever one may think about war or autonomous technologies, we will not set ChatGPT out on a killing spree.

Think again. While autonomous technologies are developing at warp speed, shaping more and more aspects of people’s lives, the process towards a treaty banning autonomous weapons has been a slow burn, at best.

“The technology is driven forward in absolutely breakneck pace, whereas the efforts to regulate are not really matching this. The window of opportunity is closing very rapidly. I think time is running out”, Ambassador Alexander Kmentt of Austria warned in an interview last week.

The procedural skirmish between Russia and Kmentt’s own country Austria at the opening of last week’s meeting in Geneva illustrates the problem. More than one hundred countries have, according to Human Rights Watch, come to view a treaty on autonomous weapons as “necessary, urgent, and achievable”. Yet, international discussions have been marred by a wholly unrealistic assumption that such a treaty can be adopted by consensus — adopted, that is, without objection from any of the countries participating in the negotiations.

Imagine Vladimir Putin, Volodymyr Zelenskyy, Benjamin Netanyahu and Mahmoud Abbas shaking hands on a treaty restricting their ability to develop new weapons. You get the picture.

As a result of the consensus requirement, the process towards an autonomous weapons treaty has turned into a spectacular display of delaying tactics.

Countries could have spent the past ten years negotiating rules adapted to the era of autonomous warfare. In a situation of escalating geopolitical rivalries, they could have demonstrated that, whatever the future may bring, there are certain red lines we will never cross. They could have demonstrated, too, that international law, by articulating these red lines, offers a way to mediate our differences and reinforce the notion that there is still, in spite of all, such a thing as a common humanity.

Instead, diplomats have, since 2014, embroiled themselves in lengthy technocratic discussions about “characteristics and definitions” of autonomous weapons — a narrow-minded strategy of complexification pursued by those seeking to prevent the start of treaty negotiations. Several countries — including those espousing the virtues of a “rules-based order” — have questioned the need for international law to keep abreast with the evolving nature of warfare, offering instead non-binding declarations, codes of conduct, compendiums of best practices, and “possible guiding principles”.

Some, especially Russia, have dug even deeper into the toolbox of dilatory tactics. While their autonomous kamikaze drones are piercing through the air along the Ukrainian border, suit-clad Russian representatives in Geneva have, again and again, ensnared diplomats in drawn-out debates about the number of days the group can meet and the role and involvement of non-governmental organisations.

Looking at the faces of those that have had the misfortune of participating in these discussions, one gets a clear sense of growing desperation.

There are signs, however, that countries are now getting ready to throw off the paralysing consensual straitjacket of the CCW. In a major announcement made on October 5 last year, the UN Secretary-General and the President of the International Committee of the Red Cross called on political leaders “to urgently establish new international rules on autonomous weapons, to protect humanity”. About a month later, 164 of the world’s countries voted in favour of a resolution requesting the UN Secretary-General to submit a report on ways to address the “challenges and concerns” autonomous weapons raise to the first committee of the UN General Assembly — a forum governed, not by consensus, but by the will of the world’s majority.

Meanwhile, countries have mobilised at a regional level, especially in Latin America and the Caribbean, to call for “the urgent negotiation of an international legally binding instrument on autonomy in weapons systems”. Even within Nato, a group of seven countries have proposed a treaty to prohibit and regulate lethal autonomous weapons.

Next month, moreover, Austria will host an international conference focused on “autonomous weapons and the challenge of regulation”. Austria has not offered much visibility on what the conference is expected to lead to, apart from “advancing the debate on an international regulation” of autonomous weapons. While formal treaty negotiations are unlikely to begin in Vienna, the conference is expected to build further momentum behind the call for a new treaty.

The process towards a treaty banning autonomous weapons is, in more senses than one, a battle for the future. Much like nuclear weapons, autonomous weapons raise a question about “the point at which the rights of states must yield to the interests of humanity, the capacity of our species to master the technology it creates”, as the President of the ICRC said in a speech that kick-started the process towards the Treaty on the Prohibition of Nuclear Weapons in 2010.

But unlike nuclear weapons, which emerged with a city-busting bang in Hiroshima in 1945 and left diplomats scrambling — for three quarters of a century — to understand how to bring these weapons under regulatory control, autonomous weapons are still in their infancy. These weapons can still be controlled.

The process towards a treaty on autonomous weapons offers, from this perspective, a chance to show that we have learned. That we can, by mustering the civilising power of international law, reinforce the notion — questioned by many, and reasonably so, after Israel’s war in Gaza and Russia’s war in Ukraine — that warfare, however terrible, is not a technological free-for-all.


Norwegian Academy of International Law. Previously with the ICRC, Article 36, Norway and ICAN.