Slaughterbots!

The AK-47 was the most popular weapon of the 20th century, changing the face of warfare forever. A new breed of autonomous weapons threatens to do the same for this century, on the cheap.

Words: Mic Wright
Illustration: Timo Meyer

When he was a boy, Mikhail Kalashnikov wanted to be a famous poet. Young Mikhail did grow up to be famous, but not for his words. The weapon that bears his name — the AK-47 (A for automatic, K for Kalashnikov, 47 for the year it went into production) — is the world’s most used firearm. In his book AK47: The Story of the People’s Gun, Michael Hodges estimated there was one Kalashnikov for every 35 people on planet Earth — over 200 million in circulation. That number has only risen in the 13 years since. War has a way of warping dreams.

The AK-47 is a symbol of a different era, an enduring icon of an ageing form of warfare. But we live in a new era — one in which autonomous weapons can increasingly carry out bloody missions with little or no human intervention. The story of the AK-47 — of a weapon created to serve the aims of a state (the Soviet Union) becoming the tool of revolutionaries, terrorists, and other non-state actors the world over because of its affordability, resilience, durability, and replicability — signposts where we’re headed.

“The AK-47 gives us a historical precedent for state-created technology spreading; the USSR was profligate in its willingness to supply the weapons to its allies and ‘rebel’ groups across the world.”

While various forms of drone are the most commonly discussed autonomous weaponry in use today, nation states and arms manufacturers are developing and implementing a range of systems that use artificial intelligence (AI) in some form. In South Korea, the SGR-A1 robot sentry gun has been deployed along the Demilitarised Zone between South and North Korea — it includes surveillance, tracking, firing, and voice recognition systems. Israel’s Harpy missile can be launched from a ground vehicle and “loiter” for hours in an autonomous flight pattern, waiting to identify and destroy radar stations. It has been sold to countries including China, Turkey, South Korea, and India.

At the start of 2020, the Turkish military began deploying “kamikaze” drones developed by the Turkish weapons contractor STM. The KARGU drone uses facial recognition and can be launched, then left to “fire and forget” autonomously. STM claims the 30 drones currently in service “have the capacity to destroy an entire brigade [of soldiers], or a warship.” Professor Stuart Russell of the University of California, Berkeley, has often warned about the creeping threat of AI used in warfare. He says, “Anti-personnel weapons like the STM KARGU will be traded in large quantities on grey and black markets.”

Toby Walsh, professor of artificial intelligence at the University of New South Wales, has made similar predictions: “The world’s weapons companies will make a killing — pun very much intended — selling autonomous weapons to all sides in every conflict.”

Progress towards a global ban on weapons using AI has been almost non-existent. Reports in late 2019 suggested that as many as 28 governments were pushing for a lethal autonomous weapons ban at the UN level. However, the United States and Russia have repeatedly blocked attempts to create legally binding restrictions. China, Israel, South Korea, the UK and others are also investing huge amounts of money and resources into further development of lethal autonomous systems. In Russia alone, leaked documents revealed $419m of state investment in AI projects.

“Nation states and arms manufacturers are developing and implementing a range of systems that use AI in some form.”

In September 2019, the Chinese government suggested it would support a ban on battlefield deployment of lethal fully autonomous weapons, but that it was not against their development and production. A meeting of the member countries of the UN Convention on Certain Conventional Weapons in Geneva last November failed to produce any consensus, kicking the can down the road for two years.

The AK-47 gives us a historical precedent for state-created technology spreading; the USSR was profligate in its willingness to supply the weapons to its allies and “rebel” groups across the world. It didn’t care about intellectual property, and the AK-47 was replicated widely. In Chinese hands, it evolved into the Type 56 (often called the AK-56), a version that was even cheaper to build. The same modification and enhancement is happening, and will continue to happen, in the world of autonomous weaponry. The difference is that the barrier to entry is far lower — the software and hardware tools required are cheaper to acquire and easier to master than their 20th-century predecessors.

In his paper ‘Weaponomics: The Economics of Small Arms’, the economist Philip Killicoat observed that “assault rifles may be considered a proxy for the cost of specific capital required to mount a rebel movement.” In some areas of the world, the price of an AK-47 is significantly lower than it was 25 years ago. Moisés Naím, the Venezuelan journalist and writer, reported in 2005 that the cost of an AK-47 in one Kenyan village had dropped to four cows. In 1986, the same weapon was traded for 15 cows. In the “ant trade”, the movement of small caches of weapons across Europe, AK-47s can emerge from Balkan black markets for as little as €100.

Terrorist groups, criminal organisations, and other non-state actors have been attempting to acquire more advanced weapons technologies for decades. Sometimes those efforts have been successful. Aum Shinrikyo — a Japanese doomsday-cult-turned-terror-cell, formed in 1984 — spent years attempting to develop biological and nuclear weapons before perpetrating a series of chemical weapons attacks between 1993 and 1995, most notoriously the Tokyo subway sarin attack of March 1995. During the late 1990s, Osama bin Laden and al-Qaeda were alleged to have made a number of attempts to acquire nuclear material on the black market.

From 2004 onwards, the Shia Islamist political party and militant group Hezbollah has used Iranian-made, military-grade drones for both surveillance and attacks, in an ongoing air war against the Israeli military’s own drones. In the Syrian war, where Iran, Hezbollah, Russia and Turkey are all combatants, autonomous warfare has played a limited but noteworthy role. In October 2016, ISIL claimed to have used a commercial drone rigged with explosives to kill Kurdish and French troops in Iraq, going on to announce an official “unmanned aircraft of the mujahideen” unit. On 10 January 2018, Russia claimed its military bases in Syria had repeatedly been attacked by armed drones and that it had, at that point, been unable to identify their source.

The use of weaponised drones — both adapted commercial models and military versions supplied by sponsor countries — has not been limited to the combat zones in Syria and Iraq. Houthi rebels in Yemen, cartels in Latin America, the Kurdish PKK in Turkey, Boko Haram in Nigeria, and Palestinian Islamic Jihad are among the other groups that have used, or been alleged to have used, armed drones in some form.

“We need to be careful not to make people think it’ll be just as bad as the 50 million or so AK-47s in non-state hands. It’ll be much worse. Imagine if all of those 50 million weapons could get up by themselves, and start shooting civilians, all at once.”

In April 2017, drone maker DJI issued its first software update designed to thwart the use of its products by armed groups, implementing geo-fenced no-fly zones in Iraq and Syria. However, such measures are usually quickly broken; the phrase “a cat and mouse game” is a cliché for a reason. With DJI’s drones costing a few hundred dollars, the low price of acquisition more than makes up for the inconvenience of cracking the software before deploying them.

Expanding on the analogy between the rise of the AK-47 and the autonomous age, Professor Russell paints a worrying parallel: “I think [the example of the AK-47] completely undermines the claim that autonomous weapons will remain in the hands of responsible militaries who will use them only in legal and ethical ways.” However, he believes the analogy doesn’t quite go far enough: “We need to be careful not to make people think it’ll be just as bad as the 50 million or so AK-47s in non-state hands. It’ll be much worse. Imagine if all of those 50 million weapons could get up by themselves, and start shooting civilians, all at once.”

In November 2017, the Campaign to Stop Killer Robots, a coalition of non-governmental organisations, launched Slaughterbots, a concept film about swarms of miniature drones being used for targeted killings. It is a Black Mirror-esque short film, deliberately designed to make a strong polemical point about the threat. Some experts, including Paul Scharre, senior fellow and director of the Technology and National Security Program at the Center for a New American Security, were publicly critical of the video’s “sensationalism”.

Just over a year before Slaughterbots premiered — in October 2016 — the US military released footage of 103 Perdix micro-drones being launched from three F/A-18 fighter jets. Dr Will Roper, the assistant secretary of the Air Force for acquisition, technology, and logistics, described the swarm as “a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature”.

Mary Wareham, Advocacy Director in the Arms Division of Human Rights Watch (HRW), which contributes to the Campaign to Stop Killer Robots, says: “Our biggest concern is nation states. Non-state armed groups will acquire and use fully autonomous weapons if governments fail to regulate. As the ones driving development, governments are responsible for resolving that.”

In April 2020, HRW took part in a two-day virtual conference on lethal autonomous weapons systems convened by the German government. The Berlin forum included representatives from 70 countries. Opening the forum, Germany’s foreign minister Heiko Maas said, “Letting machines decide over the life and death of human beings goes against ethical standards and undermines human dignity.” During the conference Bonnie Docherty, a senior researcher at HRW, argued for a treaty prohibiting weapons systems that select and engage targets autonomously, and a general obligation that states retain “meaningful human control over the use of force”. But the international will to create such a binding regime simply isn’t there yet — thanks to a combination of ideological objections to regulation, fears of falling behind “enemies”, and lobbying by arms manufacturers to protect potential profits.

Research done by the tech giants — Google, Facebook, and Microsoft among them — into computer vision, machine learning, and other areas of AI is flowing both directly and indirectly into the military-industrial complex. In June 2018, Kate Conger, then at Gizmodo, now of The New York Times, revealed that Google had contributed heavily to Project Maven, a US Department of Defense-funded programme focused on autonomously processing video from surveillance drones. There was a spate of resignations, and 4,000 other Google employees signed a letter to the company’s CEO, Sundar Pichai, protesting its involvement. The internal and external pressure led Google to release a set of ethical principles for AI, which included a pledge not to develop AI for use in weapons. Google also said it would not renew the Maven contract after it expired in 2019.

Daveed Gartenstein-Ross, the CEO of Valens Global, which consults on counter-terrorism measures and the behaviour of violent non-state actors, takes a slightly more optimistic view than some analysts. He says: “Even though I believe more needs to be done in respect of international regulation of autonomous weapons, there is a taboo surrounding them. While autonomous weapons may end up falling into the hands of violent non-state actors, I don’t see it as inevitable.

“It’s possible that international norms, coupled with a research community that refuses to budge from a stance that condemns autonomous weapons, can head it off. It’s in the interest of most powerful states to avoid autonomous weapons becoming ubiquitous, something that would tend to help rogue states and violent non-state actors.”

For people working in AI, pushing to further machine autonomy in both the civil and military realms, Mikhail Kalashnikov’s 2002 reflection on his own legacy is instructive: “I’m proud of my invention, but I’m sad that it is used by terrorists. I would prefer to have invented a machine that people could use and that would help farmers with their work — for example, a lawnmower.”

Mikhail Kalashnikov died in 2013. In February 2019, Kalashnikov Concern, the weapons manufacturer that bears his name and still manufactures small arms descended from the original AK, demonstrated a new product — the KUB-BLA, a semi-autonomous kamikaze drone designed to duck radar defences before diving down and destroying its target. Kalashnikov wanted to be a poet, then he imagined building a lawnmower for the world. War has a way of warping dreams.

This article is from Weapons of Reason’s eighth issue: Conflict.
Weapons of Reason is a publishing project by Human After All, to understand and articulate the global challenges shaping our world.
