
The Race to Govern Lethal Autonomous Weapons as They’re Developed

On a slight hill near the perimeter of Daejeon, South Korea, sits a squat machine gun turret eyeing the surrounding area. "It's about the size of a large dog; plump, white, and wipe-clean. A belt of bullets — .50 calibre, the sort that can stop a truck in its tracks — is draped over one shoulder," wrote the BBC in 2015.

An Ethernet cable connects the turret to a bank of computers and monitoring equipment. Mounted on a base that swivels in all directions, the gun has a range of four kilometers and automatically detects and targets potential enemies as they move into range. It still requires human clearance and a few manual inputs, however, before it is allowed to fire on a detected target.

“It wasn’t initially designed this way,” says Jungsuk Park, a senior research engineer for DoDAAM, the turret’s manufacturer. “Our original version had an auto-firing system. But all of our customers asked for safeguards to be implemented. Technologically it wasn’t a problem for us. But they were concerned the gun might make a mistake.”

DoDAAM’s customers aren’t the only ones concerned about this. People have been worried about issues raised by weapons like these for years, and that concern has escalated as more are developed and appear on the market.

Lethal autonomous weapons systems (LAWS) are able to identify and attack a target without human intervention. This is what sets them apart from conventional weapons systems, such as a human-controlled fighter jet or a tank. Their target could be a person, a missile silo, or even a software program in the case of cyberwarfare. The systems draw on technology such as image recognition, AI, and advanced sensors. The United States and China continue to pour time and money into programs that develop and streamline the capabilities of current systems and push the boundaries of LAWS, yet there is no international consensus on how to govern the development or use of these systems. While some progress is being made, international negotiations are notoriously time consuming and move at a much slower pace than the militaries and defense contractors that are developing LAWS.

The Campaign to Stop Killer Robots was first convened in 2012 in New York to address this lack of international oversight, bringing together a coalition of nongovernmental organizations in an attempt to end production of LAWS. “We decided we would have a clear and simple call to action, which would be a preemptive ban on the production, development, and use of fully autonomous weapons systems,” says Mary Wareham, advocacy director of the Arms Division at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. “We decided we were going to solely go after future weapons systems and not existing ones.”

LAWS are still rare and are even more rarely marketed as fully autonomous, given the ethical and technical concerns they raise. Most developers note that their systems have autonomous capabilities but stress that a human is involved in the process, or that the autonomy is limited to movement and navigation rather than life-and-death decisions. Like DoDAAM's customers, most of the public would likely be uncomfortable with a system that left no human in the decision to kill a person.

Kalashnikov, best known for its popular AK-47 rifle, has moved well beyond rifles and is building "a range of products based on neural networks," including a "fully automated combat module" that can identify and shoot at its targets, a spokesperson for the company told the Russian news agency TASS last month.

It's instances such as these, where a system has a fully autonomous capability, that represent the central difficulty in governing and monitoring autonomous weapons. How do we distinguish autonomous systems (where no human is involved) from semiautonomous ones (where parts of the system run on their own, but a human retains final control over decision-making)? Many systems possess a mix of these qualities to varying degrees, with capabilities that can be toggled on and off as needed, which makes them difficult to address from a regulatory or legal perspective: it is unclear where exactly the line should be drawn.

“The concept of meaningful human control is important,” says Heather Roff, a senior research fellow at the University of Oxford Department of Politics and International Relations and a research scientist at Arizona State University’s Global Security Initiative. “If you have a system where you can get rid of a person, and it functions in quite similar ways, where the person was just a rubber stamp on the system’s decision, that’s not good enough.”

Computer-driven, automated decision aids, which support and augment human decision-making, are already used in many fields and are a good example of how blurred the lines of demarcation for autonomy have become. "We are going to increasingly look at those as AI gets better at image, action, and intent recognition, as well as context and situational awareness and conflict modeling," says Roff. "We are going to use them everywhere. And so that starts to distribute any real sort of control commanders may have." Essentially, as these systems support and shape more of our decisions, our control over the process as a whole decreases, because neural nets and other automated processes are doing most of the heavy lifting. A human may still have final say over a targeted kill, but much of the information that influences that choice is supplied by largely autonomous systems.

These gray areas about what exactly constitutes autonomy are one reason the Campaign to Stop Killer Robots has limited its approach, focusing solely on a preemptive ban on fully autonomous systems.

So far, 19 countries have endorsed the ban, while more than 21 Nobel Peace Laureates have expressed their concern that “leaving the killing to machines might make going to war easier,” and more than 3,000 AI and robotics experts signed an open letter affirming that they have “no interest in building AI weapons and do not want others to tarnish their field by doing so.”

“At some point that ‘preemptive’ is going to come off, and we are going to have to call a ban on fully autonomous weapons unless some action is taken in the meantime,” says Wareham. “We’re very aware that window is closing quickly, and we hear a lot at the UN Convention on Certain Conventional Weapons (CCW) and elsewhere from states that say, ‘Oh, we’ve got no plans to develop fully autonomous weapons systems, we’ll never do that,’ and that all sounds good.”

But if you look at where funding is being directed, as well as the developments that are underway, those claims don’t quite add up, according to Wareham. Last summer, a report from the U.S. Department of Defense called for an increase in spending on and development of autonomous systems, and countries such as Russia and China are moving ahead with their own programs, with Russia claiming it has already deployed fully functioning autonomous machine gun sentinels around missile silos.

One victory for the Campaign to Stop Killer Robots has been governments' agreement to multilateral talks on the subject, which have taken place yearly at the CCW since the end of 2013. Last year, the CCW agreed to formalize these talks, convening a group of governmental experts to examine the subject. But the process hasn't been without obstacles, as some countries that lead in the field, such as Russia and, at times, China, have pushed back. This year's talks have also been delayed twice because of issues related to unpaid UN dues from member countries like Brazil.

“Russia has been really intransigent in the CCW meetings,” says Roff. “They’ve been the ones that have been the most vocally against even talking about it. Meanwhile, China is China, and they talk out both sides of their mouth. On the one hand, they’ll say something really interesting, and then on the other, they’ll say something that undermines their previous rhetoric.”

At the same time, the United States continues to point to Pentagon Directive 3000.09 as its governing policy for LAWS, though the directive only stipulates requirements for the acquisition of weapons systems and doesn't apply to the rest of the government. That directive may also soon cease to exist in its current form: the Trump administration has told the Campaign to Stop Killer Robots that it is reviewing the policy, with the intention of completing that review by November. Trump's affinity for reversing just about anything Obama put in place does not bode well for more robust regulation of these weapons.

It seems the future of dealing with these weapons systems is a race between the push for international regulations from NGOs, the technological progress of advanced militaries and defense contractors, and the public’s awareness of LAWS.

“Right now, it seems like countries just aren’t sure whether it’s best for everyone to have these systems or no one to have them,” says Wareham. “But it’s not going to be the case that only a few will have them, because it never stays that way.”
