ICTC’s Tech & Human Rights Series

Can We Stop Killer Robots?

Khiran O'Neill · ICTC-CTIC
Sep 28, 2020 · 10 min read


On September 2, 2020, the Information and Communications Technology Council (ICTC) spoke with tech policy researcher and human rights advocate Marta Kosmyna as part of ICTC’s Technology and Human Rights series. Marta has worked with Human Rights Watch and the Campaign to Stop Killer Robots. Khiran O’Neill, Research and Policy Analyst with ICTC, spoke with Marta about the campaign, autonomous weapons, the ethics of tech development, and arms races.

Khiran: Thank you so much for speaking with me today. Can you explain what the Campaign to Stop Killer Robots is, for those who might not be familiar?

Marta: Absolutely. The Campaign is a coalition of over 160 organizations working in about 65 countries, and they have the singular goal of preventing the development of fully autonomous weapons systems — we call them killer robots for short. Basically, these are weapons that would be able to select and engage targets without any meaningful human control or oversight. What we see right now are systems that use machine learning but still operate with human oversight. But given how quickly these systems can be developed by the US military and other militaries as well, we risk an AI arms race where weapons are developed with greater levels of autonomy. When it comes down to it, this leads to machine algorithms making life and death decisions. And that’s not the future that I envision. So I’m actively working to fight against it.

Khiran: Can you explain how you came to work on killer robots?

Marta: Sure. It’s a bit of a convoluted path. My initial research focused on civilian protection in conflict zones. That ranged from the Russia-Ukraine conflict, where I documented weapons being used on the battlefield in 2015, to my later work at Human Rights Watch, where I tracked Russia’s involvement in Syria and other weapons used in various wars around the world, and how they affected civilians.

One of the trends I noticed was greater levels of autonomy in the weapons systems that were used in battle. These weapons, while being marketed as more precise, were in fact still killing thousands of civilians — women and children. And so I became really passionate about talking to people who were actually developing the software systems that would be used in weapons and helping them understand how their work could lead to them unintentionally developing killer robots.

Khiran: That’s really interesting. What does your work on killer robots look like right now?

Marta: I’m working on this interactive data visualization project that shows stakeholders within the [US] Department of Defense, the technology industry, academia, and the financial industry. It maps who’s influencing US military policy on artificial intelligence. There is a lack of transparency in this space, as anyone who’s ever tried to research military contracts has found. But I’m excited about people who are investigating, reporting, and putting out the research that helps the public better understand these systems. That’s the only way we’re going to push for change. If we can document what’s happening with current systems and understand where they are headed, we can mitigate some of these unintended consequences.
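To make the idea of an influence map a bit more concrete: one plausible way to structure that kind of data is as a directed graph of stakeholders and relationships. The sketch below is purely illustrative; the stakeholder names, relationship labels, and graph structure are assumptions made for the example, not data or code from the project Marta describes.

```python
# Hypothetical sketch of an "influence map" as a directed graph.
# All stakeholder names and relationships are illustrative placeholders.
from collections import defaultdict

# adjacency list: influencer -> list of (influenced party, relationship type)
influence_map = defaultdict(list)

def add_link(source, target, relationship):
    """Record that `source` influences `target` via `relationship`."""
    influence_map[source].append((target, relationship))

# Illustrative entries only
add_link("Example Cloud Vendor", "Example Defense AI Office", "cloud-computing contract")
add_link("Example University Lab", "Example Defense AI Office", "funded research")
add_link("Example Venture Fund", "Example Cloud Vendor", "investment")

def who_influences(target):
    """Return every stakeholder with a recorded link into `target`."""
    return [(source, rel)
            for source, links in influence_map.items()
            for tgt, rel in links
            if tgt == target]

if __name__ == "__main__":
    for source, rel in who_influences("Example Defense AI Office"):
        print(f"{source} -> Example Defense AI Office ({rel})")
```

In a real project, the same graph could feed a visualization tool; the underlying idea is simply a set of documented links between funders, contractors, and policy bodies.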

Khiran: I understand that part of the Campaign to Stop Killer Robots involved collecting pledges from tech workers and companies to ensure that these weapons don’t get used, and I was wondering what you think the role and responsibility of the tech sector is when it comes to autonomous weapons.

Marta: Right. There are so many ways for tech companies and for tech workers to get involved. But in my role as Silicon Valley Lead [at the Campaign to Stop Killer Robots], we really broadened that out because there were students, budding engineers, who were going to be interning and working at big tech companies and startups in the future. And there were venture capitalists who were working on financing various startups and companies that were working on robotics and AI. So really, there are lots of different stakeholders that are important in preventing the incorporation of artificial intelligence into weapons systems without meaningful human control. The pledge was a really easy way for tech workers to say, “I’m on board. I’m willing to be held accountable. You know, I’ve read your reports, I’m part of your team.” It was part of building a movement to get people more involved so that they could do things like start talking to their representatives. We brought a couple tech workers to the United Nations with us to talk directly with policymakers to bridge that gap between tech and government.

Khiran: I’m glad you mentioned policymakers because that brings up another question I wanted to ask. What’s their role in ensuring that autonomous weapons are avoided?

Marta: For the Campaign specifically, [policymakers are] working on developing a new treaty at the United Nations that countries would sign on to, saying unequivocally that there is a legal boundary that does not allow fully autonomous weapons to exist. Those talks have been going on for seven years now. They move slowly, so the Campaign is looking for other ways to invigorate this conversation. Really, what needs to happen is for countries to step up and, either individually or within a regional coalition, create policies. Right now, there are a lot of great discussions happening at various events. Coalition members from the Campaign are holding events to educate policymakers. But we’re slightly beyond the point of just raising awareness, and we really need to push for people taking action.

To give you a concrete example, in the United States, the military has a policy within the Department of Defense: Directive 3000.09. The policy requires “appropriate levels of human judgment over the use of force,” which is kind of ambiguous language, and it’s not actually written into law. If a war were to break out, that policy could be waived by a senior Department of Defense official at any point. So what I’m advocating for right now, and what my research is focused on, is how we go from raising awareness to actually creating laws that are written in stone, where our elected representatives provide oversight and guidance to the military, law enforcement agencies, intelligence agencies, and all the other groups that would be involved in the use of fully autonomous weapons.

Khiran: In addition to this particular United States military directive, are there other regulations elsewhere regarding autonomous weapons that you’re aware of?

Marta: Yeah. We have a collection of statements, articles, and policy proposals from nations around the world that have either committed to supporting a ban on fully autonomous weapons, or they’ve articulated a definition of what they consider a fully autonomous weapon. And there’s been a lot of research that think tanks, academia, and other policy outfits have come out with.

What I really don’t want is a repeat of the casualties that occurred before the Chemical Weapons Convention or the Biological Weapons Convention were adopted. A huge amount of money was thrust into the research and development of those weapons, and they were used against a great number of victims. They were banned only after the fact. I don’t want there to be victims of fully autonomous weapons before the United Nations can step up and pass a treaty, before nations individually step up and create policies. That’s the worst-case scenario. That’s why I think tech workers, students, the financial industry, all of us have a role to play in stopping their development.

Khiran: Yeah, I can see how that’s a concern. Sometimes it seems that policy has to play catch-up rather than be proactive. Since you brought up these conventions on chemical and biological weapons, I’m wondering, are killer robots different than other weapons or arms races?

Marta: The way I look at it, proliferation becomes a bigger concern with fully autonomous weapons because, quite frankly, they can be developed very cheaply. You can develop very dumb killer robots right now. Countries with large military budgets are trying to develop fully autonomous weapons that can supposedly comply with the laws of war — which is not technically possible right now. There are principles of proportionality and distinction that can’t be effectively programmed into a machine.

These weapons can proliferate much more quickly and be developed more cheaply. They may become an equalizing force between the great powers and non-state actors. There’s enough concern to get out ahead of research and development before these weapons are operationalized or sold. There is this slow creep toward increasing levels of autonomy where, once weapons are adopted by the military, they also trickle down to domestic law enforcement and other government agencies, which I think is a huge concern.

Khiran: I think of the ethical discussions around autonomous vehicles. Inevitably, autonomous vehicles are going to be involved in car crashes, but ideally they’ll result in fewer deaths than those caused by human error. How does this translate to autonomous weapons?

Marta: I really see autonomous vehicles as different because they aren’t designed to kill. You have policy debates happening about what is an acceptable threshold of risk for autonomous vehicles. But they are literally designed to not harm humans. What we’re talking about with killer robots are systems that use things like machine learning, computer vision, pattern recognition, cloud computing — really common things used in our world today — that would be incorporated into a weapon specifically designed to kill.

Khiran: What are some human rights related opportunities when it comes to AI and automation or other emerging technologies?

Marta: I talk to a lot of developers who have great ideas on how to use technology to protect human rights. I would talk through their project with them and realize that sometimes just because something can be built doesn’t mean it should be built. Oftentimes when you create a technology that can be used to protect human rights, other challenges arise. Say you have a recording device that can document police violence. If you find yourself in that sort of situation, you can just press a button and it starts to record. Well, that puts the onus on the user to be responsible for preventing state violence, essentially. Or if you develop a platform for human rights activists or journalists to use in order to keep their communications or documents safe, then that platform could become a target for hacking. So I always caution against technology projects that are specifically for human rights activists. I think it’s better to develop systems and projects that are broadly used and support ideals that protect at-risk groups without making them a target as well.

Khiran: Other than autonomous weapons, what do you see as the most pressing concerns when it comes to the interaction of technology and human rights?

Marta: I look a lot at the role of big data, privacy, and surveillance. I’m concerned about things like facial recognition and other biometric tracking technologies that are really in their nascent stages right now. Especially from a human rights perspective, I think there’s a lot of potential for harm, given current conflicts, protests, and legal cases showing potential misuse even when organizations claim to anonymize their data. I think the EFF [Electronic Frontier Foundation] and ACLU [American Civil Liberties Union] are doing some incredible work to raise awareness on these issues and push for policy that actually protects our rights.

Like you said, technology is moving way faster than policymakers are moving, so there are a lot of issues at play. There are a lot of opportunities for research, for policymaking, and for folks to get involved. Nothing is too niche. The US military at one point was testing heartbeat detection to identify individuals, which is so beyond facial recognition. So I think students, developers, and senior engineers need to be aware of how these technologies can be used. They need to be aware of what money, investments, and sponsorship they’re accepting. And they need to ask questions, to organize, and to work as a team. If they come across something that’s ethically wrong or questionable, there need to be avenues for addressing that and sometimes saying, “Even though we’ve put this much money or effort into this project, we can’t release it. We can’t develop it.”

Khiran: You’ve really dedicated your work to this topic. Are you optimistic?

Marta: Always! There is this huge, powerful movement — especially from the tech-worker community, supported by academics and civil society organizations — that pushes beyond [the status quo acceptance of autonomous weapons]. And now, there’s this movement where the public is demanding more from the technology we use. We’re becoming more aware of both the limitations of technology and some of the scary possibilities. And frankly, we’re at this precipice where we have to push for change. We have to educate ourselves and the people around us to build a future that we want to be a part of. When the [US] Department of Defense was looking at ethics in the past, they were looking at things like code of conduct violations. I think they’re now looking at technology ethics in a different way, which wouldn’t be possible without external consultations and some of the astounding researchers who have focused on racial justice, algorithmic bias, data privacy, and anti-surveillance work. We’ve seen each of these focus areas grow into their own discipline.

I think that’s really powerful. Without people stepping up, that wouldn’t have happened. So we need to nurture students and people from the beginnings of their journeys as engineers, computer scientists, roboticists to people who become founders, CEOs, decision makers, and leaders — to essentially build the future that we want, one that we can be proud of.

Marta is a technology policy researcher and human rights advocate. She testified at the United Nations on the use of artificial intelligence in warfare and law enforcement, and spoke at conferences on tech ethics around the United States. She previously served as Silicon Valley Lead for the Campaign to Stop Killer Robots, where she advised engineers and roboticists on the ethical consequences of AI in warfare and policing, and the human rights implications of developing tech for government, military and law enforcement use. Prior to that, she worked at Human Rights Watch, the Institute for the Study of War, and the Ohio House of Representatives. More information is available at martakosmyna.com.
Khiran is a Research and Policy Analyst at the Information and Communications Technology Council (ICTC), a national centre of expertise on the digital economy. Khiran’s interests and experience lie largely in the future of work, experiential learning, and meaningful and equitable employment — all of which he’s fortunate to work on at ICTC. Khiran holds a Master of Public Administration from the London School of Economics.
