We all have a stake in banning “killer robots”. Here’s why.
By Rasha Abdul-Rahim, Researcher/Advisor on Arms Control, Artificial Intelligence & Human Rights
What should we do about “killer robots”? That’s the question states will be asking today when they meet for the annual meeting of States of the UN Convention on Certain Conventional Weapons (CCW). “Killer robots”, more formally known as autonomous weapons systems (AWS), are systems that can decide who to target and when to use force without any human control. They are developing at an alarming rate, and it is in all of our interests to ban them before they become widespread on the battlefield and in policing.
AWS will incorporate algorithms to make decisions about targets and attacks, and the US lethal drone programme provides a stark example of how crude algorithmic decision-making can be. Earlier this year, Amnesty documented in a report on US drone strikes how the USA has been increasingly relying on signals intelligence to choose its targets. Vast amounts of data about an individual’s behaviours and communications are fed into an algorithm, which is then used as the basis for targeting and killing specific individuals, sometimes with tragic results.
Delegating life-and-death decisions to machines
Amnesty International, a member of the Campaign to Stop Killer Robots, has frequently raised concerns about the development and potential use of AWS without meaningful human control. Proponents of AWS argue that removing humans from the equation would increase speed, efficiency and stealth, and would cut out the emotions, such as panic, fear and revenge, that can lead to mistakes and unlawful actions. But there is a fundamental ethical problem with delegating the power to make life-and-death decisions to machines.
Complying with international law also requires a set of inherently human skills, and it is very unlikely that AWS could replicate the full range of characteristics necessary to do so. These include the ability to analyse the intentions behind people’s actions, to assess and respond to dynamic and unpredictable situations, and to make complex judgements about the proportionality or necessity of an attack. The use of AWS would also create an accountability gap if, once deployed, they were able to make their own determinations about the use of force.
AWS are also vulnerable. Without human oversight they are prone to design failures, hacking, spoofing, and manipulation, making them unpredictable. As the complexity of these systems increases, it becomes even more difficult to predict their responses to all possible scenarios, as the number of potential interactions within the system and with its complex external world is simply too large. The development of AWS would inevitably spark a new high-tech arms race between world superpowers, causing these weapons to spread widely to unscrupulous actors.
For all these reasons, Amnesty International is calling for a total ban on the development, production and use of AWS. We need legally binding standards which ensure that humans retain control over the ‘critical functions’ (selecting and attacking individual targets) of weapons systems. This would help ensure respect for international law and address the ethical concerns around delegating life-and-death decisions to machines.
With “killer robots” on the horizon, will states support a ban?
Momentum for a ban has been steadily growing over the past year. At the CCW meetings in April and August this year, a majority of States, including Austria, Brazil, Mexico, and the States forming the African Group and the Non-Aligned Movement, emphasized the importance of retaining human control over weapons systems and the use of force. Most states expressed support for developing new international law on AWS, and so far 26 States have called for them to be banned.
UN Secretary-General António Guterres also voiced strong support for a ban, describing weapons that can select and attack a target as “morally repugnant”. In his Agenda for Disarmament he pledged to support states to elaborate new measures, such as a legally binding instrument. On 12 September a large majority (82%) in the European Parliament called for an international ban on AWS and for meaningful human control over the critical functions of weapons.
Despite this, a small group of states including Russia, the USA, UK, Australia, Israel, France and Germany are blocking movement towards negotiations for a ban. These are all countries known to be developing AWS.
For example, a recent report revealed that the UK Ministry of Defence and defence contractors are funding dozens of artificial intelligence programmes for use in conflict. On 12 November the UK also began Exercise Autonomous Warrior, the biggest military robot exercise in British history, which will run for a month and test over 70 prototype unmanned aerial and autonomous ground vehicles.
The UK has repeatedly stated that it has no intention of developing or using fully autonomous weapons. Yet such statements are disingenuous given how narrowly the UK defines these technologies (“machines with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control”), a definition that makes it easy for the UK to claim that it has not developed, and will not develop, such weapons.
Although Russia has said it believes the issue of AWS is “extremely premature and speculative,” it is developing various autonomous systems. Last year Russian arms manufacturer Kalashnikov announced it would be launching a range of “autonomous combat drones” which would be able to identify targets and make decisions without any human involvement.
France and Germany have proposed a Political Declaration as “a first step” to gather support for the principle of human control over future lethal weapons systems and to ensure they are in full compliance with international law. However, in Amnesty’s view a non-legally binding declaration falls far short of the urgent and serious response needed to address the multiple risks posed by these weapons.
What is clear is that the development of the technology is racing ahead while the international response lags behind. But it’s not too late to rein in weapons that would select and attack targets on their own, and we can do so without stifling technological development in other fields.
Tech companies and AI experts join call for a ban
Encouragingly, momentum has been growing in the private sector. Workers at tech giants like Amazon, Google and Microsoft have challenged their employers and voiced ethical concerns about the development of artificial intelligence technologies that can be used for military purposes and in policing.
For example, in April around 3,100 Google staff signed an open letter protesting Google’s involvement with Project Maven, a programme which uses machine learning to analyse drone surveillance footage in order to help the US military identify potential targets for drone strikes. Google responded by releasing new artificial intelligence principles, including a commitment not to develop AI for use in weapons, and announced that it would not renew the Project Maven contract when it expires in 2019. However, Amnesty still has concerns over the existing contract and questions whether it is consistent with human rights standards or Google’s own principles.
Earlier this year, 242 tech companies (including the XPRIZE Foundation, Google DeepMind and Clearpath Robotics) and 3,179 AI and robotics researchers, engineers and academics signed the Lethal Autonomous Weapons Pledge, committing to neither participate in nor support the development, manufacture, trade or use of autonomous weapons.
If thousands of tech experts are so concerned about the development and potential use of AWS and agree with the Campaign to Stop Killer Robots that they need to be banned, what are governments waiting for?
In Amnesty’s view, legally binding standards are the most effective way of ensuring that humans retain control over weapons systems and the use of force, and States should support and begin negotiations on such standards in 2019. The small group of States heavily investing in this technology should not be allowed to override the will of the majority of States, who want to see meaningful human control exercised over autonomous weapons. After all, it is that majority who are most likely to find themselves on the receiving end of these systems. Today states at the CCW have an opportunity to stay ahead of the game and take steps to protect future generations from AWS, before it’s too late.