We need a global ban on autonomous weapons
Recently, various experts called for a global ban on autonomous weapons. Autonomous weapons are weapons that can search for and eliminate people meeting certain predefined criteria by themselves, without human interaction.
The experts, including Stephen Hawking and Elon Musk, argue that an autonomous weapons arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control. Such an arms race is a bad idea for several reasons. First of all, if a great military power pushes ahead with artificial intelligence weapon development, a global arms race is inevitable, and autonomous weapons will become the main weapons of the future. Unlike today’s most powerful weapons, autonomous weapons do not require expensive materials. They will therefore be available to many different kinds of forces in the world, including terrorists and other malicious actors willing to cause harm to humans. The experts state that “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” It seems obvious that we should ban autonomous weapons globally. The legal question of how to ban such weapons is a different one, and it is the subject of this article.
If the purpose is to ban autonomous weapons, one has to clarify that this measure is in accordance with the principles of proportionality and subsidiarity. A global ban is a rather radical measure. Is a global ban reasonable and proportionate to the expected danger? Are there no less radical solutions to prevent the danger of autonomous weapons? What are the alternatives? One option is to restrict the use and ownership of autonomous weapons. One may think it is sufficient to explicitly limit the legal use of autonomous weapons to national armies and people with a license. However, this is already the case with many weapons, certainly in most parts of Europe (military forces, the police and people with a gun license). This legal situation does not mean that no one else possesses a weapon. We all remember the horrible Charlie Hebdo attacks in Paris, and in Amsterdam alone about ten people were shot this year because of what the authorities call “a settlement in the criminal underworld.”
Considering that autonomous weapons will be far more harmful than guns, it is obvious that a license system is not a sufficient measure.
The best way to introduce a global ban is by a treaty, as suggested by the United Nations. Autonomous weapons could be added to the list of weapons prohibited by the Convention on Certain Conventional Weapons. The purpose of the Convention is to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately, such as blinding laser weapons or mines and booby traps. It might even be necessary to prohibit not only the use of autonomous weapons, but also the development and production of autonomous weapons through an international legally binding agreement.
One of the questions that has not often been addressed is how to define autonomous weapons. The open letter written by various experts on artificial intelligence defines autonomous weapons as “weapons that can search for and eliminate people meeting certain predefined criteria themselves, without human interaction.” But what about weapons that do not eliminate people, but merely harm them? Or what about weapons that cannot search for and eliminate people meeting certain predefined criteria by themselves, but can do so with just a little human interaction? These kinds of questions are hard to answer and should be addressed in a treaty banning autonomous weapons.
Another argument against a global ban is that the use of autonomous weapons may prevent great danger in some cases. A global ban would cause harm in those situations; therefore, the argument goes, a global ban is not the right solution. This argument is easy to tackle with a utilitarian approach: a global ban is the best solution as long as it prevents more damage than it causes.
Other counterarguments are mentioned by former U.S. Army officer and autonomous weapons expert Sam Wallace in this article. He writes: “It would be impossible to completely stop nations from secretly working on these technologies out of fear that other nations and non-state entities are doing the same. (…) It’s not rational to assume that terrorists or a mentally ill lone wolf attacker would respect such an agreement.” Wallace’s main argument comes down to the rather bold statement that “banning a weapons system is unlikely to succeed, so let’s not try.” In a response to his article, other artificial intelligence experts counter this argument by pointing to the rather successful bans on biological weapons, space-based nuclear weapons, and blinding laser weapons, concluding that a global ban is certainly possible. When the majority of countries join the treaty, this will create a widely recognised new standard that will influence even countries that do not join, because they would face serious international disapproval.
We need a ban — soon
If we do not ban autonomous weapons before they become available to a military power, we risk that they will eventually be available to virtually anyone who wants one. With new technologies like 3D printing, cheap hardware, and increasingly professional cybercriminals, the risk is high that the technology behind autonomous weapons will be stolen from the first power that develops it and easily copied afterwards. To prevent this from happening, we need a global ban before a military power has developed an autonomous weapon. Given the rapid development of artificial intelligence in general, it is likely that the technology to make a real and dangerous autonomous weapon will be developed in the next three years. That is why we need a ban — and soon.
Originally published at iusvita.com on August 19, 2015.