Could Military Robots Humanize War?

Military robotics can’t be dismissed straight away

Would it be mere science fiction and utopianism to dream of a world where no soldiers and no civilians die? Although our world has become considerably more secure than it was at the beginning of the 20th century, wars and conflicts continue, and the modern security questions raised by terrorism cast dark clouds over it. Could robots be the silver lining we are looking for? In a world where wars were fought by advanced and efficient robots, both civilians and soldiers could be spared. Freed from the risks posed by espionage, the long-forgotten project of a UN army could be resurrected. Cyber and autonomous weapons would be controlled under international conventional regimes that would satisfy all States as well as civil society. A stable balance would be struck between military necessity and humanitarian imperatives, ensuring that new technologies of war respect International Law and human lives. The dehumanization of the battlefield could enable the humanization of war.

  • An IAI Harpy being launched. It autonomously tracks and destroys radars.
Intensive research and discussions must be conducted

A great number of difficulties lie ahead, but they may be resolved through research and discussion. Firstly, current international negotiations held within the United Nations are limited to machines defined as fully autonomous: the States most concerned by autonomous weapons recognise no gradation of autonomy whatsoever. This vision of autonomy tends to overshadow immediate and important topics such as cyber weapons, which may become autonomous, and drones, which are the basis for any future autonomisation process. On the other hand, some actors adopt an opposite and equally extreme position, arguing that autonomous weapons should be banned outright without further discussion. It is crucial that research and debate take place now, before autonomous weapons are widely deployed among armed forces, as it will be considerably harder to regulate these weapons internationally once they are widespread. If nothing is done now, patently illegal weapons could be developed and used, leading to serious war crimes. Once such weapons are widespread, regulating them could impose great costs on the States that already possess them: adapting existing models or, more drastically, getting rid of them altogether. More generally, autonomy seems to challenge a body of law that is by essence anthropocentric: the legal review of weapons required by Article 36 would have to be completely readapted to weapons slipping out of man's direct control.

  • Agent Smith from the Matrix movies could be described as an autonomous cyber weapon.
The way ahead: a balanced approach

Concrete solutions and balanced requirements have to be set now, so that the development of these technologies follows a precise roadmap leading to better humanitarian outcomes. Precise legal questions have to be answered, such as the need for clarity regarding perfidy and treachery: would robots disguised as humans be perfidious, or would this merely be a ruse of war? Quarter also seems problematic: using autonomous armament with a level of precision amounting to absolute lethality would be tantamount to conducting warfare by denying quarter. The ethical implications of using weapons that make or influence the decision to use force have to be clearly outlined, and the plausibility of autonomy should inform and enlighten this debate. Intensive discussion and study of this subject will help us understand precisely the general feeling of unease that strikes people when they think about killer robots, and thus confirm it or prove it wrong.

Léonard Van Rompaey — PhD fellow in International Law on Autonomous Weapons