Autonomous Warfare

Joe Yu
Computers and Society @ Bucknell
7 min read · Apr 30, 2019

Written by Joe Yu & Brad Beacham

Introduction

Gordon went off on his final mission in Iraq in the summer of 2007. During his final days, he was out searching an intersection for a buried improvised explosive device (IED). As he approached the intersection, the IED was detonated 10 feet away from him. Despite this, he continued to search the area until another bomb detonated and flipped him upside down. Even then, Gordon kept working until a firefight broke out and he was shot seven times in the underside. After three days in the repair shop, he was back in action.[1]

In 2017, Kalashnikov, the Russian arms manufacturer most famous for the AK-47 rifle, unveiled a new robotic gun system that uses artificial intelligence to identify targets and decide whether to shoot. According to Sofiya Ivanova, the director of communications for Kalashnikov, the company was working on a set of fully automated combat modules based on neural networks.[2] In 2019, the company presented an automatic weapon control station governed by combat artificial intelligence. The system was reportedly capable of detecting and recognizing targets, determining the order in which to engage them, issuing commands to the tracking mechanism, and making decisions about opening fire.[3]
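To make that architecture more concrete, the sketch below shows one way such a detect-prioritize-track-decide loop could be structured. This is purely our own illustration: the real system's software is not public, and every class, label, and threshold here is hypothetical.

```python
# Illustrative sketch only: a simplified detect -> prioritize -> track -> fire-decision loop.
# None of these components correspond to any real weapon system's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "hostile_vehicle", "person"
    confidence: float   # classifier confidence in [0, 1]
    distance_m: float   # estimated range to the object

def prioritize(detections: List[Detection]) -> List[Detection]:
    """Order detected objects by a simple threat heuristic:
    closer and more confidently classified objects come first."""
    return sorted(detections, key=lambda d: (d.distance_m, -d.confidence))

def decide_fire(target: Detection, min_confidence: float = 0.99) -> bool:
    """A stand-in for the 'decision to open fire'. A real system would need far
    stricter checks (rules of engagement, human authorization, and so on)."""
    return target.label == "hostile_vehicle" and target.confidence >= min_confidence

def control_loop(detections: List[Detection]) -> None:
    for target in prioritize(detections):
        # Point the tracking mount at the highest-priority target first.
        print(f"tracking {target.label} at {target.distance_m:.0f} m")
        if decide_fire(target):
            # This is the step critics argue should never be fully automated.
            print("engagement criteria met")
```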

It is beyond question that military robotic systems have reshaped modern warfare. Combining the most advanced electronic, computer, surveillance, and weapons technologies, the robots of today have extraordinary capabilities and are quickly changing the landscape of battle and the dynamics of war. In particular, the development of autonomous weapon systems (AWS) capable of making decisions and carrying out tasks without human intervention carries great military and social significance.

Recently, a military drone AI project involving Google has drawn much attention and criticism. In 2018, over 4,000 employees signed a petition asking Google to end its involvement in Project Maven, an AI human and object detection system to be used on military drones. Over a dozen employees resigned in protest as well, citing the company’s “don’t be evil” clause. Despite the heavy opposition, Google did not drop the project, and while it decided not to renew the contract when it expired in 2019, it has been noted that the company taking over the project will continue using Google’s services to develop the AI.

Unmanned systems have seen much greater use in the field in recent years, but some find the rise of offensive unmanned systems alarming. Many believe that the adoption of such systems will lead to greater distrust of artificial intelligence and machine learning programs in the future. We will begin by exploring the unmanned military systems currently in use and why autonomous weapon systems might be beneficial, and then turn to the ethical and technical challenges such systems bring.

Unmanned Systems In Warfare

Currently, unmanned systems are used in every branch of the military, and are especially helpful with tasks that are dirty, dull, or dangerous. Some of the major benefits of using unmanned systems are force multiplication, expanding the battlespace, persistence, extending the warfighter’s reach, and reducing human casualties.

In addition to being able to take on tasks that are undesirable or even dangerous for human soldiers, robots can be designed to specialize in specific tasks, which enables them to complete assignments with greater efficiency at lower cost. The use of unmanned systems can also potentially lower the casualty rate and avoid other logistical issues. These are all major benefits for any military force. However, the same advantages mean that the effects of using these systems against other forces or populations can be all the more devastating.

Surveillance and targeted attacks can be carried out much more easily with an unmanned or autonomous system than with a manned one. When a human soldier or group of soldiers is sent in to perform an assassination, they put themselves in immediate danger by entering enemy territory. Additionally, if a soldier were captured or the enemy discovered who was attacking, there could be severe repercussions. An autonomous unmanned system, however, raises none of those concerns. A robot cannot easily be captured and interrogated, there is no need to bargain for the safe return of a microchip, and a robot that blows itself up can hardly be traced back to its origin. Autonomous killing machines pose many potential threats that make them especially frightening to many groups. The greatest of these risks are that we currently have no good way to defend against them and that they can be used to terrorize and control much larger populations.

Controversies with Google’s Project Maven

The Maven drone AI project, if not carefully regulated, could result in exactly the kinds of scenarios described above. Although Project Maven was proposed with the good intent of identifying terrorists and other potential threats to society, the system could open the door to domestic surveillance or even to integration into targeting systems. On top of these troubling implications is the more immediate problem of credibility: the learning and decision-making process of the AI may be so complex, and at times so unpredictable, that it is hard to verify whether the system is actually making the right decisions. Such unpredictability could lead to further distrust of AI and machine learning programs in the future.
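One way to address that credibility problem is to force the system to defer to a human whenever its confidence is low and to log every call for later audit. The snippet below is our own hypothetical sketch of that idea; it is not based on Project Maven’s actual code, and the threshold and file name are arbitrary assumptions.

```python
# Hypothetical sketch: confidence-gated, auditable decisions for an image classifier.
import json
import time

AUDIT_LOG = "decisions.jsonl"       # assumed log location, for illustration only
CONFIDENCE_THRESHOLD = 0.95         # below this, a human analyst must review the frame

def classify_frame(frame_id: str, label: str, confidence: float) -> str:
    """Return 'auto' if the model's call is accepted automatically,
    or 'human_review' if it must be deferred; record the decision either way."""
    decision = "auto" if confidence >= CONFIDENCE_THRESHOLD else "human_review"
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "time": time.time(),
            "frame": frame_id,
            "label": label,
            "confidence": confidence,
            "decision": decision,
        }) + "\n")
    return decision

# Example: a low-confidence detection is routed to a human instead of being trusted.
print(classify_frame("frame_0042", "person", 0.62))  # -> "human_review"
```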

According to the ACM Code of Ethics, computer scientists should avoid unexpected harm to others, comply with existing laws, and contribute to overall social and human well-being.[6] Although no laws currently bind the use of autonomous weapon systems specifically, international frameworks such as the Law of Armed Conflict (LOAC) do prohibit the use of lethal force against non-combatants such as civilians and surrendering soldiers. Given its potential to be misused as a tool for assassination or constant surveillance, Project Maven could certainly end up harming the innocent, thereby violating the LOAC. It is therefore the developers’ obligation to prevent the system from being misused.

Broader Ethical Concerns

Aside from issues with the potential misuse of these systems, many improvements still need to be made before autonomous systems can be put to greater use. Autonomous weapon systems are complex and prone to component failures and malfunctions. Everything breaks eventually, so these systems need to be made as durable as possible. They also need to be adaptable enough to react when things go awry. A robot that walks on four legs will need to adapt if it loses a leg and continue operating. Similarly, if a robot that relies primarily on cameras for vision is facing the sun, the images it captures will be far less useful and harder to analyze. It would be beneficial to have a system capable of reallocating its resources, for example leaning on other sensors, so that it can make the best possible use of whatever data it can still take in.
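As a toy illustration of that kind of graceful degradation, a system might rank its sensors by a current quality score and fall back automatically when one degrades, refusing to act at all if nothing trustworthy remains. Everything in this sketch, including the sensor names and thresholds, is invented for illustration.

```python
# Toy example of graceful degradation: pick the healthiest available sensor each cycle.
from typing import Dict, Optional

def best_sensor(sensor_health: Dict[str, float], min_quality: float = 0.3) -> Optional[str]:
    """sensor_health maps a sensor name to a 0-1 quality score
    (e.g. camera quality drops sharply when the lens faces the sun)."""
    usable = {name: q for name, q in sensor_health.items() if q >= min_quality}
    if not usable:
        return None  # nothing trustworthy: stop and wait rather than act on bad data
    return max(usable, key=usable.get)

# Example: the camera is nearly blinded, so the system falls back to lidar.
print(best_sensor({"camera": 0.1, "lidar": 0.8, "thermal": 0.6}))  # -> "lidar"
```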

Another critical issue that needs to be addressed is compliance with just-war theory and other international conventions. A lethal autonomous system should be able to distinguish targets from non-targets (civilians, friendlies, etc.), and should also be able to recognize surrendering enemies and other non-threatening situations. A mechanical soldier is replaceable, but if it ends up shooting other mechanical soldiers on its own side (or worse, humans), it is as good as useless.
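To give a sense of what “distinguishing targets from non-targets” might look like at the software level, here is a minimal, hypothetical engagement gate that defaults to not firing: anything in a protected category, or anything the classifier is not extremely sure about, is off limits. A real implementation would be vastly more complicated, and the labels and threshold here are assumptions for illustration only.

```python
# Hypothetical engagement gate: never engage protected categories or uncertain detections.
PROTECTED = {"civilian", "friendly", "surrendering", "medical", "unknown"}

def may_engage(label: str, confidence: float, min_confidence: float = 0.999) -> bool:
    """Default to 'no': engagement is allowed only for a confidently
    classified combatant that shows no sign of surrender."""
    if label in PROTECTED:
        return False
    return label == "combatant" and confidence >= min_confidence

assert may_engage("combatant", 0.9995)
assert not may_engage("surrendering", 1.0)   # surrendering enemies are never targets
assert not may_engage("combatant", 0.80)     # too uncertain: err on the side of not firing
```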

In addition, there is the issue of human trust (or the lack thereof) in autonomous weapon systems. The portrayal of autonomous systems in the media, as well as the wide publicity given to any major failure, has led to a general distrust and fear of these systems in the international community. Many people and organizations are strongly opposed to using artificial intelligence and machine learning in any aspect of warfare, not only because they personally distrust such systems, but because they believe that military use could slow the acceptance of AI and machine learning in general, leading to an overall distrust of an incredibly useful technology.

Finally, autonomous systems must achieve interoperability. An autonomous, unmanned system must be capable of working in virtually any environment and with any team. A drone should be just as effective working alongside human pilots as it is working with other autonomous drones. A robot that fights on the ground should be capable of listening to orders from a human soldier, coming up with both a goal to complete and a plan for achieving it, and then executing those orders. If the technology cannot interact with human soldiers, it can never be used to its full potential.

Conclusion

Given the current state of unmanned systems and the potential legal and ethical issues with autonomous military systems, especially in the case of Project Maven, Google’s employees were justified in their response to the company’s involvement in the project. While an autonomous drone system like this would be very useful for performing surveillance on terrorists, it could also be put to use at home watching over the general public, and such systems could be modified for offensive use in the future. It is a programmer’s obligation to reduce risks and to carry out any project as ethically as possible. Google’s employees were morally opposed to the development of Project Maven not just because of its potential for misuse, but because they were opposed to the general idea of building technology that can be used to help kill people. Project Maven was meant to be Google’s entry into the government contracting business; by opposing it, Google’s employees signaled that they were against creating any systems for use in war, which may make it harder for Google to win military contracts in the future.

References

[1] Unmanned systems integrated roadmap: FY2011–2036. (2012). Washington: Department of Defense.

[2] Paul Bedard, The Maker of the AK-47 Made a Robotic, AI Gun System for Russia. (2017). Washington Examiner. Retrieved from https://www.businessinsider.com/the-maker-ak-47-made-robotic-ai-gun-system-for-russia-2017-7

[3] AI Machine Gun Destroys Targets on its Own. (2019). Military.com. Retrieved from https://www.military.com/video/ai-machine-gun-destroys-targets-its-own

[4] Gary E. Marchant, Braden Allenby, Ronald Arkin, Edward T. Barrett, Jason Borenstein, Lyn M. Gaudet, Orde Kittrie, Patrick Lin, Geo. International Governance of Autonomous Military Robots. 12 Colum. Sci. & Tech. L. Rev. 272 (2011)

[5] Gregory P. Noone and Diana C. Noone, The Debate Over Autonomous Weapons Systems, 47 Case W. Res. J. Int’l L. 25 (2015)

[6] Acm.org. (2016). ACM Code of Ethics and Professional Conduct. [online] Available at: http://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct

[7] Unmanned systems integrated roadmap: FY2017–2041. (2018). Washington: Department of Defense.
