The Current State of AI in Warfare

Ellie Harris
Warwick Artificial Intelligence
3 min read · Feb 23, 2022


Artificial Intelligence is here, and there are few areas of modern civilization that won’t feel its transformational effects.

Some of these exciting developments are being seen in medicine, with the da Vinci robotic surgical system (Amisha et al., 2019), which mimics a surgeon’s hand movements to assist in complex and precise operations, and in drug discovery, where pattern recognition is used to infer the biological effects of chemicals from the properties of structurally similar compounds (Chemical Reviews, 2019). However, as AI is built into systems with ever greater impact on our society, it is likely to become a deeply divisive and controversial topic.

Perhaps most concerningly, AI is now being used for military applications, with weapons under development that can autonomously target and attack a person or group based on predetermined criteria (Lee, 2021). These are known as Lethal Autonomous Weapon Systems (LAWS) and are being developed by governments globally, yet we are only just starting to see them used: the first reported use, an autonomous drone strike in Libya in March 2020, was described in a UN report (Hambling, 2021). These weapons could revolutionise military operations, for better or for worse, as it becomes easier to make weapons act on their own for long periods of time. This idea was elucidated by a LAWS specialist whom we interviewed in the lead-up to this essay series: “there needs to be some kind of target set and engagement criteria. After that, it’s basically autonomous.”

The precision of LAWS, and their ability to operate on a much smaller scale, could challenge the existing ethical norms and legality of war by preventing unnecessary civilian deaths (Piper, 2019). However, moving the killing outside of human hands is ethically ambiguous: who would we blame if something were to go wrong? One example of this problem is the issue of alignment: if we cannot demonstrably prove that AI systems are doing what we mean rather than what we say, how can we be sure that a system will not take actions beyond our intentions (Shaabi, 2022)? Further, bias within training datasets could lead an algorithm to target the wrong person or group of people. Who would be to blame if this occurred? Does it matter that no one would be? And as AI development becomes cheaper, we also run into the issue of LAWS becoming more accessible to society as a whole, enabling terrorism and accelerating the pace of war as more countries gain access to this potentially deadly machinery.
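To make the alignment worry concrete, consider a deliberately simplified, purely hypothetical sketch in Python. Every record and criterion below is invented for illustration and reflects no real system; the point is only that a rule faithfully executes the criteria it was given, not the intent behind them.

```python
# A toy illustration of the alignment problem: the rule below does exactly
# what we *said* ("flag anything matching the predetermined criteria"),
# not what we *meant* ("flag only military targets").
# All data and criteria are hypothetical.

records = [
    {"id": 1, "vehicle": "truck", "colour": "green", "category": "military"},
    {"id": 2, "vehicle": "truck", "colour": "green", "category": "civilian"},  # aid convoy
    {"id": 3, "vehicle": "car", "colour": "white", "category": "civilian"},
    {"id": 4, "vehicle": "truck", "colour": "green", "category": "military"},
]

def matches_criteria(record):
    """Predetermined engagement criteria, written as a simple proxy rule."""
    return record["vehicle"] == "truck" and record["colour"] == "green"

flagged = [r for r in records if matches_criteria(r)]
false_positives = [r for r in flagged if r["category"] != "military"]

print("flagged ids:", [r["id"] for r in flagged])                 # [1, 2, 4]
print("false positive ids:", [r["id"] for r in false_positives])  # [2]
```

The system here behaves exactly as specified, yet the outcome diverges from the intent because the criteria describe appearances rather than combatant status. Scaled up to real sensors and real people, that gap is the crux of the accountability question.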

Warwick AI have anonymously interviewed one person familiar with the use of Lethal Autonomous Weapons in warfare, alongside conducting a survey of the opinions of students at the University of Warwick. The result of this exploration is the following series of four short essays, this being the first. The following three will explore LAWS in greater detail, covering the military arms race, the politics of LAWS, and their uses outside of the military.

References / Resources

Amisha, Malik, P., Pathania, M. and Rathaur, V. (2019). Overview of artificial intelligence in medicine. Journal of Family Medicine and Primary Care, 8(7), p.2328. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6691444/.

Chemical Reviews. (2019). Concepts of Artificial Intelligence for Computer-Assisted Drug Discovery. Available at: https://pubs.acs.org/doi/10.1021/acs.chemrev.8b00728.

Hambling, D. (2021). Drones may have attacked humans fully autonomously for the first time. New Scientist. Available at: https://www.newscientist.com/article/2278852-drones-may-have-attacked-humans-fully-autonomously-for-the-first-time/.

Lee, K.-F. (2021). The Third Revolution in Warfare. The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2021/09/i-weapons-are-third-revolution-warfare/620013/.

Piper, K. (2019). Death by algorithm: the age of killer robots is closer than you think. Vox. Available at: https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war.

Shaabi, N. (2022). An Introduction to AI Safety: AGI and Superintelligence. Medium. Available at: https://medium.com/warwick-artificial-intelligence/an-introduction-to-ai-safety-agi-and-superintelligence-458d80ca109d.
