The Autonomous Imperative

Peter Lucido
Published in b8125-fall2023
Nov 19, 2023

With AI, robotics, and drone technology developing at breakneck speed, now is an essential time to make societal decisions surrounding their ethical deployment. In no area are these decisions more pressing than warfare.

Unmanned warfare is not new. The first recorded kill by a United States UAV dates back to October 7th, 2001, more than 22 years ago as of this writing, and it has been a hotly discussed topic ever since. From the conventionally aircraft-sized MQ-9 Reaper, capable of carrying and launching munitions, to the handheld, short-range, reconnaissance-focused RQ-11 Raven, UAVs have evolved to come in all shapes and sizes. No matter their size or domain, the thread that connects the current generation of unmanned systems is the continued need for active human operation or decision making. What separates the new era of autonomous warfare from previous, semi-autonomous iterations is the fundamental removal of this man in the loop.

This can be a scary premise. As described by Paul Scharre in a 2019 talk at Stanford (which I highly encourage readers to seek out), there can be a stark difference between what is legal and what is right in warfare. Should a child who is acting in support of enemy combatants be targeted? Should a high-profile terrorist leader be bombed at a place of worship or a funeral, risking the lives of innocent civilians? These are questions that humans struggle to answer with any clear set of rules or experiences, let alone ones we can encode into algorithms.

Given the black-box nature of the most common modern machine learning algorithms, how can we be sure that these systems will act in accordance with our principles? Are we, as a society, comfortable offloading our moral burden to an opaque and, at first, untested algorithm? These are some of the primary concerns surrounding the full implementation of military autonomy, and they are questions we will inevitably face in the near future.

In a previous life, I would have said the implementation of fully autonomous weapons systems was unacceptable. The possible gains from their development and deployment could never equal the moral hazard of their misuse. For me, this was not just an ideological position; it was one I acted upon. Early in my career, I had the opportunity to pursue a position working directly on the development of autonomous military systems. Despite the allure of the new position, I decided at the time that I was ethically unwilling to play a part in building a world of “killer robots.”

Since then, my perspective has shifted. While I initially saw autonomous warfare as something that could be averted, I now believe its development is inevitable. Even if the United States decided not to pursue autonomous weaponry, other countries inevitably would, putting the people and the interests of the United States at risk.

A government’s primary responsibility on the global stage is to protect the lives and interests of its citizens. Through this lens, war can be seen as sacrificing one of a government’s essential commitments for the other: the people for their interests.

Per the Defense Casualty Analysis System, as of November 14, 2023, the United States has had a total of 53,411 service members wounded in action and 28,212 active duty deaths since the year 2000. In my mind, each casualty listed represents a failure of the United States government to protect its citizens. This failure is even more reprehensible in countries with mandatory military service, where citizens do not even take on these risks voluntarily. As such, if implementing fully remote warfare systems could reduce these numbers by even a fraction, it is in my mind an ethical requirement for governments to do so.

To take this a step further, I argue that it is a moral imperative for governments to transition as quickly as possible to fully autonomous systems. Per Jamal Campbell’s paper Psychological Effects on UAV Operators and Proposed Mitigation Strategies to combat PTSD, operators of remote, semi-autonomous systems remain “vulnerable to permanent psychological damage.” PTSD and other detrimental psychological conditions are well documented among active service members and veterans, reducing quality of life and increasing rates of suicide. To quantify this impact: of the 28,212 active duty deaths noted above, 6,283 were self-inflicted, around 22% of the total.

While the country is in desperate need of improved mental health treatment for active service members and veterans, any step that can alleviate this burden, now or in the future, is required under the governmental guiding principle I asserted above. Only by fully removing the man in the loop, and with him the soldier from direct physical and psychological harm, is that principle truly upheld. Despite the technology’s nascency, and despite its other ethical concerns, the only answer is the implementation of full autonomy.
