Three Topics in the Ethics of Military AI

A seminar on military AI ethics this weekend focuses on the many ethical and regulatory issues emerging as AI is embedded in the battlefield and in soldiers’ bodies and brains.

Photo by Jimi Malmberg on Unsplash

Our colleagues in Philosophy at UMass Lowell have a project funded by the National Science Foundation and the Air Force to examine ethical questions around the “near-future uses of artificial intelligence for performance enhancement both inside and outside the military.” The UMass Boston Center for Applied Ethics, which the IEET collaborates with and of which I am a Fellow, is facilitating the meeting tomorrow at UMass Boston. We will go through twenty issues that the project has proposed, discussing and ranking their importance. Each of us is assigned three of the twenty topics to study and address.

This project is timely since the IEET is working on a special issue of the Journal of Ethics and Emerging Technologies on “The Ethics of Emerging Technologies and Their Role in Geopolitical Conflict.” The deadline for this special issue is October 1, 2022, and the call for papers is here. The ethics of soldier enhancement is one of the topics this special issue will address.

The three topics I’ve been preparing to help address for tomorrow’s session are:

  • Training for soldiers in working with AI-assisted systems
  • Management of AI-embedded exoskeletons and bodysuit sensors
  • The hacking of brain-computer interfaces that could compromise personal information and mental privacy

Training for soldiers in working with AI-assisted systems

One big problem for the incorporation of algorithms into everyday life is that we need those systems to present information to us in a way we find useful. Too little information and we can feel like we are losing our decision-making autonomy; too much information can overwhelm and paralyze us. Military and non-military research on “augmented cognition” has been trying to find this sweet spot, and to personalize it in real time to the individual. (The 17th international conference on augmented cognition will be held next week online.) More than two decades ago DARPA began a program on augmented cognition, the “Warfighter Information Intake Under Stress Program,” which explored wearable displays and sensors that stepped the flow of information to a battlefield headset up or down depending on the level of focus and stress it sensed in the analyst. On the battlefield some soldiers may be able to comfortably handle two or three targeting options, for instance, while others may perform better if they see only one target at a time.
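The logic of such a system can be sketched as a simple throttle: the higher the sensed stress, the fewer options shown. The following is a minimal illustration only; the function name, thresholds, and data format are all hypothetical, not taken from the DARPA program.

```python
def options_to_display(targets, stress_level, max_options=3):
    """Throttle how many targeting options are shown, based on a
    sensed stress level in [0.0, 1.0].

    Hypothetical thresholds: low stress shows up to max_options,
    moderate stress shows two, high stress shows only the top option.
    """
    ranked = sorted(targets, key=lambda t: t["priority"], reverse=True)
    if stress_level < 0.4:
        return ranked[:max_options]
    elif stress_level < 0.7:
        return ranked[:2]
    return ranked[:1]

targets = [
    {"name": "alpha", "priority": 0.9},
    {"name": "bravo", "priority": 0.6},
    {"name": "charlie", "priority": 0.3},
]

# Under high sensed stress, only the single top-priority option is shown.
print([t["name"] for t in options_to_display(targets, stress_level=0.8)])
```

A real augmented-cognition system would of course adapt continuously from physiological sensors rather than from a single stress score, but the design question is the same: who sets the thresholds, and can the soldier override them?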

AI-assisted decision-making presents a similar difficulty. For medical diagnostics, for instance, presenting a complete differential diagnostic tree, listing all fifty possible diagnoses down to the one-in-a-thousand possibilities, will likely be more than a doctor wants or needs, and could even increase the chance that they land on less plausible diagnoses. Even the reporting of probabilities could lead people astray, as there is extensive evidence of variation in how people interpret and act on something like a “5% chance.” A recent review discusses twenty such cognitive biases in the interpretation of machine-learning algorithmic data, from the tendency to weight negative outcomes more than positive ones, to the effect of presenting information in different orders, for instance good news first versus bad news first.

One conclusion of this review was that “training can significantly improve statistical reasoning and help people better understand the importance of sample size (‘law of large numbers’), which is instrumental for correctly interpreting statistical properties such as rule support and rule confidence…Several studies have shown that providing explicit guidance and education on formal logic, hypothesis testing, and critical assessment of information can reduce fallacy rates” (Kliegr, Bahník, & Fürnkranz, 2021). The paper also recommends presenting algorithmic information with frequencies, sample sizes, and confidence intervals. For instance, being told “there is a 50% chance you have disease A” would have a different impact than being told “of the 6 patients who matched your symptoms and age, 3 had disease A and 2 had disease B, which means you are almost as likely to have B as A.” The second statement tells you how small the dataset behind the conclusion was, and thus how wide the confidence interval is.
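Just how wide that confidence interval is can be made concrete. A 95% Wilson score interval for the proportion “3 of 6 patients” spans most of the probability scale; this is a standard statistical calculation, not an example from the paper.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# "Of the 6 patients who matched, 3 had disease A": point estimate 50%,
# but with n = 6 the 95% interval runs from roughly 19% to 81%.
lo, hi = wilson_interval(3, 6)
print(f"point estimate 50%, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

With only six matching patients, the data are consistent with the true rate of disease A being anywhere from about one in five to four in five, which is exactly the kind of uncertainty a bare “50% chance” conceals.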

Management of AI-embedded exoskeletons and bodysuit sensors

An article published in Nature this week reports on the development of a bodysuit with sensors for infants. After the babies crawl around in it for a while it can assess what level of physical maturity they are at, and whether they have a developmental delay. It’s not hard to imagine that we will all soon be wearing intelligent sensors and communication devices embedded in clothing, and that people with disabilities or in occupations requiring strength will routinely use exoskeletons. What information about us would embedded sensors reveal, how vulnerable would we be to having our mecha body hacked, and are we ready for these complexities in civilian or military life?

In the DoD-sponsored report Cyborg Soldier 2050: Human/Machine Fusion and the Implications for the Future of the DOD, the authors outlined four areas of soldier enhancement that might be possible by 2050: (1) superhuman vision through artificial eyes or eye-tracking displays, (2) auditory enhancement with cochlear implants, (3) “restoration and programmed muscular control through an optogenetic bodysuit sensor web,” and (4) “direct neural enhancement of the human brain,” i.e., brain-computer interfaces. The report addresses the ethical and political issues in developing and deploying these technologies, and even what to do with enhanced soldiers when they return to civilian life.

The “optogenetic bodysuit sensor web” described in the report would enhance muscle control

through a network of emplaced subcutaneous sensors that deliver optogenetic stimulation through programmed light pulses. This enhancement is best described as an implanted digital sensing and stimulation system that is coupled with external sensors (e.g., boot inserts and wearables), which are linked to a central computational controller. In effect, the human body would have an array of small optical sensors implanted beneath the skin in the body areas that need to be controlled. These sensors could be manifested as thin optical threads that are placed at regular intervals over critical muscle and nerve bundles and are linked to a central control area designed to stimulate each node only when the muscles below it are needed…to decrease injury and mortality rates for soldiers through automated hazard avoidance. The network would also enhance their physical capabilities on the battlefield. (Emanuel et al., 2019)

Regarding the cybersecurity of AI- and sensor-embedded bodysuits and exoskeletons, the authors say: “If command and control are hacked, the human/machine will be compromised. Hackability by external forces could generate the fear of control by others. Even if this risk can be mitigated through enhanced encryption methods, variable authentication requirements, or other methods, the perception that control could be subverted may lead to issues of trust among peers. For example, if a hostile actor could override an optogenetic body suit or neural implant that controls muscle movement, this could not only create a true threat to the individual, organization, and mission, but could promulgate fears among the ranks of non-enhanced and enhanced individuals” (Emanuel et al., 2019).

I think this might understate how scary it would be if hackers could monitor us or control our movements through something we wore every day as an indispensable extension of our bodies.

The hacking of brain-computer interfaces

The Cyborg 2050 report also addressed the even more disturbing vulnerability of brain-computer interfaces. This spring, Fernick and Lewis proposed a “Security Threat Lifecycle for Brain-Computer Interfaces,” running from the design of BCIs and their software, to monitoring them during use, to securing them after removal (if they can ever be removed). Is the process for downloading software updates secure? Can the inductive power charger be used to hack the BCI? All of these concerns are being addressed by medical regulators in Europe and the U.S. for BCI use in severely disabled civilian patients. But given the secrecy of military research, it is hard to know whether civilian medical device makers are ahead of or behind the military in securing our future BCIs.
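One baseline control that a secure update process implies is that a device should refuse any firmware image whose cryptographic tag does not verify. The sketch below is illustrative only: real implanted devices would use asymmetric signatures (so the device never holds a signing secret), and all names and keys here are hypothetical.

```python
import hashlib
import hmac

def verify_update(firmware: bytes, signature: bytes, key: bytes) -> bool:
    """Accept a firmware image only if its HMAC-SHA256 tag verifies.

    compare_digest is constant-time, avoiding timing side channels.
    HMAC with a shared key keeps this sketch stdlib-only; production
    devices would verify a public-key signature instead.
    """
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"device-provisioning-key"            # hypothetical shared secret
firmware = b"bci-firmware-v2.bin contents"  # hypothetical update image
good_sig = hmac.new(key, firmware, hashlib.sha256).digest()

print(verify_update(firmware, good_sig, key))                # genuine update
print(verify_update(firmware + b"tampered", good_sig, key))  # altered image rejected
```

Even this trivial check raises the lifecycle questions Fernick and Lewis point to: who holds the key, how is it rotated over a device lifetime measured in decades, and what happens when the vendor stops issuing updates?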

To be sure, experimental brain surgery on soldiers is not imminent (at least in Western militaries), and the timeline for invasive BCIs for soldiers or healthy civilians is probably two decades away at least. But we are already surrounded by non-invasive BCIs — cellphones — and increasingly powerful wearable BCIs using EEG have been in development for more than a decade, and pose many of the same issues that implants will face.

Last year, Losito and May called out the over-hyping of AI by the US Department of Defense, which they predicted would lead to a disappointing “AI winter” when the projects failed. They argued that fully exploiting the potential of AI requires rethinking military practice from the body of the warfighter up to the generals in the war room. Hopefully, projects like UMass Lowell’s Horizon Scan will contribute to anticipating the issues raised by the integration of AI into our decision-making, bodies, and brains in civilian life as well.



James J. Hughes PhD

James J. Hughes is Executive Director of the Institute for Ethics and Emerging Technologies, and a research fellow at UMass Boston’s Center for Applied Ethics.