AI in the Military

Explaining Explainable AI

Eoin Sands
The Cormorant’s Nest
7 min read · Jul 6, 2020

Through a Black Mirror, Darkly

The commander had reached an impasse. Her intuition, product of thirty years of experience, was telling her that the enemy were unlikely to attack within the next 24 hours. Her intelligence officers confirmed that they had seen nothing out of the ordinary to indicate an offensive was about to take place. Yet on the electronic screen that took up one full wall of the planning room, a machine was flashing up a 90% probability of an imminent strike in letters a foot tall. She was understandably sceptical — on her drive to the exercise headquarters that morning, her satnav had tried to take her down a closed road twice...

Previously, I’ve discussed how the value placed on intuition by the military can increase friction between AI decision-support systems and human decision-makers. If an experienced commander is asked to do something that goes against their ‘gut feeling’, it requires a significant leap of faith on their part to ignore that feeling and take the advice. One way to alleviate that friction is through understanding the shortcomings of human intuition. But in addition, more must be done to build real trust between man and machine. When challenged, a human advisor can justify their recommendations and show their working. But for ‘black-box’ AIs, that’s generally not an option. And that means problems for more than just the military.

Not Just What, but Why?

Over the last few years, algorithms have become an increasingly prevalent part of our day-to-day lives, from recommending what we should watch next on Netflix to approving credit card applications. But just as humans can exhibit bias in their decision making, so too can machines. In November 2019, Steve Wozniak tweeted that his own company’s Apple Card algorithm had offered him 10 times the credit offered to his wife, despite all their assets being shared. Not only does this raise some serious ethical concerns, it potentially breaches laws which prohibit determining creditworthiness by protected characteristics such as age, sex or race.

The avoidance of such legal breaches is a big reason why ‘explainability’ has become such a hot topic in AI. In Europe, GDPR gives consumers a right to know not just when their data is used, but how:

The data subject shall have … access to the personal data and the following information:
(h) the existence of automated decision-making … [and] meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

As well as legal and ethical concerns, explainability is key to developing trust between user and machine, especially for the ‘human-in-the-loop’ decision-making model required for military applications. Effective man-machine teaming requires that the human understands why recommendations are made so that they can spot where errors have crept in. Modern AI systems are not 100% reliable, but the mistakes they make are often things that humans would pick up on instantly.

Black Box Thinking (Why AI Explainability is difficult)

Unfortunately, by their very nature, the inner workings of most AI systems are far from transparent. Training a deep learning model involves exposing it to huge datasets and allowing it to establish connections between input and output via a series of hidden layers. Each of these layers may contain hundreds of ‘neurons’, each connected to every neuron in the neighbouring layers. The degree of complexity is immense.

This diagram represents a network with just 16 input nodes (equivalent to a 4x4 pixel photo).
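
To make the idea of hidden layers and densely connected neurons a little more concrete, here is a minimal sketch in plain NumPy of a forward pass through a toy network of roughly the size described above: 16 inputs, two hidden layers and a single output. The layer sizes and weights are made up purely for illustration. Even at this scale, the answer emerges from over a thousand individual weights, none of which corresponds to a concept a human could name.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One randomly initialised fully-connected layer (weights and biases)."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# Toy network: 16 inputs (a 4x4 pixel 'photo'), two hidden layers, one output.
layers = [layer(16, 32), layer(32, 32), layer(32, 1)]

def forward(x):
    """Forward pass: every neuron mixes the outputs of every neuron before it."""
    for weights, biases in layers:
        x = np.maximum(0.0, x @ weights + biases)   # ReLU activation
    return x

x = rng.random(16)                                  # a flattened 4x4 'photo'
print(forward(x))                                   # a single opaque number comes out
print(sum(w.size + b.size for w, b in layers), "trainable parameters")
```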

In addition, input data is often manipulated beyond the point of recognition before it even enters the model. This is particularly the case with image recognition — images are filtered, compressed and then ‘flattened’ into a string of digits before they enter the ‘black box’ of the neural network (Ryerson University have created a great online tool which visualises this process).
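
As a rough illustration of that pre-processing step (a hand-rolled sketch rather than the Ryerson tool’s actual pipeline, with a placeholder file name and target size), the code below converts an image to greyscale, shrinks it and flattens it into the one-dimensional string of numbers that the network actually sees.

```python
import numpy as np
from PIL import Image  # assumes Pillow is installed

def preprocess(path, size=(28, 28)):
    """Greyscale, resize and flatten an image into a 1-D vector of floats."""
    img = Image.open(path).convert("L")                 # drop colour information
    img = img.resize(size)                              # compress to a fixed size
    pixels = np.asarray(img, dtype=np.float32) / 255.0  # scale pixels to [0, 1]
    return pixels.flatten()                             # 28x28 grid -> 784 numbers

vec = preprocess("some_photo.jpg")  # hypothetical file name
print(vec.shape)                    # (784,) -- no longer recognisable as a picture
```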

Tracing the route a model takes to reach a result, and then translating that route into terms a human can interpret, is therefore extremely difficult. But with considerable attention focussed on Explainable AI from academia and industry, progress has been made. It is now possible for an AI not merely to identify a particular sport from an image, but to explain how it arrived at that decision.

This AI can explain why it believes a photo represents baseball by picking out the relevant detail (the bat). From “Attentive Explanations: Justifying Decisions and Pointing to the Evidence”
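
The paper above pairs an attention mechanism with generated text. A much simpler way to get a similar ‘point at the evidence’ effect (not the paper’s method, just an illustrative stand-in) is occlusion sensitivity: cover up one patch of the image at a time and measure how much the model’s confidence in its answer drops. The sketch below assumes you already have a trained `model` callable that returns a vector of class probabilities.

```python
import numpy as np

def occlusion_map(model, image, target_class, patch=8):
    """Score each patch by how much hiding it lowers the model's confidence.

    Assumes model(image) returns a 1-D vector of class probabilities and
    that image is a NumPy array with height and width as its first two axes.
    """
    base = model(image)[target_class]
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0        # black out one patch
            drop = base - model(occluded)[target_class]   # confidence lost
            heat[i // patch, j // patch] = drop
    return heat  # high values = patches the model relied on (e.g. the bat)
```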

The Benefits of Explainable AI to the military

“Explainable AI — especially explainable machine learning — will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.” (DARPA — Explainable Artificial Intelligence)

We’ve already looked at one example of how Explainable AI can bring value: by increasing the trust between decision makers and AI, particularly when the latter’s predictions don’t make obvious sense to the human mind. However, there are other advantages.

  • Even with AI assistance, the vast amount of data generated in the ‘every platform a sensor’ age will lead to many false alarms for every genuine item of interest. Explainable AI could allow intelligence analysts to sift through a greater number of machine-generated leads by showing its reasoning and making spurious alerts much more obvious.
  • If AI is employed within the Personnel area (e.g. pre-boarding candidates for promotion or matching an individual’s skills, experience and preferences with potential postings), it will be important to be able to explain such decisions to the individual if required. Indeed, while the UK remains subject to GDPR, there is a legal requirement to be able to do so.
  • For situations where reaction speed is important, such as time-sensitive targeting, Explainable AI could provide decision-makers with the confidence needed to make critical decisions quickly. When response time is so important that having a human in the loop would create an unacceptable delay (e.g. automated defensive systems), Explainable AI could provide auditable reasons for activations. This would not only make the development and testing of such systems easier, but may also make them more palatable, legally and culturally.

I doubt anything on the list above comes as a surprise — it’s all pretty obvious and well-understood stuff. Less often mentioned are the potential downsides to Explainable AI. I’d like to consider three in particular.

  • Explainability vs Complexity — Inevitably, there needs to be a trade-off between how complex an AI system can be and how explainable it is. With sufficient data, a highly advanced AI system might pick up patterns that bear no relation to any concepts that a human could understand, resulting in predictions that are literally inexplicable. If we limit ourselves to decisions that can be explained to humans, then we rule out truly thinking outside the (black) box.
  • Gaming the System — Whenever the subject of AI came up at my previous job in Career Management, it would generate a lot of discussion at desk level. In particular, there was concern that if automated systems were used (in the promotion process, for example) people would soon work out which key words to put into an appraisal to get the result they wanted. The more explainable the algorithm, the easier this would become. Outcomes would be based not on merit, but on how well people understood the system — plus ça change, some might say!
  • Adversarial AI — This is a huge subject deserving of its own article, but in brief, adversarial AI describes the steps an enemy can take to attempt to trick friendly AI systems, allowing them to mask their true actions and intentions, or even spuriously trigger automated systems. As OpenAI explains, “adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.” Research is beginning to establish the link between the explainability of a system and its susceptibility to adversarial attacks (a simple example of such an attack is sketched after this list). Ultimately, the more transparent your AI is, the easier it may be to fool.
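
To give a flavour of how cheap such an attack can be, here is a sketch of the classic Fast Gradient Sign Method (FGSM) in PyTorch. The `model`, `image` and `label` here are placeholders for whatever network and data are being attacked, and `epsilon` controls how subtle the perturbation is.

```python
import torch

def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss.

    Assumes model(image) returns class logits of shape (N, C) and label is a
    tensor of shape (N,). The perturbed image usually looks identical to a
    human but can flip the model's prediction -- an 'optical illusion for
    machines'.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```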

The future

We are not yet at a stage where AI is reliable enough to be trusted without questions. In order for it to be fully and readily accepted in the military, it will therefore need to be able to answer those questions and explain its decisions. This will bring some issues as I have outlined above; we will need to mitigate those risks as best we can.

…The commander remembered that the HQ’s decision support software had been upgraded with a new feature just before the start of the exercise. Below its prediction hung a dialog box containing a single button: “Explain”. As she clicked, window after window popped up, each containing a piece of evidence that on its own fell below the threshold of interest. But in combination, the windows painted a convincing picture of an enemy ready to attack. Reassured, the commander issued her orders and the staff got to work.
