Fake Intuitive Explanations in AI

Carlos E. Perez · Published in Intuition Machine · Nov 19, 2018

Cassie Kozyrkov has just written a good take on why “Explainable AI won’t deliver”. It is one of the better surveys I have seen of the arguments for why explainable AI is unlikely to be delivered. At the beginning of this year (2018), one of my predictions for Deep Learning was the following:

Explainability is unachievable — we will just have to fake it

I wrote that this was an unsolvable problem and that what will happen instead is that machines will become very good at “faking explanations.” The objective of these explainable machines is to understand the kinds of explanations a human will be comfortable with or can understand at an intuitive level. This is indeed a hard problem: it requires understanding the listener’s background and then crafting a narrative that accommodates the listener (does this sound like teaching?). We need models of human understanding so that we can build narratives that appeal to the mental models humans actually use. However, a complete and true explanation will rarely be accessible to humans in the majority of cases.

Thus AI explainability is a human-computer interaction (HCI) problem. The primary motivation for explainability revolves around the question of trust. Can we trust the decision of a cognitive machine if it cannot explain how it arrived at that decision? This was, in fact, the heart of a discussion I had previously on Human compatible AI. Lack of understanding leads to a lot of anxiety, as noted by Vyacheslav Polonski, a UX researcher at Google:

interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control

Kozyrkov recommends explaining complex behavior through the use of examples. It is the AI’s responsibility to provide examples that explain its behavior. This recommendation is, in fact, an instance of what I call an intuitive-level explanation. We must understand what it means to have an intuitive explanation and then design our AI to deliver these intuitive explanations. Examples are just one method of intuitive explanation; we can draw inspiration from the study of teaching to understand what kinds of explanations work best for humans.
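
To make the “explain by example” recommendation concrete, here is a minimal sketch of an explanation that simply retrieves the training examples the model considers most similar to the input. It assumes a generic embedding function and a labeled training set; all names are illustrative and not from any particular library.

import numpy as np

def explain_by_example(x, train_embeddings, train_labels, embed, k=3):
    """Return the k training examples closest to x in embedding space.

    A nearest-prototype 'explanation': instead of exposing the model's
    internals, show the human familiar cases that the model treats as
    similar to the new input.
    """
    q = embed(x)                                    # embed the query input
    dists = np.linalg.norm(train_embeddings - q, axis=1)
    nearest = np.argsort(dists)[:k]                 # indices of the k closest examples
    return [(int(i), train_labels[i], float(dists[i])) for i in nearest]

# Hypothetical usage: show a clinician the three most similar past cases
# alongside the model's prediction for the new one.
# similar_cases = explain_by_example(new_scan, train_embs, labels, model.embed)

The design choice mirrors how a teacher explains a judgment: not by reciting internal mechanics, but by pointing to familiar cases that behave the same way.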

Why do we trust complex machinery like airplanes? The people who study aerodynamics and the wings that provide lift will tell us that the mathematics is often intractable and that the shapes of the airfoils were discovered largely by happenstance. Yet we all comfortably get on planes every day with the confidence that we’ll make it alive to our destination. We trust this machinery even though the physics that allows planes to fly isn’t as tractable as we are led to believe. The reason is that our proxy for trust is the rigorous testing performed on these planes over the decades and their track record of reliability.

With AI, we have the problem of Goodhart’s law, which implies that any proxy measure of performance will be gamed by intelligent actors. What AI needs are more intelligent tests that ensure that what we expect to be learned is actually learned. This new methodology of testing (and teaching) will be absolutely essential if we are going to deploy AI in tasks where human safety is at stake.
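
As a hedged sketch of what a “more intelligent test” might look like, the function below scores a model both on its original test set and on a perturbed copy where a suspected shortcut feature has been removed, and flags the model if its accuracy collapses. The model interface, helper names, and the 5% threshold are assumptions for illustration.

def check_for_shortcut_learning(model, test_set, perturbed_set, max_drop=0.05):
    """Flag a model whose accuracy depends on a suspected shortcut feature.

    test_set      -- (inputs, labels) drawn from the original distribution
    perturbed_set -- the same cases with the suspected shortcut removed
                     (e.g., backgrounds swapped, watermarks stripped)
    """
    def accuracy(inputs, labels):
        preds = [model.predict(x) for x in inputs]
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)

    baseline = accuracy(*test_set)
    stressed = accuracy(*perturbed_set)
    # A large drop suggests the model learned the proxy, not the concept.
    return {
        "baseline_accuracy": baseline,
        "perturbed_accuracy": stressed,
        "suspect_shortcut": (baseline - stressed) > max_drop,
    }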

Then there’s the question of whether we select a solution that we understand well but that performs poorly, or the alternative: one that performs well but that we don’t understand. It is likely that a majority of people will select the latter. A majority of people do not understand the details of how their mobile phone works. In fact, most people can’t even differentiate between a WiFi signal and a mobile signal. Even worse, some people assume that the wireless signal is simply a natural part of the environment.

To illustrate this gap between understanding and effective capability: a recent study (Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning) showed that a model can recognize a patient’s gender from an image of the retina. Doctors have no idea how a Deep Learning system can do this, but it does so with extremely high accuracy.

There are financial derivatives out there that have been purposely created by quants skilled in the most obscure and complex mathematics. Yet financial firms have no issue selling these products to their customers. The only thing that apparently matters is the track record and expected future performance of these obscure financial products. The firms are thus placing their trust in the skills of their quants. Despite this lack of understanding, financial derivatives are a very lucrative business.

A majority of human society trusts the workings of machinery without ever understanding them. This trust issue is perhaps also a liability issue, and perhaps that is why financial derivative products exist in the form they do. That is, the firms that sell them have plausible deniability: they did not fully understand what they sold and thus can’t be held accountable for its unintended consequences. There is a kind of information asymmetry being played here, one that employs knowledge of human fallibility as a way to game the problem of trust. That is why the phrase ‘fake explainability’ is appropriate.

It is about mechanisms for gaining trust that take into account the biases that exist in humans.

Humans explain the world through their prior knowledge of it. In short, an intuitive explanation is one that appeals to a person’s biases.

This nicely segues into the notion of cognitive bias and decision making. In many applications that employ AI or data science, the primary purpose of the application is to aid the decision process. In my work on the Deep Learning Canvas, the method for extracting value from AI technologies is to identify cognitive load in business processes. By identifying this load, we can discover opportunities to employ AI to improve human performance. Ultimately, the culmination of many AI processes is to support decision making.

Introducing the Deep Learning Canvas

Daniel Kahneman, author of “Thinking, Fast and Slow”, has been exploring how bias and noise contribute to the decision-making process. Kahneman argues that machine algorithms are superior to humans in decision making because the algorithms are noise-free. In contrast, human decision making is driven by whatever bias (a byproduct of intuitive thought) happens to be in attention at the time a decision is made. Human thought is not inherently random; rather, it is algorithmic. However, what a person is thinking is highly dependent on context, and this leads to a lot of variance. The benefit of automation, in general, is to reduce variance; this is why factory floors are automated. In an analogous way, cognitive automation should drive down variance in human decision making. Said differently, we strive towards consistent decision making.
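
A small simulation makes the variance argument concrete: feed the same cases to a deterministic rule and to a “human-like” judge whose threshold drifts from case to case, and compare how often each one’s decisions flip across two independent passes. The numbers and names below are illustrative assumptions, not data from Kahneman’s work.

import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0, 100, size=500)   # 500 hypothetical cases with a numeric risk score

def algorithmic_judge(score, threshold=50.0):
    # Deterministic rule: the same case always gets the same decision.
    return score > threshold

def human_like_judge(score, threshold=50.0, noise_sd=10.0):
    # The effective threshold drifts from case to case (context, mood, fatigue).
    return score > threshold + rng.normal(0, noise_sd)

def flip_rate(judge):
    # Fraction of cases decided differently on two independent passes.
    first = np.array([judge(s) for s in scores])
    second = np.array([judge(s) for s in scores])
    return float(np.mean(first != second))

print("algorithmic rule flip rate:", flip_rate(algorithmic_judge))  # 0.0
print("noisy judge flip rate:", flip_rate(human_like_judge))        # > 0, the 'noise'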

In summary, explainability is a feature demanded as a requirement for trust. Establishing trust is, of course, a complex subject with many aspects rooted in human psychology. In general, though, trust in machinery is achieved through reliability, and reliability is achieved only through comprehensive testing. Ultimately, people will trust products that are known to deliver over ones that they merely understand. It’s only human nature.

This final conclusion does not mean that explainable AI is unnecessary or that we can’t create better explainable systems. The two areas I mentioned above, how to create intuitive explanations and how to create AI tests (and curricula), are specific areas that require greater research. Where I see a mistake being made is in the assumption that DARPA’s third-wave “Contextual Adaptation” AI, or what I would call “Intuitive Causal Reasoning”, leads automagically to explainable AI. The limitation is not in our machines; the intrinsic limitation is in our own ability to understand complex subjects. Therefore, the job of AI is not only to explain but to improve human decision-making capabilities.

Further Reading

Fast and Frugal Trees

Exploit Deep Learning: The Deep Learning AI Playbook
