The State of Explainable AI

Jillian Schwiep
4 min read · May 3, 2017


I don’t need to know exactly why Netflix recommends certain movies to me — if it looks like a fit, I’m happy to take their recommendation. On the other hand, if your AI tells me that I should undergo an invasive medical treatment because a deep neural network (DNN) recommends it — well, I’m going to want to understand why before I take your recommendation.

Explainable AI (XAI) matters when you’re optimizing for something more important than a taste-based recommendation. AI deployed in military tools, financial tools such as loan assessments, or self-driving cars may use DNNs without being able to establish culpability (if we can’t understand how an algorithm works, who’s responsible when something goes wrong?) and without being able to audit and verify that the models aren’t relying on bad information.

The State of XAI

As long as breakthroughs in artificial intelligence (AI) are common, researchers and startups will probably focus most of their effort on making new, flexible AI models. Maybe we can’t explain how these models work, but if x.ai’s Amy or Andrew can miraculously figure out how and when to schedule meetings for me, do I even care? However, once we really hit diminishing returns in DNNs, explaining how these DNNs produce their results will become an area of intense focus.

For text-based AI systems, logical entailment offers a route to explaining fact checks and arguments in general. Companies like Factmata are working on this by logically explaining the contents of knowledge graphs.

“Explaining” images is a lot trickier. DARPA has begun this work with a 5-year program to develop XAI. The DARPA proposal mentions two academic works that are generating buzz right now: UC Berkeley’s “Generating Visual Explanations” and the University of Washington’s “Why Should I Trust You?” (the LIME paper).

Describing Birds

“Generating Visual Explanations” can explain the decisions of an image-to-wild-bird-name classifier with sentences like “This is a Laysan Albatross because this bird has a large wingspan, hooked yellow beak, and white belly”. This is a highly supervised, structured kind of AI: the vocabulary of words and adjectives that can appear in an explanation is fixed in advance. Down the road, the goal is to move to flexible, unsupervised yet explainable AI (which will be a very hard task).

[Image via “Generating Visual Explanations”]

In this case, the XAI doesn’t extensively cover how the DNN made the decision, but the ability to generate an explanation for an image that was categorized by a neural net is pretty cool, and it has a ton of wide-ranging applications (healthcare, military, etc.).

The model chooses to capture and explain only certain variables, but allows its DNN to classify based on a wider array of factors. If it instead forced the model to classify based solely on human-defined terminology, it would be giving up predictive machine learning (ML) power. On the other hand, if your XAI doesn’t capture all of the DNN’s internal features, are you just giving pseudo-explanations?
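To make the fixed-vocabulary idea concrete, here is a toy sketch in Python. It is not the paper’s actual method (which trains a neural sentence generator with a class-discriminative loss); the extra attributes and all of the scores are made up, and the hypothetical classifier is assumed to output one score per attribute alongside its class prediction.

```python
# Toy illustration of an explanation built from a fixed attribute vocabulary.
# NOT the paper's method (which learns an LSTM sentence generator); the
# attribute scores below are invented for the example.
ATTRIBUTES = ["large wingspan", "hooked yellow beak", "white belly",
              "red crown", "long pointed tail"]  # the closed explanation vocabulary

def explain(class_name, attribute_scores, top_k=3):
    """Assemble an explanation from the top-k highest-scoring attributes."""
    ranked = sorted(zip(ATTRIBUTES, attribute_scores), key=lambda pair: -pair[1])
    picked = [name for name, _ in ranked[:top_k]]
    return f"This is a {class_name} because this bird has a {', '.join(picked)}."

# Hypothetical scores from a classifier's attribute head:
print(explain("Laysan Albatross", [0.92, 0.88, 0.81, 0.05, 0.10]))
# "This is a Laysan Albatross because this bird has a large wingspan, hooked yellow beak, white belly."
```

The point is just that the explanation is assembled from a small, human-defined vocabulary, while the classifier itself is free to lean on a much richer internal feature space.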

Pixels and Superpixels

University of Washington’s LIME paper focuses on producing model-agnostic explanations, explaining the results of any ML system by looking only at its inputs and outputs. This is cool because it doesn’t depend at all on human-defined terminology the way the “Generating Visual Explanations” paper does, but it also makes the explanations harder to interpret. It doesn’t give you a full-blown explanation; rather, it gives you hints as to how the ML made its decision.

The simplest way to understand this method is to envision “hiding” certain pixels in an image and seeing how that influences the ML’s decision, then narrowing in on the clusters of pixels (superpixels) that matter and flagging them as important. The LIME system would be really good at evaluating competing models, and would help you detect and improve bad models. Keep an eye out for iterations of the next-gen model, aLIME, which can already outperform LIME and produce more flexible explanations.
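Here is a rough sketch of that perturbation idea, assuming a black-box predict_fn that returns class probabilities for an image. It simplifies the real LIME algorithm (which weights perturbed samples with an exponential kernel and typically fits a sparse linear model), but the shape is the same: hide random subsets of superpixels, watch the prediction move, and fit a small weighted linear surrogate to rank the superpixels.

```python
# Simplified LIME-style explanation for an image classifier.
# `image` (H x W x 3 array) and `predict_fn` (returns class probabilities)
# are assumed inputs; the proximity weighting and Ridge surrogate are
# simplifications of the published algorithm.
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_image(image, predict_fn, target_class, n_samples=1000, top_k=5):
    """Rank superpixels by how much hiding them hurts the target-class score."""
    segments = slic(image, n_segments=50)        # split the image into superpixels
    seg_ids = np.unique(segments)
    fill = image.mean(axis=(0, 1))               # color used for "hidden" regions

    # Random on/off masks over superpixels: 1 = keep, 0 = hide.
    masks = np.random.randint(0, 2, size=(n_samples, len(seg_ids)))
    scores, weights = [], []
    for mask in masks:
        perturbed = image.copy()
        for keep, sid in zip(mask, seg_ids):
            if not keep:
                perturbed[segments == sid] = fill
        scores.append(predict_fn(perturbed)[target_class])
        weights.append(mask.mean())              # crude proximity weight: fraction kept

    # Local surrogate: weighted linear model over the on/off features.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, scores, sample_weight=weights)

    # Superpixels with the largest positive coefficients matter most.
    top = np.argsort(surrogate.coef_)[::-1][:top_k]
    return seg_ids[top], surrogate.coef_[top]
```

Highlighting the top-ranked superpixels on the original image gives exactly the kind of hint described above: not a full explanation of the DNN, but a picture of which regions its decision leaned on.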

What’s Next for XAI

The bottom-line trade-off here is this: trying to articulate the decision boundaries created by a DNN is stupid hard. A DNN will create very complex decision boundaries to classify stuff, probably accounting for the simultaneous interaction of thousands to hundreds of thousands of variables in large models, which is difficult to explain to a human. It might be more important to have human-interpretable decision boundaries, but then you must either give up predictive power so the explanation can capture every input, or risk giving a pseudo-explanation that doesn’t really capture every aspect of how the DNN made its decision.

Courts have recently come under fire for using algorithmic risk-assessment tools in sentencing without disclosing how those tools think. Law is the first of many industries that really needs XAI development, and the methods in the papers discussed above would each be incredibly useful in beginning to help us crack the dark secret at the heart of AI.
