The road to ML explainability is foggy…

source: Kees Smans @ flickr

…and (almost) everyone is looking for fog lights in the middle of this trip!

If you have been around machine learning and/or data science long enough (2–4 years), you have probably noticed the growing number of papers and events related to eXplainable Artificial Intelligence (XAI). This trend is no coincidence:

  1. new regulations like the General Data Protection Regulation (GDPR);
  2. lack of stakeholder trust in complex black box decision support systems in key areas like banking and medicine;
  3. newly discovered vulnerabilities in black box models that can lead to catastrophic results [6, 7, 8];
  4. human-induced bias in models and data that leads to unfair treatment of citizens [8]. This one will get you fined.

These 4 reasons are enough to drive the need for methods and tools to introspect predictive systems. In credit risk, strict regulatory guidelines on model evaluation limited credit risk officers' modeling choices, for decades, to a small set of models: expert (i.e. rule-based) systems, generalized linear models, decision trees and, sometimes, fuzzy logic models.

New research on model behavior introspection opens the door for more expressive models to be used. In this blog post, we’ll introduce you to this “new” hot topic, as well as the new regulation.

Is XAI actually a new thing?

Before the deep learning hype, people implemented AI (or fake AI, depending on which “tribe” you cheer for) using classical search methods. The resulting programs are usually called expert systems. Using algorithms like Depth-First Search and Unification, researchers developed systems that encoded decision procedures in human-readable rules. One of the best-known tools for building such systems is the famous Prolog language.

A set of facts and rules in Prolog

In Prolog, your predictive system is based on a set of facts and rules. Rules are fired depending on which facts are present and which queries you make. To inspect a decision, you can check which rules were fired, and why (the toy sketch below illustrates the idea).

Oh the nostalgia
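To make this concrete, here is a rough Python sketch of the idea (Python rather than Prolog, and the facts and rule names are invented purely for illustration): a tiny knowledge base, two rules, and a trace of every rule that fires, so a decision can be inspected afterwards.

    # Toy rule-based "expert system": facts, rules and a fired-rule trace.
    # (Illustrative Python stand-in for Prolog; the facts and names are invented.)
    facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}
    trace = []  # every rule that fires gets logged here

    def parent(x, y):
        # Fact lookup: parent(X, Y) holds if it is in the knowledge base.
        holds = ("parent", x, y) in facts
        if holds:
            trace.append(f"parent({x}, {y})")
        return holds

    def grandparent(x, z):
        # Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
        for _, a, b in list(facts):
            if a == x and parent(x, b) and parent(b, z):
                trace.append(f"grandparent({x}, {z}) via {b}")
                return True
        return False

    print(grandparent("tom", "ann"))  # True
    print(trace)  # which rules fired, and why

Unlike the weights of a trained neural network, this trace tells you exactly which pieces of knowledge supported the conclusion.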

In expert systems, you can identify the cause of unexpected predictions. Those causes include things like buggy rules (e.g., infinite loops) and an inconsistent knowledge base. There are many reasons that led people away from expert systems as the main tool for implementing predictive systems or AI, but let’s leave that for another blog post. Nowadays, the most impressive predictive systems in production use, one way or another, machine learning (if you are a statistician, you might prefer the term “statistical learning”). It is the standard approach for 3 reasons:

  1. there is a lot more data available than there was when expert systems were king, and the amount of available data matters for machine learning models [4, 5];
  2. hardware is getting increasingly cheap and fast. Fast CPUs and GPUs allow us to train complex models, like (very) deep neural nets, faster;
  3. there is easy access to open source machine learning tools and libraries (scikit-learn, TensorFlow, PyTorch, etc.).

But, unlike expert systems, many predictive systems based on machine learning algorithms, especially the non-parametric ones, have a terrible flaw: people can’t understand the “rationale” behind each prediction. In other words, people are unable to create a “mental map” of the decision process in a way that resembles something a human would base their decisions on. You can’t extract an argument from a neural network (except for some niche use cases), nor can you pinpoint which decision steps led to a bad prediction. So, how can you even begin to “understand” a neural network or other black box models?
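As a small teaser of the model-agnostic techniques we’ll explore later in this series, here is a minimal sketch, assuming scikit-learn >= 0.22, of permutation feature importance: shuffle one feature at a time and measure how much the black box’s score drops. The dataset and neural net below are toys, not a real credit risk setup.

    # Permutation importance: a crude, model-agnostic peek into a black box.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance

    # Toy data standing in for, e.g., a credit scoring dataset.
    X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small neural net: reasonable accuracy, but its weights explain very little.
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature in turn; the bigger the score drop,
    # the more the model relies on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: {importance:.3f}")

This tells you which inputs the model leans on, but it is nowhere near the rule trace of an expert system, and that gap is exactly what XAI is trying to close.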

What is an explanation?

Try to answer the following questions:

  1. What would you, personally, accept as an explanation?
  2. Would it differ from explanations you request from humans?
  3. How do you want those explanations to be presented? Text? Image? Video? Diagram? Voice? A mix of these? Telepathy? A Hug?
  4. Why do you need the explanation?

To make it clear why these questions are so hard to answer, let’s look at 3 scenarios:

  • Scenario 1: Imagine you went out to watch the movie Interstellar. After watching it, a friend asks you two questions: Did you like it? If yes, what held your interest? Your answer might be something like “I thought it was a really good movie. The soundtrack and the black hole scene caught my attention!” The answer to the first question is how you classify the movie, good or bad (i.e. the “prediction”), while the answer to the second is a ranking of what caught your attention (in other words, feature importance).
  • Scenario 2: You were arrested for stealing a car. You are taken to the interrogation room and you ask: “Why was I arrested? I was just driving my car!”. In response, the police open a laptop and show you two things, without saying a word: (1) a video of you hacking the car door lock and (2) the ID of the car owner.
  • Scenario 3: You launched a quad-copter drone and you’re watching it fly on your tablet screen. Suddenly, there is a warning stating that the drone will probably crash in the next 2 minutes. You have no idea what’s going on but, right after, two notifications appear on the screen: (1) a small quad-copter icon with one of the rotors flashing red; (2) the battery icon flashing red, showing only one bar (out of 3). After seeing that, you know the source of the problem: a mechanical issue in one of the rotors and low battery. There is also another notification requesting a decision: Do you accept reducing the speed of all rotors to a safe level?

All 3 scenarios use very different approaches to explain a decision or prediction. The approach taken in scenario 2 wouldn’t make sense in scenario 3. One of the main issues in ML explainability and XAI is the lack of a standard conceptualization of the field [6, 7, 9]. When we can’t even agree on the basic terms and definitions, it is kind of hard to cooperate or communicate at all. Maybe it is time for researchers, AI practitioners and policymakers to look into what philosophy [10, 11, 12] and psychology [14] have to say about explanations.

What about the regulation?

Taking all this into consideration, it is no surprise that even new regulations, like GDPR, offer very vague guidelines as to what is acceptable as an explanation for decisions made or supported by machines. For example, Article 22 (1) states:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

and Article 14 (2)(g):

the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

I think we can all agree that, for ML practitioners, this is REALLY vague. What counts as “meaningful” in this context? What do they mean by logic? Are we talking about formal logic? Due to all this uncertainty about what any of this even means, several legal scholars have argued that GDPR risks being ‘toothless’ [20].

What’s next?

One of our main focuses at James is allowing performant, complex predictive systems to be used without compromising explainability. We believe that life-changing decisions made (or supported) by machines, like loan acceptance or refusal, should be open to inspection. As such, you can expect more blog posts on this topic in the near future. In the next post, we will explore two approaches to ML explainability: model specific and model agnostic. We’ll also look into what we can actually do, from both technical and business perspectives.

References

[1] Courts are using AI to sentence criminals. That must stop now, by Jason Tashea, Wired, March 17, 2017 (extracted on July 9, 2018)

[2] How artificial intelligence is transforming the criminal justice system, by Stephanie Weber, ThoughtWorks, January 10, 2018 (extracted on July 9, 2018)

[3] One pixel attack for fooling deep neural networks, Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi, version 4, February 2018

[4] The Unreasonable Effectiveness of Data, by Alon Halevy, Peter Norvig, and Fernando Pereira, IEEE Intelligent Systems, Issue 2, Vol. 24, March/April 2009

[5] Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, by Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta, Proceedings of ICCV 2017

[6] The Mythos of Model Interpretability, by Zachary C. Lipton et al, ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY

[7] The Doctor Just Won’t Accept That!, by Zachary C. Lipton et al, NIPS 2017 Interpretable ML Symposium

[8] Courts are using AI to sentence criminals. That must stop now, by Jason Tashea, Wired, March 17, 2017 (extracted on July 9, 2018)

[9] Towards A Rigorous Science of Interpretable Machine Learning, by Finale Doshi-Velez and Been Kim, arXiv, 2017

[10] Abduction, Stanford Encyclopedia of Philosophy, 2017 (extracted on July 25th, 2018)

[11] Reasons for Action: Justification, Motivation, Explanation, Stanford Encyclopedia of Philosophy, 2017 (extracted on July 25th, 2018)

[12] Explanation in Mathematics, Stanford Encyclopedia of Philosophy, 2018 (extracted on July 25th, 2018)

[13] Explainable Artificial Intelligence (XAI), DARPA-BAA-16–53, DARPA Broad Agency Announcement, 2016

[14] Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, by Sandra Wachter, Brent Mittelstadt and Luciano Floridi, International Data Privacy Law, 2017