Explainable AI: Making Sense of the Black Box

sid dhuri
The Startup

--

The Black Square is an iconic painting by Russian artist Kazimir Malevich; the first version was painted in 1915. The Black Square continues to impress art historians even today. However, it did not impress the Soviet government of the time, and it was kept in such poor conditions that it suffered significant cracking and decay.

Complex machine learning algorithms can be mathematical works of art, but if these black-box algorithms fail to impress and build trust with their users, they might be ignored like Malevich’s Black Square.

Dramatic success in machine learning has led to a surge of Artificial Intelligence (AI) applications. Continued advances in machine learning and compute capacity have led to the development of intelligent systems that recommend your next movie, diagnose malignant tumours, make investment decisions and drive cars autonomously.

However, the effectiveness of these systems is limited by the machines’ current inability to explain their decisions and actions to human users. Many AI applications using ML operate as black boxes, offering little, if any, discernible insight into how they reach their outcomes.

A prevalent and century-old problem

Explaining machine learning models is a problem most data scientists have experienced.

In Kaggle’s survey of 7,000 data scientists, four of the top seven “barriers faced at work” were last-mile issues rather than technical ones:

  • Lack of management/financial support
  • Lack of clear questions to answer
  • Results not used by decision makers
  • Explaining data science to others

And we have faced this problem for over a century. In 1914, before coding and computers, Willard C. Brinton began his landmark book Graphic Methods for Presenting Facts by describing the last-mile problem:

“Time after time it happens that some ignorant or presumptuous member of a committee or a board of directors will upset the carefully-thought-out plan of a man who knows the facts, simply because the man with the facts cannot present his facts readily enough to overcome the opposition….As the cathedral is to its foundation so is an effective presentation of facts to the data.”

Interpretability-Accuracy trade-off

Data scientists have to deal with a trade-off between interpretability and accuracy, i.e. between complex models that can handle large and versatile data sets and less complex models that are easier to interpret but usually also less accurate.

When dealing with large, versatile datasets, you want to make use of all those variables to build your model and arrive at accurate outcomes. As you use more variables, their relationships with the target variable and the interactions between different independent variables make the model increasingly complex.

Non-linear relationships are impractical to model with linear models if you expect accurate results.
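
To make the trade-off concrete, here is a minimal sketch, assuming scikit-learn and a purely synthetic dataset, that fits an interpretable linear model and a black-box ensemble on the same data. The ensemble will typically, though not always, score higher, while the linear model’s coefficients remain directly readable.

    # A minimal sketch of the interpretability-accuracy trade-off,
    # assuming scikit-learn and a purely synthetic dataset.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data with many informative, interacting features
    X, y = make_classification(n_samples=5000, n_features=20,
                               n_informative=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    # Interpretable model: each coefficient maps directly to a feature effect
    linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Black-box model: usually more accurate, but much harder to explain
    boosted = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

    print("Logistic regression accuracy:", accuracy_score(y_test, linear.predict(X_test)))
    print("Gradient boosting accuracy:  ", accuracy_score(y_test, boosted.predict(X_test)))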

So should we build an accurate model or sacrifice on accuracy and build an interpretable model?

There are strong drivers on either side.

For high-volume, relatively benign decision-making applications such as movie recommendations, a black-box model with higher accuracy is a viable option. However, for critical decision-making systems such as investment decisions or medical diagnosis, explainability might be more important.

Regulatory requirements could also shape your solution. If bank credit and fraud models are black boxes, then regulators can’t review or understand them. If such algorithms don’t accurately assess risk, then the financial system could be threatened as we saw during the 2008 financial crisis. Not surprisingly, many regulators are insisting that credit and risk models be interpretable.

DARPA XAI Programme

The Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense, responsible for the development of emerging technologies, has an ongoing Explainable AI programme.

XAI Concept

The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

Eleven XAI teams are investigating a diverse range of techniques and approaches for developing explainable models and effective explanation interfaces.

XAI System Developers, Approaches, and Challenge Problem Areas

This is an ongoing program, and it will take some years for its outcomes to be understood, adopted and made available in popular programming languages such as Python and R.

What do we have now?

There are multiple techniques and tools that we can use to improve the explainability of our complex models.

LIME (Local Interpretable Model-agnostic Explanations) explains a classifier’s prediction for a single, specific instance and is therefore suited to local interpretation. Intuitively, an explanation is a local linear approximation of the model’s behaviour. While the model may be very complex globally, it is easier to approximate it in the vicinity of a particular instance.

The LIME explainer approximates a complex model with a local linear surrogate
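
As a rough illustration, the sketch below assumes the lime Python package, scikit-learn and a small synthetic dataset standing in for real data; it fits a black-box classifier and asks LIME for a local explanation of one prediction.

    # A minimal LIME sketch, assuming the `lime` package and scikit-learn,
    # with a small synthetic dataset standing in for real data.
    import lime.lime_tabular
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10,
                               n_informative=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)   # the "black box"

    explainer = lime.lime_tabular.LimeTabularExplainer(
        X,
        feature_names=[f"feature_{i}" for i in range(X.shape[1])],
        class_names=["negative", "positive"],
        mode="classification",
    )

    # Fit a local linear surrogate around one instance and report the
    # features driving the black box's prediction in that neighbourhood
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(explanation.as_list())   # (feature rule, local weight) pairs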

SHAP (SHapley Additive exPlanation) is another method to explain individual predictions. SHAP is based on the game-theoretically optimal Shapley values. The technical definition of a Shapley value is the “average marginal contribution of a feature value over all possible coalitions.”

The SHAP explainer attributes the marginal contribution of each feature in moving from a base value to the output value

In other words, Shapley values consider all possible predictions for an instance using all possible combinations of inputs. Because of this exhaustive approach, SHAP can guarantee properties like consistency and local accuracy.
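
A minimal sketch of how this looks in code, assuming the shap Python package, scikit-learn and a synthetic dataset; TreeExplainer is used here because it computes Shapley values efficiently for tree ensembles.

    # A minimal SHAP sketch, assuming the `shap` package and scikit-learn;
    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=10,
                               n_informative=6, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])   # one contribution per feature per row

    # Additivity: the base value plus the per-feature contributions equals the
    # model's output for each row, so every prediction decomposes into attributions.
    shap.summary_plot(shap_values, X[:100])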

LIME vs SHAP

LIME creates a surrogate model locally around the unit whose prediction you wish to understand; it is therefore inherently local. Shapley values ‘decompose’ the final prediction into the contribution of each attribute.

So why would anyone ever use LIME? Simply put, LIME is fast, while SHAP values, because of their exhaustive calculations, take a long time to compute.
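
The rough sketch below, assuming the lime and shap packages and a synthetic dataset, times one LIME explanation against one model-agnostic KernelSHAP explanation. Exact numbers will vary by machine and settings, but the exhaustive coalition sampling in KernelSHAP usually makes it noticeably slower per instance.

    # A rough, illustrative timing comparison of one LIME explanation
    # versus one model-agnostic KernelSHAP explanation.
    import time

    import lime.lime_tabular
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=10,
                               n_informative=6, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    lime_explainer = lime.lime_tabular.LimeTabularExplainer(X, mode="classification")
    start = time.perf_counter()
    lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(f"LIME:       {time.perf_counter() - start:.2f}s")

    # KernelSHAP evaluates many feature coalitions per instance; a small
    # background sample (shap.sample) keeps this demo from running for minutes.
    kernel_explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
    start = time.perf_counter()
    kernel_explainer.shap_values(X[0:1])
    print(f"KernelSHAP: {time.perf_counter() - start:.2f}s")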

Other options

Besides LIME and SHAP, there are other explainable AI tools to improve the explainability of your complex models.

IBM’s AIX360 is an extensible open-source toolkit that offers a comprehensive set of capabilities to help improve the explainability of your models. The AI Explainability 360 Python package includes algorithms that cover different dimensions of explanations, along with proxy explainability metrics.

The What-If Tool lets you visually probe the behavior of trained machine learning models with minimal coding.

How can we build trust with business stakeholders?

Drawing a parallel from prescription drug development: we take medicines prescribed by a doctor because we trust our doctor’s qualifications and we trust the rigor of the clinical trial process that the drug has been through before coming to market.

Regulatory agencies, such as the FDA, have defined processes and protocols to ensure the safety and efficacy of drugs.

Similarly, if we follow a rigorous process and ensure that every model has been through the same stages of hypothesis generation, validation and testing, then we can give business stakeholders confidence in the efficacy of our machine learning models, making the case for them to be accepted and integrated into business-as-usual processes.

--

sid dhuri
The Startup

I am a data scientist by trade. I love to write about data science, marketing and economics. I founded Orox.ai, a marketing AI, analytics and automation platform.