Why your AI needs to explain itself
In a recent article, my colleague looked at the impact that privacy legislation such as the GDPR and CCPA could have on data science. In that post, he drew particular attention to the need for data practitioners like ourselves to ensure algorithmic fairness, accountability and transparency.
This need is becoming all the more pressing and goes beyond regulatory compliance. With automated decision-making gaining popularity in many sectors, interest in the science that sits behind it will only grow stronger. As a result, the data science community needs to get ready for an era in which it is able to quickly — and simply — explain the parameters that govern AI-based decision-making.
Explainable AI (XAI) could go a long way towards helping us reach that goal. In this post, I want to take a short look at XAI in the context of Machine Learning: what it is, why it’s important and how it works.
What it is
Building and training an artificial intelligence system can be a deeply technical and complex process. So complex, in fact, that there is a real danger that the decisions made by that system can’t even be explained by the people who designed it. Naturally, that’s a big problem when you’re dealing with decisions that might affect the lives of thousands, perhaps even millions of people.
XAI is, for all intents and purposes, exactly what it says. It seeks to counter the problem above by using more inclusive, more easily interpretable systems that can help flag issues such as bias. In essence, XAI keeps the human operator and the decision-making machine closer together by providing human-readable interpretations of how the machine is making its decisions.
While this doesn’t guarantee that a layman will understand how the AI has come to its conclusions, it does at least help to avoid the worst-case scenario where not even the expert understands.
Why it’s important
“Human explainability” for regulatory compliance isn’t the only benefit of XAI. It can also help with the following issues:
Detecting and avoiding bias
By design, machine learning models evolve by analysing the data they’re given. Much like a child picking up the bad habits of a parent, though, this means that any biases in that data can be absorbed into the model.
Suppose, for instance, that a business struggles with gender balance at an executive level. A recruitment model built upon gender-biased historical data is probably going to learn to be equally discriminatory. Instead of helping to solve the problem it was designed to fix, it compounds it further.
User acceptance and trust
XAI isn’t just about being able to justify why decisions were made — proactive transparency can also help to nurture trust. In retail, where personalized recommendations can be a powerful sales tool, it’s becoming increasingly important to explain why certain products are being surfaced to a particular customer. Business stakeholders need to be confident that the algorithm can be deployed without putting their reputation at risk.
Models that provide simple explanations such as “You are seeing this product because you purchased/viewed these other ones” go a long way to building trust with consumers and business stakeholders alike.
Domain-led performance improvement
Ideally, data scientists should have an understanding of the domain for which they are building solutions, but that’s not always the case. By providing explanations in a language that the domain expert understands, XAI can help to detect questionable decisions much faster, creating a feedback loop for improving the solution.
While the VP of Sales for a grocery chain might not understand the nuances of a decisioning algorithm, for example, they’re much more likely to spot when a highly profitable value proposition has been marked for discontinuation due to an error in the model. XAI helps to strengthen the connection between scientist, expert, and machine.
Quality checks and debugging
There’s no such thing as a perfect model. While data scientists tend to rely on accuracy metrics for evaluation, even a model with 99% accuracy can still be dangerous if it isn’t thoroughly interrogated to ensure that it will perform under ‘real world’ conditions.
The well-known ‘wolves vs huskies’ study is a good example: the researchers found that a model with a high accuracy rate was actually relying on the presence of snow in the background to classify an image as a wolf rather than a husky. In retail, image recognition algorithms are commonly used for catalogue tagging and product discovery (e.g. an online retailer using images viewed by a customer to display other items they might like). The explainability of these algorithms is important for determining whether they can stand up when exposed to real-world data; otherwise the customer experience powered by this science could be sub-optimal.
How it can work
Data scientists can look at model explainability in one of two ways:
· Global explainability
This approach seeks to explain the overall behavior of the model. What features are important to the overall performance of the algorithm?
· Local explainability
This approach investigates a model at a more granular, case-by-case level. Why did the model make this decision about this particular customer? (Both perspectives are illustrated in the short sketch below.)
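As a minimal sketch of the distinction, the snippet below fits a scikit-learn decision tree (the dataset and model choice are assumptions made purely for demonstration), reads the overall feature importances for a global view, and traces the decision path of a single case for a local view.

```python
# A minimal sketch of global vs. local explainability using scikit-learn.
# The dataset and model choice are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explainability: which features drive the model's behavior overall?
for name, importance in zip(feature_names, tree.feature_importances_):
    if importance > 0:
        print(f"{name}: {importance:.2f}")

# Local explainability: which decision rules fired for this one case?
sample = X[[0]]
node_indicator = tree.decision_path(sample)   # nodes visited by this sample
print("Nodes traversed for this prediction:", node_indicator.indices)
print("Predicted class:", data.target_names[tree.predict(sample)[0]])
```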
Regardless of which of these we want to achieve, two approaches are commonly used:
“Inherently interpretable” models
Some models are ‘glass-box’, providing rules and metrics that can be interpreted easily. Examples include linear regression, logistic regression and decision trees. Being classed as inherently interpretable doesn’t mean a model always is in practice, though: over-engineered models with hundreds of features will quickly erode human understandability.
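As a minimal sketch of what ‘glass-box’ means in practice, the snippet below fits a logistic regression (the dataset, scaling and model settings are assumptions for demonstration) and reads each feature’s effect straight off the standardized coefficients.

```python
# A minimal sketch of an inherently interpretable model: logistic regression
# coefficients can be read directly as the direction and strength of each
# feature's effect. Dataset and preprocessing are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
coefficients = model.named_steps["logisticregression"].coef_[0]

# Rank features by the magnitude of their (standardized) coefficients.
ranked = sorted(zip(feature_names, coefficients), key=lambda t: abs(t[1]), reverse=True)
for name, coefficient in ranked[:5]:
    print(f"{name}: {coefficient:+.2f}")
```

A positive coefficient pushes the prediction towards one class and a negative one towards the other, which is exactly the kind of reasoning a non-technical stakeholder can follow.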
Model-agnostic Machine Learning explainers
Complex models such as Deep Neural Networks, Random Forests, Gradient Boosted Trees and so on provide high accuracy at the cost of interpretability. This tradeoff can be dealt with by building simpler surrogate models that provide post-hoc explanations of the predictions made by the more complex model.
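One common pattern is a global surrogate: train a simple, interpretable model to reproduce the predictions of the complex one and inspect the surrogate instead. The sketch below assumes a random forest as the ‘black box’ and a built-in scikit-learn dataset, both illustrative choices.

```python
# A minimal sketch of a global surrogate: a shallow decision tree is trained to
# mimic a complex model's predictions, then inspected in its place.
# The black-box model and dataset are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The complex model we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns to reproduce the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely does the surrogate track the black box?
print("Fidelity:", accuracy_score(black_box.predict(X), surrogate.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity score is worth reporting alongside the extracted rules: a surrogate that poorly mimics the black box explains very little.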
Several tools and techniques are available for generating such post-hoc explanations. The most popular include:
· ELI5 (Explain Like I’m 5)
ELI5 is essentially a debugging tool: it digs into machine learning classifiers, showing the weights and parameters they use to make decisions, and helps to explain why individual predictions were made.
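A brief sketch of how this might look is below; the usage follows the eli5 library’s documented explain_weights and explain_prediction helpers, while the model and dataset are assumptions chosen purely for illustration.

```python
# A sketch of using ELI5 to inspect a classifier's weights (global view) and to
# explain a single prediction (local view). Model and data are illustrative.
import eli5
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
feature_names = list(data.feature_names)
clf = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# Global view: which features carry the most weight overall?
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=feature_names)))

# Local view: why was this particular case classified the way it was?
print(eli5.format_as_text(
    eli5.explain_prediction(clf, data.data[0], feature_names=feature_names)))
```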
· LIME (Local Interpretable Model-Agnostic Explanations)
As the name says, LIME is model-agnostic, meaning it can be applied to any machine learning model. It works on the premise that, for a given prediction, a smaller, localized and interpretable (typically linear) model can be fitted to perturbed data points in the neighborhood of that prediction to approximate the behavior of the complex model.
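As a short sketch (assuming the lime package is installed; the black-box model and dataset are illustrative choices), a tabular explainer can be fitted around a single prediction and its local weights read off as follows.

```python
# A sketch of LIME on tabular data: a small local model is fitted around one
# prediction of a black-box classifier. Model and data are illustrative choices.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed it towards which class?
explanation = explainer.explain_instance(
    data.data[0], black_box.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```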
· SHAP (SHapley Additive exPlanations)
SHAP applies concepts from cooperative game theory, in particular Shapley values (a method for fairly dividing a payout among the players in a game), to explain the output of a machine learning model: each feature is treated as a ‘player’ and the prediction as the ‘payout’ to be attributed among them.
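As a short sketch (assuming the shap package is installed; the regression model and dataset are illustrative choices), Shapley values can be computed per prediction and then aggregated for a global summary.

```python
# A sketch of SHAP with a tree-based regressor: each prediction is broken down
# into additive per-feature contributions. Model and data are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Local view: the contributions for one prediction (they sum, together with the
# base value, to the model's output for that case).
print(dict(zip(data.feature_names, shap_values[0].round(2))))

# Global view: aggregate contributions across the whole dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```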
Regardless of the models used and the tools employed to explain them, the guiding principle for any data scientist must be user-centricity: provide explanations simple enough for non-technical stakeholders to understand.
While the focus has primarily been on accuracy optimization, now is the time to dial up advancements in explainable systems. The data science community as a whole needs to invest in learning how to build systems which offer a seamless blend of accuracy and explainability. It is great to see research programs such as those sponsored by DARPA (Defense Advanced Research Projects Agency) aimed at building transparent and explainable machine learning techniques while maintaining high levels of accuracy.
Lifting the lid on the AI black box to reveal the decision-making process as something easily comprehensible will not only increase the level at which our work is accepted, adopted and embraced, but also help ensure that automated decision-making respects the customer experience.