What is Explainable AI (XAI) and Why Does It Matter?

Ezequiel Lanza
Published in Intel Tech
6 min read · Nov 28, 2023

Learn the key to building trustworthy models.

Photo by Ivan Torres on Unsplash

Presented by Ezequiel Lanza — AI Open Source Evangelist (Intel)

In the last five years, we’ve made big strides in the accuracy of complex AI models, but it’s still almost impossible to understand what’s going on inside. The more accurate and complicated the model, the harder it is to interpret why it makes certain decisions.

Explainable AI (XAI) techniques provide the means to try to unravel the mysteries of AI decision-making, helping end users easily understand and interpret model predictions. This post explores popular XAI frameworks and how they fit into the big picture of responsible AI to enable trustworthy models. Watch the full talk here.

Four principles of responsible AI

Ever wonder how AI creators think about ethics in AI? Responsible AI is a core building block. Let’s use the process of making a pizza to explain responsible AI.

· Fairness: Just like the perfect pizza has toppings that are equally distributed, responsible AI requires models to train on diverse data. This prevents your AI model from generating biased results around attributes like ethnicity, gender, or age.

· Transparency: When deciding where to buy a slice, you want to pick a restaurant that is open about how they prepare the pizza and what ingredients are in it. Likewise, responsible AI models publish information about the models and how they work with the data.

· Accountability: If you ask for a Margherita pizza but you receive a pizza with pineapple, you need to know you can speak with someone to help get another pizza. In AI, the question becomes who is responsible for each model — is it the person who trains the model, the person who implements the model, or the person or company who fine-tunes the model?

· Privacy and data protection: Responsible pizza places protect their employees’ privacy. In AI, it’s more about ensuring the data is handled responsibly. Sensitive data should always be protected.

Restaurants with responsible practices are more likely to earn your trust and your business. The same is true in the world of AI — you need to know a model is safe, fair, and secure.

Development phases: Building trustworthy models

Trust-building does not stop with responsible principles. Developers must weave trust-building practices into every phase of the development process, using multiple tools and techniques to ensure their models are safe to use.

· Remove disparities and biases in your dataset. Open source tools like the IBM* AI Fairness 360 toolkit (AIF360) and the Google* What-If Tool (WIT) can help you identify and mitigate biases in your data before you train your model (a minimal AIF360 sketch follows this list). Sometimes there’s no explicit bias in a dataset, but models use other variables as a proxy for a protected attribute. For example, a model may try to infer someone’s sexual orientation from their marital status or guess their ethnicity via geographic patterns. Watch out for proxy variables in your dataset.

· Document data and model governance. Information about where your data comes from and how you treat it should be easy to find. For example, if you’re running a computer vision model, make it clear what dataset you used to train the model and whether you trained for a specific use case such as banking or healthcare.

· Protect data when implementing your model. Meeting privacy regulations is not enough; you also need to protect your data while you’re working with it. Privacy-Preserving Machine Learning (PPML) techniques keep data protected even while it’s being processed; OpenFL* keeps each node’s raw data local in federated learning models (see the conceptual sketch after this list).
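
Picking up the first point above on dataset bias, here is a minimal sketch of a pre-training fairness check with AIF360. The CSV file name and the "income"/"sex" columns are hypothetical placeholders; substitute your own data and protected attributes.

```python
# Minimal sketch of a pre-training bias check with AIF360.
# "training_data.csv", the "income" label, and the "sex" attribute are
# hypothetical placeholders -- swap in your own dataframe and columns.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.read_csv("training_data.csv")  # assumed: binary label and binary protected attribute

dataset = BinaryLabelDataset(
    df=df,
    label_names=["income"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0
# suggest favorable outcomes are distributed similarly across groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

If these numbers look skewed, AIF360 also ships pre-processing algorithms (such as reweighing) that can help rebalance the data before you train.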

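To make the federated-learning point concrete, here is a conceptual sketch of the federated averaging idea that frameworks like OpenFL* build on. This is plain NumPy for illustration only, not OpenFL’s actual API: each node trains on its own data locally, only the model weights travel to the aggregator, and raw data never leaves the node.

```python
# Conceptual federated averaging sketch (illustration only, not OpenFL's API).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node's local training of a simple linear model on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Hypothetical private datasets held by three separate nodes.
nodes = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

for _round in range(10):
    # Each node sends back only updated weights; its (X, y) stays local.
    local_weights = [local_update(global_w, X, y) for X, y in nodes]
    global_w = np.mean(local_weights, axis=0)  # the aggregator averages weights only

print("Aggregated model weights:", global_w)
```
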
XAI: The missing piece of model trustworthiness

Responsible AI creates a strong foundation, but we need additional frameworks to help the user understand how the model makes decisions. For instance, suppose your AI model flags an email as fraudulent. Before you can confidently decide whether to open or discard the email, you may want to know more:

· Why did the model do that — what features or variables triggered this conclusion?

· How do you correct an error? If you didn’t develop the model yourself, you may wonder how to help the model learn the difference between safe and unsafe emails.

· Should you trust this decision? At the end of the day, models should be designed for humans to understand.

XAI can help answer these questions. By supplementing responsible AI principles, XAI helps deliver ethical and trustworthy models. Let’s look at key aspects of a good explanation.

XAI feeds into responsible AI principles to enable ethical and trustworthy models.

The right explanation for the right audience

As this paper presenting a meta-survey of XAI research points out, models should not only be explainable, they should also be easily interpretable by their target audience: the user of a healthcare AI application has different domain knowledge than the user of a financial AI application, and the insights shared should be tailored to each audience.

If we drill down even further, there are multiple ways to explain a model to people in each industry. For instance, a regulatory audience may want to verify that your model complies with GDPR, so your explanation should provide the details they need. For a developer audience, a detailed explanation of, say, the model’s attention layers is useful for making improvements, while the end user just needs to know the model is fair (for example).

To make sure your explanations can be easily interpreted, tailor your insights to your audience.

Types of XAI explanations

There are three types of explanations. Data explainability focuses on ensuring there are no biases in your data before you train your model. Model explainability helps domain experts and end users understand a model’s layers and how it works, helping to drive improvements. Post-hoc explainability sheds light on why a model makes the decisions it does, and it’s the most impactful for the end user.

Of the three types of explainability, post-hoc explainability focuses on helping the end user understand the why behind a model’s decisions.

Post-hoc approaches: Two ways to understand a model

Let’s take a closer look at post-hoc explainability approaches, which typically fall into two families.

· Model-agnostic approaches: Treating the model as a black box, model-agnostic approaches don’t attempt to explain the inner workings of a model; instead, they provide an explanation using only its inputs and outputs. For example, the SHapley Additive exPlanations (SHAP)* approach assigns each feature a weight reflecting its contribution to a prediction, giving users a glimpse into which factors were most important (a minimal sketch follows this list).

· Model explainers: Surrogate-based approaches like Local Interpretable Model-agnostic Explanations (LIME)*, on the other hand, build a parallel model that is simpler and more understandable, such as a sparse linear model or a decision tree, to explain what the main model is doing. The idea is that instead of trying to interpret a complicated model directly, such as a transformer or a deep neural network, you fit an explainable model that approximates its behavior (locally, in LIME’s case) and explain that instead (see the sketch below).
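
Here is a minimal SHAP sketch for the model-agnostic case described above. The XGBoost classifier and the adult-income dataset bundled with the shap package are illustrative choices, not something the post prescribes.

```python
# Minimal SHAP sketch: attribute predictions to input features.
import shap
import xgboost

# Illustrative data and model: the adult-income dataset bundled with shap.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# Compute per-feature contributions for a sample of predictions.
explainer = shap.Explainer(model)
shap_values = explainer(X.iloc[:100])

# Global view: which features mattered most across these predictions.
shap.plots.bar(shap_values)
# Local view: why the model scored the first row the way it did.
shap.plots.waterfall(shap_values[0])
```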

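And a comparable LIME sketch: it perturbs the input around a single instance, queries the black-box model, and fits a small weighted linear surrogate whose coefficients serve as the explanation. The scikit-learn dataset and random forest are illustrative assumptions.

```python
# Minimal LIME sketch: explain one prediction with a local surrogate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# LIME perturbs this row, queries the model, and fits a simple linear
# surrogate around it; the top weighted features form the explanation.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```
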
Get started with Intel XAI tools

As AI grows in popularity, XAI provides essential frameworks and tools to ensure models are trustworthy. To simplify implementation, Intel® Explainable AI Tools offers a centralized toolkit, so you can use approaches such as SHAP and LiME without having to cobble together diverse resources from different GitHub repos.

For more information about XAI, stay tuned for part two in the series, exploring a new human-centered approach focused on helping end users receive explanations that are easily understandable and highly interpretable.

Acknowledgments
The author thanks Elizabeth Watkins and Dawn Nafus for their incredible insights.

About the author

Ezequiel Lanza, Open Source Evangelist. Passionate about helping people discover the exciting world of artificial intelligence, Ezequiel is a frequent AI conference presenter and the creator of use cases, tutorials, and guides that help developers adopt open source AI tools like TensorFlow* and Hugging Face*. Find him on Twitter at @eze_lanza.
