How Explainable Artificial Intelligence (XAI) Can Help Us Trust AI

Mansour Saffar
AltaML
Jul 22, 2019

Have you ever wondered how machine learning models work? Or what, exactly, goes on inside these models and whether we can trust them?

Well, you’re in luck, because I’m going to try to give you a very general overview of what XAI is and why we need it by answering a few common questions. After reading this, you should be able to understand the necessity of XAI and whether you need to start thinking about integrating it with your ML projects/products.

What is XAI?

Explainable AI (XAI) is a relatively new field in machine learning (ML) in which researchers develop techniques for explaining the decision-making process of ML models. XAI has many different research branches but, generally speaking, it either tries to explain the results of complex, black-box ML models or tries to build interpretability into ML architectures themselves. The first approach is widely adopted by researchers, and there are many methods that try to explain what an ML model does regardless of the model's underlying architecture. This is called model-agnostic XAI.
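
To make "model-agnostic" concrete, here's a minimal sketch of permutation feature importance, one of the simplest model-agnostic methods. The random forest and synthetic data below are just stand-ins of my own choosing; the point is that the method never looks inside the model and only asks how predictions degrade when a feature is scrambled:

    # Minimal sketch of a model-agnostic method: permutation feature importance.
    # The model is a black box here -- we only ever call its predict() method.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    baseline = accuracy_score(y_test, black_box.predict(X_test))

    # A feature is important if shuffling it (breaking its link to the
    # target) hurts the black box's accuracy.
    rng = np.random.default_rng(0)
    for i in range(X_test.shape[1]):
        X_shuffled = X_test.copy()
        rng.shuffle(X_shuffled[:, i])
        drop = baseline - accuracy_score(y_test, black_box.predict(X_shuffled))
        print(f"feature {i}: importance ~ {drop:.3f}")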

Why do we need XAI?

Let me give you an example. With the current advances in deep learning (DL), a few million parameters is totally typical for a DL model! Compare that with the simple linear regression models we've been using for decades and you'll get a sense of just how complicated these DL models really are. It's true that DL models have had a huge impact across many industries, but many of them are still deployed as black-box systems. That's a problem, especially in critical scenarios where their decisions have a huge societal impact.
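
To put that gap in perspective, here's a back-of-the-envelope comparison. The layer sizes are illustrative assumptions, not from any particular model:

    # Back-of-the-envelope parameter counts: linear regression vs. a small MLP.
    # The layer sizes below are illustrative assumptions, not a specific model.
    n_features = 100

    # Linear regression: one weight per feature, plus a bias term.
    linear_params = n_features + 1
    print(linear_params)  # 101

    # A modest 3-layer MLP over the same features: weights + biases per layer.
    layers = [n_features, 1024, 1024, 1]
    mlp_params = sum(m * n + n for m, n in zip(layers, layers[1:]))
    print(mlp_params)  # 1,154,049 -- already over a million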

What questions does XAI answer?

XAI tries to address three types of questions: Why? When? and How? When developing ML products, whenever you run into questions that start with one of these three keywords, you may need to start exploring XAI. Here are some typical questions that come up during an ML project:

  • Why did that ML model make that prediction?
  • When can I trust the predictions of this model?
  • When will this model fail to make the right prediction?
  • How can I correct the errors of this model?

The answer to all these questions lies in integrating XAI models/concepts into your ML project/product.

When do we need XAI?

In every ML project, at some point you will probably need to explain the decision-making procedure of your deployed ML models to clients or colleagues. XAI is crucial in ML applications where the system's decisions directly affect people's lives or have a huge impact on society. You might argue that in most scenarios the final decision is made by a human, but many experts rely on complex ML systems to help them make those decisions. If the ML system can't explain how it reached a decision, it is very hard, and very risky, for that expert to trust it!

What are some common use cases of XAI?

The use cases of XAI span every field in which AI/ML is being used right now! Instead of listing them all, I'm going to give you two examples of how and why XAI is needed when an ML model's decisions heavily impact people's lives: one from medicine and one from finance.

Why do we need XAI in medical ML applications?

Consider a scenario in which a patient goes to a doctor to find out whether they have epilepsy. The doctor feeds the patient's brain MRI images into a complex ML model, and the generated report diagnoses the patient with epilepsy at an 85% confidence level. Here are some of the questions the doctor may ask:

  • How can I trust the report of this ML model?
  • Based on what features of the MRI image did the model reach this decision?
  • Does the way the ML model reached this decision make sense to me? How can I even know what the decision-making process of this model is?
  • What if this report is wrong and the decision-making process of the model is not accurate enough?

And the list goes on! You can see that the doctor can’t trust the ML model’s decision unless its decision-making process is presented to her so she can verify it.

[Image: an epilepsy detection system used in a medical application]
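
One way to tackle the doctor's question about which features of the MRI image drove the decision is occlusion sensitivity: cover one region of the image at a time and watch how much the model's confidence drops. The sketch below is purely illustrative; predict_epilepsy is a hypothetical stand-in for the real black-box model, not an actual diagnostic system:

    # Illustrative sketch of occlusion sensitivity for an image model.
    # predict_epilepsy is a hypothetical stand-in for the black-box model.
    import numpy as np

    def predict_epilepsy(image):
        """Stand-in for the black box; returns a fake P(epilepsy)."""
        return float(image[40:80, 40:80].mean())  # placeholder logic

    def occlusion_map(image, patch=16, stride=16):
        base = predict_epilepsy(image)
        heat = np.zeros_like(image)
        for r in range(0, image.shape[0] - patch + 1, stride):
            for c in range(0, image.shape[1] - patch + 1, stride):
                occluded = image.copy()
                occluded[r:r + patch, c:c + patch] = image.mean()  # grey patch
                # A large score drop means this region mattered to the model.
                heat[r:r + patch, c:c + patch] = base - predict_epilepsy(occluded)
        return heat

    mri = np.random.rand(128, 128)  # placeholder for a real MRI slice
    print(occlusion_map(mri).round(3))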

Why do we need XAI in financial ML applications?

Imagine a scenario in which a person goes to a financial institution to get a home loan. The financial institution uses a complex ML model that takes the customer’s demographic and financial history and creates a report saying whether the customer is eligible for the loan or not.

Let’s say our customer was unlucky and the system decided that he/she is not eligible to get the loan. The problem that arises here is whether the business people using this system can trust the model’s decision. This is the same problem we faced in the previous example. Here are some of the questions that the business people using this model might ask:

  • What if the customer asks us why his/her loan application was rejected?
  • Can the ML model explain and substantiate its decision-making process so we can report it to the customer?
  • Under what circumstances does this model fail to make the right predictions? Are we going to lose a loyal customer because we have to trust the ML model's decision?

And again, the list goes on and on! You can probably see the kinds of problems and questions that arise when a company uses complex ML models whose decisions heavily impact its customers.

[Image: a loan-approval model used in a financial application]
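
One common answer to these questions is a local surrogate explanation, the idea behind methods like LIME: perturb the rejected application, query the black box on the neighbours, and fit a simple linear model whose coefficients explain that one decision. Here's a rough sketch with made-up feature names, synthetic data, and an illustrative model, not a real lending system:

    # Rough sketch of a local surrogate explanation (the idea behind LIME).
    # Feature names, data, and the model are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LinearRegression

    features = ["income", "debt_ratio", "credit_age", "late_payments"]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 4))
    y = (X[:, 0] - X[:, 1] - X[:, 3] > 0).astype(int)  # synthetic approvals

    black_box = GradientBoostingClassifier().fit(X, y)  # the opaque model
    applicant = np.array([-0.5, 1.2, 0.1, 0.8])         # the rejected customer

    # Perturb the applicant, ask the black box about each neighbour, then fit
    # a linear model locally; its coefficients explain this one decision.
    neighbours = applicant + rng.normal(scale=0.3, size=(500, 4))
    approval_scores = black_box.predict_proba(neighbours)[:, 1]
    local = LinearRegression().fit(neighbours, approval_scores)

    for name, coef in zip(features, local.coef_):
        print(f"{name}: {coef:+.3f}")  # e.g. higher debt_ratio lowers approval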

What does the future of XAI look like?

It is hard to predict the future of XAI given that it's a rather new field in AI/ML and lots of researchers are actively working on new XAI models. However, based on current research trends and the industry's need for such systems, we can make some educated guesses. Here's what could happen in a few years, once XAI models are adopted in industry and become more mature:

  • ML models will be able to explain their results! (think of it as saying ‘Analysis’ to robots in Westworld)
  • More interpretable models that you can interact with to modify (or improve) their results
  • The ability to inject your own knowledge into a model, since it is interpretable and you know how it makes its decisions!

I am so excited about XAI! Where can I learn more about it?

Glad to hear that! There's lots of material online, and I've put some of the best resources in the References section. The Interpretable ML book gives a general overview of current XAI methods and is a good starting point if you're not familiar with the field. DARPA has also publicly released a roadmap for its XAI program, which lays out its plan for developing different XAI models and methods and integrating them with current ML systems.

If you have other questions, feel free to reach out to me by email and I’ll be happy to help!

References

  • Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. https://christophm.github.io/interpretable-ml-book/
  • DARPA, Explainable Artificial Intelligence (XAI) program. https://www.darpa.mil/program/explainable-artificial-intelligence
