ML Model Explainability (XAI)

Chituyi
4 min read · Sep 4, 2023


Make sense of your ML/DL models in production.

Prediction Explainer

Introduction

Machine learning models are often referred to as “black boxes” because it can be difficult to understand how they make predictions. However, several libraries are available that can help explain those predictions. Explainable AI (XAI) is a growing field of research that seeks to make machine learning models more transparent and understandable. There are many different XAI methods available, each with its own strengths and weaknesses.

In this blog post, we will compare three popular XAI libraries in Python: Shapash, SHAP, and ELI5. We will discuss what each library is good for, their unique features, and the organizations behind them.

XAI in production: https://pragnosisexplainer.onrender.com/ (This demo highlights the advantages of understanding model outputs, and how helpful this capability can be when productionizing models: reducing bias and improving fairness and accuracy, while at the same time increasing business efficiency and customer service.)

Shapash.

Shapash is a Python library that aims to make machine learning interpretable and understandable to everyone. It provides several types of visualization, including SHAP plots, ICE plots, and LIME explanations. Shapash is compatible with many machine learning models, including scikit-learn, CatBoost, and XGBoost.

SHAP.

SHAP is a Python library that implements the SHAP (SHapley Additive exPlanations) algorithm, a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from cooperative game theory.
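The game-theoretic idea is easiest to see on a toy coalition game: each feature is a “player”, and its Shapley value is its average marginal contribution over all orderings in which the players can join. A minimal pure-Python sketch (the payoff numbers are made up purely for illustration):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over every ordering of the players."""
    contrib = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            contrib[p] += value(frozenset(coalition)) - before
    return {p: c / len(orderings) for p, c in contrib.items()}

# Toy payoff: features "a" and "b" together are worth more than alone.
payoff = {
    frozenset(): 0.0,
    frozenset({"a"}): 10.0,
    frozenset({"b"}): 20.0,
    frozenset({"a", "b"}): 50.0,
}
phi = shapley_values(["a", "b"], payoff.__getitem__)
print(phi)  # {'a': 20.0, 'b': 30.0}
```

Note the “efficiency” property: the attributions sum exactly to the value of the full coalition (50 here), which is the same additivity SHAP guarantees for a model’s prediction around its base value.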

ELI5.

ELI5 is a Python library that provides a variety of methods for explaining machine learning models, including permutation importance, inspection of model weights, and LIME-style text explanations. ELI5 is compatible with many machine learning frameworks, including scikit-learn, XGBoost, and Keras.

All three libraries provide methods for calculating feature importance and explaining individual samples.

Here is an example of how to calculate feature importance using each library:

Here is an example of how to explain an individual sample using each library:

Summary of strengths and weaknesses.

Shapash:

Strengths:

  • Easy to use
  • Provides a variety of visualization tools
  • Compatible with many machine learning models
  • Can be used to explain models to non-technical users

Weaknesses:

  • Can be computationally expensive to explain large models

SHAP:

Strengths:

  • Produces accurate and interpretable explanations
  • Compatible with many machine learning models

Weaknesses:

  • Can be computationally expensive to calculate SHAP values for large models

ELI5:

Strengths:

  • Can explain a variety of models, including those that are not interpretable by other methods
  • Relatively inexpensive to explain models

Weaknesses:

  • Not as easy to use as some other XAI libraries
  • Does not provide as many visualization tools as some other XAI libraries

Unique features.

Shapash:

  • Provides an interactive web application (SmartApp) that can be used to explain machine learning models to non-technical users
  • Can generate a standalone HTML report documenting a model and its explanations

SHAP:

  • Uses a game-theoretic approach to explain the output of machine learning models
  • Can be used to explain models that are not interpretable by other methods

ELI5:

  • Can explain models that are trained on text data
  • Can be used to explain models that are trained on time series data

Organizations behind the libraries.

Shapash:

  • Developed by MAIF, a French insurance company
  • Open source and available on GitHub

SHAP:

  • Developed by Scott Lundberg, first at the University of Washington and later at Microsoft Research
  • Open source and available on GitHub

ELI5:

  • Developed by Mikhail Korobov and Konstantin Lopuhin at TeamHG-Memex
  • Open source and available on GitHub

Summary.

All three libraries provide powerful tools for explaining machine learning models. Shapash provides easy-to-read visualizations and a web app for exploring global and local explainability. SHAP uses game theory to provide local explanations for any machine learning model. ELI5 supports several machine learning frameworks and packages and allows for debugging classifiers and explaining their predictions. Ultimately, the choice of library will depend on the specific needs of the user.

Conclusion.

Understanding how machine learning models make predictions is important for building trust in their outputs and ensuring that they are fair and unbiased. Libraries like Shapash, SHAP, and ELI5 provide valuable tools for achieving this goal. By using these libraries, data scientists can gain a better understanding of their models and communicate their results more effectively to non-technical stakeholders. So, it’s always good to have these tools in your arsenal as a data scientist or ML engineer. I hope this comparison was helpful! Let me know if you have any questions or need further clarification on anything mentioned above.

Check out free ML projects with Code to get started here!

https://dallo7.github.io/

#MLDemocratizer!

😊🤗
