Explain Your Model with the SHAP Values

Chris Kuo / Dr. Dataman
Published in Dataman in AI
Sep 14, 2019 · 13 min read


Better Interpretability Leads to Better Adoption

Is your highly trained model easy to understand? A sophisticated machine learning algorithm can usually produce accurate predictions, but its notorious “black box” nature does not help adoption at all. Think about this: if you ask me to swallow a black pill without telling me what’s in it, I certainly don’t want to swallow it. The interpretability of a model is like the label on a drug bottle. We need to make our effective pill transparent for easy adoption.

How can we do that? The SHAP value is a great tool, among others such as LIME (see my post “Explain Your Model with LIME”), InterpretML (see my post “Explain Your Model with Microsoft’s InterpretML”), and ELI5. The SHAP value is also an important tool in Explainable AI or Trusted AI, an emerging development in AI (see my post “An Explanation for eXplainable AI”). In this article, I will present what the Shapley value is and how the SHAP (SHapley Additive exPlanations) value emerges from the Shapley concept. I will demonstrate how SHAP values increase model transparency. This article also comes with Python code at the end that you can use to produce nice results in your own applications, or you can download the notebook from this GitHub repository. The SHAP API has seen more recent development, as documented in “The SHAP with More Elegant Charts” and “The SHAP Values with H2O”.
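To give a flavor of what the code at the end of the article produces, here is a minimal sketch of the typical shap workflow for a tree-based model. The dataset, model, and hyperparameters below are illustrative assumptions, not the exact setup used later in the article.

```python
# A minimal sketch (not the article's exact code): explaining a tree-based
# model with the shap package. Dataset and model choice are illustrative.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Load an example regression dataset and fit a simple gradient-boosted model
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: each dot is one feature's SHAP value for one observation,
# showing which features push predictions up or down and by how much
shap.summary_plot(shap_values, X)
```

The same pattern (build an explainer, compute SHAP values, plot them) carries through the rest of the article; only the explainer type changes for non-tree models.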
