The SHAP with More Elegant Charts

Chris Kuo / Dr. Dataman · Published in Dataman in AI · Apr 12, 2021 · 13 min read


I hope “Explain Your Model with the SHAP Values”, “Explain Any Models with the SHAP Values — Use the KernelExplainer”, and “The SHAP Values with H2O Models” have served you well in your work. In this article, I will cover more of the novel SHAP plots. If you have not read the previous posts, I suggest you read them first and then come back to this article.

A professional realtor once inspired me with the way he presented a house. He first showed me the exterior of the house, the quiet neighborhood, and the green lawn, and explained how close the stores were. Then he led me into the house to see each room. In the master bedroom, he encouraged me to open the drawers and closets and admire the recessed lights. I began to imagine hosting a group of guests around the fireplace and the pool table, with kids roasting marshmallows at the backyard fire pit.

We present our machine learning models in a similar way. First, we explain that the model as a whole makes sense: the relationships between the predictors and the target variable are consistent with business domain knowledge. This is called global interpretability. Next, we explain that the model's individual predictions also make sense: we can explain why each case receives its prediction given the values of its predictors. This is called local interpretability. The SHAP values can show both, as the sketch below illustrates.
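To make the two levels concrete, here is a minimal sketch. The dataset (California housing), the random-forest model, and the 500-row sample are my stand-in assumptions for illustration, not the setup from the earlier posts: the summary plot gives the global view across all predictions, and the force plot explains one individual prediction.

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Stand-in data and model (assumptions for illustration only):
# a random forest fit on a 500-row sample of the California housing data.
data = fetch_california_housing(as_frame=True)
X = data.data.sample(500, random_state=0)
y = data.target.loc[X.index]
model = RandomForestRegressor(n_estimators=100, max_depth=6, random_state=0).fit(X, y)

# Compute the SHAP values once and reuse them for both views.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global interpretability: how each predictor drives predictions overall.
shap.summary_plot(shap_values, X)

# Local interpretability: why the first observation gets its prediction.
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :], matplotlib=True)
```

The same `shap_values` array feeds both plots, which is the point of the analogy: one computation supports both the whole-house tour and the room-by-room walkthrough.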
