
BASIC XAI with DALEX — Part 5: Shapley values

Anna Kozak
Published in ResponsibleML
3 min read · Dec 13, 2020


Introduction to model exploration with code examples for R and Python.


Welcome to the “BASIC XAI with DALEX” series.

In this post, we present Shapley values, a model-agnostic method that we can use for any type of model.

Previous parts of this series are available:

So, shall we start?

First — what do Shapley values deliver?

The Shapley value is a model-agnostic method, so we can use it for any type of model. Its key benefit is the additive feature attribution property. It is a local explanation, i.e. an explanation for a single observation; think of a single patient, bank customer, or telecommunications client. In the previous part of the BASIC XAI series, we introduced the Break Down method. The Shapley value is a generalization of it: the Break Down method uses just one of all possible variable orders, while here we consider every order, so if we have p features in our dataset, there are p! orders. The output is the attribution averaged over these possible orders.
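To make the averaging over orders concrete, here is a minimal sketch (not the DALEX implementation) that computes exact Shapley attributions for a toy model by enumerating all p! orders. As a simplifying assumption, features not yet added to the coalition are imputed with their mean over a background dataset:

```python
import itertools
import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley attributions: average each feature's incremental
    contribution over all p! variable orders. Features not yet 'added'
    are imputed with the background mean."""
    p = len(x)
    baseline = background.mean(axis=0)
    contrib = np.zeros(p)
    orders = list(itertools.permutations(range(p)))
    for order in orders:
        z = baseline.copy()
        prev = predict(z)
        for j in order:
            z[j] = x[j]                 # add feature j to the coalition
            cur = predict(z)
            contrib[j] += cur - prev    # incremental contribution of j
            prev = cur
    return contrib / len(orders)

# toy linear model: for additive models every order gives the same result
predict = lambda z: 2 * z[0] + 3 * z[1] - z[2]
background = np.zeros((1, 3))
x = np.array([1.0, 1.0, 1.0])
print(shapley_values(predict, x, background))  # → [ 2.  3. -1.]
```

Note that the attributions sum to the difference between the prediction for x and the baseline prediction; this is the additive feature attribution property mentioned above. For p features beyond a handful, enumerating all p! orders is infeasible, which is why practical implementations average over a sample of random orders.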

Second — Intuition of Shapley values

SHapley Additive exPlanations (SHAP) are based on Shapley values, developed by Lloyd Shapley in cooperative game theory.

Like Break Down, the Shapley value method decomposes a prediction into parts attributed to individual variables. The difference is that the contribution of a given variable is averaged over all (or a large number of) possible variable orders.

An important practical limitation of the general model-agnostic method is that, for large models, calculating Shapley values is time-consuming. In specific situations they can be computed very quickly, for example for additive models and for tree-based models.

Third — let’s get a model in R and Python

Let’s write some code. We are still working on the DALEX apartments data. To calculate Shapley values we use the predict_parts() function with type = "shap". We need the explainer object and the observation for which we want to calculate the explanation.

Code to create Shapley values predict_parts objects in Python and R

Let’s now look at the plot for the apartment considered in the previous post. The “Ochota” district has the biggest influence on the price of the apartment, as it is close to the city center. However, the price is negatively affected by the fact that the apartment is not in the “Srodmiescie” district — the city center. Moreover, the floor number of 7 and the area of 93 square meters contribute negatively to the price.

Shapley values plot for an observation from the apartments set and a random forest model. The green and red bars correspond to the contributions of the variables to the prediction: the green ones take positive values, i.e. increase the prediction, while the red ones take negative values, i.e. decrease it. Purple boxplots show the distribution of a variable’s attribution across the possible variable orders. On the x-axis we have the model prediction value; on the y-axis, the variables and their values for this observation.

Many thanks to Przemyslaw Biecek and Jakub Wiśniewski for their support on this blog.

If you are interested in other posts about explainable, fair, and responsible ML, follow #ResponsibleML on Medium.

In order to see more R related content visit https://www.r-bloggers.com
