Explain Your Model with LIME

Chris Kuo / Dr. Dataman
Published in Dataman in AI
Feb 25, 2020 · 8 min read


(Revised on June 16, 2022)

Why Is Model Interpretability So Important?

In this article, I will introduce the LIME approach. I will start with the questions that the inventors of LIME were concerned with, then walk you through their solutions. You may be interested in knowing their thought process or even adopting their problem-solving approach.

People typically agree that a linear model is more interpretable than a complicated machine learning model. Is that true? Do you think the following linear model is easily interpretable? Probably not: it has too many variables.
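
For a concrete picture, imagine a fitted model along these lines (the number of variables and the coefficients here are purely illustrative):

```latex
\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \cdots + \beta_{59} x_{59} + \beta_{60} x_{60}
```

With dozens of coefficients (and possibly interaction terms), reading off what actually drives any single prediction is no longer trivial.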

Besides the concern above, there is a second issue: although the model has many variables, only a few of them play a significant role in any individual prediction. The remaining variables are almost irrelevant to that particular prediction.
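
The open-source lime package makes this local view concrete. Below is a minimal sketch, assuming scikit-learn and lime are installed; the breast-cancer dataset and the random-forest model are stand-ins chosen for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any black-box model (a random forest is just a stand-in here).
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Ask LIME to explain one individual prediction using only a handful of features.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # the few variables that matter locally, with their weights
```

The point of `num_features=5` is exactly the observation above: even if the model uses dozens of variables globally, a short list is usually enough to explain one prediction.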

These two questions are what Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, the authors of the paper “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (KDD 2016), were concerned about. Let’s look at their arguments.

(A) “Why Should I Trust You?”

The authors of LIME argue that we should build two types of trust for a user to adopt a model:

  • Trusting a prediction: a user will trust an individual prediction enough to take some action based on it.
  • Trusting a model: a user will trust the model to behave in reasonable ways when it is deployed.
