A Deep Dive on LIME for Local Interpretations
The intuition, theory, and code for Local Interpretable Model-agnostic Explanations (LIME)
LIME is the OG of XAI methods. It allows us to understand how machine learning models work and, specifically, how individual predictions are made (i.e. local interpretations).
Although recent advancements mean LIME is less popular, it is still worth understanding. It is a relatively simple approach and is “good enough” for many interpretability problems. It is also the inspiration for a more recent local interpretability method, SHAP.
So we will:
- Discuss the steps taken by LIME to get local interpretations.
- Discuss in detail some of the choices involved in these steps, including how to weight samples and which surrogate model to use.
- Apply the lime Python package (see the sketch just after this list).
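To give a flavour of what is coming, here is a minimal sketch of the lime package applied to a tabular classifier. The dataset, model, and parameter values (scikit-learn's breast cancer data and a random forest) are illustrative assumptions, not necessarily the setup used later in the article.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any black-box model (the model choice here is arbitrary)
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Build an explainer from the training data
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction (a local interpretation)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, weight), ...]
```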
Along the way, we will compare the method to SHAP to better understand LIME's weaknesses. We will also see that, although LIME is a local method, we can still aggregate LIME weights to get global interpretations (sketched below). Doing so will help us understand some of the default choices made by the package.
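As a preview of that aggregation idea, the sketch below continues from the code above (reusing `X`, `model`, `data`, and `explainer`). It collects the absolute LIME weights over a sample of instances and averages them per feature; the specific aggregation choices (mean absolute weight over 50 instances) are assumptions for illustration only.

```python
from collections import defaultdict

# Collect absolute LIME weights for a sample of instances
agg = defaultdict(list)
for row in X[:50]:
    exp = explainer.explain_instance(row, model.predict_proba, num_features=X.shape[1])
    for feat_idx, weight in exp.as_map()[1]:  # label 1 = the positive class
        agg[data.feature_names[feat_idx]].append(abs(weight))

# Mean absolute weight per feature acts as a rough global importance score
global_importance = {f: np.mean(w) for f, w in agg.items()}
for f, imp in sorted(global_importance.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{f}: {imp:.4f}")
```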