
A Deep Dive on LIME for Local Interpretations

The intuition, theory, and code for Local Interpretable Model-agnostic Explanations (LIME)

Conor O'Sullivan
Towards Data Science
13 min read · Jun 26, 2024


(source: DALL·E)

LIME is the OG of XAI methods. It allows us to understand how machine learning models work. Specifically, it can help us understand how individual predictions are made (i.e. local interpretations).

Although recent advancements mean LIME is less popular, it is still worth understanding. It is a relatively simple approach and is “good enough” for many interpretability problems. It is also the inspiration for a more recent local interpretability method — SHAP.

So we will:

  • Discuss the steps taken by LIME to get local interpretations.
  • Discuss in detail the choices you can make for these steps, including how to weight samples and which surrogate model to use.
  • Apply the lime Python package (a minimal usage sketch follows this list).
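
To give a flavour of that last point, here is a minimal sketch of the lime package applied to tabular data. It assumes a fitted scikit-learn classifier `model` and a training DataFrame `X_train`; those names, and the instance being explained, are illustrative.

```python
from lime.lime_tabular import LimeTabularExplainer

# Assumes a fitted scikit-learn classifier `model` and a pandas
# DataFrame of training features `X_train` (illustrative names).
explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=X_train.columns.tolist(),
    mode="classification",
)

# Explain a single prediction, i.e. get a local interpretation
exp = explainer.explain_instance(
    X_train.iloc[0].values,   # the instance to explain
    model.predict_proba,      # the model's prediction function
    num_features=5,           # how many features to include
)
print(exp.as_list())          # [(feature condition, weight), ...]
```

The output is a list of (feature condition, weight) pairs for the chosen instance — exactly the kind of local interpretation we unpack in the rest of the article.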

Along the way, we will compare the method to SHAP to better understand its weaknesses. We will also see that, although LIME is a local method, we can still aggregate LIME weights to get global interpretations. Doing so will help us understand some of the default choices made by the package.
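
As a rough illustration of that aggregation idea, the sketch below averages the absolute LIME weights over a sample of instances. It reuses the hypothetical `explainer`, `model`, and `X_train` from the previous snippet and assumes a binary classification problem; this is an ad hoc aggregation, not built-in functionality of the package.

```python
import numpy as np

# Reuses the hypothetical `explainer`, `model`, and `X_train` from above.
n_features = X_train.shape[1]
abs_weights = np.zeros(n_features)

# Explain a sample of instances and accumulate absolute weights per feature
sample = X_train.sample(100, random_state=0)
for _, row in sample.iterrows():
    exp = explainer.explain_instance(
        row.values, model.predict_proba, num_features=n_features
    )
    # as_map() returns {class label: [(feature index, weight), ...]};
    # label 1 is the default explained label for binary classification
    for idx, weight in exp.as_map()[1]:
        abs_weights[idx] += abs(weight)

# The mean absolute weight per feature acts as a global importance score
global_importance = abs_weights / len(sample)
for name, score in sorted(
    zip(X_train.columns, global_importance), key=lambda t: -t[1]
):
    print(f"{name}: {score:.4f}")
```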
