
Interpretable vs Explainable Machine Learning

The difference between an interpretable and an explainable model, and why it’s probably not that important

Conor O'Sullivan
Towards Data Science
7 min read · Sep 17, 2020


Updated 23 April 2023

When you first dive into the field of interpretable machine learning, you will notice similar terms flying around. Interpretability vs explainability. Interpretations vs explanations. We can’t even seem to decide on a name for the field: is it interpretable machine learning (IML) or explainable AI (XAI)?

We’re going to discuss one definition and, hopefully, clarify some things: the difference between an interpretable and an explainable model. We should warn you, though…

There is no consensus!

Part of the problem is that IML is a new field. Definitions are still being proposed and debated, and machine learning researchers are quick to coin new terms for concepts that already exist. So, we’ll focus on one potential definition [1]. Specifically, we will:

  • Learn how to classify a model as either interpretable or explainable (a sketch follows this list)
  • Discuss the concept of interpretability and how it relates to this definition
  • Understand the issues with the definition and why it’s probably not necessary to classify models…
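
To preview that first point with a concrete, illustrative example: under one common reading (which we believe is close to the definition discussed here), a model is interpretable if it can be understood directly from its parameters, and merely explainable if, as a black box, it needs post-hoc techniques to be understood. The sketch below contrasts the two; the dataset and the use of permutation importance are our illustration, not code from the article.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)

# Interpretable: the fitted coefficients ARE the model, so we can
# read the effect of each feature straight off the parameters.
linear = LinearRegression().fit(X, y)
print(dict(zip(X.columns, linear.coef_.round(1))))

# Explainable: hundreds of trees are too complex to read directly,
# so we fall back on a post-hoc method (here, permutation importance).
forest = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
print(dict(zip(X.columns, result.importances_mean.round(3))))
```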
