Interpretable vs Explainable Machine Learning
The difference between an interpretable and explainable model and why it’s probably not that important
Updated 23 April 2023
When you first dive into the field of interpretable machine learning you will notice similar terms flying around. Interpretability vs explainability. Interpretations vs explanations. We can’t even seem to decide on the name for the field — is it interpretable machine learning (IML) or explainable AI (XAI)?
We’re going to focus on one definition, the difference between an interpretable and an explainable model, and hopefully clarify some things. Although, we should warn you…
There is no consensus!
Part of the problem is that IML is a new field. Definitions are still being proposed and debated. Machine learning researchers are also quick to create new terms for concepts that already exist. So, we’ll focus on one potential definition [1]. Specifically, we will:
- Learn how to classify a model as either interpretable or explainable
- Discuss the concept of interpretability and how it relates to this definition
- Understand the issues with the definition and why it’s probably not necessary to classify models this way