TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.

Interpretable or Accurate? Why Not Both?

Building interpretable boosting models with InterpretML

8 min read · May 27, 2021


Image by Kingrise from Pixabay

As summed up by Miller, interpretability refers to the degree to which a human can understand the cause of a decision. A common notion in the machine learning community is that a trade-off exists between accuracy and interpretability: the more accurate a learning method is, the less interpretability it offers, and vice versa. Of late, however, there has been a lot of emphasis on creating inherently interpretable models and doing away with their black-box counterparts. In fact, Cynthia Rudin argues that explainable black boxes should be entirely avoided for high-stakes prediction applications that deeply impact human lives. So, the question is: can a model achieve high accuracy without compromising on interpretability?

Well, EBMs try precisely to fill this void. EBM stands for Explainable Boosting Machine: a model designed to achieve accuracy comparable to state-of-the-art machine learning methods such as Random Forests and boosted trees while remaining highly intelligible and explainable.
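To make the core idea concrete, here is a minimal, self-contained sketch in plain Python (this is an illustration, not the actual InterpretML implementation, which adds bagging, automatic interaction terms, and a far more careful binning and boosting scheme). An EBM is, at heart, a boosted Generalized Additive Model: boosting rounds cycle over the features one at a time, each round nudging only that feature's shape function, so the final model is f(x) = intercept + Σⱼ fⱼ(xⱼ) and every feature's contribution can be read off (and plotted) directly.

```python
# Hypothetical sketch of cyclic boosting of per-feature shape functions.
# Each feature j gets a lookup-table shape function f_j over equal-width
# bins; every round adds a small (learning-rate-shrunken) correction to
# one feature's table, fit on the current residuals.

def fit_ebm_sketch(X, y, n_bins=8, rounds=200, lr=0.1):
    n, d = len(X), len(X[0])

    # Equal-width bin edges per feature (for simplicity only).
    edges = []
    for j in range(d):
        col = [row[j] for row in X]
        lo, hi = min(col), max(col)
        edges.append((lo, (hi - lo) / n_bins or 1.0))

    def bin_of(x, j):
        lo, step = edges[j]
        return min(int((x - lo) / step), n_bins - 1)

    intercept = sum(y) / n
    shape = [[0.0] * n_bins for _ in range(d)]  # f_j as bin lookup tables
    pred = [intercept] * n

    for _ in range(rounds):
        for j in range(d):  # cycle over features, one at a time
            bins_j = [bin_of(X[i][j], j) for i in range(n)]
            tot, cnt = [0.0] * n_bins, [0] * n_bins
            for i in range(n):
                tot[bins_j[i]] += y[i] - pred[i]  # current residuals
                cnt[bins_j[i]] += 1
            # Best constant update per bin, shrunk by the learning rate.
            upd = [lr * tot[b] / cnt[b] if cnt[b] else 0.0
                   for b in range(n_bins)]
            for b in range(n_bins):
                shape[j][b] += upd[b]
            for i in range(n):
                pred[i] += upd[bins_j[i]]

    def predict(row):
        return intercept + sum(shape[j][bin_of(row[j], j)]
                               for j in range(d))
    return predict, shape

# Tiny demo: y depends additively on two features.
X = [[i / 10, (i % 5) / 5] for i in range(50)]
y = [3 * a + 2 * b for a, b in X]
predict, shape = fit_ebm_sketch(X, y)
```

Because the model is purely additive, `shape[j]` *is* the explanation for feature `j`: plotting it against the bin midpoints shows exactly how that feature moves the prediction, with no post-hoc approximation needed.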

This article will look at the idea behind EBMs and implement them for a Human Resources case study via InterpretML, a Unified Framework for Machine Learning Interpretability.

Machine learning…



Published in TDS Archive



Written by Parul Pandey

Principal Data Scientist @H2O.ai | Author of Machine Learning for High-Risk Applications
