Interpretable or Accurate? Why Not Both?
Building Interpretable Boosting Models with InterpretML
As summed up by Miller, interpretability refers to the degree to which a human can understand the cause of a decision. A common notion in the machine learning community is that a trade-off exists between accuracy and interpretability: the more accurate a learning method is, the less interpretable it tends to be, and vice versa. Of late, however, there has been a growing emphasis on building inherently interpretable models and doing away with their black-box counterparts. In fact, Cynthia Rudin argues that explainable black boxes should be avoided entirely in high-stakes prediction applications that deeply impact human lives. So, the question is: can a model achieve high accuracy without compromising on interpretability?
Well, this is precisely the void that Explainable Boosting Machines (EBMs) try to fill. EBMs are designed to achieve accuracy comparable to state-of-the-art machine learning methods like Random Forests and Boosted Trees while remaining highly intelligible and explainable.
This article will look at the idea behind EBMs and implement them for a Human Resources case study via InterpretML, a unified framework for machine learning interpretability.
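To give a sense of what that looks like before we dive in, here is a minimal sketch of the InterpretML workflow: an EBM is fit like any scikit-learn estimator and then explained globally. The `hr_data.csv` file and the `Attrition` target column are placeholders standing in for the case-study dataset introduced later, not part of the actual tutorial code.

import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Hypothetical HR attrition data; replace with the case-study dataset.
df = pd.read_csv("hr_data.csv")
X = df.drop(columns=["Attrition"])
y = df["Attrition"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit a glassbox EBM using the familiar scikit-learn fit/predict interface.
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Global explanations: per-feature importances and learned shape functions.
show(ebm.explain_global())

Because the model itself is intelligible, `explain_global()` reads the explanation directly off the fitted model rather than approximating a black box after the fact.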

