Explaining Your Model with Microsoft’s InterpretML

Chris Kuo/Dr. Dataman
Published in Analytics Vidhya · 9 min read · Feb 27, 2020


Model interpretability has become a central theme in the machine learning community, and many innovations have emerged. The InterpretML module, developed by a team at Microsoft, aims to deliver both prediction accuracy and model interpretability, and to serve as a unified API. Its GitHub repository is actively updated. I have written a series of articles on model interpretability, including “Explain Your Model with the SHAP Values”, “Explain Any Models with the SHAP Values — Use the KernelExplainer”, “Explain Your Model with LIME”, “The SHAP with More Elegant Charts”, and “Creating Waterfall Plots for the SHAP Values for All Models”.

In this article, I am going to introduce a method other than SHAP. I will provide a gentle mathematical background and then show you how to interpret your model with InterpretML. If you want to get hands-on practice first, you can jump to the modeling part and then come back to review the mathematical background.

The InterpretML Python Module
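
Before the mathematics, here is a minimal sketch of what the InterpretML workflow looks like, assuming the package is installed with `pip install interpret`. The dataset and variable names below are my own illustration, not taken from the article:

```python
# A minimal sketch of the InterpretML workflow.
# Assumes `pip install interpret`; the dataset choice is illustrative only.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# Load a small binary-classification dataset for demonstration purposes.
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a glass-box model: the Explainable Boosting Machine (EBM).
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Open an interactive dashboard with the model's global explanation
# (overall feature importances and per-feature shape functions).
show(ebm.explain_global())
```

The same `show()` call also renders per-prediction explanations via `ebm.explain_local(X_test, y_test)`, which is what makes the API feel unified across global and local views.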
