Model Interpretation Frameworks

Which interpretation framework should a data science team pick?

Vimarsh Karbhari
Acing AI
5 min read · May 7, 2020


The breadth of the data science and AI field leads to many different models, each serving as a solution to a different problem. As use cases and problems become more complex, models become harder to interpret. Despite widespread adoption, these models remain mostly black boxes. Model interpretation frameworks help data science teams understand different models. They also support further investigation into how a model makes its predictions, surfacing potential overfitting or bias that was not caught during training. From machine learning models to deep learning models, it is important for teams to adopt frameworks that aid in the inspection, explanation and refinement of their models.

LIME


LIME is a building block for many interpretation frameworks. It is a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction. It does this by approximating the behavior of any base estimator (model) with local interpretable surrogate models. These surrogate models are interpretable models (like a linear regression or a decision tree) that are trained on the predictions of the original black-box model. Instead of trying to fit a global surrogate model, LIME focuses on fitting local surrogate models to explain why individual predictions were made. LIME is also available as an open-source framework on GitHub.
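As a minimal sketch of this local-surrogate idea using the open-source lime package (the dataset and black-box model below are illustrative stand-ins, not part of LIME itself):

```python
# Explain one prediction of a black-box classifier with a local surrogate.
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)  # the "black box"

explainer = lime.lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Fit an interpretable surrogate locally around a single instance.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions for this one prediction
```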

ELI5

ELI5 is a Python library that lets you visualize and debug various machine learning models through a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models. ELI5 can operate at a micro as well as a macro level on a classification or a regression model. At the micro level, it inspects an individual prediction of a model and tries to figure out why the model made the decision it did; at the macro level, it inspects model parameters and tries to work out how the model behaves globally. In simple terms, ELI5 shows a weight for each feature depicting how influential it might have been in contributing to the final prediction decision, for example aggregated across all trees of an ensemble. ELI5 is also available on GitHub as a framework.
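A minimal sketch of both views, assuming a scikit-learn tree ensemble (the dataset and model are illustrative):

```python
import eli5
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

# Macro: global feature weights for the fitted model.
print(eli5.format_as_text(
    eli5.explain_weights(model, feature_names=data.feature_names)))

# Micro: why the model made this decision for one instance.
print(eli5.format_as_text(
    eli5.explain_prediction(model, data.data[0],
                            feature_names=data.feature_names)))
```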

AllenNLP Interpret

NLP is a fast-growing subset of data science. AllenNLP Interpret is a flexible framework for interpreting NLP models. It provides a toolkit built on top of AllenNLP for interactive model interpretations. The toolkit makes it easy to apply gradient-based saliency maps and adversarial attacks to new models, as well as to develop new interpretation methods. AllenNLP Interpret contains three components: a suite of interpretation techniques applicable to most NLP models, APIs for developing new interpretation methods (e.g., APIs to obtain input gradients), and reusable front-end components for visualizing the interpretation results. The AllenNLP team has made high-quality demos, code and tutorials available, and it is one of the best-presented model interpretation frameworks.
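A minimal sketch of the gradient-based saliency API, assuming one of AllenNLP's pretrained sentiment models (the archive URL is illustrative and may have moved):

```python
from allennlp.predictors import Predictor
from allennlp.interpret.saliency_interpreters import SimpleGradient

# Illustrative pretrained model from the AllenNLP public demos.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "basic_stanford_sentiment_treebank-2020.06.09.tar.gz"
)

# Saliency map: gradient of the prediction w.r.t. each input token.
interpreter = SimpleGradient(predictor)
saliency = interpreter.saliency_interpret_from_json(
    {"sentence": "a very well-made, funny and entertaining picture."}
)
print(saliency)  # per-token gradient magnitudes for this prediction
```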

Skater


Skater is a unified framework that enables model interpretation for all forms of models, helping you build an interpretable machine learning system for real-world use cases. At a high level, model interpretation can be broadly classified into:

1. Post-hoc interpretation: Given a black-box model trained to solve a supervised learning problem (X → Y, where X is the input and Y is the output), post-hoc interpretation can be thought of as a function g that takes the input data D and the predictive model and returns a visual or textual representation to help understand the inner workings of the model, or why a certain outcome is more favorable than another. It could also be called inspecting the black box, or reverse engineering. A minimal sketch appears below.

2. Natively interpretable models: Given a supervised learning problem, the predictive model (the explanator function) has a transparent design and is interpretable both globally and locally without any further explanation.

Skater is also available on GitHub as a framework and has a gallery showcasing its use cases.
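A minimal sketch of post-hoc interpretation with Skater: wrap a trained black box in InMemoryModel and ask for global feature importances (the dataset and model are illustrative):

```python
from skater.core.explanations import Interpretation
from skater.model import InMemoryModel
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

interpreter = Interpretation(data.data, feature_names=data.feature_names)
model = InMemoryModel(clf.predict_proba, examples=data.data[:100])

# The function g: model + data in, global importances out.
print(interpreter.feature_importance.feature_importance(model))
```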

SHAP


SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see the papers for details). SHAP assigns each feature an importance value for a particular prediction. The framework's components include a new class of additive feature importance measures and theoretical results showing there is a unique solution in this class with a set of desirable properties. SHAP values explain the model output (function) as a sum of the effects of each feature being introduced into a conditional expectation. For non-linear functions the order in which features are introduced matters, so SHAP values are obtained by averaging over all possible orderings. SHAP ships fast C++ implementations for XGBoost, LightGBM, CatBoost, scikit-learn and pyspark tree models.
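A minimal sketch of the fast tree path (the dataset and model are illustrative):

```python
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

# TreeExplainer uses the optimized C++ implementation for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # one array per class here

# Each prediction decomposes into per-feature Shapley values plus the
# expected model output (explainer.expected_value).
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```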

Recommendations

There are many frameworks that can be used for model interpretation. Each of them should be built upon and leveraged by data science teams to make model building and delivery more robust. These frameworks have their own characteristics and highlights, which should help teams pick the right framework for the right model. There is no silver bullet. Smaller, younger data science teams usually pick a framework during model building to make model interpretation easier for management and non-technical folks. As a team matures, it builds more diverse and complicated models and grows out of some of these frameworks. At that point the frameworks may not provide enough leverage, so such teams might build their own interpretation frameworks. Uber's Manifold is a step in that direction.

Subscribe to our Acing Data Science newsletter for more such content.

Thanks for reading! 😊 If you enjoyed it, test how many times you can hit 👏 in 5 seconds. It's great cardio for your fingers AND will help other people see the story.

