Whitening the black box

Manuel Morán Peláez · Geoblink Tech blog · Apr 29, 2019
The Antikythera mechanism, an Ancient Greek analogue computer used to predict astronomical positions decades in advance for calendar and astrological purposes. (source)

Due to the recent developments in the data world, Data Scientists can now use powerful tools to develop predictive models. However, with these new opportunities come new challenges.

One of the most critical challenges is the ability to interpret the predictions generated by these models. An obvious solution is to use simple models that are inherently interpretable. A second, much more powerful option is to introduce general mechanisms that allow us to explain any type of predictive model, regardless of its nature.

In this post we will explain why we need such mechanisms and how we implement them.

Building a business predictive tool

At Geoblink, one of the modules we have included in our app is the Sales Forecast. This module is intended to help retailers in the network expansion process.

For a given location, which simulates a new opening, they will get, among other things:

  • A prediction of the revenue the store will make during the first year.
  • The features that drive that prediction and their positive/negative impacts.
Figures: the prediction for a given location, and the impact of its drivers.

As we can see, the business value a customer can get from this module is huge, but so are the technical challenges the Tech Team at Geoblink faces to deliver it.

For the module to be useful we have to develop:

  • An accurate predictive model …
  • … defined with features that are business-aligned.
  • A methodology to get the impact of those features.

The second step depends not only on the internal variables the client provides, but also on collecting external variables that describe the demographics, economy and competitors of the catchment area. After collecting hundreds of variables, we use a semi-automatic approach to select the features that have an actual business impact.
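The semi-automatic selection itself is beyond the scope of this post, but purely as a hypothetical illustration (none of the names below come from our pipeline), one automatic filtering step could rank candidate variables by their statistical relationship with revenue before a manual business review:

```python
# Hypothetical illustration only, not our actual selection pipeline:
# rank candidate variables by mutual information with the target revenue,
# then review the top-ranked ones manually for business sense.
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

def rank_candidate_features(X: pd.DataFrame, y: pd.Series, top_k: int = 30) -> pd.Series:
    scores = mutual_info_regression(X, y, random_state=0)
    ranking = pd.Series(scores, index=X.columns).sort_values(ascending=False)
    return ranking.head(top_k)
```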

For the first one, we use advanced techniques and models that are usually known as black-box models because of their limited interpretability: it is hard to get insights into how they arrive at a given prediction. Thus, the third step should be generalized to any model regardless of its nature, and this is where we want to focus our attention in this post.

Feature impact

Before we take a deep dive into the technical details of what we call the explainer component (or simply the Explainer), we first have to clarify what we understand by an interpretation of a prediction.

Perhaps the reader has their own definition of interpretability, and surely it is widely shared, but as is pointed out here, there are many different views on this concept. For that reason, we have decided to stick to the following definition:

An interpretation of a predicted value will be an intuitive explanation of the relationships between the input variables and the prediction.

Many different approaches can be used to obtain such interpretations; one of them is to use ML models that are inherently interpretable, such as decision trees or linear regression.
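As a quick illustration of what inherent interpretability means (a minimal sketch on synthetic data with made-up feature names, not our actual model), the coefficients of a linear regression can be read directly as the sign and magnitude of each feature's effect on the prediction:

```python
# Minimal sketch: a linear regression is inherently interpretable because its
# coefficients directly quantify each feature's effect on the prediction.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # synthetic data, illustrative names below
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
for name, coef in zip(["footfall", "income", "competitors"], model.coef_):
    print(f"{name}: {coef:+.2f}")      # sign and magnitude of each driver
```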

Our Explainer uses the Python library Local Interpretable Model-agnostic Explanations (LIME), with some modifications to adapt it to our use case (LIME was developed to explain classifiers and has to be modified to work properly with regressors).
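For reference, this is roughly what using the stock library looks like; a minimal sketch on a toy black-box model, assuming the lime package is installed, its tabular explainer accepts a regression mode, and bearing in mind that our Explainer adds its own modifications on top:

```python
# Minimal usage sketch of the stock lime library in regression mode, on a toy
# black-box model (our real Explainer adds custom modifications on top of this).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["footfall", "income", "competitors"]   # illustrative names only
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")

# Explain a single prediction: LIME perturbs the point, queries the black box
# and fits a local weighted linear surrogate whose coefficients are reported.
explanation = explainer.explain_instance(X[0], black_box.predict, num_features=3)
print(explanation.as_list())   # [(feature condition, local weight), ...]
```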

The most interesting elements of LIME are:

  • The explanations are local: per prediction, and not for the whole model.
  • It is model-agnostic and thus it can be used with any model, regardless of its complexity.
  • The model used locally is a linear regression, which is easy to interpret and provides a decomposition of the impact of the drivers, measured as a magnitude and a sign, so we know which driver has had the largest impact and whether it has impacted the prediction positively or negatively.

Finally, we want to sketch a simplified explanation of how LIME works. For a detailed understanding we encourage you to explore the code and the paper, bearing in mind that, as said before, they are geared towards classifiers.

For a given point the algorithm works as follows:

  1. A new dataset of perturbations of the point is created.
  2. The model is used to predict all these new points (including the original one).
  3. A linear regression is fitted with the perturbations as inputs and the predictions as outputs.
  4. For each variable, we multiply the coefficient obtained by the linear model by the value of that variable at the original point to get the impact.
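Put together, a minimal sketch of this procedure for a regressor could look as follows (the function and parameter names are ours, and unlike the real LIME we skip the proximity weighting of the perturbed samples):

```python
# Minimal sketch of a LIME-style local explanation for a regressor.
# Assumptions: the black-box model exposes a `predict` method, all features are
# numeric, and perturbations are Gaussian; real LIME also weights each sample
# by its proximity to the original point, which we omit here for brevity.
import numpy as np
from sklearn.linear_model import LinearRegression

def explain_point(model, x, n_samples=5000, scale=0.1, random_state=0):
    """Return the per-feature impacts for the prediction of `model` at point `x`."""
    rng = np.random.default_rng(random_state)
    # 1. Create a dataset of perturbations around the original point.
    perturbations = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    perturbations = np.vstack([x, perturbations])      # include the original point
    # 2. Use the black-box model to predict all these points.
    predictions = model.predict(perturbations)
    # 3. Fit a local linear surrogate on the perturbations.
    surrogate = LinearRegression().fit(perturbations, predictions)
    # 4. Impact of each variable = its local coefficient times its original value.
    return surrogate.coef_ * x
```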

Conclusions

It is very important to understand that Data Analytics is not only about super accurate models and complex optimization approaches to obtain them. We have to remember that we are using a data-driven approach to solve a business problem and, for that reason, we have to focus on the insights we can extract from the data or the model we are building. Sometimes, a simple data exploration can solve the problem. In fact, we could think of many more complex approaches to attack the problem of interpretability, but a simple local regression is already powerful enough.

These techniques can also be applied to other kinds of problems, such as Credit Scoring, where regulators require the models to be interpretable. For that reason, the most widely used models for this classification problem are decision trees and logistic regression. With this approach, however, more accurate, and therefore more profitable, models can be used.

Finally, it is not always the case that we need to focus on explaining our model. There are some cases where accuracy is more important, for example in Automated Driving Systems or Financial Trading, where the error itself is what matters most. In those cases, it is preferable to work on complex feature engineering techniques, even if they obscure any possible comprehension of the final result.


Manuel Morán Peláez is a mathematician and computer scientist interested in Advanced Data Analytics and Machine Learning, currently working as a Data Scientist at Geoblink.