Transparency in Decision-Making: Advantages of a Rule-Based Approach — Part 1/2

Tobias Schaefer
StratyfyAI
9 min read · Aug 31, 2019


by Tobias Schaefer, Zachary Hervieux-Moore, Norvard Khanaghyan, and Dmitry Lesnik

Data and Decisions — What about Explanations?

We live in the age of data-driven decisions. With easy access to large amounts of data and efficient off-the-shelf algorithms for analysis, automated systems decide on fraud alerts, creditworthiness, and even who is called in for a job interview. Often, however, there is a need — or even a regulatory requirement, alongside the basic human desire to understand — to explain how a decision was made. Due to their lack of transparency, off-the-shelf machine learning methods can be confusing, and sometimes even misleading.

Using the clarity of rules in advanced approaches, such as those developed by Stratyfy (see the follow-up paper for details), it is possible to develop models that enjoy built-in transparency. Rule-based models are human-interpretable and make it easy to explain decisions — whether for external use (e.g. customers or regulators) or for internal use (e.g. strategy and product development).

Recently, novel tools have been developed to help off-the-shelf machine learning approaches address their biggest weaknesses: transparency and interpretability. One of these tools is LIME (Local Interpretable Model-Agnostic Explanations), which can be used to create explanations for non-transparent classification methods such as GBMs (Gradient Boosting Methods) or ANNs (Artificial Neural Networks). In this two-part article, we look at a simple example that illustrates how LIME works and the implications of the local explanations it provides. We also introduce a new, distinct approach to solving the issue of explainability in machine learning.

Local Explanations can be Confusing

Let us look at data sampled from a distribution corresponding to a two-dimensional sigmoid function (details about the theory behind this classifier can be found in the technical appendix). The distribution is constructed such that the 1st and 3rd quadrants correspond to the positive class (‘one’), and the 2nd and 4th to the negative class (‘zero’). The figure below shows a histogram based on 10,000 data points, with positive outcomes clustering in quadrants I and III. Red indicates a greater likelihood of the positive class and blue of the negative class.

Example data with clustering of positive outcomes in quadrants I and III
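
For readers who want to reproduce a similar picture, here is a minimal sketch of how such a sample could be drawn; the factor 5 in the exponent and the sample size of 10,000 follow the text and the technical appendix, while the sampling square, random seed, and plotting details are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Sample 10,000 points uniformly on the square [-2, 2] x [-2, 2] (assumed domain)
X = rng.uniform(-2, 2, size=(10_000, 2))

# Probability of the positive class, p(x, y) = 1 / (1 + exp(-5xy))
p = 1.0 / (1.0 + np.exp(-5.0 * X[:, 0] * X[:, 1]))

# Draw class labels: 'one' with probability p, 'zero' otherwise
labels = (rng.uniform(size=len(p)) < p).astype(int)

# Binned plot of the average label: red where 'one' dominates (quadrants I and III)
plt.hexbin(X[:, 0], X[:, 1], C=labels, gridsize=40, cmap="coolwarm")
plt.colorbar(label="fraction of positive outcomes")
plt.xlabel("x"); plt.ylabel("y")
plt.show()
```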

A common approach to analyzing this data is to run a classification algorithm, such as a GBM or an ANN, and then to use LIME to create explanations for particular decisions. With this methodology, we first train a non-transparent classification model (e.g. a GBM or an ANN) on the data. Next, we run LIME on this trained classifier in order to create explanations. In particular, we are interested in quantifying how important each variable (x or y) is for the decision about the output (one or zero in this example).
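
A minimal sketch of this workflow, assuming the synthetic sample (X, labels) from the previous snippet, scikit-learn's gradient boosting classifier, and the open-source lime package; these specific choices are illustrative and not claimed to match the exact setup behind the figures.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a non-transparent classifier on the sampled data (X, labels from above)
clf = GradientBoostingClassifier().fit(X, labels)

# Build a LIME explainer for tabular data with the two features x and y
explainer = LimeTabularExplainer(
    X, feature_names=["x", "y"], class_names=["zero", "one"], mode="classification"
)

# Explain a single prediction, e.g. at the point (1, 0)
exp = explainer.explain_instance(np.array([1.0, 0.0]), clf.predict_proba, num_features=2)
print(exp.as_list())  # local feature weights for this one explanation
```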

The problem with this approach is that LIME can only generate local explanations: the explanation depends on the point (x,y) in the plane, and no globally valid explanation is produced. The figure below shows the local explanations provided by LIME as bar graphs with two columns, x on the left and y on the right. The colors (red or green) illustrate in which direction the explanatory variable has impacted the output variable. The figure shows the different explanations across the (x,y) plane together with the histogram of the output variable.

Local explanations provided by LIME

For example, the explanation provided at the point (1,0) is entirely different from the explanation at the point (-1,0) because the role of the variable y is exactly the opposite at these two points: at (1,0), the impact of y is marked ‘green’ and at (-1,0), the impact of y is marked ‘red’. In words, the likelihood of the ‘one’ class increases with y at (1,0) but decreases with y at (-1,0). At both points, the variable x is not important. At the point (0,1), on the other hand, the role of y is negligible. Here, x dominates the decision, pulling it positively, as indicated by the solid green bar. At (0,-1), however, the bar for x is ‘red’. And for other points on the plane, the impact of the variables is different again: at (1,-1), for example, both variables impact the decision, and we have ‘red’ for x and ‘green’ for y. At (-1,1) the roles are exactly reversed. As you can tell, and as has been noted in other case studies, the local character of the explanations provided by LIME can be confusing. In our next post, we introduce a new, distinct approach to solving the issue of explainability in machine learning.
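
For readers following along with the earlier sketch, the same explainer can be queried at exactly these points to see the sign flips; the numeric weights will vary with the trained model, so only the structure of the output is suggested here.

```python
# Points discussed in the text; the sign of the y-weight (and x-weight) flips between them
points = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

for x, y in points:
    exp = explainer.explain_instance(
        np.array([float(x), float(y)]), clf.predict_proba, num_features=2
    )
    print((x, y), exp.as_list())
```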

Why does explainability really matter?

When it comes to decision-making, three aspects matter most: speed, accuracy, and transparency. Speed simply refers to how long it takes to make a decision. Accuracy is a measure of how good the decisions are. In the context of modeling credit risk, an accurate model is one that classifies low-risk clients as very creditworthy and high-risk clients as less creditworthy.

Transparency, on the other hand, is a far more complicated requirement of decision-making. In many situations, aside from the decision itself, we also need to provide an explanation for why a decision was made in a certain way. Which kind of explanation is acceptable depends on the situation. If a client is declined credit, an obscure explanation like “the computer said so” is clearly insufficient — from both a regulatory (Fair Lending) and an ethical point of view. Moreover, in today’s market, borrowers have options and require more information from their banking partners; details of the decision process are mandatory. This raises the question of which automated decision-making processes are actually suited to deliver acceptable explanations. If the decision can be explained in an appropriate way, we say that the process is transparent.

This goes hand in hand with how we, as humans, feel about decision-making in our daily lives. When we make an important decision, people might ask us for the reasons or factors that influenced it. An acceptable explanation of a decision often includes a list of pros and cons and a conclusion. This way of justifying decisions reflects the high-level functioning of our brain, namely being conscious of how we make decisions. As a side note, consider how much complexity the brain needed to develop to achieve such an advanced level of operation.

Artificial Neural Networks (ANN) as Black Boxes

Why is it so difficult to achieve transparency with off-the-shelf methods for making complex predictions based on data? To answer this question, let us briefly review one particular algorithm that is very successful in terms of accuracy: artificial neural networks (ANNs). The computational algorithm behind ANNs is inspired by our knowledge of brain biology as a network of neurons. In a basic neural network, we have an input layer, one (or several) hidden layers, and an output layer. Each neuron receives input signals from the neurons of the previous layer and produces an output signal by applying a sigmoid envelope to the total input signal. Typically, the inputs of a neuron are combined as a weighted sum, where the weights are calibrated in a supervised learning setting. Once calibrated, the ANN is able to make predictions regarding the output variable for new data inputs. In recent years, ANNs have become popular and powerful tools in machine learning. They are becoming easier to implement and routinely produce highly accurate results in reasonable computational times.
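
To make the weighted-sum-plus-sigmoid mechanics concrete, here is a toy forward pass through a network with one hidden layer; the weights are random placeholders, since in practice they would be calibrated by supervised training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# Toy network: 2 inputs -> 3 hidden neurons -> 1 output neuron
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # hidden-layer weights and biases
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # output-layer weights and bias

def forward(x):
    # Each neuron: weighted sum of the previous layer's outputs, passed through a sigmoid
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

print(forward(np.array([1.0, 0.0])))  # predicted probability for the input (1, 0)
```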

A major drawback of neural networks, however, is that the models represented by calibrated ANNs are rarely explainable. The model is encoded in the structure of the network together with the weights between its nodes, which, in almost all cases, lack any intuitive interpretation. Therefore, ANNs are often referred to as “black boxes”, reflecting the obscurity of their decision-making process. While an ANN may produce very accurate predictions, its knowledge remains hidden and cannot be extracted in any simple way. As we have seen, the local explainability provided by LIME often offers only unsatisfactory mitigation of this problem.

In certain applications, this black-box character of an ANN does not impede its usefulness. A common example is object recognition, where it is usually only important that the object is correctly identified, and often less important to know why the computer completed this task correctly. Or take chess as an example: if one cares only about winning, it may be of lesser importance to develop a human-interpretable model of how the computer actually decided on a certain move.

Stratyfy’s Rule-Based Models: Transparency and Accuracy

The benefits of a rule-based model that is both accurate and transparent are tremendous. Rule-based models can assist in making long-term business decisions, as they help to assess the needs of the client base. Moreover, they provide full control over the model: rules can easily be edited, removed, or added, and they make it possible to combine models of various origins.
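
As a purely illustrative sketch, and not a description of Stratyfy's actual technology, a transparent rule set for the two-dimensional example from the first half of this article could be as simple as the following; the rule texts and structure are hypothetical.

```python
# Hypothetical, human-readable rules for the two-dimensional example:
# each rule can be inspected, edited, removed, or replaced independently.
RULES = [
    ("x > 0 and y > 0 implies class 'one'", lambda x, y: x > 0 and y > 0, 1),
    ("x < 0 and y < 0 implies class 'one'", lambda x, y: x < 0 and y < 0, 1),
    ("otherwise class 'zero'",              lambda x, y: True,            0),
]

def classify(x, y):
    # The first matching rule decides, and its text serves as the explanation
    for text, condition, label in RULES:
        if condition(x, y):
            return label, text

print(classify(1.0, 0.5))   # (1, "x > 0 and y > 0 implies class 'one'")
print(classify(-1.0, 0.5))  # (0, "otherwise class 'zero'")
```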

Every day, transparency in automated decision processes becomes more important. While LIME and similar technologies help to shed some light on the decisions made by black-box algorithms, they yield only local explanations. And, as we saw previously, this type of explanation may be unsatisfactory or insufficient. Stratyfy’s rule-based models are entirely transparent from start to finish and provide global explanations. This makes it easy to build machine learning models on top of existing systems (e.g. scorecards), explain decisions to clients and regulators, and develop human-interpretable models for marketing and research.

CONTACT INFO: If you’d like to learn more about how Stratyfy can help you develop transparent rule-based models, please reach out to me or info@stratyfy.com.

Technical appendix: Details of the Sigmoid Classifier

In this appendix, we provide more details on the sigmoid classifier used as an example. The underlying probability distribution for the classifier depends on two variables x and y and is given by p(x,y) = 1/(1+e^(-5xy)).

Note that, in practice, this probability function is unknown and the data allows us to infer it only approximately. In this study, however, precise knowledge of the probability function enables us to understand theoretically what LIME is doing and how to interpret the local explanations.

In the following, we assume that the threshold is always set to 0.5, meaning that if p > 0.5 at a point, we classify the point as positive (a “one”), and if p < 0.5, it is classified as negative (a “zero”). Due to the presence of the saddle point at (0,0), we have p > 0.5 in quadrants I and III, whereas p < 0.5 in quadrants II and IV. Therefore, if we pick a point with x > 0, the effect of y will be the following: a positive y will increase p, and a negative y will decrease p. If, on the other hand, x < 0, then a positive y will actually decrease p and a negative y will increase p. It is useful to plot the explanations provided by LIME together with a contour plot of the probability distribution:

Figure 2: Local explanations provided by LIME for the perfect classifier.

As expected, the explanation provided at the point (1,0) is different from the explanation at the point (-1,0) as the role of the variable y is exactly the opposite at these two points.

In this case, at (1,0), the probability of the positive class is positively related to y. That is, if one moves to (1, -0.1), the probability of a positive example should decrease, and if one moves to (1, 0.1), it should increase.
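
This behavior can be verified directly from the appendix’s definition of p; a short calculation of the partial derivative gives

```latex
\frac{\partial p}{\partial y}
  = \frac{\partial}{\partial y}\,\frac{1}{1+e^{-5xy}}
  = 5x\,p(x,y)\bigl(1-p(x,y)\bigr),
```

so the sign of the effect of y equals the sign of x. At the two points mentioned above,

```latex
p(1,\,0.1) = \frac{1}{1+e^{-0.5}} \approx 0.62,
\qquad
p(1,\,-0.1) = \frac{1}{1+e^{0.5}} \approx 0.38 .
```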
