Explaining Decision Optimization Prescriptions

Alain Chabrier · 6 min read · Mar 29, 2019

Explainability is a hot topic. Data Science models are used to trigger recommendations such as “accept” or “refuse” a loan, and the need to answer “why this recommendation?” is growing. Customers impacted by recommendations are asking “why”, so companies want to be able to answer; in some geographies, regulations even forbid the use of such applications in certain areas when no explanation is available.

While some explanation techniques are beginning to appear for Machine Learning models, we illustrate here how explainability works with Decision Optimization models.

Explanations

An explanation is, according to Wikipedia, “a set of statements […] to describe a set of facts which clarifies the causes, context, and consequences of those facts”. Indeed, when we ask for an explanation, we expect a set of statements, and in particular a set of causalities. Let’s take an example. Imagine a loan acceptance system that uses a rules management system. A large set of rules is used, including one rule stating:

“If age is lower than 25, then the loan is refused.”

When John, aged 22, applies for a loan, his application is refused, and this rule, along with his age, can be used as an explanation for the recommendation. This type of explanation is very easy to obtain when the recommendation comes from a forward chaining rules system.
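To make this concrete, here is a minimal sketch in plain Python (not an actual rules management system) of how the fired rule itself can serve as the explanation; the rule and the applicant are the ones from the example above:

```python
# Minimal sketch: the rule that fires doubles as the explanation.
RULES = [
    ("age is lower than 25", lambda a: a["age"] < 25, "refused"),
]

def decide(applicant):
    for text, condition, outcome in RULES:
        if condition(applicant):
            # The triggering rule plus the matching fact form the explanation.
            return outcome, f"{text} (age={applicant['age']})"
    return "accepted", "no refusal rule fired"

decision, why = decide({"name": "John", "age": 22})
print(decision, "-", why)  # refused - age is lower than 25 (age=22)
```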

With other types of algorithms, this is more difficult. Let’s now take a shortest path example with the graph represented below. We want the shortest path from A to G. An algorithm like Dijkstra’s will easily provide the optimal solution A-B-D-G. But “why this recommendation?” In this case, there is no causality rule you can expose as an explanation. However, you can easily explain that “if the path starts with A-C, then we never reach G”, and “if the path starts with A-B-E, then the path to G is longer”.

Therefore the explanation here consists of showing that the alternatives are either infeasible or worse. In this small case, we can enumerate all the alternatives; in some real-life cases, it might be enough to counter only a few of them.
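As a sketch, here is this enumeration in Python, with hypothetical edge weights chosen to match the story above (A-B-D-G optimal, A-C a dead end with respect to G, A-B-E-G feasible but longer):

```python
# Hypothetical graph: the weights are illustrative assumptions, chosen so
# that A-B-D-G is optimal, A-C never reaches G, and A-B-E-G is longer.
GRAPH = {
    "A": {"B": 1, "C": 2},
    "B": {"D": 2, "E": 1},
    "C": {},            # dead end: no way to reach G from C
    "D": {"G": 1},
    "E": {"G": 4},
    "G": {},
}

def all_paths(node, goal, path=(), cost=0):
    """Enumerate every simple path from node to goal with its total cost."""
    path = path + (node,)
    if node == goal:
        yield path, cost
    for nxt, weight in GRAPH[node].items():
        if nxt not in path:
            yield from all_paths(nxt, goal, path, cost + weight)

paths = sorted(all_paths("A", "G"), key=lambda pc: pc[1])
best_path, best_cost = paths[0]
print("optimal:", "-".join(best_path), "cost", best_cost)
for p, c in paths[1:]:
    print("worse alternative:", "-".join(p), "cost", c)
```

Every alternative is either absent from the list (it never reaches G) or printed with a higher cost, which is exactly the explanation we were after.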

Decision Optimization

Newcomers to Decision Optimization (DO) also want explanations. In particular, when we tell them that DO provides optimal solutions to their problems, they want to understand the outcome. They look for an explanation.

Decision Optimization is a Prescriptive Analytics technique: it prescribes decisions for problems formulated as models with constraints and objectives. You provide some input data along with the model to the DO engine, and you get back a solution. Given enough time, the engine can even prove that the returned solution is optimal. Someone who is not familiar with the underlying technology will ask “why?” and request some explanation. The way the DO engine “solves” the problem may appear quite obscure, and it is almost impossible to state explicitly a list of individual reasons for the solution. On the other hand, it is very easy to answer questions like “why not this other solution?” by interacting with the problem and the engine.

Portfolio Allocation example

Let’s take an example and illustrate how this works.

We will use a portfolio allocation problem. The input is a list of possible portfolio investments with their current stock price, some recommendations, and some other characteristics, including the expected return (obtained using predictive methods).

The problem is to prescribe a set of investment allocations, with the objective of maximizing the expected return, but at the same time respecting some business constraints such as the maximum total value of the portfolio and some diversification constraints.
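To give an idea of the underlying mathematical model, here is a minimal sketch using the docplex Python API (the investments, return rates, and bounds below are hypothetical, not the actual scenario data):

```python
from docplex.mp.model import Model

# Hypothetical data: expected return rate and country per investment.
returns = {"Invest7": 0.08, "Invest11": 0.12, "Invest12": 0.05}
country = {"Invest7": "FR", "Invest11": "US", "Invest12": "FR"}
BUDGET = 1000  # assumed maximum total value of the portfolio
FR_CAP = 150   # assumed cap on French investments (diversification)

m = Model(name="portfolio")
alloc = m.continuous_var_dict(list(returns), lb=0, name="alloc")

# Business constraints: total budget and a per-country diversification cap.
m.add_constraint(m.sum(alloc.values()) <= BUDGET, "budget")
m.add_constraint(
    m.sum(alloc[i] for i in returns if country[i] == "FR") <= FR_CAP, "fr_cap")

# Objective: maximize the expected return of the allocation.
m.maximize(m.sum(returns[i] * alloc[i] for i in returns))

solution = m.solve()
print(solution)
```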

Using a tool such as the Modeling Assistant, available in Decision Optimization for Watson Studio, you can formulate this problem in natural language after choosing the selection domain. You get a model similar to this:

Note that here we have disabled the other objective, which is to minimize the dependencies between the investments. Being able to enable and disable constraints and objectives is particularly useful when you want to do some interactive ‘what-if’ analysis, as we do here.

If you solve this problem (using this input data and this model), you get an optimal solution with an expected return value of 1770.58.

Tell me: why invest in Invest11?

We see in the solution tab that Invest11 is allocated 259.6 euros.

So someone could ask: “Why should I invest in Invest11? What is the explanation?”

A way to answer this question with optimization is to solve another problem with all the same constraints, plus an additional constraint stating that I will NOT invest in Invest11. With the Modeling Assistant, it is easy to express this with a natural language query and add the constraint to the model:

If I solve this model, I get an optimal solution that is worse (1723.36 instead of 1770.58).

So if you don’t invest in Invest11, you get a smaller expected return. The answer to the “why” question is therefore: “you invest in Invest11 because you are maximizing expected return, and not investing in Invest11 would lead to a worse expected return”.
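In terms of the docplex sketch above, this what-if scenario is just one extra constraint and a re-solve:

```python
# What-if scenario: same model plus "do NOT invest in Invest11".
m.add_constraint(alloc["Invest11"] == 0, "exclude_invest11")
whatif = m.solve()
# The drop in objective value versus the original solve is the "price"
# of excluding Invest11, in other words the explanation for investing in it.
print(whatif.objective_value)
```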

Tell me: why not invest more in Invest7?

In this new solution, the investment for Invest7 is 108.74 euros.

Another question could then be: “why don’t I invest more in Invest7?”

Again, the answer to this question can be found by solving yet another version of the problem (this is what we call another scenario), where we also state in the model that the investment for Invest7 should be, for example, at least 200:

When we solve, the DO engine tells us that the problem is infeasible!

A nice feature of Decision Optimization for Watson Studio is that, in such a case, an infeasibility analysis is run which returns a minimal set of constraints in the model that makes the problem infeasible. We call this a conflict set.

In this case, we get the following list of conflicts:

Indeed, Invest7 is a French (FR) investment, and we cannot invest 200 in Invest7 if we also limit French investments to a total of 150. This is the explanation of “why we don’t invest more in Invest7”.
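With the docplex API, this analysis is available through the conflict refiner; here is a sketch continuing the hypothetical model above (including its assumed 150 cap on French investments):

```python
from docplex.mp.conflict_refiner import ConflictRefiner

# Force at least 200 on Invest7: combined with the 150 cap on French
# investments, this makes the model infeasible.
m.add_constraint(alloc["Invest7"] >= 200, "min_invest7")

if m.solve() is None:  # no solution: ask for a minimal conflict set
    conflict = ConflictRefiner().refine_conflict(m)
    conflict.display()  # lists the clashing constraints, e.g. fr_cap and min_invest7
```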

We could continue and explain why we don’t get a better solution by adding a constraint stating that the objective should be greater than the objective value found. We would get a conflict highlighting the constraints that limit the objective.
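Sketched on the same hypothetical model (assuming we start again from the original constraints, without the what-if additions), that last question would look like this:

```python
# Why is there no better solution? Require an objective strictly above the
# optimum already found; the resulting conflict names the binding constraints.
best = solution.objective_value  # optimum from the original solve
m.add_constraint(
    m.sum(returns[i] * alloc[i] for i in returns) >= best + 1, "beat_optimum")

if m.solve() is None:
    ConflictRefiner().refine_conflict(m).display()
```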

Conclusion

With Decision Optimization, you can, indirectly at least, explain the solutions by explaining why alternative solutions are not preferred and returned: the alternatives are either worse feasible solutions or infeasible ones, for which a refined conflict set is returned.

Sometimes, such interactions with models lead to the identification of errors in the formulation of the business model. At other times, they lead to identifying how the business process itself could be improved by modifying the rules.

You can start now with Decision Optimization for Watson Studio Cloud using the notebook beta.

Note that the Modeling Assistant functionality shown here is currently only available in Decision Optimization for Watson Studio Local.

alain.chabrier@ibm.com

https://www.linkedin.com/in/alain-chabrier-5430656/

@AlainChabrier
