What is an Ideal Explainable AI Decision System?

Naveen Kumar
Brillio Data Science
5 min read · Nov 7, 2022
Understanding True Nature (Photo by Manuel Liniger on Unsplash)

Abstract:

With the rise of productionized AI/ML solutions, the MLOps framework is helping organizations pay down different kinds of debt, such as technical debt, model debt, and data debt. Building AI/ML models within a responsible AI framework is becoming essential in every organization. Organizations no longer simply consume the predictions coming out of AI/ML solutions; they put real effort into understanding the rationale by which an AI/ML model arrived at a particular prediction. Business stakeholders anchor their decision-making on the rationale explanations produced within AI/ML models. An ideal explanation for a decision is a chronological breakdown of the logical steps used to arrive at the prediction. In this article, we discuss the impact and requirement gap between AI decisions and the human decision-making process, and propose a design for an Ideal Explainable AI Decision System.

Introduction:

Businesses depend on AI/ML applications to make high-dollar-value decisions every day, and they very often explicitly ask for an explanation or evidence to substantively support those decisions. Analysts and engineers try to capture convincing examples along with model performance. Human intuition is to gather convincing evidence that a prediction is correct, even when the ground truth says otherwise. Merely understanding how the algorithm works does not reveal the rationale for a prediction, and the human brain is not configured to follow an explanation that runs to more than a few hundred chronological steps.

Related Work:

There have been many discussions of Explainable AI versus Interpretable ML, and unwritten rules have emerged about which to use in which scenario. Explainable AI uses model-agnostic algorithms that distill a prediction into a sequence of logical steps; that distillation may not be 100% faithful, because it is derived from a subset of selected samples. Interpretable ML, on the other hand, builds the AI/ML system from models that are interpretable by nature. Both methods have drawbacks: explainable AI sacrifices fidelity of the rationale explanation, whereas interpretable ML sacrifices prediction performance. The image below shows the tradeoff between interpretability and performance; a small code sketch of the same tradeoff follows the figure.

Interpretability versus performance trade-off for common ML algorithms. Source: [2]
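
To make the tradeoff concrete, here is a minimal sketch assuming scikit-learn and a synthetic dataset: a black-box gradient-boosting model is approximated by a shallow "surrogate" decision tree (the explainable-AI route), and the same shallow tree is also trained directly on the labels (the interpretable-ML route). The surrogate's agreement with the black box measures explanation fidelity; the direct tree's test accuracy measures what interpretability costs in performance.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: strong performance, weak interpretability.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Interpretable-by-design model: the same shallow tree trained on true labels.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)
interpretable.fit(X_train, y_train)

print("black-box accuracy:    ", accuracy_score(y_test, black_box.predict(X_test)))
print("surrogate fidelity:    ", accuracy_score(black_box.predict(X_test), surrogate.predict(X_test)))
print("interpretable accuracy:", accuracy_score(y_test, interpretable.predict(X_test)))

Typically the surrogate's fidelity falls short of 100%, which is exactly the "accuracy of the rationale explanation" gap described above, while the interpretable tree trails the black box on raw accuracy.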

Intuition:

Given the existing challenges and the solutions available, the following points need to be kept in mind:

· Knowing how the AI model operates does not, by itself, help the Data Scientist justify the rationale behind a decision.

· An explanation is just a short algorithm or model whose rules are generalized over the data.

· Analysts and engineers often use explanations to supply rationalizing factors in support of the ML model's prediction, rather than to understand its true nature.

· Unintentionally, analysts synthesize convincing explanations for an ML model's predictions and decisions.

· Having such an XAI model in place often ends up tuning the AI/ML model toward synthesized, convincing explanations rather than genuinely interpreting model behavior.

For example, if an AI model has predicted someone to be a "Convict", Herbert Simon's work on "bounded rationality" suggests that the human brain can handle only limited algorithmic complexity and is configured to find, or synthesize, convincing evidence that the decision is correct.

So far we have discussed what the technology can do; it is equally important to understand how humans (Data Scientists, Business Analysts, and stakeholders) consume and respond to such insights. We can broadly categorize the human decision-making process into two groups:

1. Quick & Inexplicable Decision Making (going with the gut)

2. Rational Decision Making

Quick & Inexplicable Decision Making (Going with the Gut)

As humans, we often make decisions based on gut feeling. These decisions come very quickly, yet it is very difficult to explain to ourselves or others the reasoning behind them. The benefit of a gut decision is that it is fast and often turns out to be the best one.

Rational Decision Making

At other times we make rational decisions by considering several factors and analyzing the pros and cons of the decision, which takes a decent amount of time.

What should an ideal Explainable AI system look like?

An ideal Explainable AI Decision system should deliver both quick and rationally explained decision-making.

An ideal Explainable AI Decision system can be built in three stages (a sketch of how they fit together follows the list):

1. Build an Explainable AI Model

2. Explain the decision and recommend action items

3. Perform the recommended task and psychologically evaluate the model's effectiveness
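
As a hypothetical sketch of how those three stages could hang together in code: every name below (Decision, explain_prediction, recommend_action, record_feedback) is an illustrative assumption, not an established API, and stage 1's model output is hard-coded for brevity.

from dataclasses import dataclass, field

@dataclass
class Decision:
    prediction: str               # stage 1: output of the explainable AI model
    explanation: list[str]        # stage 2: chronological reasoning steps
    recommended_action: str       # stage 2: suggested next step for the user
    feedback: list[dict] = field(default_factory=list)  # stage 3: evaluation data

def explain_prediction(prediction: str, features: dict) -> list[str]:
    # Stage 2 (assumed): in practice these steps would come from a surrogate
    # model or rule extraction; here we emit placeholder steps.
    return [f"{name} = {value} supports '{prediction}'"
            for name, value in features.items()]

def recommend_action(prediction: str) -> str:
    # Stage 2 (assumed): map a prediction to a business action.
    return {"churn": "offer retention discount"}.get(prediction, "no action")

def record_feedback(decision: Decision, task_succeeded: bool, trust: int) -> None:
    # Stage 3: capture the task outcome and a user trust rating (1-5) so the
    # explanation's effectiveness can be evaluated psychologically.
    decision.feedback.append({"task_succeeded": task_succeeded, "trust": trust})

# Usage: stage 1 would be a real explainable model; we hard-code its output.
prediction = "churn"
decision = Decision(
    prediction=prediction,
    explanation=explain_prediction(prediction, {"tenure_months": 2, "complaints": 5}),
    recommended_action=recommend_action(prediction),
)
record_feedback(decision, task_succeeded=True, trust=4)
print(decision)

The point of the design is that the explanation and the recommended action travel with the prediction, and the feedback loop in stage 3 is what makes the system's effectiveness testable rather than assumed.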

The Defense Advanced Research Projects Agency (DARPA) formulated a program to help users better understand and trust artificially intelligent systems. To provide a psychological understanding of explanation in a prescriptive manner, it proposed to:

1. Produce a summary of all theories of explanation

2. Develop a model of explanation from those theories

3. Validate the explanation model against evaluation results

Psychological model of explanation. Yellow boxes illustrate the underlying process, green boxes illustrate measurement opportunities, and white boxes illustrate potential outcomes. Source: [1] (DARPA)
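
One way to read the "measurement opportunities" in that model is as before/after comparisons of user trust and task performance once an explanation is shown. Here is a minimal sketch under that assumption; the record fields and metric names are made up for illustration.

from statistics import mean

def explanation_effect(before: list[dict], after: list[dict]) -> dict:
    """Each record: {"trust": rating on a 1-5 scale, "task_correct": bool}."""
    return {
        "trust_shift": mean(r["trust"] for r in after)
                       - mean(r["trust"] for r in before),
        "performance_shift": mean(r["task_correct"] for r in after)
                             - mean(r["task_correct"] for r in before),
    }

before = [{"trust": 2, "task_correct": False}, {"trust": 3, "task_correct": True}]
after  = [{"trust": 4, "task_correct": True},  {"trust": 4, "task_correct": True}]
print(explanation_effect(before, after))  # positive shifts suggest the explanation helped

A positive trust shift with a negative performance shift would be a warning sign: the explanation is persuasive without actually helping users make better decisions, which is exactly the "synthesized explanation" failure mode discussed above.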

Conclusion:

· Businesses need to understand the impact of the psychological shift toward synthesized explanations before demanding them. Explainable AI can backfire when its explanations are synthesized.

· XAI is not a silver bullet that caters to everyone's needs. Different user types require different types of explanations; for example, a doctor explains a diagnosis differently to a fellow doctor, a patient, and a medical review board.

· An ideal Explainable AI decision system can improve the trust, adoption, and efficiency of end users.

· An ideal Explainable AI decision system should be aligned and tested against the user's mental model and against feedback from tasks performed on the system's recommendations.

Reference:

[1] https://doi.org/10.1002/ail2.61

[2] https://docs.aws.amazon.com/images/whitepapers/latest/model-explainability-aws-ai-ml/images/interpretability-vs-performance-trade-off.png
