
Explainable AI and Design

Jasmine E · Published in xaipient · Oct 13, 2020


The most useful and accurate AI models also tend to be the most complex, and the more complex a model is, the more challenging it is to comprehend and trust. This is the black box dilemma:

Why did it make that prediction?

AI is not infallible, and it increasingly operates in an opaque way. This severely limits the adoption of advanced AI models in critical settings. The goal of Explainable AI (XAI) is to develop techniques to help users better understand and trust AI models. These users include: (a) those developing the models, such as data scientists or ML engineers, (b) those using the models to assist in their decisions, such as loan underwriters or doctors, (c) those impacted by the models, such as loan applicants or patients, and (d) those regulating or auditing the models to ensure they are ethically sound and compliant with legal requirements.

In order to be effective, explanations need to be human-friendly, and this requires a blend of XAI techniques and human-centered design. In this article, we will explore some design ideas to enhance the human-friendliness of AI model explanations.

A good example of the black box dilemma can be found in the medical field. Let’s say you build a model that can help predict lung cancer based on a patient’s symptoms and records. You build it into a web product where you efficiently and effectively train, test, and scale your model to get better accuracy and compelling predictive value. You now want to sell it to a clinic, but they are hesitant to purchase your new groundbreaking product, because you and your model cannot explain why it’s predicting that a person is likely to develop lung cancer.

The lack of trust and transparency, and the potential for bias, are problems for the doctor who wants the model’s assistance in making a prediction; they are also problems for the patient who needs a reason why they are more susceptible to lung cancer. What if the model was primarily trained on certain races or geographies, and is therefore inaccurate for people outside of those populations?

In the absence of a human-friendly design, a typical explanation for a model predicting lung cancer risk might look like the following: on the left are what are known as feature attributions, and on the right is a rule-based explanation.

Typically, attributions are visually represented by a table that breaks down which factors collectively contribute to the total risk. Such charts and technical language may be suitable for a technical audience but are not easily approachable for the non-technical users involved with the models. Even for technical users, there are ways to convey the same information more effectively so it is readily perceived. Now let’s explore some design ideas to significantly enhance the human-friendliness of these explanations.
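To make the two artifact types concrete, here is a minimal sketch, assuming a trained scikit-learn tree ensemble and the shap library; the rule text and its coverage/precision numbers are invented for illustration and are not output from XaiPient’s system.

```python
# Minimal sketch of the two raw explanation artifacts, assuming a trained
# scikit-learn tree ensemble `model` and the `shap` library; not XaiPient's
# actual pipeline.
import shap

def feature_attributions(model, patient_row, feature_names):
    """Per-feature contributions to one patient's predicted risk.
    `patient_row` is a 2D array of shape (1, n_features)."""
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(patient_row)
    if isinstance(sv, list):   # some shap versions return one array per class
        sv = sv[1]             # take the positive ("high risk") class
    contributions = sv[0]      # attributions for the single row passed in
    return sorted(zip(feature_names, contributions),
                  key=lambda kv: abs(kv[1]), reverse=True)

# A rule-based explanation is a human-readable predicate describing when the
# model predicts high risk; this rule and its statistics are hypothetical.
rule_explanation = {
    "rule": "IF active smoker >= 10 years AND age > 55 THEN predicted risk is high",
    "coverage": 0.08,    # fraction of patients the rule applies to (hypothetical)
    "precision": 0.72,   # fraction of those predicted high risk (hypothetical)
}
```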

Narratives and visuals make explanations more approachable

Using a combination of storytelling narrative and visuals, we can make explanations more approachable through micro-interactions and micro-visualizations. In the examples below, the user is able to see that they have a 30% risk of developing lung cancer within 5 years, with two supplementary XAI components that break down that prediction: Top Factors and Similar Cases.

The Top Factors component is a visually enhanced way to present the “feature attributions” above, whereas the Similar Cases component presents the above “Rule-based” explanation in a way that is more immediately understandable. Importantly, these components are interactive, and how the user interacts with them is valuable not only to the user but also to us as designers. Interaction design can inform the explanation algorithm which types of explanations the doctor or patient finds more valuable, and this can allow the explanation modules to adaptively improve over time.
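As a rough sketch of how this could be wired up, the snippet below shows a hypothetical explanation payload that components like Top Factors and Similar Cases might render, plus a minimal way to record which parts a user engages with; all field names, values, and the logging scheme are assumptions for illustration, not XaiPient’s API.

```python
# Hypothetical payload an explanation service might send to the UI, and a
# minimal scheme for logging which components the user engages with.
explanation_payload = {
    "prediction": {"risk": 0.30, "horizon_years": 5},
    "top_factors": [
        {"feature": "smoker_10plus_years", "contribution": 0.12},
        {"feature": "age_bracket_55_64", "contribution": 0.07},
        {"feature": "family_history", "contribution": 0.05},
    ],
    "similar_cases": {"count": 105, "developed_cancer": 32},
}

interaction_log = []

def record_interaction(user_role, component, action):
    """Capture which explanation components users actually engage with,
    so the explanation modules can be tuned toward what users find useful."""
    interaction_log.append(
        {"role": user_role, "component": component, "action": action}
    )

record_interaction("doctor", "top_factors", "expanded_factor:smoker_10plus_years")
```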

Interaction design can also allow users to drill down from the top contributor for more details. Taking the smoker factor as an example, the user can scan the different categories of smokers and read that being an active smoker for 10+ years contributes 12% to the model prediction.

Top Factor: Smoker For 10+ Years

Rule-based explanations are especially valuable for allowing domain experts to influence model development: rules are readily comprehensible to them, they can judge whether a rule accords with their intuition, and they can perhaps edit the rule to improve it. The Similar Cases component (see below) presents the rule shown above using an interaction that makes it readily understandable: here we catch the audience’s attention with an aesthetic representation of the data, where the scattered dots illustrate the cases similar to the persona, and the red-orange color marks the cases that developed lung cancer within the next five years.

105 Cases Similar to You
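One plausible way to back a Similar Cases view is a nearest-neighbor lookup over historical patients. The sketch below uses scikit-learn on synthetic data; the feature choices are arbitrary, and k=105 simply mirrors the example above.

```python
# Retrieve the k most similar historical patients in feature space and report
# how many of them went on to develop lung cancer. Data here is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
past_patients = rng.normal(size=(5000, 8))   # standardized patient features
outcomes = rng.integers(0, 2, size=5000)     # 1 = developed cancer within 5 years

nn = NearestNeighbors(n_neighbors=105).fit(past_patients)

def similar_cases(patient_row):
    """Summarize the outcomes of the 105 nearest historical cases."""
    _, idx = nn.kneighbors(patient_row.reshape(1, -1))
    neighbors = idx[0]
    return {
        "count": len(neighbors),
        "developed_cancer": int(outcomes[neighbors].sum()),
    }

print(similar_cases(past_patients[0]))
```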

Interaction design can help highlight anomalies

In the above example we showed the top contributions to a specific model prediction. Below we show the aggregate global contributions of various factors. This is especially useful for domain experts (e.g., doctors in this case) to “sanity-check” a model. For example, a doctor may find it suspicious that the age bracket 25–34 has a disproportionately large impact. This may indicate an issue in the underlying training data or labels (e.g., certain races or geographies may be over-represented), and the doctor may flag this as incorrect. This feedback can be captured by the explanation system and help improve the training labels, leading to a re-trained, more accurate model.
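A common way to produce such an aggregate view is to average the magnitude of per-patient attributions for each feature. The sketch below uses synthetic attribution values and hypothetical feature names, including an age_25_34 bracket like the one a doctor might flag.

```python
# Aggregate per-patient attributions into a global importance ranking.
import numpy as np

def global_importance(all_attributions, feature_names):
    """Rank features by mean absolute attribution across all patients."""
    mean_abs = np.abs(all_attributions).mean(axis=0)
    return sorted(zip(feature_names, mean_abs), key=lambda kv: kv[1], reverse=True)

# Synthetic stand-in for per-patient attributions (n_patients x n_features),
# e.g. SHAP values computed over the training set.
rng = np.random.default_rng(1)
demo_attributions = rng.normal(size=(1000, 4))
ranking = global_importance(
    demo_attributions,
    ["smoker_10plus_years", "age_25_34", "family_history", "bmi"],
)
# A domain expert reviewing `ranking` might flag an unexpectedly dominant
# factor (such as the 25-34 age bracket) and prompt a check of the data.
print(ranking)
```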

In summary, Explainable AI algorithms are needed to generate the core, raw explanation artifacts (e.g., attributions, patterns, rules) that reliably show the true rationale behind model predictions. But to be truly effective in gaining human trust, these must be augmented with design that makes the explanations approachable and easily understandable, and that allows domain experts to provide feedback. Interaction design can help inform key decision makers which explanations are valuable and open the door to automatically improving explanations and even model accuracy.

XaiPient is fundamentally re-imagining AI explainability with the human end-user in mind. Partner with us: xaipient.com. Follow us on Twitter: @XaiPient

Follow our Dribbble for more XAI design components.
