Building Explainable AI (XAI) Applications with Question-Driven User-Centered Design

Photo by Jac Alexandru on Unsplash

Who needs explainability and for what?

The short answer might be “anyone who comes into contact with AI”. Here are some common user groups that may demand explainability, along with what they might use AI explanations for:

  • Model developers, to improve or debug the model.
  • Business owners or administrators, to assess an AI application’s capability, regulatory compliance, etc.
  • Decision-makers, who are direct users of AI decision support applications, to form appropriate trust in the AI and make informed decisions.
  • Impacted groups, whose lives could be affected by the AI, to seek recourse or contest its decisions.
  • Regulatory bodies, to audit for legal or ethical concerns such as fairness, safety, privacy, etc.
Example tasks people perform with AI and questions they ask to understand AI

Question-Driven Explainable AI

These examples demonstrate that, while there are many types of users and many types of tasks requiring XAI, we can understand users’ explainability needs by the kinds of questions they ask. In our human-computer interaction (HCI) research, published at the ACM CHI 2020 conference (where it received a Best Paper Honorable Mention), we looked across 16 AI products at IBM and summarized the common questions their users ask. Based on these questions, we developed an XAI Question Bank, which lists common user questions about AI, categorized into nine groups (bolded in the examples above):

  • How: asking about the general logic or process the AI follows, to get a global view of how it works.
  • Why: asking about the reason behind a specific prediction.
  • Why Not: asking why the prediction is different from an expected or desired outcome.
  • How to change to be that: asking about ways to change the instance to get a different prediction.
  • How to remain to be this: asking how much the instance can change and still receive the same prediction.
  • What if: asking how the prediction changes if the input changes.
  • Data: asking about the training data.
  • Output: asking what can be expected or done with the AI’s output.
  • Performance: asking about the performance of the AI.
A suggested mapping chart for user questions and example XAI techniques to answer these questions
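To make this mapping concrete, below is a minimal sketch, assuming a toy scikit-learn setup rather than any of the products mentioned above, of how three of these questions might be answered in code: global permutation importance for How, a simple input perturbation for What if, and standard metrics for Performance. Dedicated XAI techniques (for example, SHAP for Why questions or counterfactual explainers for Why Not) would slot into the same structure.

```python
# Minimal sketch: answering a few XAI Question Bank questions with scikit-learn.
# The dataset, model, and perturbation below are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "How": a global view of which features drive the model's predictions.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(data.feature_names, imp.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
print("Top global features:", top_features)

# "What if": how does the prediction change if one input value changes?
instance = X_test[:1].copy()
print("Original prediction:", model.predict_proba(instance))
instance[0, 0] *= 1.2  # hypothetical 20% increase in the first feature
print("Perturbed prediction:", model.predict_proba(instance))

# "Performance": how well does the model do overall?
print(classification_report(y_test, model.predict(X_test)))
```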

Question-Driven User-Centered Design for XAI

One reason we wanted to map out the space of user questions and corresponding XAI techniques is to encourage product teams to follow a user-centered design (UCD) process: start by understanding user needs, and use that understanding to guide the choice of XAI technique. With UCD we can prioritize user experience and avoid incurring technical debt. Towards this goal, working with IBM Design for AI, we developed a UCD method and a design thinking framework, following IBM Design’s long tradition of enterprise design thinking practices. Below we give a brief overview of this UCD method, which you can follow to build explainable AI applications. More details, including a real use case of designing an explainable AI application for patient adverse event risk prediction, are described in our recent paper.

XAI application for healthcare adverse event risk prediction based on question-driven XAI design
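As a toy illustration of how “Why Not” and “How to change to be that” questions could be answered in a risk-prediction setting, the sketch below performs a brute-force counterfactual search. The synthetic data, logistic regression model, and feature names are hypothetical stand-ins, not the model or patient data from the case study in the paper.

```python
# Toy counterfactual search on synthetic data with hypothetical feature names,
# illustrating "Why Not" / "How to change to be that" style explanations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "num_prior_admissions", "lab_score"]  # hypothetical
X = rng.normal(size=(500, 3))                  # synthetic standardized features
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic "adverse event" label
model = LogisticRegression().fit(X, y)

patient = X[:1].copy()
original = model.predict(patient)[0]
print("Current prediction:", original)

# Look for the smallest single-feature change that flips the prediction.
found = None
for j, name in enumerate(feature_names):
    for delta in np.linspace(-2.0, 2.0, 81):
        candidate = patient.copy()
        candidate[0, j] += delta
        if model.predict(candidate)[0] != original and (
                found is None or abs(delta) < abs(found[1])):
            found = (name, delta)

if found:
    print(f"Changing {found[0]} by {found[1]:+.2f} would flip the prediction")
else:
    print("No single-feature change in this range flips the prediction")
```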


Vera Qingzi Liao is an HCI researcher at the Microsoft Research FATE group, studying human-AI interaction.