Explaining system intelligence

Empower your users, but don’t overwhelm them

Vladimir Shapiro
Experience Matters
7 min read · Apr 11, 2018


This post is part of the SAP Design series on intelligent system design. You might also be interested in our previous post, 5 Challenges to Your Machine Learning Project.


One of the guiding design principles for intelligent systems is to empower end users. If we want people to trust machines, we must share information about the underlying models and the reasoning behind the results of algorithms. This is even more vital for business applications, where users are held accountable for every decision they make.

It’s widely accepted that intelligent systems must come with a certain level of transparency. There’s even a new term for it: explainable AI. But, that’s just the beginning. As designers, we need to ask ourselves how explainable AI is tied to user interaction. What do we need to think about when we explain the results and recommendations that come from built-in intelligence? And how can we make it a seamless experience that feels natural to users?

Does the user always need an explanation?

Before we dive in, let’s take a step back and ask ourselves whether we, as designers, really need to explain everything we display in the UI.

What my team has been learning from recent user tests is that if the quality of a prediction is high and the stakes are low, users probably won’t expect comprehensive explanations.

Our test scenario: Paul works for a large corporation and has an issue with his emails. When he opens an IT support ticket, the system helps him pick the correct category, based on his problem description.
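To make the scenario concrete, here is a minimal sketch of this kind of interaction. The classifier, category names, and confidence values are invented for illustration and are not the actual system: a few ranked suggestions are shown with their confidence, and the user keeps the final say.

```python
# Minimal sketch (not the actual implementation): the system proposes a few
# ticket categories and the user can accept or override the suggestion.

from typing import List, Tuple

def suggest_categories(description: str, top_k: int = 3) -> List[Tuple[str, float]]:
    """Return the top-k (category, confidence) suggestions for a problem description."""
    # In a real system this would call a trained text classifier; here we fake the scores.
    fake_scores = {
        "E-Mail & Calendar": 0.71,
        "Network & VPN": 0.18,
        "Hardware": 0.06,
        "Accounts & Passwords": 0.05,
    }
    ranked = sorted(fake_scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    suggestions = suggest_categories("I can't send or receive emails since this morning")
    for category, confidence in suggestions:
        print(f"{category}: {confidence:.0%}")
    # The user picks one of the suggestions or chooses a different category;
    # correcting a wrong suggestion is cheap, which is why no explanation was expected here.
```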

Design Exploration: Explaining input recommendations to the user. Copyright: SAP SE

We did our best to make the system recommendation as transparent as possible. But, in the end, none of our test participants were interested in the explanation. When we investigated further, we discovered three factors that explained the user response:

  • Low level of risk. The consequences of selecting the wrong category were not that dramatic. A wrong choice could be easily changed, corrected at a later stage, or even ignored.
  • High prediction quality. The quality of the predictions offered by the system was good enough to ensure that all users found an appropriate category within the top 3 proposals.
  • Good system performance. It was quick and easy for users to correct the chosen category, or even to experiment with the input on the fly (“learning by doing”).

In short, if users can easily eliminate or circumvent the negative impact of inaccurate system recommendations, they might not be that interested in explanations. But what are we going to do in other situations?

What do I need to explain?

To start, we need to break down our explanations into “global” and “local” components. AI experts call this the scope of interpretability for a model (a short sketch after the list below illustrates the difference):

  • Global interpretability helps the user understand the “big picture” — the entire logic of the model.
  • Local interpretability helps the user understand small regions of the data (such as clusters of records, or single records).
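To make the distinction tangible, here is a tiny sketch built around an invented linear scoring model: the weights describe the global logic of the model, while the per-feature contributions for one record form the local explanation.

```python
# Sketch of the two interpretability scopes for a simple, invented linear model.

weights = {"price": -0.5, "delivery_time": -0.3, "quality_rating": 0.8}
record = {"price": 1.2, "delivery_time": 0.5, "quality_rating": 0.9}

# Global interpretability: the weights describe the model's overall logic,
# independent of any single record.
print("Global view (model weights):", weights)

# Local interpretability: the per-feature contributions explain the score
# of this one record.
contributions = {feature: weights[feature] * record[feature] for feature in weights}
print("Local view (contributions for this record):", contributions)
print("Score:", round(sum(contributions.values()), 2))
```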

Example: Let’s get back to Paul, who in this scenario is a purchaser in a large company. Paul needs to find a new supplier for a material. Our intelligent system can propose a ranked list of suppliers for this specific material.

Design Exploration: List of suppliers ranked by specific criteria. Copyright: SAP SE

Here are some questions Paul might ask himself when he looks at this list:

  • Why do I see only these suppliers? What are the criteria for inclusion vs. exclusion in the ranking list?
  • Why does supplier B have this score?
  • Why is ____ not on the list?

Global Scope: Why do I see only these suppliers?

This question is an example of global interpretability scope. Paul wants to understand the logic behind the ranking on a general (global) level to gain an initial sense of trust. In other words: is the system competent enough to help me?

Design Exploration: Paul understands the basic components of supplier ranking and their relative importance. He may want to adjust them. Copyright: SAP SE

Local Scope: Why does supplier B have this score?

Paul wants to understand the details of the ranking system, based on the ranking for a concrete supplier. This may be a supplier he already knows, giving him another chance to verify the competence of the system. Or, it could be a supplier that Paul hasn’t dealt with before. In this case, Paul really wants to learn something new from the system.

Design Exploration: Paul can see the breakdown of the rating for supplier B in comparison to competitors. Copyright: SAP SE

Mixed: Why is my favorite supplier, XYZ, not on the list?

At first glance this seems to be a local question. But, Paul needs to understand both the global rules and the local effect to interpret the situation.

Design Exploration: Paul can search for supplier XYZ and check the rating, as he did previously for supplier B. In the detail view, he sees that his favorite supplier does not have favorable conditions for the material he needs to purchase. Copyright: SAP SE
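Putting the three questions together, the sketch below uses invented criteria, weights, and supplier data to show where each answer would come from: the inclusion rule and the criteria weights are the global part, the per-supplier breakdown is the local part, and the question about XYZ needs both.

```python
# Sketch of the three explanation questions from the supplier example.
# Criteria, weights, and data are invented; they only illustrate the idea.

from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    delivers_material: bool   # inclusion rule: must deliver the requested material
    price_score: float        # 0..1, higher is better
    delivery_score: float
    quality_score: float

WEIGHTS = {"price_score": 0.4, "delivery_score": 0.3, "quality_score": 0.3}

def breakdown(s: Supplier) -> dict:
    """Local explanation: how each criterion contributes to this supplier's score."""
    return {criterion: getattr(s, criterion) * weight for criterion, weight in WEIGHTS.items()}

def total(s: Supplier) -> float:
    return sum(breakdown(s).values())

suppliers = [
    Supplier("Supplier A", True, 0.9, 0.7, 0.8),
    Supplier("Supplier B", True, 0.6, 0.9, 0.9),
    Supplier("XYZ", False, 0.95, 0.8, 0.9),  # Paul's favorite supplier
]

# Global explanation: which criteria are used, how they are weighted,
# and which rule decides inclusion in the list.
print("Ranking criteria and weights:", WEIGHTS)

# Local explanation: the score and its breakdown for each listed supplier.
ranked = sorted((s for s in suppliers if s.delivers_material), key=total, reverse=True)
for s in ranked:
    print(s.name, round(total(s), 2), breakdown(s))

# Mixed explanation: why XYZ is missing combines the global inclusion rule
# with XYZ's local data (it does not deliver this material).
excluded = [s.name for s in suppliers if not s.delivers_material]
print("Excluded from the list:", excluded)
```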

How much can I explain at once?

Paul is overwhelmed by the explanations provided.

Providing explanations to end users is a perfect scenario for applying progressive disclosure — the design technique we use to avoid overwhelming the user with too much information at once.

Let’s explore how it could work in our IT ticket example (assuming an explanation is required):

Example of progressive disclosure for explanations. Copyright: SAP SE

The main elements of the explanation are displayed in a concise form on the main screen, with an option to drill down to more detailed information on secondary screens. The benefit of this approach is that users enjoy a simpler, more compact UI and only need to concern themselves with the details when necessary.
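Here is a minimal sketch of what progressive disclosure of an explanation could look like in code; the levels and explanation texts are invented for the IT ticket example.

```python
# Sketch of progressive disclosure for an explanation: the UI shows the
# short version by default and reveals more detail only on request.
# The levels and texts are invented for the IT ticket example.

EXPLANATION_LEVELS = [
    # Level 0: always visible next to the recommendation.
    "Suggested category: E-Mail & Calendar (based on your problem description).",
    # Level 1: shown when the user expands the explanation.
    "The suggestion matches terms such as 'send', 'receive', and 'email' "
    "that are typical for this category.",
    # Level 2: shown on a secondary screen for users who want the full picture.
    "Categories are proposed by a text-classification model trained on past "
    "tickets; the three most likely categories are offered as suggestions.",
]

def render(expanded_to: int = 0) -> None:
    """Print all explanation levels up to the one the user has drilled down to."""
    for level, text in enumerate(EXPLANATION_LEVELS[: expanded_to + 1]):
        print(f"[level {level}] {text}")

render()               # compact default view on the main screen
print("---")
render(expanded_to=2)  # user drilled down to the most detailed level
```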

How can I apply this to my own application?

You might be asking yourself how many levels of progressive disclosure you need to design, and what kind of information you need to offer at each level. This will depend largely on your use case, persona, and chosen level of automation, so there’s no universal pattern. However, the questions below might help you understand the scope of your own explainable AI design requirements, or even prompt you to explore completely new ideas.

  • Does the user expect an explanation?
    If the risks of an action are low and the results can be easily rolled back, users are not normally interested in an explanation of the system proposal.
  • What type of explanation can you provide?
    If the system generates a list of items using a specific machine learning algorithm, we have at least two things to explain: the model in general (global explanation), and the application of the model to each line item (local explanation).
  • Which level of explanation is expected in which context by which user?
    Depending on the use case, users can require different types of information in different contexts. The role of the user is key, and different user roles may require different types of explanatory information. If you need more than one level of detail, consider using the concept of progressive disclosure for explanations.
  • Are there other interactions that might extend explanations?
    Some interactions are natural extensions of explanations. For example, users who invest time in understanding the logic of the system might be interested in providing feedback. Or, users exploring a result at item level (local interpretability) might be interested in a comparison of the results on this level.
  • Is there a lifecycle for an explanation, and how might it look?
    If you are using progressive disclosure, ask yourself whether you need a time dimension. We assume that the need for repeated (static) explanations of the model can decrease or even disappear over time as the user gains more experience with the system. For example, explanations on the global interpretability level could disappear or be hidden over time, once the user understands the main principle of the underlying algorithm (see the sketch after this list).
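As a rough illustration of such a lifecycle, the sketch below hides a global explanation once it has been seen a few times or has been dismissed; the threshold and the flags are assumptions for the example, not a recommendation for a specific rule.

```python
# Sketch of a simple lifecycle rule for a global explanation: show it until
# the user has seen it a few times or explicitly dismisses it.

MAX_EXPOSURES = 3  # assumed threshold for illustration

def should_show_global_explanation(times_seen: int, dismissed: bool) -> bool:
    """Hide the 'how the model works' explanation once it has done its job."""
    return not dismissed and times_seen < MAX_EXPOSURES

# Example: a user who has opened the screen twice and never dismissed the explanation
print(should_show_global_explanation(times_seen=2, dismissed=False))  # True
print(should_show_global_explanation(times_seen=5, dismissed=False))  # False
```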

In a nutshell

It goes without saying that designers need to explain overall AI logic to users. But this alone won’t be enough to make AI part of an engaging and empowering user experience that adds obvious value to our solutions. If we want users to embrace the new intelligent capabilities, our explanations will need to be carefully designed as an integrated part of the UI. And this means telling users exactly what they need to know in their specific context — just enough information at just the right time.

Curious to learn more?

Of course, there’s much more to explainable AI than we’ve covered so far. What are the challenges for writing explanation texts? What is the role of explainable AI in building trust in intelligent systems? And how can explainable AI be integrated with user feedback functionality? I will be coming back to these topics in my upcoming posts.

In the meantime, I would be happy to hear about your own experiences and the challenges you face when designing explainable AI systems.

Stay tuned and please feel free to add your thoughts in the comments.

Special thanks to Susanne Wilding for reviewing and editing this article.
Several illustrations in this article were created with Scenes™ by SAP AppHaus.

Originally published at experience.sap.com on April 12, 2018.
