UX challenges for AI/ML products [1/3]: Trust & Transparency

Design considerations in Human-AI interactions. Part 1 of the series.

Nadia Piet
Published in AIxDESIGN
Feb 5, 2021 · 7 min read


PSA: It is a key value for us at AIxDESIGN to open-source our work and research. The forced paywalls here have led us to stop using Medium, so while you can still read the article below, future writings & resources will be published on other platforms. Learn more at aixdesign.co or come hang with us on any of our other channels. Hope to see you there 👋

This article was originally developed and released as Chapter 4 of the AI-Driven Design e-book series ‘Design Challenges in Machine Learning Products’ in collaboration with AWWWARDS, Adyen, and Joel van Bodegraven available for download here.

I repurposed it into a Medium article so it can be a ‘living document’ I can edit and add to as I learn, develop new insights, and find better examples.

Introduction

Every design material comes with unique opportunities and challenges. In the same way that designing an event poster is different from designing a mobile app, designing AI/ML-driven applications is different from designing conventional mobile apps.

As we begin to see AI features popping up in our day-to-day products and services, their challenges begin to materialize. They range from UX problems, such as explainability and user feedback mechanisms, to greater ethical challenges, such as echo chambers and data bias. Designing the user experience of adaptive, intelligent, and semi-autonomous systems presents a range of new challenges for us designers to take on.

When thinking or talking about AI, we often imagine utopian or dystopian futures. Rarely do we dare to acknowledge its impact as something we have a hand in shaping (or even: a design challenge). Technology may be neutral and deterministic, but its development is not. As designers, we can take the raw material of AI and turn it into user, business, and social value.

This piece is by no means all-encompassing and only scratches the surface of the complexities of designing for AI. Instead, it aims to provide a starting point for building a shared understanding of some of the complexities of designing AI/ML interactions, spark discussion, and invite everyone to take part in (re-)imagining how to design positive user/human experiences in algorithmic systems.

3 x 3 challenges

This article, which shares my research on designing Machine Learning Products, will address 3 themes: Trust & Transparency, User Autonomy & Control, and Value Alignment, highlighting 9 of the challenges that can arise within them, all supported with real-life examples.

Theme 1: Trust & Transparency

Not all AI features are visible to the user, nor should we always want them to be. When confronting our users with new systems, it is our job as designers to help them understand how those systems work, be transparent about their abilities, construct helpful mental models, and make users feel comfortable in their interactions. Transparency is key to building trust in the system and maintaining the user’s trust in your organization.

1. EXPLAINABILITY

Making sense of the machine and communicating to the user why the system acts the way it does.

Design considerations: Show users what input data or algorithms were used to train the model, and explain and visualize the model’s logic in gradual levels of detail, along with how to interpret the results it presents.

As Stanford HAI faculty member Nigam Shah points out, there are three main types of AI interpretability: “how a model works, why the model input produced the output, and an explanation that brings about trust in a model”.
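
To make the second type concrete, here is a minimal Python sketch, with an illustrative churn model and made-up feature names, of how per-feature contributions could be turned into a plain-language explanation shown next to a prediction:

    # Minimal sketch: turn a linear model's per-feature contributions into a
    # short, user-facing explanation. Model, data, and wording are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["weekly_active_days", "sessions_last_week", "support_tickets"]

    # Toy training data: predict whether a user is likely to churn (1 = churned).
    X = np.array([[1, 2, 3], [5, 12, 0], [0, 1, 4], [6, 15, 1]])
    y = np.array([1, 0, 1, 0])
    model = LogisticRegression().fit(X, y)

    def explain(sample):
        """Return the prediction plus the features that pushed it the most."""
        proba = model.predict_proba([sample])[0, 1]
        # For a linear model, coefficient * value approximates each feature's pull.
        contributions = model.coef_[0] * np.array(sample)
        ranked = sorted(zip(feature_names, contributions),
                        key=lambda pair: abs(pair[1]), reverse=True)
        top = ", ".join(name for name, _ in ranked[:2])
        return (f"Churn risk: {proba:.0%}. "
                f"This estimate is driven mostly by: {top}.")

    print(explain([1, 3, 2]))

The point is not the specific math but the translation step: the raw model internals stay behind the scenes, while the user sees which factors drove this particular result.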

→ Mixpanel
Mixpanel, the business analytics service company, uses machine learning to uncover user insights. Its anomaly detection feature helps pick up on unusual behavior. The image shows an example of an anomaly, together with the data the prediction is based on and the segments driving the anomaly, so that the user can make an informed decision about next steps. It even offers a “share” function to consult a colleague for a second opinion.

→ Airbnb
When Airbnb introduced ‘smart pricing’ based on supply and demand, adoption wasn’t as high as expected. They learned that users were happy to be informed by the algorithm, but still wanted to make the final decision themselves. Airbnb then built an interface where hosts can evaluate price changes and accept or reject each of the algorithm’s recommendations.
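
A minimal sketch of what such a “suggest, don’t decide” flow could look like behind the interface (the names and fields are hypothetical, not Airbnb’s actual implementation): the system proposes a price with a rationale, the host keeps the final say, and the decision is logged so the team can learn from rejections.

    # Hypothetical sketch of a human-in-the-loop price suggestion flow.
    from dataclasses import dataclass

    @dataclass
    class PriceSuggestion:
        listing_id: str
        current_price: float
        suggested_price: float
        rationale: str  # shown to the host, e.g. expected demand

    decision_log = []

    def apply_decision(suggestion, host_accepts, counter_price=None):
        """Record the host's decision; the model never overrides the host."""
        final_price = (suggestion.suggested_price if host_accepts
                       else (counter_price or suggestion.current_price))
        decision_log.append({
            "listing": suggestion.listing_id,
            "suggested": suggestion.suggested_price,
            "accepted": host_accepts,
            "final": final_price,
        })
        return final_price

    s = PriceSuggestion("listing-42", 120.0, 135.0,
                        "High expected demand for these dates in your area.")
    print(apply_decision(s, host_accepts=False, counter_price=125.0))  # 125.0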

Questions around Explainability

  • What is the right level of transparency?
    Too little means the user doesn’t trust your system. Too much means the user might get confused with an overload of information.
  • What are good ways to explain predictions, confidence, and underlying logic within the user interface?
  • What are useful mental models to help users understand and interact with the AI?
  • How to provide explainability to the user when model interpretability is low?

2. MANAGING EXPECTATIONS

Assisting the user to build helpful mental models of what the system can and cannot do by being transparent about abilities and limitations.

Design considerations: Proper onboarding during the first interaction with the application or feature where abilities and limitations are established.

→ Siri & Assistant
Each time the user calls upon Siri and this screen is shown, Siri has the opportunity to respond to queries and gradually introduce the user to its varied abilities. Onboarding and setting expectations become even more important in post-pixel interfaces because the user doesn’t have physical affordances nudging them where to go and what to do.

The Assistant chatbot doesn’t try to cover up its shortcomings. Instead, it makes the most of its limited abilities by explicitly stating what it can do and how a user must phrase a query for it to be processed successfully.

Questions around Managing Expectations

  • Where and when do I communicate the system’s abilities and limitations to the user?
  • What is the right level of trust?
    Too little trust means the user doesn’t get any value from the system. Too much trust might lead to automation bias and poses risks for both the user and the organization.
  • How do I (continue to) onboard my user and make sure the system’s adaptivity and resulting unfamiliarity don’t result in loss of trust?

3. FAILURE + ACCOUNTABILITY

When it comes to designing AI-driven user flows, assume failure and design graceful recoveries. Take accountability for mistakes and minimize the cost of errors for your user.

Design considerations: Apologizing, minimizing automation bias, allowing the user to indicate a mistake, suggesting alternatives, allowing feature requests, and taking accountability for mistakes.
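
As a rough illustration of a few of these considerations, here is a hedged Python sketch of a fallback for a simple intent-based assistant: below a confidence threshold it apologizes, points to the nearest ability it does support, and logs the miss for review. The intents, threshold, and wording are assumptions for the sketch, not any real assistant’s behavior.

    import difflib

    SUPPORTED_INTENTS = ["play music", "set a timer", "weather forecast"]
    CONFIDENCE_THRESHOLD = 0.75

    failure_log = []  # reviewed by the team; also a place to attach user feedback

    def respond(query, intent, confidence):
        if intent in SUPPORTED_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
            return f"On it: {intent}."
        # Graceful recovery: apologize, suggest the closest supported ability,
        # and keep a record instead of silently guessing.
        failure_log.append({"query": query, "intent": intent, "confidence": confidence})
        nearest = difflib.get_close_matches(intent, SUPPORTED_INTENTS, n=1, cutoff=0.0)
        suggestion = f" Did you mean '{nearest[0]}'?" if nearest else ""
        return "Sorry, I can't do that yet." + suggestion

    print(respond("start a stopwatch", "set a stopwatch", 0.41))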

→ Google Home & Alexa
I quite frequently make requests to my Google Home that are returned with “I’m sorry, I can’t do that yet.” While disappointing, the apology and promise of future improvement keep me from losing trust. In the Siri example below, we can see it also recommends an alternative: a query it understands to be similar to the one initially called upon and one it can actually perform. Both were designed with the recognition that adaptive systems can’t be fully user-tested in advance, so they must be designed for failure.

→ Fail! Chatbots

While we cannot predict every possible scenario, and ML’s adaptive nature makes testing a bit trickier, we can anticipate obvious failures and prevent them from leading to awkward user experiences like the one below. Test your systems in real-life, out-of-the-lab contexts to surface common and obvious mistakes.

→ Nest

One day, as B.J. May approached his Nest doorbell, it wouldn’t let him in because the model thought he was Batman. Fortunately, the designers anticipated failure and designed two backup ways for him to intervene and still get inside. As a result, the failure had few consequences beyond a funny Twitter thread.

Questions around Graceful Failure + Accountability

  • How can your user report a mistake?
  • Under which conditions, for which goals, and for which users (including edge cases) might the system return undesired results?
  • How to design an interface that minimizes the cost to the user when the AI makes a mistake?
    Consider the impact of mistakes in your use case, and how to recover from a bad prediction without harming trust.
  • Who is responsible and liable for the consequences of mistakes?

I’m planning to launch an online course on UX of AI later in 2021. To be the first to know and receive an early-nerd discount, sign up here for updates 💌

About AIxDesign

AIxDesign is a place to unite practitioners and evolve practices at the intersection of AI/ML and design. We are currently organizing monthly virtual events, sharing content, exploring collaborative projects, and developing fruitful partnerships.

To stay in the loop, follow us on Instagram or LinkedIn, or subscribe to our monthly newsletter to capture it all in your inbox. You can now also find us at aixdesign.co.


Nadia Piet, AIxDESIGN
Designer & researcher focused on AI/ML, data, digital culture & the human condition mediated through computing