UX challenges for AI/ML products [1/3]: Trust & Transparency

Design considerations in Human-AI interactions. Part 1 of the series.

Introduction

Every design material comes with unique opportunities and challenges. Just as designing an event poster is different from designing a mobile app, designing AI/ML-driven applications is different from designing conventional mobile apps.

3 x 3 challenges

This article, which shares my research on designing machine learning products, addresses three themes: Trust & Transparency, User Autonomy & Control, and Value Alignment. It highlights nine of the challenges that can arise within them, all supported with real-life examples.

Theme 1: Trust & Transparency

Not all AI features are visible to the user, nor should they all be. When we confront users with new systems, it is our job as designers to help them understand how those systems work, be transparent about their abilities, construct helpful mental models, and make users feel comfortable in their interactions. Transparency is key to building trust in the system and preserving user trust in your organization.

1. EXPLAINABILITY

Making sense of the machine and communicating to the user why the system acts the way it does.

  • What are good ways to explain predictions, confidence, and underlying logic within the user interface? (A sketch of one approach follows these questions.)
  • What are useful mental models to help users understand and interact with the AI?
  • How to provide explainability to the user when model interpretability is low?
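To make the first question above more concrete, here is a minimal sketch of how a prediction, its confidence, and its main contributing factors might be surfaced as a plain-language explanation. The Prediction shape, field names, and example values are assumptions for illustration; in practice the attributions would come from your model-serving layer or an explainability method such as SHAP or LIME.

```typescript
// Minimal sketch (TypeScript): turning a prediction into a plain-language explanation.
// The data shape below is hypothetical; adapt it to whatever your model layer returns.
interface Prediction {
  label: string;                                       // e.g. "Likely to churn"
  confidence: number;                                  // 0..1, the model's confidence estimate
  topFactors: { feature: string; weight: number }[];   // per-feature attribution
}

function explainPrediction(p: Prediction): string {
  const pct = Math.round(p.confidence * 100);
  // Only surface the strongest factors; a full list of weights overwhelms rather than explains.
  const factors = p.topFactors
    .slice(0, 3)
    .map(f => f.feature)
    .join(", ");
  return `${p.label} (${pct}% confident), mainly based on: ${factors}.`;
}

// Example usage with made-up values:
console.log(
  explainPrediction({
    label: "Likely to churn",
    confidence: 0.82,
    topFactors: [
      { feature: "no logins in the last 30 days", weight: 0.4 },
      { feature: "recent support tickets", weight: 0.25 },
      { feature: "subscription tier", weight: 0.1 },
    ],
  })
);
```

Keeping the explanation to a short sentence with the top few factors is a design choice, not a rule: the right level of detail depends on how consequential the prediction is for the user.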

2. MANAGING EXPECTATIONS

Helping the user build accurate mental models of what the system can and cannot do by being transparent about its abilities and limitations.

  • What is the right level of trust?
    Too little trust means the user doesn’t get any value from the system. Too much trust might lead to automation bias and pose risks for both the user and the organization. (One way to calibrate this in the interface is sketched after this list.)
  • How do I (continue to) onboard my user and make sure the system’s adaptivity and resulting unfamiliarity don’t result in a loss of trust?
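One way to act on the “right level of trust” question is to let the interface take more or less autonomy depending on model confidence. The thresholds and action names below are illustrative assumptions, not a standard; they should be tuned to the cost of errors in your specific use case.

```typescript
// Minimal sketch (TypeScript): calibrating system autonomy to model confidence.
// The thresholds are placeholder assumptions and would need tuning per product.
type Presentation = "auto_apply" | "suggest" | "ask_user";

function choosePresentation(confidence: number): Presentation {
  if (confidence >= 0.95) return "auto_apply"; // high confidence: act automatically, but keep an undo
  if (confidence >= 0.6) return "suggest";     // medium confidence: show a dismissible suggestion
  return "ask_user";                           // low confidence: hand the decision back to the user
}

console.log(choosePresentation(0.97)); // "auto_apply"
console.log(choosePresentation(0.72)); // "suggest"
console.log(choosePresentation(0.31)); // "ask_user"
```

Gating autonomy this way keeps the user in the loop exactly where the system is least certain, which counters automation bias without discarding the value of high-confidence predictions.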

3. FAILURE + ACCOUNTABILITY

When it comes to designing AI-driven user flows, assume failure and design graceful recoveries. Take accountability for mistakes and minimize the cost of errors for your user.

  • Under which conditions, for which goals, and for which users (including edge cases) might the system return undesired results?
  • How to design an interface that minimizes the cost to the user when the AI makes a mistake?
    Consider the impact of mistakes in your use case, and how to recover from a bad prediction without harming trust. (A minimal recovery pattern is sketched after this list.)
  • Who is responsible and liable for the consequences of mistakes?
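As a rough illustration of “assume failure and design graceful recoveries”, the sketch below wraps a model call so that errors and low-confidence results fall back to a manual flow and are logged for later review. getPrediction, manualFlow, and logIncident are hypothetical stand-ins for your own model call, non-AI fallback, and audit logging.

```typescript
// Minimal sketch (TypeScript): graceful degradation around an AI prediction.
async function getPrediction(input: string): Promise<{ value: string; confidence: number }> {
  // Placeholder for a real model-serving call.
  return { value: `suggested reply for "${input}"`, confidence: 0.4 };
}

function manualFlow(input: string): string {
  // Non-AI fallback: the user can still finish the task themselves.
  return `Please handle "${input}" manually.`;
}

function logIncident(reason: string, input: string): void {
  // Record the failure so the team can review it and stay accountable for mistakes.
  console.warn(`[AI fallback] ${reason} for input: ${input}`);
}

async function assistUser(input: string): Promise<string> {
  try {
    const prediction = await getPrediction(input);
    if (prediction.confidence < 0.6) {
      // Low confidence: degrade gracefully instead of guessing confidently.
      logIncident("low confidence", input);
      return manualFlow(input);
    }
    return prediction.value;
  } catch (err) {
    // Model unavailable or erroring: keep the user unblocked.
    logIncident(`model error: ${String(err)}`, input);
    return manualFlow(input);
  }
}

assistUser("draft a reply to this email").then(console.log);
```

The point of the pattern is that the user’s task never dead-ends on a model failure, and every fallback leaves a trace that someone in the organization can be accountable for.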

About AIxDesign

AIxDesign is a place to unite practitioners and evolve practices at the intersection of AI/ML and design. We are currently organizing monthly virtual events, sharing content, exploring collaborative projects, and developing fruitful partnerships.

Nadia Piet

Designer & researcher focussed on AI/ML, data, digital culture & the human condition mediated through computing