UX for AI: Trust as a Design Challenge

What could a trustful relationship between a business user and a digital assistant look like in practice?

Vladimir Shapiro
Experience Matters
6 min read · Jul 12, 2018


Copyright: Vladimir Shapiro, SAP SE

Smart assistants already have the potential to make our lives much easier. Why, then, are a significant number of people still using Amazon Alexa as just a kitchen timer? Like them, I bought Alexa full of expectations, excited by the opportunity to try out my first personal AI assistant. Now, a year later, Alexa has become little more than a media player for my son. Though we do use it every now and again as a kitchen timer.

So why has Alexa, like Siri and Google Home, still not managed to become my real digital assistant? Does it lack the skills for complex tasks? Not at all. Or maybe I’m expecting artificial general intelligence? But that’s not it either.

It’s more that interacting with digital assistants often doesn’t feel quite right. For example:

  • A digital assistant that pretends to be human-like has smart answers to some questions (hello, Easter eggs), but runs into trouble with much simpler ones.
  • A digital assistant doesn’t understand me and just leaves me hanging. I’m not given any options or offered a fallback solution. All I can do is repeat myself, check the app, or search the web.
  • A digital assistant doesn’t react when I need it, but randomly jumps into our conversation during a family dinner.

Now, say I’m thinking about letting my digital assistant handle my business calendar. Given my past experience, I have my doubts: Will my digital assistant wake me up in time if I have an important meeting? Will it interrupt me while I’m talking and broadcast confidential business details to everybody in the room? How does it know whether a meeting is important? And what happens if I have Wi-Fi issues at home?

In my case, it’s all about whether I can trust a digital assistant to perform tasks that are important to me. And this goes far beyond the obvious concerns about how my personal data is used.

No Trust — No Adoption.

If I already have doubts about handing over the management of my calendar to an AI, how should I, as a designer, work on more complex use cases for intelligent enterprise systems?

Obviously, we need to help users not to fear intelligent systems, but to work with them. Without trust, however, there is little chance that such systems will be adopted, which could jeopardize the whole future of human and machine collaboration. That is why we see the problem of trust at the very heart of designing good UX for AI.

Components of Trust

In her TED talk “What we don’t understand about trust”, the British philosopher Onora O’Neill identifies three components of trust: competence, honesty, and reliability. What if we look at each of these from a UX point of view?

Imagine a digital assistant that uses a conversational UI, and can learn and provide reasonable solution proposals in different business situations. What could a trustful relationship between a business user and the digital assistant look like in practice? To give you an idea, let’s walk through three stories.

Story 1: Showing Competence

Paul is a material controller, and is responsible for securing all the components required for production. He is in the process of adopting a new digital assistant, but doesn’t really trust it yet. One day, Paul is alerted about a new material shortage situation. His digital assistant is confident enough to provide a solution proposal based on similar cases in the past, and can explain the proposal by citing past examples.

Design Exploration: The digital assistant proposes a solution for the problem based on similar cases in the past, and assists the user in executing the next step. Copyright: SAP SE

By exploring similar cases, Paul:

a) understands the logic behind the proposal, and

b) can confirm from his experience that this logic is correct.

Overall, he is convinced that the system is competent enough to find the relevant information and propose a reasonable solution. If he continues to experience the same level of competence, Paul will be ready to hand over simple decisions to his digital assistant and adopt the next level of automation.
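To make the idea more tangible for technically minded readers, here is a minimal sketch of the kind of decision logic that might sit behind such a proposal. Everything in it is hypothetical: the Case structure, the toy word-overlap similarity, and the confidence threshold are illustrative stand-ins, not an actual SAP implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    description: str   # e.g. "shortage of part 4711 at plant Hamburg"
    resolution: str    # what resolved the issue back then

def similarity(a: str, b: str) -> float:
    """Toy similarity score: word overlap (Jaccard index). A real system
    would use embeddings or a trained model; this is just for illustration."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical cut-off for "confident enough to propose something".
CONFIDENCE_THRESHOLD = 0.5

def propose_solution(issue: str, history: list[Case]) -> str | None:
    """Return a proposal that cites the past case it is based on,
    or None when no past case is similar enough."""
    scored = [(similarity(issue, c.description), c) for c in history]
    score, best = max(scored, key=lambda s: s[0], default=(0.0, None))
    if best is None or score < CONFIDENCE_THRESHOLD:
        return None  # not confident enough -- be honest instead (see Story 2)
    return (f"Proposal: {best.resolution} "
            f"(based on a {score:.0%} similar case: '{best.description}')")
```

The threshold is the interesting design knob here: it decides when the assistant proposes and when it stays silent, which is exactly where the next story picks up.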

Story 2: Being Authentic and Honest

Mike is a manufacturing engineer in the aerospace industry. One of his tasks is to deal with constant changes to the production schedule that are triggered by customers, quality assurance or product design. He is already familiar with his digital assistant and relies on its proposals in simple situations. In some cases, Mike proactively asks his digital assistant for a recommendation. But what if there is not enough information for a reasonable proposal?

Design Exploration: The digital assistant guides the user towards a solution by asking relevant questions. Copyright: SAP SE

Instead of pretending to be “smart” (aka “here is what I found on Google, maybe it helps”), our digital assistant exposes the limits of its own competence in an honest and authentic way. It asks relevant questions and eventually finds a solution together with the user.
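Continuing the hypothetical sketch from Story 1, this honesty can be encoded as an explicit fallback: when no past case clears the confidence threshold, the assistant switches from answering to asking. The clarifying questions below are made up for illustration.

```python
def respond(issue: str, history: list[Case]) -> str:
    """Answer when confident; otherwise admit the limit and ask a relevant
    question instead of faking an answer. Reuses Case and propose_solution()
    from the sketch in Story 1."""
    proposal = propose_solution(issue, history)
    if proposal is not None:
        return proposal
    # Hypothetical clarifying questions -- a real assistant would derive
    # them from whichever attributes of the issue are still missing.
    return ("I don't have enough information for a recommendation yet. "
            "Which production line is affected, and how urgent is the change?")
```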

Story 3: Being Reliable

Our last story is about Sarah, a service engineer. Sarah is alerted about an issue with the robotic equipment at one of the company’s locations. Even her digital assistant doesn’t have a concrete plan to guide her towards a solution. Is it time for the digital assistant to activate its “I don’t know” skill? Wait a minute.

Design Exploration: The digital assistant proposes a workaround by connecting the user with a relevant expert. Copyright: SAP SE

As we learned from the previous story, being honest with the user is an important skill, but we shouldn’t stop there. Our digital assistant doesn’t give up at the first hurdle. When searching for similar issues comes up empty, it adjusts its criteria and searches for relevant experts instead. So even though the digital assistant wasn’t able to propose a specific solution, it was still able to connect Sarah with a contact person to follow up with. In other words, Sarah can rely on her digital assistant to exhaust all the available options and get as close to a solution as possible.
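To round off the sketch series, reliability might look like a simple fallback chain layered on the earlier hypothetical functions: first try to propose a solution, then search for an expert, and only then admit defeat. The `experts` lookup is again an illustrative stand-in, not a real directory API.

```python
def handle(issue: str, history: list[Case], experts: dict[str, str]) -> str:
    """Exhaust the available options: try a solution proposal first, then
    fall back to routing the user to a matching expert. Builds on Case and
    propose_solution() from the Story 1 sketch; `experts` maps a topic
    keyword to a contact person."""
    proposal = propose_solution(issue, history)
    if proposal is not None:
        return proposal
    # Fallback: relax the goal from "find a solution" to "find a person".
    for topic, contact in experts.items():
        if topic in issue.lower():
            return (f"I couldn't find a similar past case, but {contact} "
                    f"has worked on {topic} issues before. Shall I connect you?")
    return "No solution or expert found yet -- I've logged this for follow-up."
```

The design choice worth noticing: each fallback step still produces something actionable for the user, so the assistant never simply stops at “I don’t know”.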

* * *

I hope these stories have given you a sense of how the principles of trust could be applied to real use cases where human meets AI. Of course, there are other definitions of human trust and many more things to explore. For example, how does the level of trust change with successful adoption? Or should I adjust my UIs as the level of trust increases, offering more automation and fewer explanations?

I’ll continue to share my experiences around trust and other AI topics in this blog series. In the meantime, feel free to add your thoughts in the comments.

And remember: If people don’t trust AI systems, they won’t adopt them. And if there’s no adoption, we lose the chance to establish AI systems that augment, but don’t replace, human capabilities. The alternative is a future where AI is out of human hands. And that is a future we want to avoid, isn’t it?


Special thanks to Susanne Wilding and Annette Stotz for reviewing and editing this article.
