Design for Trust: Principle #3

When building trust, motivation is more powerful than demonstration or explanation.

Paul Chang
Published in The Dish
3 min read · Oct 21, 2019


Co-authored with Manish Kothari, President, SRI International

In our introductory article “In AI we trust?,” we presented three principles to help product designers, developers, and managers begin to think about ways they can design intelligent systems that inspire trust in users.

Here, we dive deeper into the third principle, which explores visceral trust: how it is ingrained, and how it can make interactions smoother.

Why? Not what or how.

Principles #1 and #2 address trust building logically: examining user expectations and perceived risks, and exploring ways to reduce the dissonance between those and the system’s behaviors and capabilities. But what is it that predisposes and motivates us to trust, or mistrust? And why are we more forgiving of some, willing to give them the “benefit of the doubt”?

In his popular TED talk, Simon Sinek draws three concentric circles, starting in the middle with “Why,” then “How,” and finally “What.” His claim is that great companies inspire their customers by starting with “Why”: their reason for existing. In doing so, they target the limbic system, the part of our brain associated with trust and loyalty. In contrast, focusing on “What” or “How” targets the neocortex, the part of the brain associated with rational thought. While the latter is important, it lacks the former’s power to motivate.

Similarly, with intelligent systems, most people talk about the “What” and the “How” (data and models), neglecting a powerful component of trust that has little to do with the inner workings of the system.

By understanding and communicating the “Why” (the ultimate mission), one can begin to build a stronger foundation of trust and, in effect, buy the system time to gather more data and learn more about the user as it improves its performance.

An alluring promise of AI is its ability to address big and even wicked problems that would otherwise be intractable. For example:

  • While self-driving cars still face technical and societal challenges, they will someday make our roads safer and provide the freedom of mobility to the physically impaired.
  • While AI may jeopardize certain jobs today, it frees us from rote tasks and allows us to focus on higher-order functions that involve human creativity and ingenuity.
  • And, while AI is susceptible to bias from flawed training data, it is conversely immune to poor decisions driven by emotion, confirmation bias, and other flaws inherent in human psychology. This can translate into better medical diagnoses, fairer loan assessments, and less biased courtroom verdicts.

We mustn’t forget to share the bigger picture and educate our users on the true potential of AI. People are more motivated and trusting when they understand the mission of a company and feel that they are contributing to the greater good.

Where to go from here?

Artificial intelligence is just the tip of the iceberg in digital transformation: robotics, distributed sensing, and distributed computing allow for even higher levels of productivity, but also bring increased levels of distrust.

As the capabilities of machines continue to evolve toward synthesis and creativity, it becomes ever more incumbent upon us to not only build reliable, safe, and ethical systems, but also to consider the feelings and perceptions of the people who will use them. This only happens if we integrate trust considerations into our overall design process:

  • Include trust-related questions in user interviews.
  • Consider mental models, expectations and perceived risks when mapping the user’s current and future experiences.
  • Integrate trust risk into your product management process.

Our hope is that the user-centered, design-for-trust principles presented here, and the design patterns that emerge from them, will remain relevant even as technology evolves. Let’s work toward awareness of the human side of human-machine interaction, and toward mastery of these principles, so that we can build intelligent systems that better manage people’s expectations, perceptions, and fears, and ultimately earn their trust.

For more information, visit the Design for Trust website.

About SRI International
SRI International creates world-changing solutions making people safer, healthier, and more productive. Visit sri.com to learn more.

Paul Chang

SVP, Design and Experience at Medidata Solutions