Published in The Dish

Design for trust: principle #1

User expectations and perceived risks drive trust requirements.

Co-authored with Manish Kothari, President, SRI International

In our introductory article “In AI we trust?,” we introduced three principles to help product designers, developers and managers begin to think about ways they can design intelligent systems that inspire trust with users.

Here, we dive deeper into the first principle, grounded in empathy for our users and how their mental models manifest as expectations and perceived risks.

Trustworthy vs. trusted

With the introduction of new technologies, we naturally focus our attention on reliability and safety, identifying and minimizing potential failure modes. The result: more trustworthy systems.

It’s important to note, however, that a trustworthy system — reliable, safe and ethical — won’t necessarily be trusted by its users. This is a common misperception. Trust is an emotional, social contract, built upon the user’s mental models, which reflect his or her expectations and perceptions of risk. So we must not only design trustworthy systems, we must also help our users recognize their virtues.

It’s our responsibility to understand these mental models in order to convey the trustworthiness of our systems and to assuage feelings of apprehension arising from unfamiliar technologies. Simply put:

If we aren’t aligned with our audience’s expectations and perceived risks, no matter how useful the system is, we’re just going to build something that no one feels comfortable using.

Understanding the user

The first step, as with many design challenges, is to research and understand our users. In this case, we want to identify differences between the user’s expectations of the system (mental models) and the true system behavior. We also want to uncover perceived risks and their underlying causes. It’s at the intersection of false expectations and perceived risks that we’ll find opportunities to build trust.

It’s important to note that a user’s perceived risks often go beyond the system’s performance, and these too must be identified. Consider a Pew Research survey in which more than 70% of Americans expressed wariness or concern about a world where machines perform many of the tasks done by humans. In this case, the user isn’t concerned about poor performance of the system. On the contrary, they’re worried about good performance and how it affects their job security!

User research is an effective approach to uncover more personal and social goals including:

  • To feel safe
  • To feel useful and competent
  • To feel in control

With a practical understanding of our users’ mental models and an appreciation for their goals, we begin to identify opportunities to establish a baseline of understanding, more effectively manage expectations and address real and perceived risks.

Sounds great, but how do you do this?

Method: Trust empathy mapping

Inspired by Dave Gray’s empathy map, we’ve developed a trust empathy map that specifically focuses on trust considerations.

Persona

At the center of the map is the persona, which forces us to explicitly define our target user. A separate map should be created for each persona.

Goals

As described earlier in the article, understanding our users’ professional and personal goals helps us to contextualize their expectations and perceived risks.

Expectations

User research sheds light on users’ mental models, which in turn drive their expectations. For example, they may expect their virtual personal assistant to be every bit as intuitive as a human personal assistant. Or they may think that an autonomous vehicle’s headlights are its eyes :)

Perceived risks

Whether or not perceived risks are warranted, they carry equal weight in the mind of the user. Our goal as user researchers is to uncover as many perceived risks as possible.

Trust building opportunities

Trust building opportunities exist when the user has significant concerns and fears due to incorrect mental models or a misunderstanding of how the system works.
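For teams that want to keep these maps alongside other research artifacts, the fields described above can be captured in a simple data structure. This is an illustrative sketch only, not part of the method as published; the `TrustEmpathyMap` class and the oncologist persona below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TrustEmpathyMap:
    """One trust empathy map per target persona."""
    persona: str  # the explicitly defined target user at the center of the map
    goals: list[str] = field(default_factory=list)            # personal and professional goals
    expectations: list[str] = field(default_factory=list)     # what the user's mental model predicts
    perceived_risks: list[str] = field(default_factory=list)  # warranted or not, they count equally

    def trust_building_opportunities(self) -> list[str]:
        """Flag each perceived risk as a candidate opportunity to build trust."""
        return [f"Address perceived risk: {risk}" for risk in self.perceived_risks]

# Hypothetical persona, for illustration only
oncologist = TrustEmpathyMap(
    persona="Oncologist evaluating an AI treatment-recommendation tool",
    goals=["Ensure patient safety", "Stay in control of the final decision"],
    expectations=["Recommendations are backed by published clinical evidence"],
    perceived_risks=["Ranking scores do not map to clinical metrics"],
)
print(oncologist.trust_building_opportunities())
```

In practice the opportunity analysis is a judgment call made by the research team; the method above simply surfaces each perceived risk as a starting point.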

Case study: IBM Watson for Oncology

Positioned to “help doctors out think cancer, one patient at a time,” IBM Watson for Oncology is an example where gaining a doctor’s trust is critical for success. To date, Watson has not gained traction consistent with its promise and has received public scrutiny (1,2).

So, what went wrong? To start, IBM did not establish a baseline of understanding with its audience. Instead, marketers focused on an ultimate vision that extended well beyond Watson’s capabilities at the time. Additionally, Watson’s deep-learning algorithms were not responsible for the actual treatment recommendations.

Instead, the recommendations reflected the judgment of a cadre of doctors at a single, albeit highly respected, U.S. hospital: Memorial Sloan Kettering Cancer Center. Doctors there made the final determination of the diagnostic rules. So, naturally, questions arose. How well would their experience apply globally? And would they be able to keep the system current as new evidence became available?

The system also presented its recommendations in rank order by a percentage score that did not tie directly to clinical metrics. A doctor is trained to “trust but verify” in order to limit liability and, more importantly, ensure patient safety.

This consideration alone requires that all recommendations presented to the doctor be supported by well-established rules or evidence from clinical trials and patient studies. And if one considers the psychology of a highly trained individual, recommendations that align with their intuition are of little value, while recommendations that oppose it will be met with skepticism unless backed by solid evidence.

This example shows that regardless of whether a system has intrinsic value, recommendations will not be well received unless properly positioned. Could a trust-empathy map have helped avoid these issues?

Figure: Hypothetical trust-empathy map (based on secondary research)

In closing, the first step in designing intelligent systems that inspire trust is to understand and empathize with your users’ expectations and perceived risks.

In our next article, we consider the user’s journey with the product and how trust can be built along the way.

For more information, visit the Design for Trust website.

About SRI International
SRI International creates world-changing solutions making people safer, healthier, and more productive. Visit sri.com to learn more.


Paul Chang

Vice President, Design at Medidata Solutions
