Design for trust: principle #1

User expectations and perceived risks drive trust requirements.

Paul Chang · Published in The Dish · 5 min read · Sep 10, 2019


Co-authored with Manish Kothari, President, SRI International

In our introductory article “In AI we trust?,” we introduced three principles to help product designers, developers, and managers design intelligent systems that inspire trust in their users.

Here, we dive deeper into the first principle, which is grounded in empathy for our users and in how their mental models manifest as expectations and perceived risks.

Trustworthy vs. trusted

With the introduction of new technologies, we naturally focus our attention on reliability and safety, identifying and minimizing potential failure modes. The result: more trustworthy systems.

It’s important to note, though, that a trustworthy system (one that is reliable, safe, and ethical) won’t necessarily be trusted by its users. This is a common misperception. Trust is an emotional, social contract, built upon the user’s mental models, which reflect their expectations and perceptions of risk. So we must not only design trustworthy systems; we must also help our users recognize their virtues.

It’s our responsibility to understand these mental models in order to convey the trustworthiness of our systems and to assuage feelings of apprehension arising from unfamiliar technologies. Simply put:

If we aren’t aligned with our audience’s expectations and perceived risks, no matter how useful the system is, we’re just going to build something that no one feels comfortable using.

Understanding the user

The first step, as with many design challenges, is to research and understand our users. In this case, we want to identify differences between the user’s expectations of the system (mental models) and the true system behavior. We also want to uncover perceived risks and their underlying causes. It’s at the intersection of false expectations and perceived risks that we’ll find opportunities to build trust.

It’s important to note that a user’s perceived risks often extend beyond the system’s performance. Consider a Pew Research survey in which more than 70% of Americans expressed wariness or concern about a world where machines perform many of the tasks now done by humans. Here, the user isn’t worried about poor system performance. On the contrary, they’re worried about good performance and how it affects their job security!

User research is an effective way to uncover more personal and social goals, including:

  • To feel safe
  • To feel useful and competent
  • To feel in control

With a practical understanding of our users’ mental models and an appreciation for their goals, we can begin to identify opportunities to establish a baseline of understanding, manage expectations more effectively, and address real and perceived risks.

Sounds great, but how do you do this?

Method: Trust empathy mapping

Inspired by Dave Gray’s empathy map, we’ve developed a trust empathy map that specifically focuses on trust considerations.

Persona

The center of the map contains the persona, which pushes us to explicitly define our target user. Create a separate map for each persona.

Goals

As described earlier in the article, understanding our users’ professional and personal goals helps us to contextualize their expectations and perceived risks.

Expectations

User research sheds light on users’ mental models, which in turn drive their expectations. For example, users may expect their virtual personal assistant to be every bit as intuitive as a human personal assistant. Or they may think that an autonomous vehicle’s headlights are its eyes :)

Perceived risks

Whether or not perceived risks are warranted, they’re equally real in the mind of the user. Our goal as user researchers is to uncover as many of them as possible.

Trust building opportunities

Trust building opportunities exist when the user has significant concerns and fears due to incorrect mental models or a misunderstanding of how the system works.
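
To make the method concrete, here is a minimal sketch of how a map’s contents might be captured as a simple data structure. The field names (such as trust_opportunities) and example entries are our own illustration, not a formal template:

```python
from dataclasses import dataclass, field

@dataclass
class TrustEmpathyMap:
    """One map per target persona, mirroring the sections above."""
    persona: str  # explicit definition of the target user
    goals: list[str] = field(default_factory=list)            # professional and personal goals
    expectations: list[str] = field(default_factory=list)     # what the user's mental model predicts
    perceived_risks: list[str] = field(default_factory=list)  # fears, warranted or not
    trust_opportunities: list[str] = field(default_factory=list)  # mismatches worth addressing

# Hypothetical entries, loosely inspired by the case study that follows
oncologist = TrustEmpathyMap(
    persona="Oncologist evaluating an AI treatment-recommendation tool",
    goals=["Ensure patient safety", "Feel in control of treatment decisions"],
    expectations=["Recommendations are tied to clinical evidence"],
    perceived_risks=["Opaque confidence scores",
                     "Single-institution guidance may not generalize"],
    trust_opportunities=["Surface the supporting evidence behind each recommendation"],
)
```

Creating one such record per persona keeps the team honest about whose expectations and fears a given design decision is meant to address.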

Case study: IBM Watson for Oncology

Positioned to “help doctors outthink cancer, one patient at a time,” IBM Watson for Oncology is an example where gaining doctors’ trust is critical to success. To date, Watson has not gained traction consistent with its promise and has received public scrutiny (1,2).

So, what went wrong? To start, IBM did not establish a baseline of understanding with its audience. Instead, marketers focused on an ultimate vision that extended beyond Watson’s current capabilities. Additionally, Watson’s deep-learning algorithms were not responsible for the actual treatment recommendations.

Those recommendations reflected the judgment of a cadre of doctors at a single, albeit highly respected, U.S. hospital: Memorial Sloan Kettering Cancer Center. Doctors there made the final determination of the diagnostic rules. So, naturally, questions arose. How well would their experience apply globally? And would they be able to keep the system current as new evidence became available?

The system also presented its recommendations in rank order by a percentage score that did not tie directly to clinical metrics. Doctors are trained to “trust but verify” in order to limit liability and, more importantly, to ensure patient safety.

This consideration alone requires that every recommendation presented to the doctor be supported by well-established rules or evidence from clinical trials and patient studies. And consider the psychology of a highly trained individual: recommendations in line with their intuition are of little value, while recommendations that oppose it will be met with skepticism unless backed by solid evidence.

This example shows that even when a system has intrinsic value, its recommendations will not be well received unless properly positioned. Could a trust empathy map have helped avoid these issues?

Hypothetical trust empathy map (based on secondary research)

In closing, the first step in designing intelligent systems that inspire trust is to understand and empathize with your users’ expectations and perceived risks.

In our next article, we consider the user’s journey with the product and how trust can be built along the way.

For more information, visit the Design for Trust website.

About SRI International
SRI International creates world-changing solutions making people safer, healthier, and more productive. Visit sri.com to learn more.
