Design for Trust: Principle #2

Trust is a dynamic relationship. It is tentatively granted, then tested over time.

Co-authored with Manish Kothari, President, SRI International

In our introductory article, “In AI we trust?,” we laid out three principles to help product designers, developers, and managers begin to think about how to design intelligent systems that inspire trust in their users.

Here, we dive deeper into the second principle, which addresses the dynamic and tenuous nature of trust throughout the customer journey.

Managing expectations and risk

Taking the user on a trust-building journey is a process that depends on timing and context. Consider the analogous phases of product adoption. There is the initial sales pitch to attract customers, followed by an onboarding phase (for both sides), and an ongoing engagement process where the product must continue to provide compelling value to remain “sticky.” Similarly, building trust involves initial claims, creating a baseline of understanding, and ongoing management of expectations and perceived risks.

In the early stages of the product experience, there’s no quicker way to lose a user’s trust than to overpromise and underdeliver. And trust, once lost, is very difficult, if not impossible, to regain. To avoid this fateful misstep, we need to understand and act in accordance with the limitations of our system.

Staying within the knowledge and capabilities of the system is a prudent way to proceed. Doing so requires a balance between augmentation (helping with the task) and automation (performing the task):

  • Augment processes when a diversity of thought is advantageous, outcomes are risky, or if you’re unsure that your criteria are in line with those of your user. In these cases, augmenting the user’s own thought processes provides the best chance of arriving at the next best actions.
  • Automate processes when you have confidence that replacement of thought is beneficial, perceived risks are low, and when the user has previously confirmed similar actions. Regardless of these conditions, the user should be provided with a means to intervene as necessary (a rough decision sketch follows this list).
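
To make these criteria concrete, here is a minimal sketch of how a product team might encode the choice. The signal names (outcome_risk, criteria_alignment, prior_confirmations) and the thresholds are illustrative assumptions, not a prescription from any particular system:

```python
from dataclasses import dataclass

@dataclass
class TaskContext:
    outcome_risk: float        # 0..1: perceived risk of acting on the system's output
    criteria_alignment: float  # 0..1: confidence that the system's criteria match the user's
    prior_confirmations: int   # how many times the user has confirmed similar actions

def choose_mode(ctx: TaskContext,
                risk_ceiling: float = 0.3,
                alignment_floor: float = 0.8,
                confirmations_needed: int = 3) -> str:
    """Return 'automate' only when every low-risk condition holds; otherwise 'augment'.

    Either way, the product must keep a visible way for the user to intervene.
    """
    if (ctx.outcome_risk <= risk_ceiling
            and ctx.criteria_alignment >= alignment_floor
            and ctx.prior_confirmations >= confirmations_needed):
        return "automate"
    return "augment"

# Risky outcome and uncertain alignment -> augment the user's own judgment.
print(choose_mode(TaskContext(outcome_risk=0.7, criteria_alignment=0.5, prior_confirmations=0)))
```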

Consider, for example, personalization services: from financial planning to health advice to movie recommendation engines, these systems require time to become familiar with the user’s behavior, tastes, and preferences before reaching a steady state.

Figure: The system-user relationship. A value exchange is assumed along the way.

“Lean trust”

The balance between augmentation and automation is a dynamic process that must be negotiated. As the system gains more domain expertise and a better understanding of user values and preferences, it can become more aggressive with recommendations. Therefore, it’s important to consider the evolution of the user experience over time and view each interaction as feedback that the system can use to improve.

In that sense, the system itself is evolving, learning and adapting from user feedback as well as gaining external domain knowledge as better information (new or improved data) becomes available.

If one thinks of augmentation as an experiment to gauge the user’s reaction, then this process has a direct parallel to “lean” or “agile” product-development methodologies. In that practice, incremental product features are released to customers as small experiments, and their reactions inform future decisions. This ensures an informed, more efficient path to providing real, as opposed to hypothetical, customer value.

Similarly, the intelligent system can act as the experimenter, presenting incremental assertions and taking note of the user’s reactions to adjust and gain more confidence in its future recommendations. With this approach, the risk of losing trust is minimized and any misstep only results in a small, recoverable setback.
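
One minimal way to sketch this loop, assuming a simple running score rather than any particular algorithm, is to treat each accepted or rejected assertion as a data point and gate bolder behavior on a threshold (the smoothing factor and threshold below are arbitrary, illustrative choices):

```python
class LeanTrustTracker:
    """Track the user's reactions to small assertions and gate bolder behavior on them.

    Illustrative sketch: the smoothing factor and threshold are arbitrary choices.
    """

    def __init__(self, alpha: float = 0.2, bold_threshold: float = 0.75):
        self.alpha = alpha                # weight given to the most recent reaction
        self.bold_threshold = bold_threshold
        self.confidence = 0.5             # start neutral: neither timid nor bold

    def record_reaction(self, accepted: bool) -> None:
        # Exponential moving average: each assertion is a small, recoverable experiment.
        signal = 1.0 if accepted else 0.0
        self.confidence = (1 - self.alpha) * self.confidence + self.alpha * signal

    def can_be_bolder(self) -> bool:
        return self.confidence >= self.bold_threshold

tracker = LeanTrustTracker()
for accepted in (True, True, False, True, True, True, True):
    tracker.record_reaction(accepted)
print(round(tracker.confidence, 2), tracker.can_be_bolder())  # 0.81 True
```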

Consider the example of a doctor following treatment recommendations generated by an intelligent system. If, out of the gate, the system were to present a list of so-called “recommendations” without explanation, it would be met with resistance. What was the reasoning behind the recommendations? How were they ranked? A more prudent approach would be to present “suggestions” for consideration, along with brief explanations or outcome predictions up front.

If the doctor were to establish a pattern of following the system’s suggestions, then the system could deem it safe to offer explanations only on request rather than up front. With increasing confidence, the system might even take on a more advisory role by changing its wording from “suggestions” to “recommendations.”

Conversely, if the doctor didn’t follow suggestions, this would be an opportunity to gather feedback to better understand why a different path was taken. In either case, forward progress is made based on user feedback.
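
As a hypothetical sketch of that progression (the mode names, the threshold of five consecutive accepted suggestions, and the reset-on-decline rule are all assumptions for illustration, not a clinical design):

```python
from enum import Enum, auto

class Mode(Enum):
    SUGGEST_WITH_EXPLANATION = auto()      # "suggestions" with explanations shown up front
    RECOMMEND_EXPLAIN_ON_REQUEST = auto()  # "recommendations" with explanations on demand

class PresentationPolicy:
    """Escalate from suggestions to recommendations as acceptance accumulates."""

    def __init__(self, escalate_after: int = 5):
        self.escalate_after = escalate_after
        self.consecutive_accepts = 0
        self.mode = Mode.SUGGEST_WITH_EXPLANATION
        self.last_feedback = ""

    def on_reaction(self, accepted: bool, feedback: str = "") -> Mode:
        if accepted:
            self.consecutive_accepts += 1
            if self.consecutive_accepts >= self.escalate_after:
                self.mode = Mode.RECOMMEND_EXPLAIN_ON_REQUEST
        else:
            # A decline resets the count, returns to the cautious mode, and is the cue
            # to ask why a different path was taken (captured here as `feedback`).
            self.consecutive_accepts = 0
            self.mode = Mode.SUGGEST_WITH_EXPLANATION
            self.last_feedback = feedback
        return self.mode

policy = PresentationPolicy()
for accepted in (True, True, True, True, True):
    mode = policy.on_reaction(accepted)
print(mode.name)  # RECOMMEND_EXPLAIN_ON_REQUEST
```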

It’s important to note that the negotiation between augmentation and automation is a delicate balance. Even with care, the evolution of the system may leave the user with a feeling that “something is different.” How might we provide natural cues that inform the user when the system has changed?

A natural response might be to explain everything. But a heavy-handed approach runs the risk of introducing friction into the user experience and may even erode trust. Is there a way we might avoid having to explain at every step along the way?

In closing, intelligent systems that inspire trust should be designed to actively manage each user’s expectations and perceived risks throughout the life of the product.

In our next article, we discuss ways to ingrain a sense of trust.

For more information, visit the Design for Trust website.

About SRI International
SRI International creates world-changing solutions making people safer, healthier, and more productive. Visit sri.com to learn more.
