In AI we trust?

How might we design intelligent systems that inspire trust?

Paul Chang
The Dish
4 min read · Sep 10, 2019


Co-authored with Manish Kothari, President, SRI International

We’re presently experiencing a major shift in technology led by developments in artificial intelligence (AI). We place our trust in technology every day, but with AI we’re starting to receive recommendations made by machines. As a result, our perceived risks, whether real or imagined, raise new questions that must be answered before we can trust these systems: Does the system value the same things that I do? How does it make decisions? Am I safe?

It’s important to note that a trustworthy system — reliable, safe and ethical — doesn’t imply that it will be trusted by its users. High-profile follies and failures have brought AI under scrutiny. Uber’s autonomous-car accident, Northpointe’s racially biased crime-prediction algorithm and Facebook’s and Google’s ubiquitous “echo chambers” are just a few examples. As AI capabilities and capacities grow, so too does our responsibility to create safe, reliable and ethical systems that people feel comfortable using. To this end, we pose the question: “How might we design intelligent systems that inspire trust?”

Thought leaders have spoken out on the need for designing trustworthy systems with overarching design principles (e.g., Google’s principles for AI); however, few have attempted to develop methods to specifically build trust among users. Just as Apple extended and combined the principles of industrial design with interaction design to create better digital products, we propose an extension to user-centered design concepts that explicitly addresses the trust relationship between humans and intelligent systems.

“Design for Trust” is a set of principles that begins to codify the processes and methods we need to fulfill our responsibility, as designers and technologists, to build trust into our work. We’ve distilled our learnings into three overarching principles with a common theme: always approach solutions from the perspective of the user.

In the coming weeks, we’ll take a deeper look into each of the design-for-trust principles presented here through a three-part series.

Principle #1: User expectations and perceived risks drive trust requirements.

Trust is an emotional, social contract built upon the user’s mental models, which reflect his or her expectations and perceptions of risk. As such, we must not only design trustworthy systems but also help our users recognize those systems’ virtues. Simply put, if we aren’t aligned with our audience’s expectations and perceived risks, then no matter how useful the system is, we’re just going to build something that no one feels comfortable using.

Read the full article.

Principle #2: Trust is a dynamic relationship. It is tentatively granted, then tested over time.

Managing expectations happens throughout the product experience. Consider the phases of trust-building: the initial sales pitch to attract customers, onboarding, a learning phase (for both sides) and an ongoing engagement process where the product must continue to provide compelling value to remain “sticky.”

Transparency and clear communication throughout these phases are foundational elements of building trust. Clearly communicating and educating people about your system’s behaviors and capabilities will not only help dispel or confirm their mental models but also help them understand and appreciate the limitations, trade-offs and best practices of the underlying technologies.

Read the full article.

Principle #3: When building trust, motivation is more powerful than demonstration or explanation.

An alluring promise of AI is its ability to address big problems that might otherwise be intractable, yet many people get caught up in the potential pitfalls or ominous hearsay along the way. Introducing the company’s mission early and remaining true to that mission throughout the user’s experience is one way to build a stronger foundation of trust and, in effect, buy the system time to gather more data to improve performance.

Once you have the user “on your side,” the burden of proof and the need for verbose interactions are lessened. For example, while self-driving cars still face technical and societal challenges, they will someday make our roads safer and give the physically impaired the freedom of mobility. We can’t forget to share the bigger picture and educate our users on the true potential of AI. People are more motivated and trusting when they understand the mission and feel that they are contributing to the greater good.

Read the full article.

As the capabilities of machines continue to evolve toward synthesis and creativity, it becomes ever more incumbent upon us to consider the feelings, expectations and perceptions of the people who will use them. This can only happen if we integrate design-for-trust methods into user research, user modeling and product management.

Our hope is that the user-centered, design-for-trust principles presented here, and the design patterns that emerge from them, will remain relevant even as technology evolves. Let’s work towards awareness of the human side of human-machine interaction, and towards mastery of these principles, in order to build intelligent systems that better manage a person’s expectations, perceptions and fears and ultimately earn his or her trust.

Visit the design-for-trust website to learn more about SRI’s design-for-trust methods.

About SRI International
SRI International creates world-changing solutions making people safer, healthier, and more productive. Visit sri.com to learn more.

Paul Chang is SVP, Design and Experience at Medidata Solutions.