Designing for User Trust in the Era of AI

Roberto Ortiz
Sensai Group

--

As product designers and developers, our goal always starts and ends with people. We gather requirements, design, code, test assumptions, and iterate on solutions until they meet the needs of users and hit business objectives.

Unfortunately, what tends to be overlooked is user trust. As AI continues to change how software works, it becomes even more important to consider what measures to take to ensure user trust is part of the blueprint. Easier said than done, I know.

Trust is not a “component” to design for. It’s part of the rational and emotional bonds of a social contract, which makes it hard to check off on a list of requirements.

Trust cannot be given, but it can be enriched with intentional design. Here are a few principles that will help you get there.

Always Add Value

As your organization begins to harness and deploy the power of AI within your software, it’s tempting to rush or overlook the user experience. We tend to jump to conclusions about user value so we can hurry up and put it front and center. That’s exciting, but it can also be dangerous.

Don’t rush or force it. Instead, ask yourself “how would this tangibly add value to the user?” Then, ask yourself “what’s the best way to surface this to the user?”

Start small. I know it doesn’t sound cool or transformational, but small wins will help users slowly build trust in your product. Don’t risk false positives. Only deploy on the frontend the things that always add value to the user. Always.
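
If you want to make that rule concrete in code, one common approach is to gate suggestions behind a confidence check on the frontend, so low-confidence output never reaches the user. Here’s a minimal sketch in TypeScript; the Suggestion shape, the 0.9 threshold, and the /api/suggest endpoint are my own illustrative assumptions, not a prescribed API.

    // A minimal sketch: only surface an AI suggestion when the model is confident.
    // The Suggestion type, the threshold, and the /api/suggest endpoint are illustrative.

    interface Suggestion {
      text: string;
      confidence: number; // model score in [0, 1]
    }

    const CONFIDENCE_THRESHOLD = 0.9; // tune against real precision data

    async function fetchSuggestion(input: string): Promise<Suggestion | null> {
      const res = await fetch("/api/suggest", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ input }),
      });
      if (!res.ok) return null; // no suggestion beats a bad one
      return res.json();
    }

    export async function getSuggestionToShow(input: string): Promise<string | null> {
      const suggestion = await fetchSuggestion(input);
      // Below the threshold, show nothing rather than risk a false positive.
      if (!suggestion || suggestion.confidence < CONFIDENCE_THRESHOLD) return null;
      return suggestion.text;
    }

The detail that matters is the early return: “show nothing” is a perfectly good outcome, and a trust-preserving one.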

Hold Their Hand

When you have an existing product, the way you choose to introduce an AI-powered enhancement becomes especially important. As humans, we are creatures of habit. We see patterns, we optimize within them, and we establish processes that evoke safety and familiarity.

Therefore, we tend to be critical of anything new that will disrupt our habits, our flow. This is especially true with AI-powered features. These smart features tend to be predictive and autonomous in nature, which goes against our need for control and trust.

So, when deploying an AI enhancement to your product, make sure to lead with empathy and design a smooth transition for your users. Less is more here as well. Show up in subtle ways. Don’t overwhelm the user with educational jargon; rather, design AI contextually into flows the user is already familiar with.

Gmail’s Autocomplete with Smart Compose

Example: At first, users may have thought Gmail’s ‘Smart Compose’ autocomplete was too accurate for comfort; however, most have come to accept it as their inbox assistant. The interaction is subtle, allowing the user to simply ignore the suggestion or press Tab to accept it.
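
For the curious, the “ignore it or press Tab” pattern can be sketched in a few lines of TypeScript. This is only a rough approximation under assumed element IDs and a pre-fetched suggestion; Gmail’s real implementation is far more sophisticated.

    // Rough sketch of a Tab-to-accept inline suggestion, in the spirit of Smart Compose.
    // The element IDs and the suggestion source are hypothetical placeholders.

    const composeField = document.getElementById("compose") as HTMLTextAreaElement;
    const ghostHint = document.getElementById("ghost") as HTMLElement; // greyed-out hint text

    let currentSuggestion = "";

    function showGhost(suggestion: string): void {
      currentSuggestion = suggestion;
      ghostHint.textContent = suggestion; // rendered faintly after the caret via CSS
    }

    // Elsewhere, showGhost(modelSuggestion) would be called as the user types.
    composeField.addEventListener("keydown", (event: KeyboardEvent) => {
      if (event.key === "Tab" && currentSuggestion) {
        event.preventDefault(); // keep focus in the field
        composeField.value += currentSuggestion; // accept the suggestion
        showGhost("");
      }
      // Any other key just keeps the user typing; the suggestion stays easy to ignore.
    });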

Gather Feedback & Improve

You’ll hear conflicting beliefs on this, but more options on a screen aren’t always a bad thing. In fact, when deploying ML models to help personalize experiences, it’s best to plan for feedback interactions.

Netflix’s rating interaction

The more feedback, the better.

Direct user feedback trumps everything, so make it easy and clear for the user to provide your models with the right data.
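
In practice, “easy and clear” can be as small as a thumbs-up/thumbs-down control that sends an explicit signal back to your pipeline. Here’s a rough TypeScript sketch; the /api/feedback endpoint and the payload fields are assumptions about your own backend, not an existing API.

    // Sketch: capture an explicit user verdict on a recommendation and log it
    // for later retraining. The endpoint and payload shape are illustrative.

    interface FeedbackEvent {
      itemId: string;          // what the model recommended
      verdict: "up" | "down";  // the user's direct signal
      context: string;         // e.g. the surface it appeared on
      timestamp: string;
    }

    export async function recordFeedback(
      itemId: string,
      verdict: "up" | "down",
      context: string,
    ): Promise<void> {
      const event: FeedbackEvent = {
        itemId,
        verdict,
        context,
        timestamp: new Date().toISOString(),
      };
      // Fire-and-forget so the UI never blocks on feedback logging.
      await fetch("/api/feedback", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    }

    // Example: the user rejects a recommendation card.
    // recordFeedback("rec-1234", "down", "homepage-row-2");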

We live in an era where AI has become the norm in consumer software. From training your radio station on Spotify to rejecting a product recommendation on Amazon, users are ready and willing to help train their robot helpers.

So, don’t be afraid to ask for help.

Lastly, as your organization works on the next big thing, remember to intentionally design for user trust. Always add value, hold your users’ hands, and ask for feedback often. In the analog world, trust takes years to build and seconds to break down; in the digital space, that’s only magnified.

Need help with your ML strategy? We’d love to help. Contact us at hello@sensaigroup.com to set up a free consultation.

--

Roberto Ortiz
Sensai Group

Nestled in Colorado. Building products, loving people, always growing. Previously led UX Design at Yahoo and Google.