How to improve trust when designing for AI?

Can AI design experiences maintain trust between the system and the user?

Yanbin Hao
May 25, 2020 · 4 min read

Alibaba Cloud's Enterprise Application Team launched the Luban system in 2018, and most of the banners and promotional images you see on Tmall are designed by Luban. But can AI-designed experiences maintain trust between the system and the user?

As IBM Design for AI notes, purpose (the reasons for the user to engage with the system), value (the augmented capabilities provided by the system that tangibly improve a user’s life), and trust (the willingness of a user to invest in an emotional bond with the system) are the three fundamental factors when designing for AI. But what affects users’ trust in AI, and how can we improve it?

From Demo of Alibaba’s AI Designer, Luban

What factors affect trust in AI?

To explore how AI affects trust between the system and its users, IBM researcher Dakuo Wang and his colleagues conducted a “trust study” of data scientists using IBM AutoAI, an automated machine learning tool. Thirty data scientists were divided into two groups, and their task was to build the best model they could. The full study and its results are below.

The interesting result of the study is that participants found AutoAI efficient and predictable, and liked it for decision making. But they still questioned AutoAI’s reliability and the rationale behind its decisions.

So how do we improve trust in an AI environment? A recent article in the Annual Review of Psychology revealed several factors that affect what human beings believe:

  1. Statements accompanied by a feeling of ease are more likely to be judged true than those that feel strange or difficult to understand.
  2. Statements with pictures are more likely to be judged true than those without images.
  3. Repeated statements are more likely to be judged true than those with fewer exposures; even a single previous exposure to a claim proves powerful.
  4. Familiar statements are more likely to be judged true than those with less related information in memory.
From Brashier & Marsh (2020)

Overall, the easier information is for users to digest and the less effort they have to make, the more they trust it.

How do we improve trust when designing for AI?

1. Build in transparency.

Notebook of AutoAI

IBM AutoAI builds in transparency through excellent features such as visualizations of input data distributions, a graphic depicting the feature engineering process, and clear comments in the AutoAI notebook, all of which improve users’ trust in, and understanding of, the automated machine learning process.
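As a purely hypothetical sketch (not IBM AutoAI’s actual implementation; all names here are invented for illustration), a transparency feature of this kind might surface a plain summary of each input feature’s distribution before any modeling happens, so users can see what the pipeline is actually working with:

```python
# Hypothetical sketch: surfacing input-data distributions to users
# before automated modeling, so the process is less of a black box.

def summarize_feature(name, values):
    """Return a small, human-readable summary of one numeric feature."""
    n = len(values)
    return {
        "feature": name,
        "count": n,
        "min": min(values),
        "max": max(values),
        "mean": round(sum(values) / n, 2),
    }

def transparency_report(dataset):
    """Summarize every feature in a {name: values} dataset so users can
    inspect the inputs the AutoML pipeline will train on."""
    return [summarize_feature(name, vals) for name, vals in dataset.items()]

if __name__ == "__main__":
    data = {"age": [23, 45, 31, 52], "income": [40_000, 85_000, 60_000, 72_000]}
    for row in transparency_report(data):
        print(row)
```

Even a minimal report like this gives users a concrete reference point for questioning the system’s later decisions, which is the heart of the transparency argument above.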

2. Build empathy with the users of the system.

From IBM Design for AI

Designers face the challenge of growing artificial empathy at the same pace as artificial intelligence. For example, IBM AutoAI presents a clear process map and shows the algorithm step by step, so users understand what the AI system is doing and what its current status is.

The process map of AutoAI

3. Build with mistakes in mind

Human beings are not perfect and sometimes make mistakes, and the same is true of AI. But trust between users and an AI system erodes when the AI makes mistakes, so it is essential to maintain trust at exactly those moments. Designing ways to take users’ feedback and help the system learn and improve is essential. Based on the IBM Design for AI fundamentals, designers can contribute the most empathy at the stages of expression, user reaction, learning, and outcome.
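As a hedged illustration of this idea (the class and method names below are invented for this sketch, not part of any IBM tooling), a feedback loop might capture a user’s correction whenever the AI gets something wrong, turning mistakes into future training data:

```python
# Hypothetical sketch: logging user corrections so an AI system can
# learn from its mistakes instead of silently losing the user's trust.

class FeedbackLog:
    """Collects user corrections to wrong predictions for later retraining."""

    def __init__(self):
        self.corrections = []

    def record(self, input_item, predicted, corrected):
        """Store one correction: what the AI predicted vs. what the user said."""
        self.corrections.append(
            {"input": input_item, "predicted": predicted, "corrected": corrected}
        )

    def training_examples(self):
        """Return (input, label) pairs built from the users' corrections."""
        return [(c["input"], c["corrected"]) for c in self.corrections]


# Example usage: a user corrects a mislabeled promotional banner.
log = FeedbackLog()
log.record("banner_03", predicted="fashion", corrected="electronics")
```

Acknowledging the mistake in the interface and visibly acting on the correction is what lets the system recover the trust it just lost.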

Thanks for reading! I’m happy to hear about your thoughts and comments.

If you liked this article or have any suggestions, please follow me to see the next ones, or feel free to reach out to me on LinkedIn!

Don’t hesitate to clap; it will encourage me to keep contributing (•‿•)

Yanbin Hao is a Design Researcher intern at IBM based in Austin. The above article is personal and does not necessarily represent IBM’s positions, strategies, or opinions.


  1. Brashier, N. M., & Marsh, E. J. (2020). Judging truth. Annual Review of Psychology, 71, 499–515.
  2. Designing trust into AI systems:
  3. Drozdal, J., Weisz, J., Wang, D., Dass, G., Yao, B., Zhao, C., … & Su, H. (2020, March). Trust in AutoML: Exploring information needs for establishing trust in automated machine learning systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (pp. 297–307).
  4. Heuristic Evaluation and Expert reviews of AutoAI within Watson Studio:

IBM Design

Stories from the practice of design at IBM
