How to improve trust when designing for AI?
Can AI design experiences maintain trust between the system and the user?
Alibaba Cloud's Enterprise Application Team launched the Luban system in 2018, and most of the banners and promotional images you see on Tmall are designed by Luban. So if AI can already design at that scale, the question is whether it can also sustain trust with the people who use it.
As IBM Design for AI notes, purpose (the reasons for a user to engage with the system), value (the augmented capabilities provided by the system that tangibly improve a user's life), and trust (the willingness of a user to invest in an emotional bond with the system) are the three fundamental factors when designing for AI. But what affects trust in AI, and how can we improve users' trust?
What factors affect trust in AI?
To explore how AI affects trust between the system and its users, IBM researcher Dakuo Wang and his colleagues conducted a "trust study" with data scientists using IBM's AutoAI automated machine learning tool. Thirty data scientists were split into two groups, each tasked with building the best model they could. The full study and its results are described below.
An interesting result of the study is that participants found AutoAI efficient and predictable and liked it for decision-making, yet they still questioned AutoAI's reliability and the rationale behind its decisions.
So how can we improve trust in an AI environment? A recent article in the Annual Review of Psychology revealed several factors that affect what human beings believe:
- Statements accompanied by a feeling of ease are more likely to be judged true than those that feel strange or difficult to understand.
- Statements with pictures are more likely to be judged true than those without images.
- Repeated statements are more likely to be judged true than those seen fewer times; even a single previous exposure to a claim proves powerful.
- Familiar statements are more likely to be judged true than those with less relevant information in memory.
Overall, the easier information is for users to digest and the less effort they have to invest, the more they trust it.
How do we improve trust when designing for AI?
1. Build in transparency.
IBM AutoAI ships excellent transparency features, such as visualizations of input-data distributions, a graphic depicting the feature-engineering process, and clear comments in the notebook AutoAI generates, all of which improve users' trust in and understanding of the automated machine learning process.
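To make the idea concrete, here is a minimal sketch (not AutoAI's actual code; the function and data are invented for illustration) of surfacing an input-data distribution as a simple text histogram a user can inspect before trusting the model:

```python
import random

def distribution_summary(values, bins=5):
    """Bucket numeric values and render a text histogram for transparency."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp max value into last bin
        counts[idx] += 1
    lines = []
    for i, c in enumerate(counts):
        left = lo + i * width
        lines.append(f"[{left:7.2f}, {left + width:7.2f}) {'#' * c} ({c})")
    return counts, "\n".join(lines)

random.seed(0)
ages = [random.gauss(40, 10) for _ in range(100)]  # hypothetical input feature
counts, text = distribution_summary(ages)
print(text)  # what the user would see alongside the trained model
```

Even a lightweight view like this answers the "what data did the model actually see?" question that the trust study found participants asking.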
2. Build empathy with the users of the system.
Designers face the challenge of growing artificial empathy at the same pace as artificial intelligence. For example, IBM AutoAI presents a clear process map and shows the algorithm step by step, so users understand what the AI system is doing and what its current status is.
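As an illustration of step-by-step status reporting in the spirit of that process map (the stage names and functions here are invented, not AutoAI's API), a pipeline can narrate each stage as it runs:

```python
# Hypothetical sketch: emit a human-readable status line for each pipeline stage.
PIPELINE_STAGES = [
    "Reading data",
    "Preprocessing",
    "Model selection",
    "Feature engineering",
    "Hyperparameter optimization",
]

def run_pipeline(report=print):
    total = len(PIPELINE_STAGES)
    for i, stage in enumerate(PIPELINE_STAGES, start=1):
        report(f"Step {i}/{total}: {stage}...")
        # ... the real work for this stage would happen here ...
        report(f"Step {i}/{total}: {stage} done")

messages = []
run_pipeline(report=messages.append)  # capture instead of printing
print("\n".join(messages))
```

Narrating progress this way costs little but directly addresses the study's finding that users distrust a system whose inner workings are opaque.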
3. Build with mistakes in mind
Human beings are not perfect and sometimes make mistakes, and the same is true of AI. But trust between users and AI systems erodes when the AI makes mistakes, so maintaining trust at those moments is essential. Designing for users' feedback, so it helps the system learn and improve, is equally essential. Based on the IBM Design for AI fundamentals, designers can contribute more empathy at the stages of expression, user reaction, learning, and outcome.
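One way to sketch that feedback loop (a hypothetical illustration, with invented names, not a description of any IBM system) is a small log that records user corrections so mistakes become the next round's training material:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Collect user corrections so the system can learn from its mistakes."""
    corrections: list = field(default_factory=list)

    def record(self, prediction, user_correction):
        # Only disagreements are worth relearning from.
        if prediction != user_correction:
            self.corrections.append((prediction, user_correction))

    def retraining_batch(self):
        """Examples the model got wrong, ready for the next training round."""
        return list(self.corrections)

log = FeedbackLog()
log.record(prediction="approve", user_correction="approve")  # agreement: ignored
log.record(prediction="approve", user_correction="reject")   # mistake: kept
print(len(log.retraining_batch()))
```

Closing the loop like this signals to users that their reaction to a mistake actually changes the system, which is what sustains trust after an error.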
Thanks for reading! I’m happy to hear about your thoughts and comments.
If you liked this article or have any suggestions, please follow me for the next ones, or feel free to reach out to me on LinkedIn!
Don’t hesitate to clap and it will encourage me for more contributions (•‿•)
Yanbin Hao is a Design Researcher intern at IBM based in Austin. The above article is personal and does not necessarily represent IBM’s positions, strategies, or opinions.
- Brashier, N. M., & Marsh, E. J. (2020). Judging truth. Annual Review of Psychology, 71, 499–515.
- Designing trust into AI systems: https://medium.designit.com/designing-trust-into-ai-systems-545f1fb93263
- Drozdal, J., Weisz, J., Wang, D., Dass, G., Yao, B., Zhao, C., … & Su, H. (2020, March). Trust in AutoML: exploring information needs for establishing trust in automated machine learning systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces (pp. 297–307).
- Heuristic Evaluation and Expert reviews of AutoAI within Watson Studio: https://medium.com/yanbinhao/heuristic-evaluation-and-expert-review-of-autoai-within-watson-studio-5df478d8ae92