Design for AI: Understanding Mental Models

by Armon Burton (IBM), Gabby Hoefer (IBM), Claudia Richard (IBM), Milena Pribic (IBM), Hal Wuertz (Amazon), Alex Baria (UL)

Armon Burton · IBM Design · 8 min read · Aug 9, 2024


When designing products that are infused with AI, it’s essential that experiences not only meet users’ needs but also enable users to understand how the AI system works and how it’s affected by their actions. A key challenge, however, is understanding a user’s mental model at the start of an AI experience and how their understanding and use of the system evolve over time. How do designers evaluate their users’ mental models? How might designers accurately analyze mental models across various user groups? With the rising popularity of generative AI, understanding mental models is more relevant than ever and a vital tool for designers of all disciplines.

We developed a new method for UX researchers to unpack their users’ mental models and ultimately design more responsible AI systems. When designers understand their users’ mental models, they can leverage that knowledge as a dynamic asset to create responsible, transparent, and personalized experiences. Our work was presented at UXPA International in June 2024.

In this article, you will learn…

  • The importance of evaluating mental models for AI design
  • How to visualize and analyze mental models across users in order to drive design recommendations
  • The UX researcher’s role in translating a mental model into a dynamic asset
  • How to redefine the design process by bringing together technical and nontechnical roles

Utilizing mental models

In our day-to-day life, we’re constantly making assumptions and judgments and taking action based on what we know. It’s the bread and butter of being. We’re consuming information about the world around us, which shapes how we view it and the actions we ultimately choose or don’t choose to take. Our thoughts, experiences, and opinions build mental models of the world in which we live and of how we affect it.

A mental model is a blueprint our brain uses to understand, make sense of, and navigate the world. It’s a cognitive framework built from our past experiences, knowledge, and beliefs.

For example, let’s say there are two doors. One door has a traditional door knob, and the other door has a touchpad interface. When someone sees a round doorknob, their mental model tells them to twist the knob to open the door. When they encounter a touchpad, their mental model might tell them to tap or swipe it.

Mental models bridge the gap between what we know and what we encounter, connecting information, understanding, and interaction. When a mental model doesn’t match the intended use of or interaction with an object, it can lead to misinterpreting the object’s purpose or to misusing the object entirely.

As a designer, you craft the digital interfaces and experiences that directly shape your users’ mental models of your products. Understanding a user’s perception and matching their mental model is a crucial step in a user-centered design process and in avoiding frustration, misuse, or product abandonment.

The first step to understanding your users’ mental models is understanding their habits, prior experiences, perspective, and expectations. You might employ UX research methods, such as interviews or usability studies. Our team created a resource to help you unpack them.

How we started

In 2020, the HCAI team in IBM Research wrote a CHI 2020 award-winning research paper in which they examined users’ perceptions of and beliefs about AI systems. As members of the explainability work stream in our cross-IBM Design for AI guild, we were intrigued by the paper. After reading it, we contacted the HCAI team about turning what they learned into an actionable framework that could change how designers think about AI experiences.

We started with a sketching exercise. The goal was to align on how our own mental models worked. In the exercise, we visualized our own mental models for an AI prompt: “Asking Siri about the current week’s weather forecast”. We wanted to define how we thought Siri was building the forecast and responding to us with it.

We gave everyone the freedom to use any approach that they wanted to showcase their thinking, from digital diagrams with popular memes to pen-and-paper sketches.

Figure 2. Our team’s hand-drawn diagrams, illustrating that mental models and the language used to express them often differ across users.

After sketching our mental models, we regrouped to review and compare our diagrams. In some areas, our mental models were very similar. In others, the models were very different. We embraced the differences and unlocked new ways of thinking and problem solving. The success of this exercise led us to create an activity that was dedicated to users drawing out mental models and providing UX researchers with best practices for analyzing them.

What we created

We’ve developed a 4-part mental model activity to capture, unpack, and understand your user’s unique perspective and expectations about your product.

Part 1. Unstructured doodling

First, unpack the user’s unique perception and expectations of your product. Set the stage by developing a scenario for the user to frame their mental model around. Imagine yourself in their shoes. What are they doing? How are they benefiting from your product? Think about every interaction from their perspective and use this information to craft prompts that outline these touch points. Then, invite your user to visually illustrate their expectations for interacting with your product.

Initially, users might struggle to visualize what they think is going on. Encourage them to think broadly. Try asking guiding questions, such as what steps are involved or how inputs connect to outputs. You want them to form a tangible representation of their understanding and expectations.

Remember, this is just one approach to encourage users to visualize their mental model. A mental model can take many forms, such as a diagram filled with memes and pictures or a paper sketch with simple keys. The choice is entirely theirs! Whatever the form may be, encourage your user to think through the nuances of the process and what interactions they feel are important, confusing, or expected.

By the end of this section, you have a visual representation of your user’s perception and expectations of your product, as well as insights into their unique perspective. Use these insights to consider what the user anticipates when engaging with your product.

Part 2. Structured doodling

The Structured Mapping section translates the freeform sketches into uniform, structured representations of the user’s mental model. It allows you to translate different mental models into the same “language”. This structured mental model can take various forms. It might involve overhauling the unstructured doodle or overlaying predefined flow keys (see below) onto the drawing.

Users iterate on their diagram by using the Structured Mapping flow keys.

This isn’t just about translating and refining the mental model; it’s an opportunity to align your understanding of it as a product team. Through this process, you’ll uncover gaps in understanding or potential issues that could arise in the differences between how the product is designed and the users’ expectations on how to use it.

Figure 3. Better understand a user’s beliefs and expectations of how a system works by overlaying our set of keys to their mental model.

During our own user testing, users enjoyed labeling their diagrams with flow keys because it helped those with a less technical background understand how their perspective mapped to the system as a whole. They seemed to enjoy revisiting their experience with the system after creating their initial diagram. This formalization also provides context into why your users have their perspective and empowers them to break the system down into more precise steps. Having a common structure across users also streamlines the analysis and aids in the next section, in which your team will analyze findings and make concrete recommendations.
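To make the idea of a common “language” concrete, a structured mental model can be thought of as a short list of typed steps. This is a minimal sketch only: the flow-key names (`input`, `process`, `decision`, `output`) and the example steps are illustrative assumptions, not the actual keys or data from our exercise.

```python
from dataclasses import dataclass

# Illustrative flow-key types; the actual keys used in the exercise may differ.
FLOW_KEYS = {"input", "process", "decision", "output"}

@dataclass
class Step:
    label: str      # what the user believes happens, in their own words
    flow_key: str   # the flow key the step was mapped to

    def __post_init__(self):
        if self.flow_key not in FLOW_KEYS:
            raise ValueError(f"unknown flow key: {self.flow_key}")

# A hypothetical user's structured mental model of asking a voice
# assistant for the week's forecast.
mental_model = [
    Step("I ask 'what's the weather this week?'", "input"),
    Step("It figures out what I asked", "process"),
    Step("It looks up a weather service", "process"),
    Step("It reads the forecast back to me", "output"),
]

# A uniform structure makes it easy to summarize models across users.
by_key: dict[str, list[str]] = {}
for step in mental_model:
    by_key.setdefault(step.flow_key, []).append(step.label)

for key in sorted(by_key):
    print(f"{key}: {len(by_key[key])} step(s)")
```

Because every participant’s diagram ends up in the same shape, comparisons across users reduce to comparing lists of typed steps rather than interpreting freeform drawings.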

Part 3. Analyzing user expectations with the AI system

In this part, you’ll engage with your stakeholders, such as designers or developers, and discuss whether your user’s expectations match how the product is designed to work. This reality of how the AI system makes its decisions is also known as the conceptual model.

Figure 4. Excerpt from exercise. Validate the conceptual model and analyze the similarities and differences from the user’s mental model in order to design a more user-centered experience.

After you’ve compared the mental model to the conceptual model, highlight areas that are especially mismatched and brainstorm solutions that further help the two models align. The goal isn’t to do a complete redesign. Focus on finding ways to bridge the gap between the user’s perception and the reality of how the product functions. Ask yourself the following questions:

  • What are the implications for users perceiving the system differently from how it actually works?
  • What explanations can you give to help users better understand the system?
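Once both models are expressed in the same structured form from Part 2, one lightweight way to surface mismatches is a simple step-by-step comparison. The step lists below are hypothetical examples, not the conceptual model of any real system.

```python
# Hypothetical step lists, both expressed as (flow_key, label) pairs.
# What the user believes happens:
mental_model = [
    ("input", "user asks for the forecast"),
    ("process", "assistant looks up a weather website"),
    ("output", "assistant reads the forecast aloud"),
]

# How the system is actually designed to work (the conceptual model):
conceptual_model = [
    ("input", "user asks for the forecast"),
    ("process", "speech is transcribed and parsed for intent"),
    ("process", "a forecast API is queried with the user's location"),
    ("output", "assistant reads the forecast aloud"),
]

# Steps the user expects but the system lacks, and vice versa, are
# candidates for better explanations or design changes.
expected_only = [s for s in mental_model if s not in conceptual_model]
actual_only = [s for s in conceptual_model if s not in mental_model]

print("User expects, system lacks:", expected_only)
print("System does, user doesn't expect:", actual_only)
```

The mismatched steps this surfaces are exactly the places to ask the two questions above: what does the mismatch imply for the user, and what explanation would close the gap?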

Continuous delivery

Continuous delivery is key to ensuring product teams are up-to-date on their users’ expectations. Mental models aren’t fixed and will ideally continue to evolve with product use. Refer to this activity when needed and implement additional methods to validate your designs, assumptions, and hypotheses about the user. By framing these exercises as an iterative process, you can enable your teams to truly bridge the gap between their designs, system requirements, and your users’ expectations.

Who we are

We are a cross-functional team of designers and researchers from across IBM. Our day jobs look very different from one another; we work in CIO Design, IBM Sustainability Software, and IBM Research. We’re cross-discipline, with representation from design research, content design, and user experience design. We are passionate about responsible AI design, and we reflect that passion by contributing to the cross-IBM Design for AI guild.

The IBM Design for AI guild creates tools and templates to help IBMers design world-class experiences that involve AI. Guild teams address a multitude of AI-related topics, such as foundation models, UX for AI, data literacy, and AI heuristics. The guild is a team of builders, focused on learning a little and building a lot. Work from the guild has been shared far and wide at IBM, from IBM AI governance and ethics groups to individual design teams.

Special thanks to the IBM Research HCAI team for their mental models framework research, and to Zahra Ashktorab for her research and collaboration in building this exercise.
