Beyond “Average” Users: Building Inclusive Design Skills with the CIDER Technique
In today’s connected world, everyone should be able to effectively interact with technology so they can access the benefits it provides. Technology creators should consider different kinds of user needs as they’re making software and hardware interfaces. To get there, tomorrow’s technology creators — today’s computing and tech design students — should be equipped with the skills they need to practice not only effective technology design, but inclusive design. Prior work indicates that teaching these skills can be difficult, though necessary if we want to build a more critical computing workforce.
In response to this need, we created the CIDER (Critique, Imagine, Design, Expand, Repeat) assumption elicitation technique. CIDER is a five-stage analytical design evaluation method that helps learners start thinking beyond “average” users by highlighting the many ways that technology can fail to meet diverse user needs and how it might be made more inclusive.
This post is written mainly for educators who are looking to integrate inclusive design topics into their computing or technology-related courses — for instance, people leading human-computer interaction (HCI), user interface/experience (UI/UX) design, or software engineering courses. We define a few key terms, describe and motivate the CIDER technique, and give an example of how an activity might look.
That said, the basic CIDER framework has been used informally in other contexts — from pre-service teacher training courses on instructional design to workshops on youth critical digital literacy — and it seemed to work well there too!
Defining our terms
We use certain terms in specific ways in this post. Here’s what we mean:
- assumption: In this context, we mean designers’ (often implicit) assumptions about users’ capabilities, contexts, knowledge, or access to resources. If an interface design rests on an assumption that a user can’t fulfill, it can exclude people from effectively interacting with the technology.
- design bias: The ways that assumptions might make it disproportionately difficult for particular (groups of) users to interact with technology effectively.
- inclusive design: We broadly define “inclusive design” as an approach that tries to mitigate design biases. There are other definitions, but this is the one we’re using here.
- “average” user(s): The misconception that there is some standard set of traits, characteristics, capabilities, and/or knowledge possessed by “most” users, and that a design that works for the “average” user should work well for everyone. As it turns out, there’s no such thing as an average user. Designers who create tech for “averages” often just end up making things that only work well for socio-culturally dominant populations, disadvantaging folks from marginalized groups.
So, what is the CIDER technique?
As mentioned above, the CIDER (Critique, Imagine, Design, Expand, Repeat) technique is a five-stage analytical design evaluation method that helps learners think beyond stereotypical (and nonexistent) “average” users.
It’s one of the first educational techniques to use the lens of assumptions about users to help learners critically consider technology usability, leveraging a strategy we call assumption elicitation to help learners recognize and respond to design bias in existing technologies.
The goal of the CIDER technique is to make implicit, exclusionary assumptions embedded in designs visible to learners — and along the way give them some concrete practice doing inclusive design activities.
Why use CIDER to teach inclusive design skills?
CIDER targets common learning difficulties
CIDER was created specifically to address several common learning difficulties computing students face in introductory design courses, namely:
- Motivating the need to learn about critical design approaches and inclusive design by showing the pervasiveness of bias in everyday technologies.
- Connecting design features to designers’ choices and beliefs about their potential users by revealing how implicit assumptions can impact the design of an interface.
- Designing for human diversity while avoiding stereotyping of marginalized groups by involving learners’ own knowledge and having them engage with peers’ perspectives.
- Moving from abstract inclusion goals to concrete actions and skills by scaffolding the process of identifying and reasoning about different kinds of design bias.
CIDER activities increase confidence, support design bias recognition, & help build actionable inclusive design skills
We did an initial evaluation of the CIDER technique’s efficacy in an introductory interface design course with university computing students. Learners did a series of six CIDER-based activities over 11 weeks on different technological artifacts. Among our results, we found that:
- CIDER activities contributed to statistically significant increases in learners’ confidence in their abilities to practice inclusive design.
- Learners identified nine different kinds of design bias, surfacing embedded assumptions about users’ physical and mental capabilities, access to resources, prior knowledge, and surrounding environments.
- In follow-up interviews, some learners even reported using the skills they had gained through CIDER activities in subsequent design work (e.g., at internships or while building portfolios) to make the technologies they created more inclusive.
More details on all of the above can be found in CIDER’s foundational journal paper (open access).
How does a CIDER activity work?
Through guided critique and brainstorming, CIDER helps learners practice recognizing design bias and thinking beyond “average” users.
There are two roles, generally corresponding to the typical roles of instructors/TAs and students.
- Activity leaders choose the technology that will be the focus of the activity and are in charge of creating the shared list of assumptions in the fourth (EXPAND) stage.
- Learners practice identifying embedded assumptions, brainstorm redesign options to improve inclusiveness, and engage with their peers’ responses to gain new perspectives.
The basic structure of each activity includes one set-up task and five stages:
- Set-up task: Leaders choose an existing technology to be the focus of the activity. It can be software or hardware based, but it should have an evident interface or means of user interaction.
- C: Learners CRITIQUE the technology to identify implicit assumptions about users present in the design.
- I: Learners pick one assumption they identified and IMAGINE how it could lead to exclusion.
- D: Learners (re)DESIGN the technology by brainstorming ways to change the design so that it doesn’t rely on the chosen assumption.
- E: Leaders collect and create a shared list of assumptions from stage C; Learners use the list to EXPAND their knowledge of design bias using peers’ insights.
- R: Learners REPEAT stages I and D using a new assumption from the shared list, growing their knowledge base for inclusive design.
This framework is flexible. It can be used for a quick 10-minute warm-up at the beginning of a class, or expanded into a longer session by doing the REPEAT stage multiple times with several different assumptions to investigate different instantiations of design bias. Learners might brainstorm assumptions individually or together in groups. However they best fit your situation, CIDER activities and their adaptations can be a source of valuable insights and inclusive design skill development for learners.
An Example Activity: QWERTY keyboard
To illustrate the above framework with a concrete example, here’s one activity we used in our case study of the CIDER technique’s efficacy.
We did this study during a quarter of online teaching necessitated by the ongoing COVID-19 pandemic, so the format of this activity was both digital (through the Canvas LMS) and asynchronous. Learners completed stages C, I, and D on their own; the leader compiled a list for stage E and posted it to the course page; and learners picked an assumption from the list to complete stage R. Even in this format, we saw some pretty promising outcomes!
Set-up: Choose a technology [Leader]
To begin with, the leader picks some piece of real-world technology as a focus. In this example, we chose a desktop English (US) character QWERTY keyboard. When creating the activity, leaders should provide (at minimum) one or more images depicting the technology’s interface as well as a textual or verbal description of important features. If you’re doing this activity in-person, you might even bring in the technology and let learners interact with it themselves. The goal is to ensure learners have a baseline understanding of what the object is and how one might interact with it.
Any kind of technology might be used here, but keep in mind that the artifact’s prevalent interaction style will impact what kinds of inclusion learners focus on. For instance, in our case study, critiquing a keyboard led learners to focus comparatively more on motor/mobility issues, while a Google Home smart speaker led to more identified assumptions about hearing ability, and a critique of the Zoom video calling software surfaced embedded assumptions about computer capabilities and Internet speeds.
Even with the above, it’s important that learners all share a single target technology for their critiques and brainstorms. A shared object of critique is what enables the EXPAND stage to work well so learners can engage with their peers’ perspectives. If you want to try the technique on different artifacts, it’s better to conduct a series of CIDER activities on various technologies, spread out over time.
Stage 1: CRITIQUE [Learners]
Our prompt: What assumptions does this keyboard’s design make about users’ potential interactions with it? List as many as you can think of in the next 3–5 minutes. Bullet points encouraged.
For example: One assumption embedded in this design is that users can visually recognize Latin/Roman alphabet characters.
The goal of this stage is for learners to practice recognizing as many different kinds of design bias as possible. They can draw on their own experiences and knowledge of the world to identify assumptions.
In early technique development, we were worried that learners with no design experience might find this impossible at first. Luckily, that wasn’t the case! Learners’ experiences are a rich knowledge base to draw upon — see, e.g., the Funds of Knowledge teaching approach.
Stage 2: IMAGINE [Learners]
Our prompt: Select one of the assumptions you just came up with that you think is important to address. Write a 1–2 sentence scenario where a user could not use the keyboard as expected because of the assumption you selected.
This represents one way the design could exclude certain kinds of users.
In the IMAGINE stage, learners narrow their focus from many different kinds of bias to one particular instantiation of bias by choosing a particular assumption to brainstorm about. Learners write a short scenario in which the assumption prevents someone from interacting with the technology.
The scenario learners come up with here provides a means of making the effects of design bias concrete, helping to connect assumptions and reality. It also enables learners to continue integrating their own knowledge into the activity — maybe they write about their own experiences with this kind of bias, or maybe a close friend or family member’s.
Stage 3: DESIGN [Learners]
Our prompt: Brainstorm ways you might change the keyboard’s design to avoid the scenario you just wrote. List as many different potential solutions you can think of over the next 3–5 minutes — aim for ten or more. Bullet points encouraged.
In the DESIGN stage of CIDER activities, learners brainstorm different ways to make the technology more inclusive by proposing changes to the technology’s design that avoid or mitigate the scenario they just came up with. By doing so, they come up with concrete proposals to address one specific kind of design bias.
As with the first (CRITIQUE) stage, the goal for this stage is quantity of ideas over quality. Not all the design changes learners propose will be feasible or even desirable, and that’s perfectly fine! Different solutions will be more or less effective, and no one solution will work for every single user (resisting that pesky “average” user notion again). Sitting with these nuances is one way novice designers can begin to grasp and navigate design tradeoffs when prioritizing inclusion.
Stage 4: EXPAND [Leader & Learners]
Our procedure: After learners electronically submit responses to the first three stages, the leader manually reviews each learner’s list of assumptions generated in the CRITIQUE stage. The leader creates a document consisting of all the different kinds of assumptions from learners’ responses, preserving breadth but removing close duplicates. The leader posts this list in a place learners can access and integrates it into a second electronic assignment containing the REPEAT stage prompts.
In the EXPAND stage, the leader uses learners’ lists of identified assumptions (from the CRITIQUE stage) to create an overall list of embedded assumptions present in the artifact. Since learners all have different backgrounds and perspectives, no two assumption lists will be quite alike. This means the resulting shared list is usually quite extensive, covering many different categories of potential design bias.
The EXPAND-stage shared list plays two major roles. First, it exposes learners to a wide array of embedded assumptions, surfacing new manifestations of design bias and new perspectives. Second, it’s the main source of feedback in CIDER activities, revealing assumptions that learners “missed” in their own critiques.
In an in-person course, the form of this “list” might be more lightweight: Learners could discuss assumptions with partners or in small groups, or leaders might ask learners to simply call out a few assumptions they came up with. For our online course, where this stage resulted in an actual document, we found that learners appreciated being able to access the shared list after the activity concluded, using it as a resource in their final design projects. Options abound! Any format can work, so long as learners can access their peers’ responses.
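If your learners submit their CRITIQUE-stage responses digitally, a small script can help with the first pass of compiling the shared list. Here’s a minimal, hypothetical Python sketch (the function name, input shape, and naive normalization rule are our own assumptions): it only removes exact duplicates after normalizing case and whitespace, so judging *close* duplicates — and preserving breadth — still needs the leader’s human eye.

```python
# Hypothetical helper for the EXPAND stage: merge learners' CRITIQUE-stage
# assumption lists into one shared list, dropping exact (normalized) duplicates.
# Close paraphrases still require manual review by the activity leader.

def merge_assumption_lists(submissions):
    """submissions: list of lists of assumption strings, one list per learner.
    Returns a deduplicated shared list, preserving first-seen order."""
    seen = set()
    shared = []
    for learner_list in submissions:
        for assumption in learner_list:
            # Normalize case and collapse whitespace for duplicate detection
            key = " ".join(assumption.lower().split())
            if key and key not in seen:
                seen.add(key)
                shared.append(assumption.strip())
    return shared

# Example with made-up learner responses:
subs = [
    ["Users can see the keys", "Users can press keys with their fingers"],
    ["users can see the keys", "Users can read the Latin alphabet"],
]
print(merge_assumption_lists(subs))
```

The leader would then review the merged output, consolidate near-duplicates by hand, and post the result for the REPEAT stage.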
Stage 5: REPEAT [Learners]
Our prompt: Select another assumption from the collaboratively-generated class list that you think is important to address. Make sure to choose a different assumption than you used for the first part of this activity. Choose one that you didn’t even come up with in your first critique, if possible.
<Repeat prompts from Stage 2 (IMAGINE) and Stage 3 (DESIGN)>
The REPEAT stage is really just a second iteration of the IMAGINE scenario and DESIGN brainstorming stages using an assumption drawn from the list created in the EXPAND stage. (Plus, it makes it so the complete “CIDER” acronym can exist, and who doesn’t like a snappy acronym?)
For this second round of brainstorming, it’s critical that learners select a peer’s assumption from the EXPAND-stage list, ideally one that they hadn’t identified during their own earlier CRITIQUE stage. This is the main mechanism CIDER relies on to broaden learners’ understandings of inclusion issues and design exclusion. By engaging with an assumption that they hadn’t recognized previously, learners are virtually guaranteed to increase their knowledge base of design bias examples by at least one new manifestation, which they can later draw upon in future design work.
With the CIDER technique, learners practice identifying inclusion issues, brainstorm several concrete ways to make technology more inclusive, and gain at least one new perspective on design bias that they may not have considered previously. Practice and reinforcement are important for retention of these skills, but our case study suggested that even a single CIDER activity can have positive impacts on learners’ knowledge of inclusive design! Even if it’s just a quick warm-up activity, try this technique out in your course or design group to get people thinking critically about inclusive technology.
Interested in diving deeper into the foundations of the CIDER assumption elicitation technique? Want more information about teaching with CIDER? Check out our (open access!) journal paper for theoretical foundations (Section 3) and considerations for teaching (Section 6.3), or reach out to Alannah Oleson (olesona@uw.edu).