As someone who studied Information Science, I’ve always been drawn to grouping and categorizing things. Imagine my glee when I first learned there was a UX research method that was all about grouping and categorizing. My internal monologue went a little something like this, “Making order out of perceived chaos! My favorite thing! Let’s find ALL THE PATTERNS!”
And while my analytical thinking brain would LOVE to throw card sorting at every problem it could, it’s not a one-size-fits-all method, just like all the other UX research methods out there. So, what’s it good for?
The Basics of Card Sorting
What is card sorting?
Card sorting was traditionally used for assessing information architecture, most often for structuring website navigation, as Jakob Nielsen describes in his 1995 study. However, this deceptively simple method can be used in many more ways than it’s known for. IBM design researchers also use it in some non-traditional ways: storytelling, multi-level hierarchy, and prioritization. I’ll discuss those after establishing the basics of traditional card sorting.
The researcher writes one concept per card (traditionally, these were index cards, but now digital formats also exist). To prep for each testing session, the researcher should make sure the cards are in a random order to guard against sorting bias. When the testing session starts, the participant sorts the cards into the categories that make the most sense to her/him. Typically card sorting results end up with a two-level hierarchy: a category, and its sub-components. Ideally, card sorting should take place over about 15 testing sessions, as also noted by Jakob Nielsen.
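For digital sessions, randomizing the deck can be scripted. The sketch below (a minimal Python example; the deck and session IDs are hypothetical, not from any real study) shuffles a copy of the master card list once per session, seeded by session ID so each participant sees a different order that the researcher can still reproduce later.

```python
import random

# Hypothetical deck of concept cards; a real study would use its own terms.
cards = ["Gas Gauge", "Battery status", "Outside temperature", "Parking brake"]

def shuffled_deck(cards, session_id):
    """Return a fresh random ordering of the deck for a given testing session."""
    rng = random.Random(session_id)  # seed per session: random, but reproducible
    deck = list(cards)               # copy so the master list stays untouched
    rng.shuffle(deck)
    return deck

# Each session gets its own card order, guarding against sorting bias.
print(shuffled_deck(cards, session_id=1))
print(shuffled_deck(cards, session_id=2))
```

Seeding by session ID is a small convenience: if a participant’s session needs to be revisited, the exact card order s/he saw can be regenerated.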
Note: 60+ cards is too many. Shoot for 40 or fewer so participants don’t get overwhelmed. IBM researchers found that once the number of cards gets over 40, participants’ engagement tends to lag toward the end of the session. One researcher who used 59 cards observed a significant drop in participants’ ability to sort after about 40 cards.
There are three types of traditional card sorts: open, closed, and hybrid. There are also two facilitation approaches, moderated and unmoderated, discussed after the traditional types of card sorts.
Open card sorting means that the participant can create categories of her own, or add concepts that are missing. Among the researchers at IBM, the consensus was that this approach works well for generative and exploratory work. It’s more flexible and allows for a more accurate portrayal of a user’s mental model. It can allow new terms to make their way into the project and expand the team’s understanding of the users and their relationship to the concepts. The new cards that participants add can be included in future sessions or left out, depending on how the researcher wants to structure the study.
Pros:
- more accurate portrayal of a user’s mental model
- more flexible
- good for generative phase of a project

Cons:
- less control
- wide array of answers that may not result in a discernible pattern
- bad for evaluative phase of a project
Closed card sorting means that the participant is limited to the cards and categories provided to her; she cannot add her own. The other researchers at IBM agreed that this method is better suited to an evaluative approach, where the terminology or concepts are well established. It is more likely to result in clear patterns and to indicate which concepts relate best to which categories.
Pros:
- more likely to see grouping patterns in results
- more controlled
- good for evaluative phase of a project

Cons:
- less flexibility
- potentially less accurate portrayal of a user’s mental model
- bad for generative phase of a project
Hybrid card sorting. The traditional hybrid method begins as a closed card sort, but allows participants to create categories that may be missing from the card deck.
IBM researchers discussed the merits of the closed and open approaches, and the biggest concern with closed card sorting is the possibility that a relevant concept has been excluded. We never assume we know everything, and when talking to highly technical users, this point becomes especially salient. So we propose a modified hybrid option: at the end of a closed card sort, offer participants the opportunity to suggest missing terms or categories.
Another hybrid that emerged in discussion designates some cards as “category” cards, distinct from the rest of the deck. The category cards could come in a separate deck or be physically set apart from the other cards during the session.
Testing Results and Facilitation
Simplification vs. Complexity in Testing Results
The result of a card sort is a summary of the overriding patterns across sessions. However, that summary captures only a small part of each session: the participant also makes comments as s/he places the cards. The key to getting a full picture from a card sort is documenting the participant’s commentary as s/he places, or debates the placement of, each card.
IBM researchers found that participants occasionally struggle to classify a concept because the card may belong in more than one category. Restricting a card to a single category sometimes limits understanding of where concepts overlap. So we recommend giving participants the opportunity to disclose which concepts could fit in multiple categories and to discuss how they finally settled on a placement. This is a strong advantage of moderated testing sessions over unmoderated ones. (More on that shortly.)
The final analysis distills the patterns, simplifying mental models to one rough estimate of where the cards most often belong. It strips away the complexity of each person’s approach, mental model, and decision process. The way IBM researchers discussed adding that complexity back into a card sort is by including clips and quotes that illustrate why notable concepts do or don’t belong together. Being able to understand why the cards belong in certain categories is key.
Moderated vs. Unmoderated Facilitation
The majority of this post has focused on moderated testing sessions, where a researcher and notetaker are present (either in person or remotely), noting the comments and user behavior. In the moments where the participant debates where to put certain cards, or changes where a card goes, the researcher can prompt discussion about why a certain card might belong in more than one category or why a card should change categories. The advantage of moderated sessions is that those moments are discussed, elaborated on, and documented. The disadvantages are that they are less convenient for international participants and more time consuming, since they require that the researcher, notetaker, and participant all be present for the testing session.
Unmoderated testing sessions, thanks to a wide array of online tools, have grown in popularity. The distinct advantage is that sessions can happen asynchronously and simultaneously, with no moderator present. The online tools (a short list appears at the end) let researchers provide prompts where necessary and appropriate to guide participants through the activity, and they provide excellent analysis of the final results, correlating card groupings and surfacing statistical patterns. Once the researcher sets up the test, s/he can walk away from it and work on other things while the test runs. Additionally, having these tools do the analysis and correlation is significantly simpler than doing it by hand. The disadvantage is that the moments of going deeper into why a participant selected one category over another, or of a participant wanting to put a card in more than one category, go uncaptured.
Non-Traditional Card Sorting Methods
While I’ve described traditional uses of card sorting (open, closed, and hybrid) in detail, there are a few other non-traditional ways in which IBM researchers have used the basics of card sorting: storytelling, multi-level hierarchy, and prioritization.
IBM researchers have used the concept of card sorting to prompt participants to tell a story. Researchers prepare a deck of cards with concepts on them and encourage participants to look through the deck as they describe a process. This deck includes roles other than the participant’s own, and it might also include some connecting cards with arrows.
For example, if a researcher were trying to understand a buying process, the roles included might be:
- Procurement specialists
- Finance office
The deck would also include a series of steps like (though in a random order):
- Ask about budget limitations
- Learn about products in the market
- Rule out products that are too expensive
- Compare products
- Try out free versions of products
- Establish metrics for comparison
- Assess products
- Recommend product to purchase
- Submit proposal to manager
- Receive approval
- Submit proposal to Finance office
- Receive approval
- Submit proposal to Director
- Receive approval
- Submit a Purchase Order
- Order the product
- Receive the product
This approach is akin to open card sorting, but instead of focusing only on information architecture concepts, it addresses the involvement of roles and some basic assumptions about steps. The IBM researchers who used this method left a few cards blank, assuming there were steps or roles in the process that were still unknown.
The multi-level hierarchy method is related to the storytelling approach, but instead seeks to assess the mental model of how hierarchical concepts relate to each other. Using this approach, the researcher asks the participant to think about each concept’s level in relation to the other concepts.
For example, if trying to assess how participants thought about their home in relation to the country, the participant might see the following cards (though in a random order):
- School district
- Voting district
- City region (north, central, south, east, west, etc)
- State region (upstate, coastal area, desert, etc)
- Country region (midwest, southeast, east coast, etc.)
Another way that teams at IBM use card sorting is to prioritize, by assessing the participant’s mental model for which concepts are high importance or high use. Using this approach, the researcher asks the participant to consider which concepts are most important to the participant or which concepts the participant uses most frequently.
For example, the researcher sets up the session with columns to rank importance of features of a car dashboard (Very Important, Moderately Important, Not Important). The participant might see the following cards:
- Gas Gauge
- Miles on a trip
- Total miles on the car
- Outside temperature
- Gear (1, 2, 3, 4, 5, R, N, D, P)
- Miles left
- Battery status
- Engine heat (hot, cold)
- Parking brake
- Maintenance needed
- Open door
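Tallying the results of a prioritization sort like this is straightforward. As a rough sketch (in Python, with invented responses rather than real session data), the snippet below counts how many participants placed a single card in each importance column:

```python
from collections import Counter

# Invented placements of the "Gas Gauge" card across ten hypothetical sessions;
# the column names mirror the importance scale described above.
placements = ["Very Important", "Very Important", "Moderately Important",
              "Very Important", "Not Important", "Very Important",
              "Moderately Important", "Very Important", "Very Important",
              "Moderately Important"]

# Count how many participants put the card in each column.
votes = Counter(placements)
print(votes.most_common())  # columns ordered by how often they were chosen
```

Repeating this per card gives a quick ranking of which features participants consider essential, though, as discussed above, the counts alone won’t tell you why.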
In a different study, the researcher sets up the session with columns to rank the frequency of use for oven functions (Use Always, Use Very Often, Use Sometimes, Use Rarely, Never Use). The participant might see the following cards:
- Oven light
Recommendations for Tooling
As I mentioned earlier, there are some great card-sorting tools out there that do terrific analysis.
- Optimal Workshop — my favorite card sorting tool.
- Usability Tools
- uzCardSort (open source tool, only runs on Mozilla)
- xSort (Mac only)
If you are more interested in doing the analysis yourself, this phenomenal tutorial can walk you through that process. Donna Spencer, the author, gives great templates and a preloaded Excel spreadsheet with all the requisite formulas.
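If you do take the do-it-yourself route, the core of the analysis is a card-by-card co-occurrence count: how often each pair of cards landed in the same category across participants. Below is a minimal Python sketch of that idea, using invented sort results (the card names and categories are illustrative only, not from an actual study):

```python
from collections import Counter
from itertools import combinations

# Invented results from three participants: each maps card -> chosen category.
sorts = [
    {"Gas Gauge": "Fuel", "Miles left": "Fuel", "Battery status": "Warnings"},
    {"Gas Gauge": "Fuel", "Miles left": "Fuel", "Battery status": "Fuel"},
    {"Gas Gauge": "Fuel", "Miles left": "Trip", "Battery status": "Warnings"},
]

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same category."""
    counts = Counter()
    for result in sorts:
        for a, b in combinations(sorted(result), 2):  # sorted for stable pairs
            if result[a] == result[b]:
                counts[(a, b)] += 1
    return counts

pairs = cooccurrence(sorts)
# "Gas Gauge" and "Miles left" were grouped together by 2 of 3 participants.
print(pairs[("Gas Gauge", "Miles left")])
```

High co-occurrence counts suggest concepts that participants see as belonging together; the dedicated tools listed below compute essentially this, plus richer statistics, for you.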
What are your experiences with card sorting? Are you using card sorting for another method not discussed here?
At IBM in Austin, a group of design researchers meet for lunch a few times a month to discuss research topics of interest. Afterwards, the researchers in IBM Power Systems, the primary conversation facilitators, collect and note the highlights of the conversation. This is one of the series of posts about the lunches from IBM Power Systems.
cary-anne olsen-landis is the Experience Research Lead for Power Systems at IBM based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.