UX IRL Ep. 21: Modified-Delphi Card Sorting

Published in UX in Real Life · 7 min read · Jun 16, 2022

Hello! Here are our show notes for episode 21 of UX IRL: Modified-Delphi Card Sorting. We hit the highlights in this article, but you can get the full context by listening to the episode.

In this episode, we discuss Michelle’s favorite card sorting method — Modified-Delphi card sorting! We cover the basics, where the method shines, and best practices for trying it out in your next card sort. We’d love to hear if you try it. Talk to us on the zeroheight Slack community (bit.ly/zheroes), comment below, or reply on Twitter @uxinreallife or Instagram @ux.inreallife.

Why are we talking about this?

We wanted to shed light on Modified-Delphi Card Sorting because it’s not too widely known but is a great tool to have in your UX toolkit. The method shines because you can get tremendous qualitative information, and it’s not too difficult to do. The level of effort is very similar to a usability testing study. Michelle also wants the world to know about this method, and the more people she can inform (e.g., our millions and millions of listeners ;) ) the better!

What is card sorting?

Before diving into the Modified-Delphi method, let’s review card sorting and some of the more common methods. Card sorting is a fantastic tool to rework or test out different pieces of information architecture. (If you’re unfamiliar with IA, check out our previous episode.) Before digital tools, items were listed on cards (one item per card), and participants would sort these cards into groups. This activity helps us understand how they are thinking of items. It can inform us of their mental models and how we might want to organize sites, products, or navigation.

There are a few different types of card sorting:

  • Open card sorting — participants are given cards to sort into groups, and they decide each group’s name.
  • Closed card sorting — participants sort the cards into predetermined categories.
  • Hybrid card sorting — as you can probably guess, participants are given predetermined categories but can also create their own if they’d like.

As an example of closed card sorting, say you have a category called “fruit”: a participant can put the “banana,” “apple,” and “orange” cards under it. In open card sorting, participants would have only the “banana,” “apple,” and “orange” cards; they could create a “fruit” category, or they could even make a “yellow” category and put the “banana” card in that group.

What are some tools you can use?

You can do this either physically with cards or with digital tools. You don’t have to use a specific tool for card sorting; it’s easy to make a digital version with Miro, FigJam, Mural, PowerPoint, or Trello.

Mary Fran loves using OptimalSort from Optimal Workshop. It’s super easy to use, and it comes with pre- and post-questionnaires that give you some context about each participant’s background and responses.

How does Modified-Delphi Card Sorting work?

Michelle first learned this method in grad school, in her information architecture class at the University of Baltimore. Celeste Lyn Paul, a student in the program, created this method as part of her thesis project. She published a paper in the Journal of Usability Studies if you want the complete details. (There’s also a link to the full PDF on that site.) With this method, you arrive at results similar to what other card sorting methods would produce, with far fewer participants.

Just like a regular hybrid card sort, you start with a set of cards and categories to sort. Your first participant sorts the cards into categories, relabels categories, or creates new ones. When they finish, you capture that structure (e.g., take a photo/screenshot). Then your second participant looks at the structure and iterates on it. So they’re not starting from scratch but adjusting the previous structure. After they finish, you capture their version, continue with the following participants, and capture their iterations along the way.
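The iterate-and-capture loop above can be sketched in code. This is a minimal, hypothetical Python sketch of the bookkeeping only — the categories, cards, and `run_session` helper are illustrative assumptions, not anything from the episode or Paul’s paper:

```python
# Hypothetical sketch of the Modified-Delphi flow: each participant edits the
# structure left by the previous one, and we snapshot every iteration.
# All category/card names and the run_session helper are illustrative.

from copy import deepcopy

# Seed structure handed to participant 1 (hybrid sort: starter categories given)
structure = {
    "About Us": ["history", "mission statement", "who we are"],
    "Products": ["banana", "apple", "orange"],
}

snapshots = []  # one captured structure per participant (the photo/screenshot step)

def run_session(structure, edits):
    """Apply one participant's moves: (card, from_category, to_category)."""
    for card, src, dst in edits:
        structure[src].remove(card)
        structure.setdefault(dst, []).append(card)  # new category if needed
    return structure

# Example: participant 2 moves "orange" into a brand-new "Fruit" category
participant_edits = [("orange", "Products", "Fruit")]

snapshots.append(deepcopy(structure))          # capture participant 1's result
structure = run_session(structure, participant_edits)
snapshots.append(deepcopy(structure))          # capture participant 2's iteration
```

In a real study the “edits” happen on physical cards or a whiteboard tool; the point is simply that each participant starts from the previous structure, and you keep a copy of every iteration so you can compare them later.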

The idea is that most people will agree on general organization. For example, if there is an “About Us” category, most people would put “history,” “mission statement,” and “who we are” under that category. Because that’s a common understanding for most people, you don’t need to vet that repeatedly.

From participant to participant, you’ll start to notice that some items frequently bounce from one category to another. This is also a moderated method, so you can engage with participants, hear their thought process, and ask why they’ve made certain decisions. It’s an excellent way to capture the qualitative insights you often don’t get with unmoderated digital tools.

UX IRL Takeaway: For more reading, check out Michelle’s poster from IA Summit 2016 on her project.

When to choose the Modified-Delphi method over other traditional methods

For Michelle, it’s her go-to method because it strikes a good balance of quantitative and qualitative information. While it’s more involved, it’s beneficial because you interact with the participants. You can also run this activity for free or very cost-effectively: you don’t need to pay for a card sorting service, and you use fewer participants.

She assumes most people use traditional methods because they’re unfamiliar with the Modified-Delphi method. But she loves it and thinks everyone should give it a shot.

UX IRL Takeaway: Try the Modified-Delphi method at least once. It’s fun!

Tips for running a Modified-Delphi study IRL

Michelle’s tried this a few times, and it’s been really enjoyable because you get to see the magic as it happens. It’s very similar to a typical moderated usability study where you’re watching participants, taking notes, asking questions, and so on. Here are some things to keep in mind when running your study.

  • Synthesizing isn’t too tricky — it might seem overwhelming to keep track of the nuances of each iteration. You’ll have to pay attention more than you would with a digital tool, but it’s very similar to any usability test. Pro tip: debrief after every session to make note of interesting moments.
  • Pilot the activity with at least two people — we had customer success managers participate since they had familiarity with the product. Pilots help your team become more familiar with the process and feel more at ease with the method.
  • Eight people is plenty for the study — you probably won’t notice too much change by the last iteration.
  • Start with a knowledgeable participant first — your first participant should be familiar with your product or topic to set the tone for everyone else.
  • Cards bouncing from category to category isn’t a bad thing — it’s an indicator the area(s) might need exploration.
  • You might notice the “boomerang effect” — sometimes a participant is an outlier and puts a card in a completely different category. If this happens, don’t worry too much; subsequent participants will often catch it and move the card to a category more consistent with the common understanding.
  • How many cards should you use? It depends! Michelle initially piloted with 75, which was far too tedious for participants to sort and for us to synthesize. We narrowed it to 35 cards, but even that was still a lot! Pilot the number of cards to find a count that works for your project. You can often cut cards if you already have a solid understanding of an area, if cards are near-duplicates, or if cards represent obscure areas that only a few customers access. You can also consider breaking a large, complex structure into multiple studies.
  • Your last participant doesn’t finalize the IA! There’s always a concern that the IA is cemented in the last iteration. Michelle frequently gets asked, “What if the last participant destroys the IA? Are we stuck with it?” Definitely not. While the final iteration is ideally close to done, it doesn’t have to be the answer. Any form of card sorting is an activity that informs your decisions. As the information architect or designer, you’re the subject matter expert on the product and UX best practices, so you and your team have the final say, and it helps to have data behind your decisions.

UX IRL Takeaway: This is just like a moderated usability testing study. A lot of the logistics are similar.

Synthesizing the data

Synthesizing the data is much like synthesizing a moderated usability study: it takes some effort and is more manual than with digital tools. For areas that stayed the same, you can be confident in the organization. Where things shuffled, it’s an indicator that you can make some adjustments: relabeling, experimenting with tree testing, or leaning more on the visual UI to provide clarity.
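One way to spot which cards “shuffled” is to compare your captured iterations and count how many distinct categories each card landed in. This is a minimal Python sketch with made-up snapshot data — the categories and cards are illustrative assumptions, not results from a real study:

```python
# Hypothetical sketch: flag unstable ("shuffling") cards by counting how many
# distinct categories each card occupied across the captured iterations.
# The snapshot data below is invented purely for illustration.

from collections import defaultdict

snapshots = [
    {"About Us": ["history", "who we are"], "Help": ["FAQ"]},
    {"About Us": ["history"], "Help": ["FAQ", "who we are"]},
    {"About Us": ["history", "who we are"], "Help": ["FAQ"]},
]

card_homes = defaultdict(set)
for snapshot in snapshots:
    for category, cards in snapshot.items():
        for card in cards:
            card_homes[card].add(category)

# Cards seen in more than one category are candidates for relabeling
# or tree-testing follow-ups.
unstable = sorted(card for card, homes in card_homes.items() if len(homes) > 1)
print(unstable)  # -> ['who we are']
```

Here “who we are” bounced between “About Us” and “Help” across iterations, so it would be the card to revisit; stable cards like “history” need no further vetting.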

The real gem of this method is the qualitative information you get from participants. When you see them debating where to put a card, you have the opportunity to ask what they’re thinking at that moment. This is something other digital services don’t capture well. We’ve participated in other card sorts, and by the time we reached the post-questionnaire, we’d entirely forgotten what we sorted.

Mary Fran notes that the post-questionnaire isn’t always something you can rely on because people might not fill it out. At least with real-time moderation, participants will engage with you.

UX IRL Takeaway: This method is rich in qualitative data, which is hard to get with digital tools.

We’d love to hear from you!

If you try this method, let us know! We’d love to hear how this goes. If you are interested in doing this but have questions, feel free to reach out to Michelle. She’s more than happy to help out because she would love everyone to give this method a try at least once.

The best way to reach out to us is on the zeroheight Slack community (which you can join here).

UX in Real Life

A podcast where we examine user experience design at work and the world around us. Brought to you by @soysaucechin + @maryfran874