Discussion guide tips to get better results from your moderated user research

Alice Clayton
Published in SBG Product Design
May 16, 2023

A discussion guide, often used in qualitative usability testing or one-to-one interviews, is a plan for how your moderated sessions will go. It’s an outline of what tasks you will give and what questions you will ask. Having a discussion guide helps ensure you get the insights you need and avoids awkward situations where you don’t know what to ask.

To really get the most from your research sessions, you need a well-crafted discussion guide. It will be the best way to get insights that are robust and relevant to your research goals.

This article will give you some guidance on how to properly structure your discussion guide to set you up for success.

Tip 1: Consider your research goals

When you first start creating your discussion guide, it can be tempting to include every question you or your stakeholders may have that’s remotely related to the research.

However, we only have limited time with each participant, and often limited time for analysis and reporting too. The more we’re asking, the more work we’re creating for ourselves — and we risk our participants becoming fatigued. There’s also the danger that the questions we really need, to get the insights that answer our research goals, will become lost in the noise.

We recommend:

Before you even begin writing your questions and tasks, you need to have a clear understanding of what the purpose of this research is, what goals you are trying to meet and what the expected outcomes are.

This is something you should be discussing with your stakeholders when the research project first begins. You need to be asking:

· Why this piece of research is needed.

· What previous insights have led to this research (e.g. analytics data, previous UX research, etc).

· What you and your stakeholders want to learn from this research.

Once you know the goal of your research and its expected outcomes, you can begin to formulate your discussion guide. With every task or question you add to the guide, compare it back to your research goals and expected outcomes, to assess whether it will help you meet these goals or not. Highlight any key questions or ‘must asks’ so you and any secondary moderators know to ask these questions every session.

Note

Sometimes in a moderated session a participant might say something interesting that’s not directly relevant to the topic. You can still follow that thread and expand on their feedback, but keep an eye on the time so you can still ask the questions you need.

Tip 2: Ask the right kind of questions

When choosing what questions you ask, it’s important to consider how you ask them, as well as what you ask. Two common issues when creating discussion guides are using leading questions and closed questions.

Leading questions

Leading questions are questions phrased in a way that encourages participants to respond in a particular manner. For instance, asking “How easy to use was this design?” encourages the participant to agree that the design was easy to use, even if they didn’t think it was.

In moderated testing, participants often want to please the moderator and say what they think the moderator wants to hear, which is known as social desirability bias. This means leading questions make it even more likely that the participant isn’t telling you the truth. If you’re not getting honest feedback, it’s difficult to trust the insights you generate from these research sessions, which could lead to making false recommendations.

Closed questions

Closed questions are questions that invite very short, specific answers, usually just a yes or no. For instance, asking “Have you ever played video games before?” will usually lead to a participant just saying yes or no, and not much else.

Sometimes you will get chatty participants who give you a wealth of information every time they answer a question, but for more reticent participants, closed questions will mean you don’t get the depth of insight you need.

We recommend:

Whenever you are writing questions for your discussion guide, ask yourself ‘Does this question indicate a particular answer is preferred?’ and ‘Does this question give me enough information? Will I need to prompt the participant for more information?’.

Instead of using leading questions, try phrasing questions differently: “How easy or difficult was it to use this design?”. You should also supplement their answers with their behaviour during the test — e.g. how easy or difficult did they seem to find the design when they were navigating around it? Were there any pain points? How long did it take them to complete the task, if they even completed it at all?

For closed questions, try using TED questions. TED stands for ‘Tell, Explain, Describe’ and these questions are a way of getting more detailed answers from participants.

· Tell me about how you first started playing video games.

· Explain to me what makes you want to sign up with a new online casino.

· Describe how you choose what bets you want to place on a football match.

The more detailed responses these questions elicit should give you opportunities to probe further into participants’ answers and ask follow-up questions.


Tip 3: Check your timings

We have limited time for each moderated session, so you want to make sure the length of your discussion guide fits the length of session you have.

If you have a 30-minute session and a discussion guide that would take about 45 minutes to get through, you’ll end up overrunning, rushing tasks and questions, or cutting out parts of the test. This could mean missing important insights and leaving the research incomplete.

Conversely, if you have a 30-minute session and a discussion guide lasting only 15 minutes, you risk only getting top-level feedback and missing out on anything richer. It’s also an inefficient way to work: if you truly only need 15 minutes with each participant, you would be better off scheduling more, shorter sessions.

We recommend:

Pilot tests are important for trying out your discussion guide. You could use a colleague or recruit an external user with a similar profile to those you’ll be interviewing in your upcoming moderated testing. If you use a work colleague, try to find someone who is unfamiliar with the project.

Run the pilot test as if it’s the real test and keep track of how long you’re spending on each section, although keep in mind that pilot participants tend to be faster than real participants. If you are overrunning in the pilot test, either adjust the session time or amend the discussion guide. If you’re unsure what to remove, assess your questions and tasks for their relevance to your research goals.

Ensure your questions and tasks don’t fill the whole time for your session, as you will need time for an introduction, warm up questions, and a closing section. It’s also a good idea to build in some wiggle room for tech issues (if remote), chatty participants and late arrivals.

Finally, identify your essential questions and tasks that must be asked/given each time, and any that are less important and can be cut if you are overrunning.
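
If it helps to make the arithmetic concrete, here is a minimal, purely illustrative sketch in Python of the timing budget described above. The session length, section names and durations are hypothetical examples, not figures from any real study: estimate each section, add some wiggle room, and compare the total against the session length.

```python
# Illustrative only: a rough timing budget for a discussion guide.
# Session length, section names and durations are hypothetical examples.
session_length = 30  # minutes booked with each participant

sections = {
    "introduction": 3,
    "warm-up questions": 4,
    "task 1: find and place a bet": 8,
    "task 2: review the new promotion page": 8,
    "closing questions": 3,
}
buffer = 5  # wiggle room for tech issues, chatty participants, late arrivals

total = sum(sections.values()) + buffer
print(f"Planned {total} minutes of a {session_length}-minute session")
if total > session_length:
    print("Overrunning: trim or de-prioritise the least essential questions and tasks.")
```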

Tip 4: Focus on behaviour over opinions

As researchers, we often get asked to find out which design users prefer, or whether a feature would make them become customers in the future. For instance, stakeholders might want to know whether a new promotional campaign that we’re usability testing is appealing to customers and will make them want to participate. Or there might be 3 different concept designs of the same page and stakeholders want to know which design users prefer.

The issue with this is that, by doing so, we are asking participants to predict the future or explain their own behaviour, both things people are bad at doing. So people end up giving an opinion that might not match reality.

A participant might say they will participate in the new promotional campaign, but when the campaign is launched maybe they have forgotten, or they don’t want to spend the money needed, and so on.

Or a participant might identify, when asked, one of the 3 concepts tested as their favourite, but their initial reaction to that one may have been less positive than their initial reaction to another. There is also the serial position effect to consider, where people tend to recall the first and last items in a series better than middle items.

We recommend:

Good UX research is grounded in behaviour, so that’s what you should be focusing on. If you are testing designs or an existing website, you should structure your discussion guide around observing how a participant behaves when navigating these experiences. You can also pay attention to sentiment while they are interacting with your designs or website — positive initial reactions to something could be a more accurate indication of how much they like it than what they tell you later on. To make the most of this, you can ask participants to think aloud as they complete the tasks.

If you are conducting interviews where there may be no designs or website to interact with, you can focus on exploring their past behaviour in relation to the topic. Learning about a participant’s previous experiences and how they usually act will give you more of an indication of their future behaviour than what they say they will do.

Finally, accept that usability testing or one-to-one interviews might not be the best method for what you want to learn. If your stakeholders want to know how many of their customers will use a new feature, explore alternative methods that will give you more accurate insight than one-to-one interviews can.

Note

This doesn’t mean attitudinal or opinion-based questions aren’t useful and shouldn’t be collected. They definitely are and should be, but they should be treated as supplemental to the behavioural data you collect.

In Summary

1. Consider your research goals — every question and task should help you meet your overarching research goals and answer your research questions.

2. Ask the right kind of questions — follow the TED framework and avoid leading questions.

3. Check your timings — run a pilot test to see how long your sessions should take.

4. Focus on behaviour over opinions — avoid relying on attitudinal questions at the expense of collecting behavioural data.

Creating a good discussion guide for moderated usability testing or one-to-one interviews can be difficult, but the advice above should help you. Once you are confident in your discussion guide, you can use it as a template for any future research you do, so you don’t need to start from scratch every time.
