Qualitative research methods and how to combine them

Anna Zimmermann
Axel Springer Tech
10 min read · Feb 7, 2023


As a product designer in an agency, I work on different products and project phases: from refining existing products to researching and designing entirely new ones. This has taught me to adapt and combine research methods to each project's specific needs and to what is already known about the user group. In this article, I explain three essential research methods and when to use and combine them effectively.

Effectively, because we UX professionals need to get the most out of our research: the budget for user research in product development is still a hard-fought jewel. Yet research is the essential tool we have to ensure we create products that are needed and used, and therefore profitable for the companies we work for.

  1. Research Preparation
  2. Research Introduction
  3. Research Methods, when to use, and how to combine them
    3.1 User Interviews
    3.2 AttrakDiff
    3.3 Usability Testing
    3.4 Hybrid Research
  4. Schedule & Setup

1. Research Preparation

  • Formulate hypotheses about the product in advance. Coordinate these with key stakeholders. With what hypotheses are you going into the research? Will your research confirm them, or will you learn new things / correct your assumptions?
  • Collecting the existing hypotheses helps you choose the appropriate method or combination of methods; do I mainly need to know about the usability context, about my users’ behavior or do I need to know if my new feature is intuitive? Is it usable, and is it fun?
  • It also helps you manage stakeholder expectations. If hypotheses are agreed upon with stakeholders before the research, you can refer to them afterward. They make your research findings more precise and less abstract: because you’ve found answers to stakeholders’ assumptions:

“Our research findings confirmed our hypothesis that…” or “The research showed that our hypothesis about feature X was incorrect; participants found it easy to use, but they wouldn’t find it enjoyable or necessary; listen to the audio excerpt I prepared….”

  • Define a research question: What is the general question you want to address with your research? Then collect topics related to it and formulate questions/tasks for an interview or usability test.

“Our hypothesis about the product indicates that we need to find an answer to the research question: X. We will conduct user research method X with X participants from our primary user group.”

2. Research Introduction

For each of the three research methods, you should allow enough time for an introduction and prepare for it. Let’s say you have the best user interview questionnaire in the world. But if you rush the introduction or forget it altogether, you are likely to waste your time and money because the research conducted will be of lower quality or fail.

  • Introduce yourself and ensure neutrality; even if you are from the direct product team, participants should understand you as an external researcher — this will ensure they feel open to giving honest feedback.
  • Explain your role and how you will work with participants during the test or interview: they can ask questions at any time, you may not be able to give them feedback during a usability test, and they should always feel encouraged to point out ambiguities.
  • Encourage your participants’ confidence and tell them the rough agenda. This way, they feel they are in control. This creates a collaborative rather than testing environment between you and the participants, and the results are less affected by nervousness.
  • Make sure the setup (especially if remote) works.
  • Ask if they have any questions before you begin.
  • Ask permission to record the session.
  • Thank them for their help.
  • Start recording.

3. Research Methods, when to use, and how to combine them

I focus on the essential qualitative research methods: user interviews, usability tests, and AttrakDiffs. I will also suggest combinations of these methods. Hybrid versions allow you to gather different user insights within one research session.

3.1 User Interview

Interviews are a fundamental research method in product development. Before there is a prototype or product to test or talk about, there is a user group to understand!

  • Use the Mindtree technique to create a flexible interview questionnaire that allows you to be open and adaptable with each participant.
  • Focus on 2–3 topics for which you prepare open-ended questions.
  • In the best case, you will have a colleague with you to take notes so that you can focus on conducting it.
  • Allow natural conversation, and don’t stick strictly to the questionnaire. It may be that a participant has more to say about one question than another, and that is fine.
  • Be open to new things, and add these new insights to your questionnaire so you can ask the other participants about them and see if there is more to discover.

Unlike the AttrakDiff and usability test, the user interview can be adapted between tests. New insights about your product context are welcome and should be included in your questionnaire. Interview results are primarily about learning and understanding more about your users — you may find patterns, but you don’t need to document comparative statistics. Usability tests and AttrakDiffs need to be comparable, and to achieve that, they need to be conducted with the same, unchanged setup.

Analysis and evaluation of user interviews

  • Allow 1–2 days for evaluating and prioritizing the interviews.
  • If possible, evaluate together with the colleague who took notes during the interviews.
  • Read the notes aloud and record critical findings on short post-its.
  • Cluster what you learned into key insights.
  • In another session, which should include cross-functional team members such as POs, developers, and stakeholders, prioritize the clusters with them based on the Pareto principle and use a value vs. complexity approach.
  • This way, you can align the entire team on prioritizing the product backlog and possible adjustments to already developed features.

“The Pareto principle (also known as the 80/20 rule) is a phenomenon that states that roughly 80% of outcomes come from 20% of causes. In this article, we break down how you can use this principle to help prioritize tasks and business efforts.” — https://asana.com/resources/pareto-principle-80-20-rule
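The value vs. complexity prioritization can be sketched in a few lines. The cluster names and scores below are invented for illustration; in practice the cross-functional team assigns them together during the prioritization session.

```python
# Hypothetical sketch: prioritize insight clusters by value vs. complexity.
clusters = [
    {"name": "Onboarding confusion", "value": 8, "complexity": 2},
    {"name": "Search filters unused", "value": 5, "complexity": 5},
    {"name": "Dark mode request", "value": 3, "complexity": 4},
    {"name": "Checkout drop-off", "value": 9, "complexity": 3},
]

# Sort by value-to-complexity ratio: high value, low effort first --
# roughly the 20% of work likely to produce 80% of the outcome.
ranked = sorted(clusters, key=lambda c: c["value"] / c["complexity"], reverse=True)

for c in ranked:
    print(f"{c['name']}: ratio {c['value'] / c['complexity']:.2f}")
```

The ratio is only a conversation starter; the point of the session is that the whole team agrees on the ordering, not that a formula decides it.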

3.2 AttrakDiff

AttrakDiff is a questionnaire for measuring a product's hedonic and pragmatic quality. How to use it:

  1. Use this standardized questionnaire (AttrakDiff) to evaluate a product’s UX to generate research statistics and measure your UX quality during product development phases: Run an AttrakDiff after testing the product with a user and track the results.
  2. You can use it to test and compare perceived quality before and after using the product: Run the AttrakDiff before and after a usability test. This shows how actual use changes the product's perceived quality for your users.
  3. You can also use the AttrakDiff to compare versions of a product or different products: For each product or version, run an AttrakDiff after testing it with or showing it to users.

When conducting an AttrakDiff, inform the participants that you will now ask them questions whose answers they must rate on a scale. There is no time for explanations during the AttrakDiff: the participant must answer the questions directly and as they understand them.

“Please provide your impressions of the product you have tested by check marking your impression on the scale between the terms offered in each line. 1 2 3 4 5.” — AttrakDiff: Questionnaire Author: Prof. Dr. Michael Burmester, Prof. Dr. Marc Hassenzahl & Franz Koller
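Scoring an AttrakDiff-style questionnaire is simple averaging per dimension. The standard AttrakDiff groups its word pairs into pragmatic quality (PQ), hedonic quality (HQ-I for identity, HQ-S for stimulation), and attractiveness (ATT); the word pairs and ratings below are invented for illustration, assuming a 7-point scale coded from -3 to +3.

```python
# Hypothetical AttrakDiff-style scoring: mean rating per dimension.
from statistics import mean

responses = {
    "PQ":   [2, 1, 3, 2],   # e.g. "confusing -- clearly structured"
    "HQ-I": [0, 1, -1, 1],  # identity, e.g. "isolating -- connective"
    "HQ-S": [2, 2, 1, 3],   # stimulation, e.g. "dull -- captivating"
    "ATT":  [1, 2, 2, 1],   # e.g. "ugly -- attractive"
}

# Average each dimension; track these means across test rounds to see
# whether your product's perceived quality improves over time.
scores = {dim: mean(vals) for dim, vals in responses.items()}

for dim, score in scores.items():
    print(f"{dim}: {score:+.2f}")
```

Because the setup must stay unchanged between participants (see section 3.1), these means are comparable across sessions and product versions.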

3.3 Usability Testing

Once you know your user group, you draft the first concepts and prototypes. Running usability tests with at least three users is the only way to find out whether the concept is intuitive and enjoyable.

Exploration time

Before starting the test, allow your user to explore the product for 2–3 minutes.

“Open the app on the mobile device now and look around. You have three minutes, and I will introduce the first scenario to you afterward.”

Create test scenarios

Test scenarios are tasks that you formulate in a natural usage scenario. In this way, we reduce the artificial test environment and the possible nervousness of the participants. Each scenario must have a concluding moment. The participant should confirm out loud if they think they have solved the task. For example:

“Imagine you are on a city trip in Berlin. You want to post a new story on Instagram about the Berlin Wall and accompany it with the song Heroes by David Bowie. Please tell me when you posted the story.”

Define the happy path and timeouts

  • For each test scenario, outline the happy path in advance to anticipate the more complicated routes participants might take to solve the task. You also need the happy path to determine an appropriate time limit for each task: these time limits are your timeouts.
  • If your participant takes longer than the specified time, the task is considered failed, and you politely ask them to stop and move on to the next task (without telling them that the task failed).
  • If the participant asks questions about the task that are not for clarification and could give them clues to solving it, tell them that you cannot help them during the test but can clarify things afterward. Provide a focused testing environment and reduce discussion or questions during the test.
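The timeout mechanics above amount to a stopwatch per scenario. A minimal sketch, with an invented 3-minute limit (derive yours by walking the happy path yourself and adding a comfortable buffer):

```python
# Minimal sketch of checking a task against its timeout.
import time

TIMEOUT_SECONDS = 180  # invented; base it on your own happy-path run

start = time.monotonic()
# ... participant works on the scenario here ...
elapsed = time.monotonic() - start

# If time runs out, politely move on to the next task --
# without telling the participant that the task failed.
task_failed = elapsed > TIMEOUT_SECONDS
print("timed out" if task_failed else "within time")
```

In practice a phone timer works just as well; the point is that the limit is fixed per scenario before the first session, so results stay comparable.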

Evaluation of the usability test

Review each session immediately afterward. It is best to evaluate each task with your team or at least with another employee who observes the test. Ideally, your colleague will observe from another room to avoid distracting participants too much. Score the task using the traffic light system:

  • Red for failed
  • Yellow for done, but not within the happy path
  • Green for done (within the timeout and happy path).

Compare notes and agree on how each test scenario should be rated (red, yellow, or green). This means that when scheduling the usability tests, you should leave enough time between sessions to review each one immediately. Reviewing the notes directly with your colleague makes the overall evaluation of the usability test less time-consuming and more accurate.
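Once every session has an agreed rating per scenario, tallying them shows at a glance which scenarios need rework. The scenarios and ratings below are invented for illustration; in practice they are the ratings you and your observer agreed on after each session.

```python
# Hypothetical traffic-light tally across participants.
from collections import Counter

# ratings[scenario] = one agreed rating per participant
ratings = {
    "Post a story":   ["green", "green", "yellow", "green", "red"],
    "Find a contact": ["yellow", "red", "red", "yellow", "red"],
}

for scenario, results in ratings.items():
    counts = Counter(results)
    print(scenario, dict(counts))
    # Flag scenarios that a majority of participants failed outright.
    if counts["red"] > len(results) / 2:
        print(f"  -> majority failed: revisit '{scenario}'")
```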

3.4 Hybrid Research

Combining research methods allows you to gather diverse input from users. It is also resource-efficient: one session yields the insights of several methods. The following combinations are recommendations; feel free to adapt and improve them.

Hybrid Version 1

You have done proper user research, but some questions about the primary user group come up during development. This approach combines all three methods and allows you to interview your participants before testing. You can clear up ambiguities you discovered or that your team raised. By interviewing users before a usability test, you also create a more comfortable environment for them: you have the opportunity to warm up together. Learn from users, then test the usability and quality of your concepts by including an AttrakDiff pre-test and post-test; this will tell you how the usability experience affected the perceived quality of the product.

  1. Introduction (10 min)
  2. User interview: start with a simple question to warm up (30 min–1 hour in total)
  3. AttrakDiff 1 (15 min)
  4. Usability test (1–1.5 hours)
  5. AttrakDiff 2 (15 min)
  6. Debrief (including tops and flops) (30 min)
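Summing the agenda above shows how much time to block per session, which matters when you apply the 2–3 sessions-per-day rule from section 4:

```python
# Quick arithmetic on the Hybrid Version 1 agenda:
# (step, minimum minutes, maximum minutes)
agenda = [
    ("Introduction",   10, 10),
    ("User interview", 30, 60),
    ("AttrakDiff 1",   15, 15),
    ("Usability test", 60, 90),
    ("AttrakDiff 2",   15, 15),
    ("Debrief",        30, 30),
]

low = sum(lo for _, lo, _ in agenda)
high = sum(hi for _, _, hi in agenda)
print(f"Plan {low // 60}h{low % 60:02d} to {high // 60}h{high % 60:02d} per session.")
```

At roughly 2h40–3h40 per session, plus immediate review time, two sessions per day is already a full schedule.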

Hybrid Version 2

With this version, you can run a more rigorous usability test because users cannot learn about the subject from a preceding interview. You focus on usability, have the option to interview afterward, and can dig deeper into the things you identified during the test.

“I noticed during scenario two that you went back and forth between the home and search screen — before using the search feature. Could you explain why you did this?”

With a post-test AttrakDiff, you can compare users’ quality ratings from the AttrakDiff with their performance during the test to better assess the results.

  1. Introduction (10 min)
  2. Usability test (1–1.5 hours)
  3. AttrakDiff (15 min)
  4. Test debrief (including tops and flops) (30 min)
  5. User interview: start with a simple question to warm up (30 min–1 hour in total)
  6. Debrief; thank the user for their participation (5 min)

Hybrid Version 3

You want to test the usability of a product or feature, but you also want data that measures its pragmatic and hedonic quality. When you combine usability testing with the same AttrakDiff every time you test the product or feature, you can track and measure potential quality improvements during the design process and collect quantitative statistics.

  1. Introduction (10 min)
  2. AttrakDiff 1 (15 min)
  3. Usability test (1–1.5 hours)
  4. AttrakDiff 2 (15 min)
  5. Test debrief (including tops and flops) (30 min)

4. Schedule & Setup

  • You should schedule no more than 2–3 sessions per day, depending on the duration of the session, so that you can review each session immediately afterward.
  • You should always schedule a pilot test beforehand to see if the setup works well or if you need to correct or adjust anything. Pilot testing is a must and can be done with colleagues.
  • The golden test set is 5–7 participants. With this number, you can already find out a lot during usability tests. User interviews can already be insightful with three participants.

Thank you for reading
