Free Workshop Kit: Teach non-researchers how to lead customer interviews (Part 1 of 3)
I’m sure you’ve noticed a bazillion articles out there about how to run a customer interview successfully. I want to take this a meta-step further and write about how to teach other non-researchers how to lead customer interviews. I recently led a workshop at O’Reilly Media where I talked with product managers about what to do/not to do when it comes to interviewing customers.
Like many other orgs in the industry, we face the classic scenario: the research squad is small, there is far more research to be done than it can handle, and so the age-old question comes up of “how can I empower the product teams to do their own quality, rigorous research?” At the end of the day, all my little researcher heart desires is to deliver products informed by trustworthy data collected from our users through evidence-based design.
The goal of this article is strictly to provide tactical, step-by-step resources for researchers to get their product teams up to speed on interview best practices, as well as teach them how to avoid the most common pitfalls I’ve seen in constructing and conducting customer interviews.
The workshop itself took about 2.5 hours with a mix of remote and in-person participants (about 9 people total). Because participants were split across locations, I shared all of my slides via web meeting and used MURAL and Slack to facilitate some of the group activities.
The overall agenda of this workshop highlighted five key steps that must be taken in order to successfully plan, conduct and synthesize a customer interview series:
- Define your research question & method(s) (covered in this article)
- Screen & recruit participants (covered in this article)
- Design the research guide (coming soon in Part 2)
- Conduct the interview (coming soon in Part 2)
- Derive insights from the interview (coming soon in Part 3)
It helped to have the overall structure be a mix of instructional slides as well as a number of hands-on activities for each of the sections above. Facilitator Notes are included throughout this article to mark where you can introduce any workshop activities that will help engage your audience and enable them to immediately apply their learnings to practical, real-life situations that are relevant to their needs.
Suggested timeframes are also highlighted for each activity and instructional section in order to keep your team on track for getting through all of this material in 2.5 hours (though you can obviously use this as a base to stretch or shrink as needed).
Let’s start at the very beginning.
1. Define your research question(s) and method
I think this Research Rundowns website does a great job of articulating what constitutes a good research question, detailed in the bullets below.
Characteristics of a good research question:
- The question is feasible to answer.
- The question is clear.
- The question is significant.
- The question is focused.
- The question is complex (not a simple yes/no).
- The question is ethical.
The biggest thing to remember is that a research question is never something you would just outright ask your participants or users. Most of the time, the answers to these questions would be incredibly hard to self-report, and let’s face it: people can be very good at making up answers in an effort to appease you.
Here are some quick examples of good and bad research questions that you can share with your team, sourced from Indiana University.
Facilitator Note: If you’re actually using this article as a guide for how to run a workshop on customer interviews, then I would pause here and ask your participants to brainstorm 2–3 research questions of their own around their domain spaces. This will help them start thinking through the proper structure of a research question and immediately apply their learnings to relevant spaces they should probably be thinking about for research anyway. At O’Reilly, this helped our product managers each home in on areas that they admitted were gray for them. Not only did this make the workshop more salient, but it also helped me as a researcher to have them self-identify the areas where they were currently making assumptions, or maybe didn’t have all of the answers already. Spend a couple of minutes here and ask for volunteers to share their research questions as examples.
After defining your key research question(s), you can then start to think about what methodologies are most appropriate for answering them. I love this grid from Daedalus, a design firm in Pittsburgh, that helps design research novices to evaluate where on the spectrum their research questions might fall, and then some possible methodologies for each arena.
Some of my other favorite resources to give people to look at as far as “what method should I be using when?” are:
All in all, the most helpful gut-check that I employ when picking a method is simply to ask myself, “Could I honestly answer that if someone asked me?” If I have difficulty thinking of how I’d articulate something, or how I might show someone something, then that’s a pretty good sign that I shouldn’t ask others to do the same for me.
2. Screen and recruit participants
Identify who to recruit
The first thing to remind people of here is that selecting research participants is NOT an exercise in market segmentation. An upfront focus on market segmentation may only distract your thinking by pre-categorizing participants without allowing yourself the space to think freely about how people may act or think. (It may become relevant to categorize your participants into market segments later on so you can understand the intersection of your design research data with market research data, but I would caution you against doing this upfront, as I find it more inhibiting than relevant to discovering user needs, at least at the outset.)
Instead, when thinking of your potential research participants, it can be helpful to write out a participant description in order to highlight areas of importance to your design research. For example, here’s a participant description I once wrote for a particular study at O’Reilly so that the team could align around who we were trying to recruit:
- We hope to spend time with both frequent and infrequent users of the O’Reilly Safari platform.
- We want to talk to people with varying degrees of familiarity with the O’Reilly Safari platform.
- We hope to talk to users who are highly goal-oriented in their adult learning, as well as those who have zero learning goals to speak of.
As you can see, none of these necessarily pertain to demographic data or specific segmentation, but these qualities were important to our research question in order to feel like we would get the breadth of perspectives required to fully answer it.
Jon Kolko has a fantastic resource on how to recruit for ethnographic research as part of his library for the Austin Center for Design that goes into more detail about identifying and screening participants.
Facilitator Note: At this point, I’d ask your workshop participants to brainstorm their own participant descriptions that they would need in order to cover the research questions they defined in Step 1.
Make a recruiting plan
Recruiting can be an extremely time-intensive phase of design research depending on the audience with whom you are trying to talk. During my time at IBM, recruiting the right kinds of users was often nearly impossible when the research questions were highly technical and specialized to very few job roles around the world.
If you’re ever in a situation where you can’t get access to the right people (or if they don’t feel comfortable talking to you), think about the next best proxy who could participate in your research. Obviously, talking to real users about their actual experiences is always ideal, but certain research spaces can be sensitive or difficult enough that a proxy may be required. Don’t be disheartened; try to find workarounds that can still get the answers you need. Cary-Anne Olsen-Landis has some great tips in her article about recruiting highly technical research participants on next-to-no budget that might give you ideas for how to get scrappy with your recruiting efforts in times like these.
Facilitator Note: It can be helpful to pause here and discuss all of the possible recruiting outlets that you have at your company, whether it’s through an actual recruiting service like respondent.io, or simply through your various sales teams, account executives, or internal customer lists that all might be used. You can also discuss the varying levels and types of honorarium that your company is empowered to give to their research participants.
After figuring out who you want to talk to and how you’re going to find them, it’s time to think about how you’re going to screen this audience to ensure that they meet the criteria you set out in your participant description.
Screening questions should whittle down your audience as respondents answer them. However, I’ve also found it helpful to sometimes simply gather context about a participant’s mental model before the interview itself, even if the question is open-ended and doesn’t necessarily have a “right” or “wrong” answer to qualify them for your study.
Some examples of screening questions I commonly use at O’Reilly are:
- How familiar are you with the O’Reilly Safari learning platform (www.safaribooksonline.com)? (5-point Likert scale response from “Not at all familiar” to “Extremely familiar”)
- (If 3+ familiarity) How often do you use the O’Reilly Safari learning platform (www.safaribooksonline.com)? (5-point Likert scale response from “Never” to “A great deal”)
- (If 2- familiarity) What other online learning platforms do you use, if any? (open-ended)
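If your screener tool supports skip logic, the branching in the questions above is worth making explicit so nobody wires it up backwards. Here’s a minimal sketch of that routing in Python; the question keys and the intermediate scale labels are illustrative assumptions, not O’Reilly’s actual screener implementation:

```python
# Illustrative 5-point familiarity scale. Only the endpoint labels come
# from the screener above; the middle three are assumed for the sketch.
FAMILIARITY_SCALE = [
    "Not at all familiar",   # 1
    "Slightly familiar",     # 2
    "Moderately familiar",   # 3
    "Very familiar",         # 4
    "Extremely familiar",    # 5
]

def next_question(familiarity: int) -> str:
    """Route a respondent based on their 1-5 familiarity rating."""
    if not 1 <= familiarity <= 5:
        raise ValueError("familiarity rating must be between 1 and 5")
    if familiarity >= 3:
        # Familiar enough with the platform: ask how often they use it.
        return "usage_frequency"
    # Low familiarity: ask which other learning platforms they use.
    return "other_platforms"
```

Encoding the branch once like this (rather than re-reading the prose conditions) makes it easy to sanity-check the skip logic before the screener goes out.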
Whew! And with that, we’re done with Part 1 of how to teach others to lead customer interviews. By this point, your audience should be getting the sense that it’s not as simple as “how do I ask the right questions and set up calls with people?” We haven’t even gotten to the interview yet, let alone writing the interview questions, and look at all of this overhead. The effort is well worth the reward, however, so stick with me!
In the next part of the series, we’ll cover how to come up with the research guide and actually conduct the interview.
As always, any feedback or questions are welcome in the comments below. Stay tuned for Part 2 coming soon!
Stefanie Owens is the Experience Design Research Lead at O’Reilly Media. The above article is personal and does not necessarily represent O’Reilly’s positions, strategies or opinions.