Episode 12: Just Enough Research

Anna Marie Clifton
Published in Clearly Product
7 min read · Aug 1, 2019

LISTEN NOW ON iTUNES, STITCHER, OR OUR WEBSITE

Every product manager builds things for customers, and knowing just who your customers are—what their needs, hopes, desires, frustrations and goals are—is the bedrock of all the work we do.

But how do you go about learning that? And how do you know when you’ve learned enough and can get back to building? Well, does Erika Hall have the book for you!

Listen in as Sandi MacPherson and I dive into Just Enough Research and discuss how we’ve used the tools and tactics presented there to help us get the best things to customers. We talked through the Who, What, When, Where, and How of user research.

Who: The people doing research

I typically find that there are three roles that drive research:

  • The User Researcher (if you have one, and they have time) is typically the one driving the studies.
  • Following that, the designers on the team (if you have them, and they have time) will drive the research.
  • And if neither is available (as with everything), the fallback is the PM.

In my ideal world, the Researcher, Designer, and PM are all initiating the research, sitting in on all the interviews, and taking part in the insight synthesis, but the User Researcher is the one calling the shots on what merits a study, which methods to use, and which verified insights we bless and share out at the end.

But there are other major players besides just the ones driving the research. In every organization, key stakeholders can derail your efforts if they aren’t bought into the research. Even worse, your engineering team can start to begrudge that effort if they don’t believe in the value of talking to real customers.

Research is the practice of drawing insight from incomplete data, and as such, it stands on slightly shaky footing when people aren’t bought in. It’s deniable (“Well, that’s just 5, 10, 500 people’s opinions. That’s not really significant”) in ways that more quantitative measures like A/B tests tend not to be. But you can counteract that with a few tactics:

  1. Suss out if an org is pro-research before you join. I know it’s a cop-out, but really! Try to get a sense of this. Note whether a researcher interviewed you. When talking to PMs on the team, ask questions like “When’s the last time you disagreed with the head of product, and how did you resolve that?” or “Who are your teammates and close collaborators?” Aim for questions that are open-ended and aren’t actually targeted at “research,” to see if it comes up naturally.
  2. Set the goal and success criteria in writing up front. Get all your skeptics to agree to what research would convince them of a given direction. The less they trust in research, the more specific you’ll need to be about defining that. Consider framing things like “4 out of 5 participants from our target segment get through this prototype in less than 30 seconds with no backtracking, stumbling, or guidance.”
  3. Get people directly involved. There’s magic when skeptics sit in on a call or go talk to a customer in person. Try a weekly or monthly “open call” where any member of the org can sit in and watch (silently and hidden). If you have trouble getting people involved in even that, try sending out 30-second snippets of goodness whenever you can.

What: The things you should spend your time doing

When it comes to generating value from your research, you should think of it like a funnel.

Your research is limited, in order, by:

  1. Your preparation and goals—the things you’re even trying to learn.
  2. The quality of the participants you source.
  3. The way you run the research itself—the actual interview or other method.
  4. Your analysis.

Put proportional effort into those stages, in that order: the earlier a stage sits in the funnel, the harder it caps everything downstream.

Typically, people put most of their effort (and most of their time) into the actual interview. But if you care about getting good insights, your goals, your prep, and getting the right people to talk to matter way more.

When: The right time to use research

Of course, in the abstract, there’s rarely a bad time to use research. But we all know that in reality life is about tradeoffs. So how do you decide when to stop doing everything else in order to start doing research?

On the podcast, I shared the following framework for deciding when to use research:

My old design director, Daniel Nordh, taught me that the design space can be effectively modeled as a decision tree. The nodes are decisions you need to make, and the edges are outcomes of those decisions. When you’re entering a new design or product challenge, the most effective use of time is identifying all the decisions you need to make and determining which ones are predicated on which others. The outcome is a decision tree that helps you tackle problems in order and ignore lower-level problems on branches you never end up traversing because of some higher-level decision.

Getting to this tree can be hard, because there’s a constant pull toward the bottom, all the way down to what the UI should be. You have to be vigilant throughout the process about referencing that tree as the map of your product process: where you are, and where you’re going next.
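To make that model concrete, here’s a minimal sketch in Python (my choice of language; the structure and all the example decisions are hypothetical, not from the episode or the book). Nodes are decisions, edges are outcomes, and pruning a high-level branch means you never have to touch (or research) anything beneath it.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionNode:
    """A decision you need to make. Each possible outcome maps to the
    next decision it unlocks, or None where that branch terminates."""
    question: str
    outcomes: dict = field(default_factory=dict)

def open_decisions(node, path=()):
    """Walk the tree top-down, yielding each decision with the chain of
    outcomes that led to it, so you can see which decisions are
    predicated on which others."""
    yield path, node.question
    for outcome, child in node.outcomes.items():
        if child is not None:
            yield from open_decisions(child, path + (outcome,))

# A hypothetical product challenge, mapped top-down. Answering "no" at
# the root prunes the whole branch beneath it; none of those lower
# decisions (or the research behind them) ever needs to happen.
tree = DecisionNode("Do we build self-serve onboarding?", {
    "yes": DecisionNode("Which segment do we target first?", {
        "SMB": DecisionNode("Single-page form or wizard flow?",
                            {"single-page": None, "wizard": None}),
        "enterprise": None,
    }),
    "no": None,
})

for path, question in open_decisions(tree):
    print(" > ".join(path) or "(entry)", "::", question)
```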

Once you have that decision tree/design process map, here’s how you add research in:

  1. Start one layer above the node where you entered. Someone likely handed you this problem (or solution 😱) with some assumptions pre-baked. Take the time to understand what decision led to this, what other options there were (whether they were considered or not), and use research to get a gut check on “Is this right?” All too often, folks get started on new projects without this grounding, only to thrash as the weeks go on because they were barking up the wrong branch of the (decision) tree. That said, you can’t re-evaluate everything that led to this project. You kinda have to accept that the company exists for a good reason, has a decent mission/vision/strategy, etc. 😜 Just go up one level to make sure things make sense.
  2. Use research at any node that is contentious. When people disagree, it’s usually because their base assumptions differ significantly. If you and your designer, or you and your manager, can’t see eye-to-eye on a given decision, start by using first principles, reliable heuristics, and logic to come together. Still no? Try to define the underlying difference in assumptions and see if you can get some research to (in)validate one of you. And do get that research at the layer of the assumption, not the decision you’re trying to make directly. One of you may prove out your prototype through a user test, but if the disparate assumptions haven’t been addressed, people will hold on and brood.
  3. De-risk your one-way decisions. If any of your nodes are one-way door decisions, try to get some real-live-user-validation that you’re going the right way.

And that’s it. Don’t overthink it. Don’t “just test things” with users without a reason or a plan. Spend your research time wisely, going after deep, foundational insights (or a few defusing observations).

Where: Getting in front of customers

One of the reasons you want to be wary of trying to research too many things is that getting in front of customers is incredibly challenging. Recruiting participants is so hard, so horrible, and so difficult. You should always be working on bringing more people into your research pool, even if you don’t have an active study — do anything you can to make that easier for yourself.

You may think that people want to give you feedback on your product. They don’t. You may believe that people would LOVE a $30 Amazon gift card for 30 minutes of their time. They don’t.

Get creative. Get persistent. Get out there.

Usertesting.com is amazing for this, but hard to target precisely. Street intercepts are timely, but limited in value. Getting to customers where they are is the best, but is very expensive (🕰 & 💰). And video calls are great, but people will fail to respond and then flake even when they do.

It’s ok, it happens to everyone. Just keep going.

How: Getting the most out of the participants

We dug into some tips and tricks for getting the most out of the sessions you do run with users—some from the book, others from our experience.

Both Sandi and I have learned over the years that nervous participants provide significantly less insight per interaction than comfortable participants. It’s human nature to feel like you’re on trial or being evaluated when a group of people is grilling you with questions.

I learned a quick trick from a researcher at Yammer, Emma Beede, to put people at ease:

“As we get started, I want you to know that this is not a test and you cannot fail. You are not on trial here. Our product is on trial.”

Works wonders!

Another good thing to keep in mind is the hygiene around interviews: Always show up early and “Notes or it didn’t happen.”

When it comes to notes, I learned some best practices from our head of research, Erin Baker.

  • Make sure the interviewer is not the one taking notes
  • Always have dedicated note takers (plural!!)
  • Have those note takers specialize: (1) on observations from the session, (2) on direct quotations from the participants, and (3) on the participant’s environment (physical and digital).

And don’t rely on videos as your notes. People just don’t go back and watch videos. Almost ever. You need to distill and capture all the insights… it’s gotta be notes!

If you’re interested in learning more about these topics (and so many more), I encourage you to give the episode a listen! If you like what you hear, give it a review, a subscribe, or a share.

And we’re always looking for feedback on what to cover next. Do you have a thorny topic in product management you’d like to hear us cover? Or a specific, seminal book on some PM skill? Send ’em our way! Simply respond below 😄

Don’t forget to subscribe on iTunes, Stitcher, or wherever you find your podcasts.
