The Who / What / Why & How of Prioritising Research Insights

Emma McCabe
6 min read · Nov 22, 2016


As a researcher, it’s easy to fall into the trap of working on every project that’s asked of you, of being so focused on getting the next insights deck out the door that you don’t stop to ask:

“Are my recommendations actually being acted on?”

This I-must-work-on-every-project-imaginable mentality can become a vicious cycle that’s hard to break. It causes a lot of “déjà vu” moments. We’ve all been there: someone in a meeting brings up a problem or UX issue in the product and how it needs to be addressed. Everyone in the room agrees. Suddenly it hits you that you had that exact insight or recommendation in a report three months ago.

It’s not good enough to simply deliver research findings and move on to the next project. Researchers should strive to be advocates for change. If you have the research recommendations, it’s up to you to see to it that something is done with them.

I’m not advising that you cut down on the number of projects you work on so that you can spend a month peering over the shoulder of your PM, insisting that every last change is made. It just means you need to be smarter about the projects you do work on, and smarter about how you get your recommendations taken on board.

So how do you tackle this so you don’t end up churning out insights that aren’t really actionable? I’m going to go through the Who / What / Why & How of prioritising research insights, and hopefully it’ll be of some use to researchers in an organisation of any size.

Who should I be targeting to get research insights prioritised in the first place?

When you’ve finished a project and have created your findings deck/doc/poster/song or whatever else you use to convey insights, make sure it’s being listened to by the right people. For projects that I work on, I usually get the main stakeholders involved from start to finish so I’m not throwing my insights over the wall. These core stakeholders usually include:

  • PM
  • Designer (or Design Lead)
  • Engineer (or Eng Lead)

If you’re carrying out interviews or usability studies, it’s essential that at least one of these people is in the room at all times. Otherwise, it’s a hell of a lot more work to try to communicate your findings to a room full of people who have no knowledge of what you did.

Once you’ve wrangled a couple of team members into your study or interview, hold mini-debriefs after each session and get everyone to say what they felt was the key takeaway. This will get the cogs turning in folks’ heads. It’ll also help teammates feel more invested in your study.

Then after the analysis is done, get everyone in a room once again and walk them through your overall findings. Ensure that the main stakeholder (usually the PM) has a clear grasp of what the top 3 issues or areas to tackle are. After the debrief, sit down with the PM again and figure out how these top issues can be tackled.

Don’t just throw your research findings over the wall.

What findings should I be prioritising?

It’s all about impact. You should have some sort of system for prioritising your research findings, no matter how big or small they are. You can’t rely on other teammates to do the work of figuring out which are the “holy shit, we need to address this now” issues versus the “nice to haves”. For usability studies I tend to weight the severity of an issue by:

  • What is the overall impact? (aka Does it affect multiple parts of the product?)
  • Is it a blocker for users’ workflows? (aka Can they not get a task done in the product because of this issue?)
  • Is it an area of frustration? (aka Did a high number of users in the study get annoyed with this issue?)

Once you map your issues to the above questions, you can rank them by severity. At Intercom, we use a P1/P2/P3 system and recently I’ve been adding a legend to notes within a slide deck that outlines how the ranking works. It goes along the lines of:

  • P1 — This is an important problem that needs to be addressed. This occurs often and mainly relates to issues that prevent users from completing a task within their workflow.
  • P2 — This is a moderate problem that should be addressed. This occurs sometimes and mainly relates to issues that cause comprehension problems or frustration for customers.
  • P3 — This is a minor problem that would be a “nice to have” if addressed. This occurs sometimes and mainly relates to UX polish items.
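To make the mapping concrete, here’s a rough sketch (in Python, purely illustrative; the field names and thresholds are my own assumptions rather than a formal system) of how answers to the three questions above could be turned into a P1/P2/P3 rank:

```python
# Rough sketch: rank usability issues as P1/P2/P3 using the impact /
# blocker / frustration questions above. Field names and thresholds are
# illustrative assumptions, not a formal system.
from dataclasses import dataclass


@dataclass
class Issue:
    description: str
    broad_impact: bool       # Does it affect multiple parts of the product?
    blocks_workflow: bool    # Can users not complete a task because of it?
    frustrated_users: int    # How many participants got annoyed by it?


def severity(issue: Issue, participants: int) -> str:
    """Map an issue onto the P1/P2/P3 scale described in the legend."""
    if issue.blocks_workflow:
        return "P1"  # prevents users from completing a task in their workflow
    if issue.broad_impact or issue.frustrated_users > participants / 2:
        return "P2"  # causes comprehension issues or widespread frustration
    return "P3"      # UX polish, nice to have


issues = [
    Issue("Save button hidden below the fold", broad_impact=False,
          blocks_workflow=True, frustrated_users=5),
    Issue("Settings icon label unclear", broad_impact=False,
          blocks_workflow=False, frustrated_users=1),
]

# Print issues from most to least severe.
for issue in sorted(issues, key=lambda i: severity(i, participants=6)):
    print(severity(issue, participants=6), "-", issue.description)
```

The judgement calls are still yours to make; the point is simply that the ranking should be explicit and repeatable rather than decided on the fly in each debrief.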

Why should some findings be prioritised over others?

In a perfect world, every research finding would be acted on and products the world over would be simple, intuitive and delightful to use.

Unfortunately, this isn’t how the world works. Empathy is a skill that researchers pride themselves on having plenty of, and it’s this empathy that you should tap into when a PM tells you that they won’t be working on all of your recommendations. Don’t take it as a personal slight. Teams can be stretched too thin, there may not be enough engineering resources, higher-priority roadmap items could be overdue, or morale on the team could be low before a launch. Empathise with the team and consider these things when you’re fighting to get your recommendations onto a roadmap.

This means that sometimes you have to swallow your pride and think of the product holistically. Would it be better to address the top 3 issues now, while they’re fresh in the team’s minds, or to bombard them with 3 main issues, 6 sub-issues and a bunch of “nice to haves” while they’re trying to get a product out the door?

Although this isn’t an ideal situation, it’s the reality of working in software and sometimes you have to think of the greater good. Allow your research findings to be a focus for the team, not a distraction over everything else they have on.

How do I get my findings actually worked on (and not just be told that they “will” be)?

So you have your teammates sitting in on your studies and attending all your debriefs. You’ve stack-ranked the biggest issues that need to be tackled and communicated them to the PM. Now what?

I’ve experienced a couple of instances where I did everything right on a project. All the boxes were ticked. But a month later I noticed the feature had shipped and not one of my findings had been addressed. Instead of curling into a ball and crying, I thought of ways to ensure my findings get acted on in future.

Pro-tip: Utilise the tools your team uses for their roadmap. There’s no use in your recommendations being stuck on a poster or in a Google doc, as they’ll become obsolete within a few weeks. Get them in direct view of the team!

That means if your team uses Trello, add cards. If they use Asana, add tasks. If it’s GitHub, add them as issues to be worked on. Use tags to communicate that these are research findings from a recent study, and make sure they’re easy to read for those who may lack context. Remember: never assume knowledge. For example:

  • [P1] [Usability Study, Feature X: Nov ’16] Issue Description
  • [P2] [Marketing Page Research Study: Dec ’16] Issue Description
  • [P3] [Onboarding Concept Test: Oct ’16] [UX Polish] Issue Description

You’ll be surprised at how the impact of your work as a researcher becomes more and more apparent as you begin to do this. Packaging your research recommendations like this also helps the team, as it doesn’t create unnecessary cognitive load for them.
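If your team happens to live in GitHub, even a tiny script can turn a findings doc into issues that sit right in the backlog. The repo name, token and labels below are placeholders, and this is only a sketch of the idea, not a finished tool:

```python
# Sketch: file research findings as GitHub issues so they live in the
# team's backlog rather than in a slide deck. Repo, token and labels
# are placeholders you'd swap for your own.
import requests

REPO = "your-org/your-product"   # placeholder repo
TOKEN = "ghp_..."                # a personal access token with repo scope
API_URL = f"https://api.github.com/repos/{REPO}/issues"

findings = [
    ("P1", "Usability Study, Feature X: Nov '16", "Issue description"),
    ("P3", "Onboarding Concept Test: Oct '16", "UX polish issue description"),
]

for priority, study, description in findings:
    # Title format mirrors the examples above: [P1] [Study: Date] Description
    title = f"[{priority}] [{study}] {description}"
    resp = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": title,
            "body": "Raised from a recent research study; see the findings doc for full context.",
            "labels": ["research", priority.lower()],
        },
    )
    resp.raise_for_status()
    print("Created:", resp.json()["html_url"])
```

The same idea applies to Trello or Asana via their own APIs; what matters is that the findings end up wherever the team already looks every day.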

Conclusion: If you begin to do this, everybody wins!
