Five dysfunctions of ‘democratised’ research. Part 1 — Speed trumps validity

Leisa Reichelt
5 min read · Nov 11, 2019


The good news is that more and more organisations are embracing research in product teams. Whether it is product managers doing customer interviews or designers doing usability tests, and everything in between — it is now fairly simple to come up with a compelling argument that research is a thing we should probably be doing.

So we move on to the second-order question: how do we scale this user-centred behaviour?

Depending on where in the world you are — and your access to resources — your answer is usually to hire more researchers and/or to have other people in the team (often designers and product managers) do the research. This is often known as ‘democratising research’.

Almost certainly this is the time when an organisation starts looking to hire designers and product managers with a ‘background in research’, and to establish research training programs, interview and report templates, and common ways of working.

This all sounds eminently sensible, but there are some systemic incentives at work that can undermine our best intentions. At best, they render our research wasteful and inefficient; at worst, they introduce significant risk into our teams’ decision making.

Each of these is a structural issue for many larger organisations, and anyone doing research in these environments is likely to be affected.

So, let’s assume that people doing research have had adequate training on basic generative and evaluative research methods — here are five common dysfunctions that we will need to contend with.

  1. Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
  2. Researching within our silos leads to false positives
  3. Research as a weapon (validate or die)
  4. Quantitative fallacies
  5. Stunted capability

Here we will start with the first, which is one that many will find familiar.

[Image: a ‘ping pong’ poll from a recent customer conference — most people told us that culture was holding their teams back, not technology.]

Dysfunction #1.
Teams are incentivised to move quickly and ship, and to care less about reliable and valid research

The most popular research tools are not the ones that promise the most reliable or valid research outcomes, but those that promise the fastest turnaround. One well-known solution promises:

Make high-confidence decisions based on real customer insight, without delaying the project. You don’t have to be a trained researcher, and there’s no need to watch hours of video.

It sounds so appealing, and it is a promise that a lot of teams want to buy. Speed to ship, or velocity, is often a key performance indicator for teams. It’s no coincidence that people usually start with ‘build’ and rush to an MVP when talking about the ‘learn, build, measure’ cycle.

Recruitment trade-offs made for speed

The challenge is that doing research at pace requires us to trade off characteristics that are important to the reliability and validity of research.

One of the most time-consuming aspects of research is recruiting participants who represent the attributes that matter for understanding the user needs the product seeks to meet. The validity of the research is constrained by the quality of the participant recruitment.

What do we mean by validity? In the simplest terms, it is the degree to which our research actually measures what we intend it to measure. If we only ever test with experienced power users, for example, we may ‘learn’ that a feature is easy to find when new users would never discover it.

Most of the speedy research methods — whether that’s guerrilla research at the coffee shop or using an online tool — tend to compromise on participant recruitment. Either you take whoever you can get at the coffee shop that morning, or you recruit from an online panel and trust that participants are who they say they are, and that they won’t just tell you nice things so you don’t give them a low star rating and cut off this income source.

There are many shortcuts to be taken around recruiting — the diversity of participants, how realistic the participants are, or the number of participants, to name a few. Expect to see some or all of these shortcuts in operation in product teams where speed to ship is the primary goal.

Being fast and scrappy can be a great way to do some research work, but in many teams the only kind of research being done is whatever is fastest. This is like eating McDonald’s for every meal because you’re optimising for speed… and we all know how that works out.

Teams are trading off research validity for speed every day. Everyone in the organisation understands the value of getting something shipped, and this is often measured and rewarded. Far fewer people understand the risks that come with making speed-related trade-offs in research.

What is the risk?

Misleading insights from research can send a team in the wrong direction. Invalid findings can lead a team to spend time creating and shipping work that does not improve their users’ experience or meet their users’ needs. That is time spent on work that does not increase the desirability or necessity of the product, which hurts the team’s productivity and the organisation’s profitability.

Does this mean that speed to ship is bad? That ‘bias to action’ should be avoided? Should all research be of an ‘academic standard’?

No.

Testing to identify some of the larger usability issues can often be done with small participant numbers and with less care taken to find ‘realistic’ respondents. If the work that results from your research findings is going to take more than one person more than a week to implement, it might be worth increasing the robustness of your research methodology to increase confidence that this effort is well spent.
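
To make that intuition concrete, here is a minimal sketch (in Python, with purely illustrative numbers) of the classic problem-discovery model often attributed to Jakob Nielsen and Tom Landauer: the chance of a problem showing up at least once in a test is roughly 1 − (1 − p)^n, where p is the proportion of users the problem affects and n is the number of participants.

# A rough illustration (not from this article) of why small samples
# can surface the big usability issues but miss the subtler ones.

def chance_of_seeing_problem(p, n):
    # p: proportion of users affected by the problem
    # n: number of test participants
    # Returns the probability the problem appears at least once.
    return 1 - (1 - p) ** n

# A severe issue affecting roughly 1 in 3 users will very likely
# show up with just five participants:
print(round(chance_of_seeing_problem(0.31, 5), 2))  # 0.84

# A subtler issue affecting roughly 1 in 20 users probably won't:
print(round(chance_of_seeing_problem(0.05, 5), 2))  # 0.23

In other words, fast and scrappy is well suited to catching the big, common problems; the rarer or higher-stakes the thing you are trying to learn, the more recruitment quality and sample size start to matter.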

People doing research need to be clear with their teams about the level of confidence they have in the research findings (it is fine for some research to result in hunches rather than certainty, as long as this is clearly communicated). And teams should plan to ensure they are using a healthy diet of both fast and more robust research approaches.

Organisations need to ensure they have someone sufficiently senior asking questions (and able to critique the answers) not just about whether user research data exists, but about the trade-offs being made under the hood, and therefore about how much confidence and trust to place in the insights and claims being made.

You can read about the next dysfunction here.

First published on disambiguity.com


Leisa Reichelt

Head of Research and Insights at Atlassian. Mother of small boys. Previously Australian Digital Transformation Agency + Government Digital Service