Why unmoderated remote usability testing is an addition to — not a replacement for — your moderated user research practice

Milly Schmidt
Insights & Observations
7 min read · Feb 12, 2021

I often hear a note of skepticism when I speak to researchers about remote, unmoderated research tools. “Face to face research is essential for us. It’s the heart of human-centered design!” they might say. I was once like this: passionate about research and a little defensive about doing anything other than the gold standard.

Maybe you’d be defensive too, if, like many researchers and designers, you were worried about your work being diminished, de-scoped or de-funded. In particular, any suggestion to cut research back can trigger a knee-jerk response, often the result of many years of struggling to make research heard and valued in your company.

But I’m here to make the case that unmoderated remote research tools aren’t just research-lite, or a fallback for teams without buy-in for heavier methods. These tools have some huge advantages, especially when used in addition to moderated, in-person or other higher-investment research tools like depth interviews, not necessarily as a replacement for them.

So rather than trying to convince you to switch from your in-person chats to something like UsabilityHub, I’d ask you — why not both?

1. Your research toolkit should be diverse

As the saying goes, “If all you have is a hammer, everything looks like a nail.”

Just like your team, your research toolkit should be diverse. Different research tools allow you to investigate, test and measure with varying levels of specificity and precision.

It’s certainly true that in-person interviews are a rich and rewarding experience, allowing you to really see the participant as a full human being and empathize with them. But constantly forcing the team to prioritize and run depth interview studies can be time-consuming, costly and, dare I say it, sometimes inappropriate. Insisting on the heaviest methodology available at all times can eventually erode trust in the research process, especially in teams that are early in their research maturity journey.

UX designers new to research will inevitably be excited to get some face-time with users, but as in any discipline, the senior practitioner knows how to use a variety of tools. A good MVP research toolkit might consist of:

  • Depth interviews (in person and remote)
  • In-person usability studies
  • Surveys
  • Remote usability studies (moderated and unmoderated)
  • Pulse surveys (e.g. NPS).

Knowing how to design and run various types of studies using different methodologies is a critical part of building an efficient and effective research practice.

2. Choose the right tool for the job

So, assuming you know how to use all the tools in your toolkit — how do you know when to use which one?

A good first lens is checking which part of the double diamond you’re in. The UK Design Council’s well-known double diamond is a helpful, high-level illustration of the two big stages that we call “the problem space” and “the solution space”.

In the problem space, you’re still learning about what customers or users are struggling with and where their pain points are. This is where you deploy your depth interviews, card sorts, surveys and other types of exploratory/generative research. You may dip into the world of usability studies if you’re benchmarking a user experience that is part of the pain point.

In the solution space, you’re testing the ideas you’ve formulated that you believe will help solve these customer problems. This is where you deploy evaluative research: usability tests, pulse surveys, tree tests and other evaluations of designed artifacts.

The second lens we use is idea fidelity, by which I mean how detailed and certain your idea is. Is this a napkin sketch or a detailed hi-fi prototype? Is it a hypothesis, or a near-certain, business-model-backed opportunity?

For lower fidelity ideas, it’s more appropriate to go deeper and broader, and as the ideas progress to higher fidelity solutions, you can be more laser-focused and specific. In particular, we use remote unmoderated usability testing in later, higher-fidelity iterations of a design, where we have discovered and validated the broader user problems and are now focusing on the finer details of the solution.

The third lens we use to decide on which research tool is appropriate is a lens of risk. Sometimes, for example, it might not be necessary to spend a lot of time exploring customer problems if the possible solutions are obvious and — crucially — low effort.

For example, in the case of a bug fix, where both the problem and the solution are clear, it’s not necessary to run depth interviews with users. If your gut tells you that you still need to check something about the proposed solution, spend some time pinning down the exact risk you’re worried about, and then run something lightweight that gets you results quickly.
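
If it helps to see how the three lenses combine, here’s a deliberately simplified sketch in Python. The categories, thresholds and suggested methods are my illustrative assumptions, not a formal rubric from the article:

```python
# A deliberately simplified decision aid combining the three lenses:
# double-diamond stage, idea fidelity, and risk. Illustrative only.

def suggest_method(space: str, fidelity: str, risk: str) -> str:
    """Suggest a research approach for a given stage, fidelity and risk."""
    if risk == "low":
        # Obvious, low-effort fixes rarely justify heavy research.
        return "ship it, or run a quick lightweight check"
    if space == "problem":
        # Exploratory/generative work: go deep and broad.
        return "depth interviews, card sorts, surveys"
    if fidelity == "low":
        # Early solution ideas: richer, moderated evaluation.
        return "moderated usability tests on sketches or prototypes"
    # Late, hi-fi iterations: sharp, specific checks.
    return "remote unmoderated usability tests on the detailed design"

print(suggest_method(space="solution", fidelity="high", risk="medium"))
```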

3. Double-check lingering assumptions

As much as we designers and researchers would love to eradicate every assumption as we develop our ideas, we can never entirely eliminate risk. In fact, insisting on perfect information before proceeding can hold teams back when a lean build-measure-learn loop might be more appropriate.

Often, towards the end of the solution development process, when we start thinking about launching our work to production, we notice that new assumptions have crept in over the course of design and implementation iterations. As new people join the project, new ideas and interpretations of various decisions are woven in.

Rather than forcing every part of the solution’s creation to be policed by the design and research team, we prefer to allow input from various team members, but we make sure that any parts of the solution that have gathered their own assumptions are tested before we launch, especially if there is a risk that, should the design hypothesis be wrong, the efficacy of the solution would be diminished.

For example, if part of the implementation means that we can’t use the same layout as was tested in earlier iterations, we’ll throw together a quick remote unmoderated usability test on UsabilityHub, based on screenshots from the local development environment, to double-check that our target users can still achieve their goals. It’s much easier, and much better for the cross-functional team dynamic, than being dogmatic. But it doesn’t replace the earlier research done to develop the solution in the first place.

4. Leverage short feedback loops for rapid iterations

Short feedback loops are critical in enabling teams to integrate research into their workflow smoothly. Not all research feedback loops are short — some longitudinal studies might take weeks or months to gather data, and that’s even before synthesis has begun.

As we move along the process from problem space to solution space, our feedback loops get shorter and our iterations faster. In the early stages, we usually spend more time in conversation, going deep on data and thinking carefully about the implications of the work. But as we enter the solution space, those feedback loops speed up, allowing us to test multiple ideas and learn faster.

Toward the very tail-end of the process, we sometimes run our research rounds so quickly that the results are instantly integrated into the design iteration by a developer. At this point, it might even make sense for the developer to be involved in the research to get them the insights as soon as possible!

We love using remote unmoderated research at this stage, forcing ourselves to run short, sharp tests (rather than long, in-depth ones) to stay laser-focused as we push to the finish line. Earlier in the process, it’s unlikely we’ll be running at that same pace, and other methodologies and tools are more appropriate.

5. Reach more participants with increased flexibility

One huge reason why we have started using unmoderated remote research more is that it helps us diversify our participant pool.

After emailing our beta tester list to invite them to some remote moderated tests, we noticed that the response rate was lower than we expected. Rather than assuming the reason, we sent a super quick survey asking why they couldn’t participate, with the following options:

  • incentive is too small
  • too busy
  • not convenient
  • not interested
  • something else.

We found that the majority of our participants simply couldn’t find overlap between their calendars and ours, as we are based in Australia. By shifting to unmoderated sessions, we were instantly able to test with more customers, since they could complete the session on their own, in their own time.

For us, this is important as our customers are spread all across the globe, but in general, being able to test outside of your local area is a huge win for research, especially when your users are not necessarily your neighbours.

Another big advantage is that because we don’t have to moderate every session, sessions can run concurrently, allowing us to massively decrease the time from hypothesis to insights. Using the UsabilityHub panel allows us, for example, to turn around results from 50 participants in less than half an hour, something that could take us close to 50 hours if we had to be part of every session.
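
As a rough back-of-the-envelope comparison (the one-hour-per-session figure is my assumption; your studies will vary):

```python
# Back-of-the-envelope turnaround comparison. The hour-per-session
# figure is an assumed average, not a measured one.

participants = 50
hours_per_moderated_session = 1.0  # researcher attends every session

# Moderated: researcher time scales linearly with participant count.
moderated_hours = participants * hours_per_moderated_session  # 50.0

# Unmoderated: panel sessions run concurrently, so turnaround is
# roughly one session length plus recruitment, not a per-person sum.
unmoderated_hours = 0.5

print(f"Moderated:   ~{moderated_hours:.0f} hours of researcher time")
print(f"Unmoderated: ~{unmoderated_hours} hours to results")
```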

Obviously this would be less helpful for a conversation, and in that situation, a remote moderated approach is preferable. But where it’s possible, we find remote unmoderated to be convenient for us AND for our participants, so it’s a win-win.

Part of a balanced diet

Hopefully I’ve convinced you that remote unmoderated research isn’t just a poor imitation of in-person, moderated research, but instead a complementary tool that you can deploy in addition to your existing research practice.

Sometimes you need to go deep, spend time, expand your feedback loop and invest in mitigating as many risks as you can, but sometimes speedy, lightweight and simple research is more appropriate.

If you’re looking to build a more complete research toolkit, head to UsabilityHub and start testing today.

Milly Schmidt

Director of Product at UsabilityHub, building design research tools.