Remote Qualitative Testing: A Primer

Christina Noonan
Design Intelligence
9 min read · Jun 27, 2016

If you’ve ever done interviews or research in any capacity, you know that researchers can’t always reach interviewees at their place of business or home. With the growing set of tools for conducting virtual sessions, there are a number of important things to keep in mind when taking that route. While digital channels help us connect with others, they also create unique challenges that aren’t necessarily present in live or traditional lab environments.

When To Use Digital Tools

Digital tools have a lot of benefits, especially if you’re looking to create or evaluate a digital product like a website or app. Some testing programs, like Hotjar and UserTesting, can help you blend quantitative measures with qualitative research. These tools also help you clearly translate your findings for folks excited by “hard numbers”: think click rates, completion rates, problem frequencies, heat maps, task timing, and more. These numbers can also back up qualitative statements that involve generalizations like “most,” “all,” or “some.” Finally, these tools let you invite additional stakeholders to watch actual people using their product in real time, which drives home key pain points quickly and efficiently, without a lot of immediate filtering and summarizing from the research team.
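
The tools vary, but the arithmetic behind numbers like completion rate and time on task is simple enough to sanity-check yourself. Here’s a minimal sketch in Python; the session records and field names are hypothetical, not the output of any particular tool:

```python
# Turn raw session records into the "hard numbers" stakeholders respond to.
# All data below is made up for illustration.
sessions = [
    {"participant": "P1", "completed": True,  "task_seconds": 42},
    {"participant": "P2", "completed": True,  "task_seconds": 65},
    {"participant": "P3", "completed": False, "task_seconds": 120},
    {"participant": "P4", "completed": True,  "task_seconds": 51},
]

completed = [s for s in sessions if s["completed"]]
completion_rate = len(completed) / len(sessions)
avg_task_time = sum(s["task_seconds"] for s in completed) / len(completed)

# Figures like these back up generalizations such as
# "most participants completed the task."
print(f"Completion rate: {completion_rate:.0%}")               # 75%
print(f"Avg time on task (completers): {avg_task_time:.0f}s")  # 53s
```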

Digital tools do have their drawbacks, though: you can’t always understand the context of use or read users’ body language as they navigate your intended experience. If those components are important to capture (and they likely are), meeting your users live to walk through your offering will result in a more fruitful session.

I recently wrapped up a usability study where I was tasked with validating the design decisions and assumptions behind a new digital experience, while also looking for unanticipated pain points prior to launch. Several constraints around the user group we wanted to test drove the decision to conduct the sessions remotely:

  • Opportunity: the desired user group consisted entirely of members of a specific organization, which drastically limited the pool of viable candidates
  • Availability: these users tended to hold more senior roles, with less occupational flexibility and free time
  • Proximity: they could be living and working anywhere in the US
  • Methodology: participants would not be recruited through an outside agency

There are plenty of other constraints that make remote testing more applicable. For example, if there’s a chance that your physical presence in the room might bias participants, it may be better to conduct the study remotely. (Psychology buffs will recall the variations of Milgram’s classic obedience experiment, where participants’ obedience varied with the experimenter’s attire.) Age, race, physical appearance, and gender may also influence your study; while remote testing can’t eliminate all of those factors, it can limit what the participant perceives.

How To Choose Your Tools

Remote testing comes with a number of additional considerations, and there’s an ever-growing set of usability programs to help you capture different components of a test. I like the article below because it lays out a good range of tools, along with their relative cost, depth, and impact.

Usability Testing Tool Matrix by Craig Tomlin

Each tool has its strengths and weaknesses, and you’ll need to evaluate your financial means, number of participants, testing goals, research plan, and expected deliverables to determine which ones are best suited for your study. You may even find you want to layer multiple tools to accommodate your own testing methods. Here are a few things that you can do to help clarify your needs:

Confirm testing goals

Especially in larger organizations, different stakeholders may have different incentives for requesting usability testing. The most important thing to get straight, before you determine the channel, test materials, or format, is the set of goals for testing. Make it a point to ask each stakeholder, “What does success look like to you?”

Build a discussion guide together

Even if everyone agrees on the content during a goals-alignment session, there may be subtle misalignments in how the methodology gets implemented. As a researcher, your initial inclination might be to craft a discussion guide on your own and share it from there. Crafting one with your stakeholders, however, will clarify intentions and draw out additional understated or misunderstood testing goals, saving unnecessary iteration. Creating this document with others can be a delicate dance: finding the right questions (or identifying when not to ask questions) while addressing the previously determined testing goals. Ultimately, the discussion guide will help you determine the tools you want to use for remote testing.

Will users be interacting with any physical elements? On what medium will testing take place? Is live interaction important for your session? Answering these questions may point you to a specific remote testing platform.

Determine who should be in sessions

One of the benefits of remote testing is that any number of your clients and stakeholders can log in from the comfort of their own offices to see first-hand how their users think and act. This occasionally allows for beautiful revelations when assumptions about the experience are challenged.

The number of observers and where they’re located may help you determine the testing tool as well. For example, if it’s just a small group of people in the same place, it might make sense to have everyone in the room with the interviewer. If a handful of stakeholders are joining from different locations, a teleconferencing program might work better. And from experience, if more than 15 people are interested in seeing a session, it’s in the interviewer’s best interest to record the session and share the video afterward. From there, you can collect feedback from everyone about topics or techniques to improve in the next session.

Come up with ground rules

There are plenty of stories about testing sessions where conditions weren’t ideal because the people listening in exhibited all sorts of behaviors that made the test less successful. I imagine these experiences sway researchers away from including outside parties in their sessions. To limit the possibility of a compromised session, I strongly recommend creating a set of ground rules for anyone listening in. Here’s a set I worked on with a colleague for a recent remote study:

For that particular study, I conducted the interviews with a note-taker who ensured adherence to those rules (another reason a note-taker is not just useful but nearly imperative when you have a large group listening in).

However, I’ve also been in situations where one stakeholder had a much deeper knowledge of the opportunity space in which we were conducting generative research. In that case, it made sense to have that stakeholder communicate directly and ask follow-ups in the session itself. The ground rules for that engagement revolved around protecting the quality of conversation by avoiding things like leading or double-barreled questions.

Recognize nothing is perfect

I have yet to find the perfect testing tool. UserTesting is pretty good, but it lacks flexibility for conducting a small number of short, self-directed tests, and it’s pretty expensive. GoToMeeting offers almost unlimited flexibility and control for testers, but introduces unexpected constraints of its own. For example, you can’t identify or mute anyone who calls in to your session without entering their audio PIN. And if you want your participant to share their screen, there’s a good chance GTM will require them to install a plugin (which isn’t always welcome). Every platform comes with its own caveats, and you’ll likely find your preference after experimenting with a few.

You may be able to compensate for the weaknesses of one program by layering in a second. In a study I did years ago, we used Skype, a separate screen-recording app, and a video camera in the room to capture body language from both the interviewer and the participant (which was important in the context of that particular study).

Pair the limitations of any software with the possibility of a bad connection or slow internet speeds, and you can understand why remote testing can be challenging. My best advice is to keep your cool, employ a note-taker, and follow my advice in the next section.

Tips To Increase Success With Your Chosen Tool

TL;DR: Account for every possibility

It’s impossible to think of every scenario that could play out during testing, but some of the more obvious things you can do are:

  • Create a hard copy of all application/web screens.
  • Create a short link (TinyURL, etc.) that you can read aloud to a participant if they can’t see a long link you send; see the sketch after this list.
  • Always have 2–3 ways to communicate with your participant. I try to keep my webcam, audio (backed up via telephone when possible), and some sort of chat box available for the participant.
  • Have the dial-in/log-in information written on a Post-it in front of the interviewer, so they can read it aloud or use it themselves if they get disconnected.
  • Make sure you understand how the testing platform identifies participants (some don’t). Does it show the phone number used to join? Is there a PIN participants enter that identifies them somehow? Are they prompted to enter their name?
  • Figure out whether and how you can mute participants, turn off their webcams, and exercise any other control over attendees (including people who are simply watching or listening in).
  • See if you can hide the participant list for everyone in the session. A visible list is the easiest way for a participant to realize they’re not alone, which is likely to make them feel more uncomfortable.
  • Determine if you can limit or turn off the chat functionality built into many webcam programs like Skype and GoToMeeting.
  • If it’s applicable, figure out whether your chosen platform accommodates international users logging in or calling in.
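
On the short-link tip above: TinyURL has long offered a simple public endpoint that returns the shortened URL as plain text, which makes this easy to script before a session. A minimal sketch, assuming that endpoint is still available (the long URL here is just a placeholder):

```python
# Generate a short, speakable link for a participant ahead of a session.
# Uses TinyURL's public api-create.php endpoint; verify the result resolves
# before you need to read it aloud on a call.
from urllib.parse import urlencode
from urllib.request import urlopen

def shorten(long_url: str) -> str:
    query = urlencode({"url": long_url})
    with urlopen(f"https://tinyurl.com/api-create.php?{query}") as resp:
        return resp.read().decode("utf-8")

print(shorten("https://example.com/prototype/checkout-flow?participant=remote"))
```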

Learn what you can about the tool and put it to the test (before your actual sessions)

It’s likely that whatever tool you choose will have hidden or suppressed functionality that you’ll find you need for testing.

Conducting at least two dry runs will help you work out most of the kinks: one to learn the tool’s functionality (it helps to ask your counterpart to “misbehave” so you can stress-test yourself), and a second to actually test the session material, order, and flow. On a recent project with a new set of tools, my first dry run surfaced so much unexpected functionality that I scheduled a second technology-only dry run before testing the session content.

Plan to change the plan

It’s nearly certain that your first session will not go perfectly. There’s always room for improvement, starting with self-reflection on your language, pauses, and introduction as the interviewer. In addition to capturing observations and surprises from the session, I highly recommend asking your note-taker and anyone else listening in for feedback on the interview itself. I tend to fold this into the post-session review by asking about:

  • New Content: additional topics anyone is interested in hearing about that haven’t been brought up in the meeting so far
  • Existing Content: specific areas that were addressed in the session that need more prodding
  • Approach: observations and suggestions for the interviewer to keep in mind for future sessions

An Excel sheet, shared live with everyone involved in the session, works well to capture comments and helps everyone feel their perspective was captured during the summary review.
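
If a live shared spreadsheet isn’t an option, even a plain CSV with the three categories above does the job. A hypothetical template (the rows are made up for illustration):

```python
# Write a simple post-session feedback sheet; the Category column mirrors
# the three buckets above: New Content, Existing Content, Approach.
import csv

with open("session_feedback.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Session", "Commenter", "Category", "Comment"])
    writer.writerow(["01", "Note-taker", "Approach",
                     "Pause longer after task 3; the participant felt rushed."])
    writer.writerow(["01", "Stakeholder", "New Content",
                     "Ask how they handle this task outside the product today."])
```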

In summary, a lot of elements inherently create more variability in remote testing, but with the suggestions above you should have a better handle on which tools to choose and how to increase the likelihood of a successful testing session.
