3 things I learned from conducting & analyzing remote user tests

Thor Galle
7 min read · Apr 8, 2020


The corona crisis is affecting many people’s work, and those with primarily digital jobs are not exempt.

I’m a design student, and I’m currently working on the thesis for my master’s degree in Human-Computer Interaction at EIT Digital.

I just finished my first round of six remote user test sessions, 30 minutes each. Normally I’d have conducted these in person, but unusual times call for unusual measures. And maybe unusual doesn’t have to be bad!

In short, my degree project revolves around evaluating and improving the language learning features of SVT Språkplay, a mobile app where immigrants in Sweden can watch Swedish television with language aids.

A big chunk of this work is qualitative research with users. That means testing the app with them, conducting interviews and co-designing new features.

Here are three things I learned.

Lesson 1: Zoom is your imperfect best friend

A first challenge I faced was observing and recording the participants’ screens while they were using the app on their own phones. I first thought of weird hacks to accomplish this with a laptop webcam, but hey, technology has advanced!

Smartphones can now also share their screens in video calls; both Zoom and Skype have this feature on iOS and Android.

Of these two apps, Zoom is the better choice. Here’s why:

  • It records the participant’s shared screen in a sensible format.
  • The participant can join with their phone screen share and their laptop webcam at the same time (why? see below).
  • The file size of a recorded video is remarkably low given the decent video quality: ~100 MB for 30 minutes.
  • You get the audio as a separate file as well, which is useful for sharing with AI transcription tools (see Lesson 3).

Snapshot from a user test. Only my webcam is active.

Some caveats:

  • On iOS the screen-sharing process is not that obvious. See these instructions.
  • On Android at least, support for sharing both a screen feed and a webcam feed from the same phone is limited: the two only work together while you’re inside the Zoom app. That’s why you might want the participant to use their laptop and phone at the same time.

Still, Zoom is better than Skype, whose built-in screen-recording feature is buggy.

The following Skype recording was supposed to show my study participant’s phone screen, which was shared live, not just his forehead. Sometimes the webcam feeds weren’t even recorded.

“Honestly, I also don’t know what just happened.” We’re both laughing at a bug in the app.

There are more issues with Skype, but this bug is really a deal-breaker.

Lesson 2: Be extra patient

Conducting a user test often means balancing interaction with the user against observing their natural exploration of a product, because both can yield interesting results. For users to behave normally, they need to feel at ease, so your intrusions as interviewer should be minimal. But at the same time, you want to intrude: you want to ask questions and gain knowledge.

I noticed that it’s harder to find this balance in remote user tests: it’s simply easier to misunderstand each other than in in-person tests.

It happened regularly that a participant, while carrying out a task, would think for a while and then hesitantly start a remark, just as I had already begun my next question. These speech “collisions” result in awkward situations and garble the audio recording. I blame video-call latency and harder-to-perceive body language. But I could solve it:

The lesson: build in pauses and leave the participant ample time to think

Lesson 3: Focus on insights, not transcription

When live notes fail

The goal of user testing and interviewing is to extract actionable insights for product improvement. It is a keystone in the user-centered design process. But while doing remote user tests on your own, it’s easy to miss or forget interesting remarks or actions from the user.

Taking live notes as the interviewer won’t cut it either. You can’t observe, take notes, think, and interact with the participant all at the same time. Taking detailed notes might also distract you, and if you use a keyboard, the noise might annoy the user: they might feel “watched”. If you don’t have a teammate who can silently take notes in the same call, the best option is to record the call and analyze it later. Ironically, that makes users feel less watched, because a recording is easier to forget about.

Structured video analysis

A structured way to extract insights from recorded interviews, in contrast to live notes, is the process of coding: assigning “codes” to a transcript. That means tagging interesting bits so you can count their occurrences and compare them across interviews.
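
To make that concrete, here’s a minimal Python sketch of what coding boils down to. The codes, quotes, and participant IDs are hypothetical; a real QDA tool stores much richer metadata, but the core idea is just this:

```python
from collections import Counter

# Each coded segment records which interview it came from, a timestamped
# quote, and the codes (tags) assigned to it. All codes and quotes here
# are made-up examples, not data from the actual study.
segments = [
    {"interview": "P1", "time": "04:12",
     "quote": "Wait, where did the subtitle go?",
     "codes": ["Feature: subtitles", "Expression: confused"]},
    {"interview": "P1", "time": "11:40",
     "quote": "Oh nice, it translated that word!",
     "codes": ["Feature: auto-translation", "Expression: delighted"]},
    {"interview": "P2", "time": "02:03",
     "quote": "I don't get what this button does.",
     "codes": ["Feature: auto-translation", "Expression: confused"]},
]

# Count how often each code occurs across all interviews.
counts = Counter(code for seg in segments for code in seg["codes"])
for code, n in counts.most_common():
    print(f"{code}: {n}")
```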

For that, you supposedly need the transcript first. But how do you transcribe a remote user test without spending too much time?

I first tried oTranscribe, a free web app that provides some aids for manual transcription. Working through my first 30-minute video took almost 5 hours. A friend later referred me to the AI transcription tool Otter.ai: it could automatically transcribe a recording with decent accuracy. The only problems were non-English speech, and that when I was talking with another male the dialogue often wasn’t split correctly between speakers. Still, this reduced the manual work of transcribing to less than 30 minutes.

With the transcript at hand, the coding process could start. I used the Qualitative Data Analysis (QDA) tool Atlas TI, for which I had a university license.

Using Atlas TI for video coding

Work setup for coding a remote user test in Atlas TI with a transcript loaded. A big screen helps.
The video coding feature of Atlas TI is good!

Atlas TI turned out to be an unexpectedly powerful and flexible tool for analyzing video interviews. In the first screenshot you can see me working with a transcription while having the “Code Manager” window on the smaller screen. It’s possible to group codes, define relationships among them, and run queries on tagged video segments (e.g. show me the parts I tagged with both “Feature: auto-translation” and “Expression: confused”). Skimming this video shows what’s possible overall.
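
A query like that is essentially a filter over the coded segments. Continuing the hypothetical sketch from earlier, rolling your own would look something like this:

```python
# Find segments tagged with BOTH codes, reusing `segments` from the
# earlier sketch. A QDA tool does the same with a point-and-click query.
wanted = {"Feature: auto-translation", "Expression: confused"}
for seg in segments:
    if wanted <= set(seg["codes"]):
        print(f"{seg['interview']} @ {seg['time']}: {seg['quote']}")
```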

In the second screenshot I’m coding a video directly, without using a transcript.

It will take some time initially to construct your “code book” (tags), but after that the process will speed up. And the output is worth it: prioritized usability problems & opportunities.

With or without the transcript, the coding of each video took about 2 hours on average. This brings me to the lesson:

The lesson: you don’t need a transcript to use a video-coding tool

However, you may consider using Otter.ai or something similar to generate a transcript anyway; it might speed up the video coding slightly.

And if you don’t have access to a video-coding tool (which wouldn’t surprise me: without a university license, an Atlas TI license can easily cost over $1000…), a transcript is necessary for direct text-coding with tools like the open-source RQDA. I haven’t found an affordable alternative for direct video coding.

Some more tips

  • Send a pre-questionnaire with contextual and demographic questions. What prior experience do the participants have? It helps you prepare the interview. I used a Google Form, but you might as well check out Typeform.
  • Use a booking tool to schedule sessions with participants. I used Calendly; their free tier allows one event type, which sufficed. They currently offer their Zoom meeting integration for free until June, but otherwise you can also post your Zoom link in the event description.
  • Have a small chat at the beginning of a session. We all like a chat right now!
  • For richer information in mobile screen shares, you can show participants’ taps on Android phones. Check this article to see how (it involves the “Show taps” toggle in the Developer Settings); a minimal sketch using adb follows after this list. On iOS this feature is unfortunately not as readily available.
  • Aalto University-registered student? There is a university license for home use of Atlas TI. Check download.aalto.fi. Student somewhere else? Check your university’s software offering.
  • Atlas TI is cool, but if you want to import your own transcripts from oTranscribe or Otter.ai, it falls hopelessly short. Here are some notes on how I imported my oTranscribe & Otter.ai transcripts; a rough sketch of such a conversion script also follows below.
    (Update April 9: I made scripts to do this; they work well now)
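
As for showing taps: here’s a minimal Python sketch of how that switch can be flipped from a computer. The adb route assumes a phone connected over USB with USB debugging enabled, so it’s mostly useful when piloting your setup on a phone you have in hand; a remote participant would enable the “Show taps” toggle in their Developer Options themselves.

```python
import subprocess

# Flip the same switch as the "Show taps" toggle in Android's Developer
# Options, for a phone connected over USB with USB debugging enabled.
# Set the final argument to "0" to turn the overlay off again.
subprocess.run(
    ["adb", "shell", "settings", "put", "system", "show_touches", "1"],
    check=True,
)
```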
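
And here’s a rough Python sketch of the kind of transcript-conversion script I mean. It assumes an Otter.ai-style plain-text export (header lines like “Speaker 1 0:05” followed by utterance text); that format is an assumption that may not match every Otter.ai version, the file names are placeholders, and Atlas TI will still want its own import format, so treat this as a starting point rather than a drop-in solution.

```python
import csv
import re

# Assumption: the export consists of blocks like
#
#   Speaker 1  0:05
#   Hello, and thanks for joining!
#
# i.e. a "speaker  timestamp" header line followed by utterance text.
HEADER = re.compile(r"^(?P<speaker>.+?)\s+(?P<time>\d+:\d{2}(?::\d{2})?)$")

def parse_transcript(path):
    rows, speaker, time, text = [], None, None, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            match = HEADER.match(line)
            if match:  # a new utterance starts: flush the previous one
                if speaker is not None:
                    rows.append((speaker, time, " ".join(text)))
                speaker, time, text = match["speaker"], match["time"], []
            elif line:
                text.append(line)
    if speaker is not None:
        rows.append((speaker, time, " ".join(text)))
    return rows

# Write a simple timestamped CSV; "otter_export.txt" is a placeholder name.
with open("transcript.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["speaker", "timestamp", "text"])
    writer.writerows(parse_transcript("otter_export.txt"))
```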

Hit me up if you have any questions about my thesis process, or if you have tips for me. I’m now looking into remote focus group and co-design sessions.

Thanks to Karolina Drobotowicz who mentioned Otter.ai in reaction to an earlier version of this article.
