Remote Moderated Usability Tests

Or what we learned despite all the challenges we faced, and how you can avoid them

Dmitry Lyapustin
Zedge Engineering
7 min read · Dec 17, 2020


— Hey Audrey (the name has been changed). It is Dmitry from Zedge. I am calling to remind you that we have a remote usability test session scheduled in an hour. Will you be joining us?

— Hey! Sure!

(…one hour and ten minutes later)

A group of people in observer roles: two Product Designers, two Product Owners, a Social Media Expert, and an HR Generalist, with a Product Manager in the moderator's seat, all sitting on the call. Better not to ask about the cumulative hourly cost of such a group of specialists.

Audrey is not there. Audrey does not answer the phone.

Does this sound familiar?

This year brought a wide range of complications to the industry. One of them, the inability to run user tests on-site, is not the most unbearable, but when you are conducting the first set of user tests for a young yet established product… Well, it was challenging.

We all know how beneficial it is to learn from mistakes. Learning from others’ mistakes is even better: you pay no cost other than the time spent reading, you risk nothing, and the whole process is laid out in front of you. Just take it and reap the benefits, right?

Background

Shortz, the youngest of Zedge’s projects, is an app that lets users read short stories as chats; think of thrilling stories that unfold in messages/SMS sent between characters. The app was launched in December 2019, and despite this trying year it is showing good performance and potential.

Last summer, we at Shortz started the long-anticipated moderated usability tests. This article is a reflection on the process we followed, the challenges we faced, and the mistakes we made.

Process

Step 1. Preparation.

Our team defined several goals for this project:

  • Learn what users like and dislike about Shortz, and what would entice them to come back;
  • Identify any issues that might discourage users from using the app;
  • Reaffirm who our users are;
  • Establish a culture of moderated user tests in the company.

Method? Considering our goals and the questions we had to answer, it is no secret that we chose moderated sessions. They would allow us to collect precious, detailed observations of users’ behaviour, reactions, struggles, and pains.

Moderated usability testing is a method that involves the active participation of a moderator, who works through the test together with a participant, guiding them through the process and, where necessary, diving deeper into the participant’s real motivations and thinking.

Due to the Covid-19 pandemic, we saw no option to run the tests in person, so we went for remote sessions. It didn’t stress our budget as much as traditional testing would, though it did create various challenges to deal with: from finding the right tool set-up (we wanted to avoid some common shortcomings of remote sessions and keep the ability to see the tester’s body language and reactions) all the way up to a much higher no-show rate than expected.

The next step was to create a test plan and write a script. This was the team’s first round of testing. Since we wanted to test not a particular feature but the entire first-time experience with the app, it took several iterations with the team to capture all the areas while keeping the test within reasonable constraints.

When it came to tooling, we ended up using mostly Google services:

  1. Google Forms — for screening our candidates.
  2. Google Sheets, and then Airtable — to collect, process, and filter test candidates.
  3. Google Calendar together with Gmail — for scheduling.
  4. Google Meet — for video conferencing and recording (thanks to its built-in recording, test participants could join the call from a laptop with a web camera and stream from their mobile device’s screen at the same time).
  5. Google Sheets, with the Rainbow Spreadsheet technique — to capture observations thoroughly for further analysis (see the sketch after this list).
  6. The real apps were used, hence there was no need for clickable prototypes.
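
For readers who haven’t used the technique: a rainbow spreadsheet records observations as rows and participants as columns, marking every cell where an observation occurred, so the most common issues surface at a glance. Here is a minimal sketch of the underlying idea; the observations and participant IDs are invented for illustration, not data from our sessions:

```python
# A rainbow spreadsheet reduces to observations as rows and participants
# as columns, with a mark wherever the observation occurred.
# Hypothetical example data; the real sheet lives in Google Sheets.
observations = {
    "Skipped the onboarding carousel":     {"P1", "P3", "P4"},
    "Could not find the story categories": {"P2", "P3"},
    "Tapped a locked episode repeatedly":  {"P1", "P2", "P4", "P5"},
}
total_participants = 5

# Rank issues by how many participants ran into them.
for issue, who in sorted(observations.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(who)}/{total_participants} participants: {issue}")
```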

We considered other tools like Lookback and some of its competitors, but all of them were either not as reliable and straightforward for mobile user testing as Google Meet, or had some limitations (to be fair, this year has encouraged many usability testing services to invest in improvements, so they are worth reconsidering the next time we run testing sessions).

Step 2. Recruitment and scheduling.

The first real challenge was to get participants with relevant demographics. At first, we tried to recruit participants through Zedge’s social media profiles (by reaching out directly to followers) and through invitations posted to Facebook groups dedicated to tester recruiting. It took a week of back-and-forth conversations to realize that it is a very time-consuming and inefficient process. Don’t make the same mistake.

And then it just came to me: Zedge is a leading platform in the personalization market, with 400+ million installs worldwide! So the answer was a simple In-App Message campaign targeting a small subset of daily active users. Thanks to our Marketing Automation tool, it took about an hour to design, set up, get approval from all stakeholders, test, and push to production. Thirty-six hours later we had a pool of candidates big enough for a dozen tests.

In a Google Form, all candidates were asked about their demographics (age, gender, occupation, etc.) and several other questions, including their experience with relevant apps, their on-device time, and their usual app discovery channels. This gave us a detailed candidate pool for future tests, plus additional user insights.
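
To give a sense of the processing step, here is a minimal sketch of filtering a screener export, assuming a CSV of form responses with hypothetical column names (your form, and ours, will differ):

```python
import pandas as pd

# Hypothetical export of the screening form; all column names are illustrative.
responses = pd.read_csv("screener_responses.csv")

# Keep candidates in the target demographic who use relevant apps.
candidates = responses[
    responses["age"].between(18, 34)
    & (responses["has_used_chat_fiction_apps"] == "Yes")
    & (responses["daily_on_device_hours"] >= 2)
]

# Contact the most engaged candidates first.
candidates = candidates.sort_values("daily_on_device_hours", ascending=False)
candidates.to_csv("candidate_pool.csv", index=False)
```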

The next challenge was scheduling. Do not rely on email alone; make sure you also have a phone number. Your odds of a person showing up at the session increase if you speak to them on the phone at least once. Let them feel that there are real people behind these emails and appointments, people who will be waiting for them. And ask them to let you know immediately if they cannot participate as scheduled.

So, if you would like to reduce the no-show rate, expect to handle a sequence of touchpoints for each appointment (a scheduling sketch follows the list):

  1. Welcome email.
  2. Confirmation email.
  3. Reminder email a week before the meeting.
  4. Reminder call a day before the meeting.
  5. A short SMS an hour before the meeting.
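
If you script this rather than lean on a scheduling tool, each appointment’s reminder times fall out of simple date arithmetic. A minimal sketch, with the cadence mirroring the list above and the time slot purely illustrative:

```python
from datetime import datetime, timedelta

# Cadence from the list above; the welcome and confirmation emails go out
# at booking time, so only the timed reminders are computed here.
REMINDERS = [
    ("reminder email", timedelta(days=7)),
    ("reminder call", timedelta(days=1)),
    ("reminder SMS", timedelta(hours=1)),
]

def reminder_schedule(session_start):
    """Return (touchpoint, send_time) pairs for one appointment."""
    return [(name, session_start - delta) for name, delta in REMINDERS]

slot = datetime(2020, 8, 14, 15, 0)  # example time slot
for touchpoint, send_at in reminder_schedule(slot):
    print(f"{send_at:%Y-%m-%d %H:%M} -> {touchpoint}")
```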

Make sure you give clear instructions on how to join, what to expect, and what to follow, e.g. “Please find a quiet space where you will not be disturbed or interrupted during our session”. We had one case where a participant tried to join the session while commuting on a bus. That session did not take place. We even provided our users with clear screenshots where needed. Remember, it is likely their first time doing this.

Recruit more people than you need, or be ready to extend your run. No matter how well you prepare, expect several no-shows anyway. Anticipate a rate of around 20–40%, depending on how aggressive and smart you are with the communication.
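
As a back-of-the-envelope check (my own arithmetic, not a figure from our run): to finish a target number of sessions, divide by the show-up rate and round up.

```python
import math

def bookings_needed(target_sessions, no_show_rate):
    """How many bookings it takes to complete target_sessions."""
    return math.ceil(target_sessions / (1 - no_show_rate))

# For a dozen sessions at the pessimistic end of the 20-40% range:
print(bookings_needed(12, 0.40))  # -> 20 bookings
```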

Double-book. It may sound like a shady practice, but do not be afraid to be upfront about it. Having a primary and a secondary booking for each time slot saves your time, your budget, and your testing run. Just communicate clearly that you will invite the other person if a window opens up.

Step 3. Tests.

The easiest and most enjoyable part of the process, if all preparations were done thoroughly.

Be patient and give your participants a ten-minute grace period to appear, but keep your back-up person ready and prepared to jump on the call.

Plan for up to fifteen minutes at the beginning of the call to set everything up and solve any issues with the connection, streaming, sound, etc. Sometimes participants forget their headsets. Sometimes they are still in a noisy environment.

Even if you have a group of observers taking notes, make sure you record your testing sessions. But ask for the participant’s permission first.

Also plan some time between sessions to discuss the observations and results with your colleagues. We found it most useful to do this straight after the session, while it is all fresh. Make sure your observations are clear and specific, especially for someone who was not an observer.

Conclusion

The user feedback and observations during the first set of tests provided us with guidance for UX and UI improvements and confirmed most of the hypotheses we had.

It also gave us plenty of insight into how to conduct these tests more efficiently. For our future sessions, we made these notes:

1. Consider using a dedicated recruitment service. It will increase your budget, but it also eases your process a lot. Such services may also manage no-shows and help you reduce the rate.

2. Consider using a dedicated scheduling tool. Handling the communication, emails, and reminders ate up a huge part of our time. Automate that process as much as possible, and focus on the other aspects of the work.

What is the essential step in any UX process, and basically in any area where your primary goal is to improve? Reflection. As I always tell my six-year-old daughter, mistakes are an unavoidable and vital part of the learning process. The key to making the most of them is to dedicate some time to reflecting on and analyzing those mistakes, together with the findings.

After you finish your sessions, it is time for analysis. But that is a completely different story, worth its own article.
