Demystifying Remote User Testing at Meesho

Pravalhika Injarapu
Published in Meesho Tech · 7 min read · Oct 5, 2021

COVID-19 forced us to rethink almost all professional workflows.

With Meesho’s rapid growth, testing every feature we ship has become more important than ever.

But why is user testing so important? Is it worth the time and effort? What are the benefits of remote user testing? And how should you approach the process?

In this article, we’ll cover the answers to all of these questions and more — but first, let’s define remote user testing.

What is Remote User Testing?

It is a research method where participants interact with a digital product or experience remotely using a screen-sharing platform, giving moderators a chance to observe how users engage with designs in their natural environment.

Why User Testing? 🧪

Usability Testing (UT) is at the core of how Meesho designs experiences. Prior to implementing any experiences we design, it is imperative that we test their effectiveness.

UT is something we take very seriously, and every project goes through this phase. In many cases, designers themselves are responsible for UTs, but the user research team can also take on this role for critical projects.

Knowing why and how to test a design, and iterating on what we learn, is a critical design skill because it increases the probability of success at full-scale release.

Meesho’s hyper-growth phase demands that we design responsibly, maximize the odds of success, and minimize the iterations that consume Product-Design-Development bandwidth. We love it when there are no subsequent iterations and we hit a home run on the first attempt. Unfortunately, that doesn’t always happen, and as the saying goes, failure teaches us more than success.

Benefits of Remote User Testing

  1. Accessible & inclusive: Remote testing allows us to recruit applicants from anywhere in the country, including those with mobility restrictions or disabilities.
  2. The comfort of their own home: Remote testing puts the users in their usual environment so that they can be more relaxed and honest in their responses.
  3. Better focus on observation and note-taking: Remote sessions allow designers to observe what participants are doing on their screens and take notes as they speak.
  4. Cheaper and more efficient: Remote testing is more cost-efficient because we eliminate the need to set up a testing area in the office with the relevant devices, as well as refreshments for the participants. A prototype, an internet connection, and an application to record the session are all we need.
  5. Unlimited observers: We can invite a large number of people to watch the tests during the remote sessions, and it is easier for observers to communicate with the moderator. The tester will not be aware of the observers, resulting in a less intimidating environment.

Tools 🛠

The pandemic-proof process begins with selecting the right tool. Before starting any research effort, our User Research team lists a few important questions:

  • How can we ensure that our research is as inclusive as possible?
  • Which technologies will our users have access to?
  • What will their quarantine environment look like?
  • How does that environment affect trust, confidentiality, and the honesty and accuracy of their responses?
  • Is it necessary to share a screen? If not, what should be done?

While Skype and Zoom are popular and have user-friendly screen-sharing capabilities, their app sizes were a drawback, as Tier 3 & 4 users have basic smartphones with very little storage. Based on our evaluation of a few products in the market, Lookback emerged as the best choice.

Lookback is a third-party virtual space that lets us talk to users, track their actions with their consent, and record their interactions with the product. It enables us to interview users anywhere in the country as long as they have an internet connection, and at just 6 MB, the app is light enough for our users to run even on low-speed networks.

Planning ✍🏼

  1. Objective: An overview of the product, business, and behavioral goals and their indicators of success. The product team and designers work together to identify the areas of concern for user testing.
  2. User profile: A detailed list of user criteria, used to mobilize the right testers.
  3. User incentive: A monetary reward in some form to maximize the intent of the user.

Preparation 👩🏻‍💻

  1. Moderator script: The set of instructions and questions the moderator uses across all test sessions to keep them consistent.
  2. Task cases & prototyping: We document task cases based on what typical users might go through while using the product. Each task focuses on a user goal with a clear endpoint that we expect users to reach.
  3. Questions (pre-test, post-task, and post-test): An outline of open-ended questions the moderator refers to when probing testers for qualitative insights.

Mobilize Users

Based on the decided user profile, product analysts provide a list of users that meet the criteria. Next, we either run a survey or call users to confirm their willingness to participate in the test.
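
As a rough illustration (and not Meesho’s actual tooling), the shortlisting step can be thought of as filtering a candidate pool against the agreed user profile and padding the invite list for expected no-shows. Every field name and threshold below is hypothetical:

```python
# A hypothetical sketch of the shortlisting step, not Meesho's actual pipeline.
# Field names, criteria, and thresholds are illustrative only.
from dataclasses import dataclass
from math import ceil

@dataclass
class Candidate:
    user_id: str
    city_tier: int            # e.g. 3 or 4 for Tier 3/4 towns
    orders_last_90_days: int
    free_storage_mb: int      # room to install the ~6 MB Lookback app

def shortlist(candidates, target_sessions, no_show_buffer=0.3):
    """Filter candidates against the user profile and pad for expected no-shows."""
    qualified = [
        c for c in candidates
        if c.city_tier >= 3
        and c.orders_last_90_days >= 1
        and c.free_storage_mb >= 50
    ]
    needed = ceil(target_sessions * (1 + no_show_buffer))  # e.g. 10 -> 13 invites
    return qualified[:needed]

pool = [
    Candidate("u101", city_tier=3, orders_last_90_days=4, free_storage_mb=120),
    Candidate("u102", city_tier=1, orders_last_90_days=9, free_storage_mb=800),
]
print([c.user_id for c in shortlist(pool, target_sessions=10)])  # ['u101']
```

In practice this selection happens through the analysts’ own queries rather than a script like this; the sketch only captures the logic of matching criteria and over-recruiting for no-shows.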

Run Pilot Test

Running mock tests internally within the team first ensures the tasks flow smoothly. It helps us catch inconsistencies in the test and adjust the script and tasks, making the entire experience more pleasant and predictable.

After all the prep, it’s time to talk to users!

Execution 🧐

Conducting User Tests

To have a seamless moderated user testing session, two roles are crucial:

  1. Moderator: Guides the user through the purpose of the test and the task cases, and ends with questions about the feature. The moderator and the tester are in the same virtual environment, interacting over a video call.
  2. Observer: Takes notes on how the user interacts with the prototype, captures feedback, and observes facial expressions, all of which feed into the final report. The virtual observation room in Lookback lets observers prompt questions without interrupting the session.

Handhold Users During Setup

We schedule a pre-call 15 minutes before the test session to guide testers through setting up the virtual space.

Design & Product Collaboration

User testing is a highly collaborative process at Meesho. Designers and researchers work together to plan, prepare, and conduct the user testing sessions, while product managers are closely involved in choosing the right user criteria and mobilizing qualified users.

Team members also join in as observers to learn about users’ feedback first-hand.

Challenges Faced and How We Solved Them

  1. Setup woes: Setting up Lookback on the user’s device to enable screen sharing can be challenging, since we can’t see what the user is doing and they often need extra guidance.
  2. Screen-share permission: It can be hard to convince participants to share their screens because they are concerned about their privacy and security.
    - What will you be able to see?
    - Are you trying to see my bank details?
    During pre-calls, we make sure users understand exactly what the screen recording can and cannot capture, which puts them at ease.
  3. Slow network: Limited network connections may make prototypes too heavy to load. Breaking tasks into smaller, lighter chunks helps both the users and their devices.
  4. Life gets in the way: Although most users are mindful of the call and have a decent internet connection, we occasionally encounter connectivity issues, pressure cookers whistling, or just cute kids running around.
  5. No-show: In any testing scenario, some no-shows are inevitable. We therefore recruit 30% more test participants than we need (for example, scheduling 13 participants for a target of 10 completed sessions), allowing us to meet our goals regardless of the no-shows.

Documentation 📝

After analyzing the test recordings and corroborating them with our notes, we create a final report to present the results of the user testing sessions. The final report focuses on the following themes.

  1. Action Points: A prioritized action plan to resolve issues in design, which can be related to flow, layout, or appearance.
  2. Tester Feedback: Testers’ answers to the questions asked after the test session.
  3. Identified Behaviour Issues: A list of behavior issues, ordered by severity.
  4. Task Case Results: The number of errors, successes, and failures per participant (see the sketch after this list).
  5. Observations: An overview of the observations from the test sessions, including limitations and experiences that might stem from demographic or psychological factors.
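
To make the Task Case Results theme concrete, here is a minimal sketch of the per-participant tally. The participant IDs, task names, and outcome labels are assumed for illustration and are not Meesho’s actual reporting template:

```python
# A hypothetical tally of task outcomes per participant for the final report.
# Participant IDs, task names, and outcome labels are illustrative only.
from collections import Counter

# Each record: (participant_id, task_id, outcome), as logged by the observer.
session_log = [
    ("P1", "add_to_cart", "success"),
    ("P1", "apply_coupon", "error"),
    ("P2", "add_to_cart", "failure"),
    ("P2", "apply_coupon", "success"),
]

def task_case_results(log):
    """Count successes, failures, and errors for each participant."""
    results = {}
    for participant, _task, outcome in log:
        results.setdefault(participant, Counter())[outcome] += 1
    return results

for participant, counts in task_case_results(session_log).items():
    print(participant, dict(counts))
# P1 {'success': 1, 'error': 1}
# P2 {'failure': 1, 'success': 1}
```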

Closing Note ✌🏼

It is crucial to be aware of the fact that the current pandemic is creating a constant feeling of uncertainty for most people, making their behaviors and motives completely unpredictable. Under these new circumstances, anything we thought we knew about our users is void.

Conducting user testing is critical now more than ever so that we can understand how users and their behavior have changed.

Let’s go testing! 👋🏼
