Being prepared for unpredictability during remote usability testing

Lucy Pickering · Published in William Joseph · Oct 13, 2022

Recently, I ran a round of usability testing where it felt like everything that could go wrong, did go wrong. This is definitely catastrophising, but despite all the planning we’d done for the sessions, things outside of our control kept cropping up to cause problems. This left me reflecting on whether it’s possible to be better prepared for unpredictability. Or is this an oxymoron?

At William Joseph, we’re a multi-disciplinary team of product managers, content strategists, designers and developers who all get involved in usability testing. We factor it into (almost) every client project to ensure that user needs and behaviours are understood at every stage. We’re also moving fast, looking for efficiencies and creating templates where we can. I believe that we get better at usability testing every time we do it and, as part of this, we also need to take time to reflect on our practice.

These are some examples of the unpredictable things that have happened during our moderated, remote testing sessions, and how we're trying to manage and mitigate them:

Tech issues

Ah, the familiar thorn in everyone's side! During remote usability testing sessions, I've had problems with the tech on both the facilitator's and the participants' sides. This is most commonly a weak internet connection or difficulty using the software.

We use Figma for testing designs and ask participants to share their screen on a Google Meet or Zoom video call. It’s often the first time that participants have used one or both of these programmes.

Some ways that we try to manage this are to:

  • Introduce and provide guidance to participants (in advance) on the software that will be used during the session
  • Offer participants the option to suggest alternative software, try to accommodate this, and ensure (as a facilitator) that we are familiar with it
  • Be prepared to talk participants through the functionality of the software if they need it
  • Be ready with fallback alternatives, such as sharing your own screen and letting the participant direct you, or reverting to a phone call
  • Consider the impact of the issue on the outcome. For example, a weak connection once meant I struggled to see a participant's screen; it was challenging, but they were giving detailed vocal descriptions and I knew the images would be captured in the recording, so I chose to continue rather than rearrange the session
  • Assess whether leaving and rejoining or restarting might help
  • Make a judgement on when it’s not worth continuing

As frustrating as tech issues are, most people have experienced them and appreciate that they are often unavoidable, especially when working remotely. If an issue starts to compromise your testing goals or cause stress for the participant, it's better to rearrange the session than to push on.

Unexpected behaviour and adapting your plan

I think it's more surprising when a participant interacts with your digital product or service in the way you expect, or in the order that your testing script or discussion guide reflects. We use these as a guide rather than a rigid script. Subtle changes in wording can steer a participant, but dealing with unexpected behaviour mostly comes from feeling comfortable enough to 'go off script'.

As an example, if you start a website testing session on the homepage with a set of tasks you want to test on that page, the participant might (unless you specify otherwise) quickly leave the homepage to explore the navigation or the rest of the site. This might instinctively feel frustrating, but it is often more valuable than strictly sticking to the script.

It’s worth keeping in mind that you’ve created an artificial experience in this testing environment. In reality, users may be more likely to visit a site from social media or long-tail search queries and rarely even visit a homepage to complete a task or get the information they need. So being flexible and thinking on your feet within a session often leads to observing more realistic experiences and stronger insights.

Accessibility considerations and the risk of creating barriers

I’ll start this one by saying that, when we’re creating digital products and services, they should be inclusive from the outset. One of the positives of baking accessibility in from the start of a project is that you should feel better prepared to test with a diverse range of users, including people with disabilities, neurodivergent people, and those who use assistive technology or adaptive strategies.

However, the reality is that products and services often become more inclusive as they get built. We regularly test designs or prototypes that are incomplete — they may not have all the functionality, be fully linked up or be populated with finished content.

We also test in Figma. The Figma team is working on accessibility improvements but acknowledges that there is still more to do. Other testing platforms may also present the risk of creating unnecessary barriers. For example, we’ve yet to find an online card-sorting exercise that works well with assistive technologies such as screen readers or keyboard navigation. Any suggestions are very welcome!
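In the meantime, the interaction pattern we're looking for can at least be sketched: a keyboard-operable alternative to drag and drop, paired with an ARIA live region so screen reader users hear what happened. Below is a minimal sketch in TypeScript of that pattern, not any particular tool's implementation. The markup it assumes is hypothetical: focusable `.card` buttons grouped into sibling `.category` containers, plus an `#announcer` element with `aria-live="polite"`.

```typescript
// A minimal sketch (not any real tool's API) of a keyboard-operable card sort.
// Hypothetical markup assumed: <button class="card"> elements inside sibling
// <div class="category" data-label="..."> containers, plus a visually hidden
// <div id="announcer" aria-live="polite"> element for screen reader feedback.

const liveRegion = document.getElementById("announcer")!;

// The card currently "picked up" via the keyboard, if any
let held: HTMLElement | null = null;

// Announce each action so screen reader users get the same feedback
// that sighted users get visually
function announce(message: string): void {
  liveRegion.textContent = message;
}

document.querySelectorAll<HTMLElement>(".card").forEach((card) => {
  card.addEventListener("keydown", (event) => {
    const category = card.closest<HTMLElement>(".category");
    if (!category) return;
    const name = card.textContent?.trim() || "card";

    if (event.key === "Enter") {
      // Enter toggles pick-up/drop, replacing the mouse drag
      held = held === card ? null : card;
      announce(`${held ? "Picked up" : "Dropped"} ${name}`);
    } else if (held === card && (event.key === "ArrowRight" || event.key === "ArrowLeft")) {
      // Arrow keys move a held card into the adjacent category
      const target =
        event.key === "ArrowRight"
          ? category.nextElementSibling
          : category.previousElementSibling;
      if (target instanceof HTMLElement && target.classList.contains("category")) {
        event.preventDefault();
        target.appendChild(card);
        card.focus(); // keep keyboard focus on the moved card
        announce(`Moved ${name} to ${target.dataset.label ?? "the next category"}`);
      }
    }
  });
});
```

The practical test in a session is then simple: can the participant complete the sort using only Tab, Enter and the arrow keys, and does their screen reader announce each move?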

Users shouldn’t have to declare any kind of disability before they use a digital product or service, and neither should they be asked to before a session testing it. It may, of course, be helpful for the facilitator to know in advance so they can discuss any adjustments to the session. But when this isn’t the case, to be more prepared for sessions, I’m going to:

  • Advocate, along with the WJ team, for accessibility to be considered from the outset and at every stage of a project
  • Factor in how assistive technologies may impact the user’s experience
  • Build my understanding of technology barriers and look for more accessible alternatives
  • Continue to ask if we can make any adjustments at the start of a session

Ensuring everyone’s safety throughout

The safety of everyone involved in testing sessions is our absolute top priority. Sadly, when conducting sessions with the public, it is possible that the facilitator and observers may experience unacceptable language or behaviour from participants. These risks obviously differ from those associated with in-person testing, and may come in the form of racist, sexist, homophobic or otherwise inappropriate and offensive comments.

After some of our team experienced racism during a testing session, we made our guidance more robust for participants, clients and those involved in sessions. This includes (from a section in our agency’s internal principles doc) reassuring the facilitator and observers that:

  • They are fully within their rights and encouraged to end the usability testing session immediately
  • There is no justification needed; this kind of language or behaviour is unacceptable in a work setting
  • They should feel no pressure or obligation to address the comments; the priority is to remove everyone involved from the unsafe environment

If the situation does occur, then either the facilitator or observer will end the call using wording such as:

‘Thank you for that and your thoughts on everything else. That is actually all we wanted to talk to you about today. Goodbye and thanks again.’

It is extremely unlikely that this will happen, but it’s something we take seriously and want to be prepared for.

Equally, we also let participants know at the start of sessions that:

‘If at any stage, you would like to pause or end the session for any reason then that’s absolutely fine, just let me know.’

We share this guidance with our clients and participants in advance, in an informed consent document, and at the start of sessions.

Working with charity clients, we’re also prepared for participants who may have experienced trauma, particularly when conducting discovery research but also during usability testing. We disclose this fully within the consent process when outlining potential risks to participants.

Other steps we will take include, but aren’t limited to:

  • Arranging a call before the actual testing session to outline the process, run through the discussion guide, and explore any topics that could be particularly problematic for an individual
  • Asking people to comment on an imagined situation rather than posing direct questions. This is a softer way of opening up a topic, in which their trauma may still inform their answers but does not become the focus of the conversation
  • Providing follow-up materials and methods of support for anyone who does find the experience triggering. These will be agreed upon and lined up beforehand so that they can be employed immediately, perhaps even during the session. This might mean having a counsellor, therapist or mental health professional on call

Jenny H Winfield has shared a great example of this in her post about keeping survivors safe during UX research, based on research she led with survivors of sexual assault.

I’ve been running regular remote testing sessions for less than a year, and I’m sure that the ability to deal with unpredictability comes with experience. As with most unexpected situations, you can only control your response.

Hopefully, by sharing this with a hive mind, we’ll hear about how you’ve dealt with the unexpected during testing and spark some ideas on how to make remote testing sessions safer and more fruitful for all involved.
