My recent experience with unmoderated usability testing

Simon Hoang · Published in Bootcamp
4 min read · Nov 8, 2020
Photo by Etienne Boulanger on Unsplash

Recently, in my current role, I conducted unmoderated usability testing on a website page that rolled out last month. Setting one of these tests up may seem relatively easy, but there are lots of important factors to take into consideration.

Why unmoderated usability testing?

One of the benefits of unmoderated usability testing is speed: it helps get a project out of the door faster. It saves time because the sessions are not monitored and don’t require any interaction between the facilitator and the participants. Participants can complete the test in their own time and space, and the UX designer does not have to schedule a time to observe them.

By using unmoderated usability testing, I was able to collect results faster, as participants completed the study without disrupting my daily workflow. The participants received instructions for the tasks, and their screens and voices were recorded, which allowed me to follow what they did, hear what they said, and understand the success/fail metrics for the tasks set out.

Essentially, this new page I designed for part of the product website needed to be put in front of users to see how it performed, to gather valuable insights from real users and to iterate further. Unmoderated usability testing helped me quickly gain results to validate my (and my team’s) assumptions.

The process

The prototype I designed was to help customers find answers and offer different ways to contact us on the website. To put the prototype to the test, I storyboarded extra screens to take the participants through a realistic journey, with tasks for them to perform. I also asked follow-up questions and asked participants to rate the difficulty of each task to obtain further insights.

Photo by Amélie Mourichon on Unsplash

The purpose of the unmoderated usability testing was to understand what customers think and feel as they interact with the page, and to get initial feedback so I could iterate for the MVP launch. I wanted to learn from users interacting with a website to find help or more information before speaking to an agent from the company. By pinpointing the areas they found difficult via follow-up questions, I was able to clarify what prevented them from doing this.

Now equipped with a script and a prototype, I needed to choose a tool. As my fellow UX team members in other squads were trying out different research tools, I reached out to a friend in the design industry, who recommended I try PlaybookUX. The tool is very straightforward to use, but the downside is that I had to recruit my own panel. I had already prepared my list, though, so I was good to begin. I familiarised myself with the software and started the testing process.

The sample included a range of participants to provide a representation of the different users interacting with the page. The sample was composed of:

  • An age range from early 20s to mid 40s
  • Females and males
  • Working professionals
  • Students
  • Car owners
  • Non-car owners

Because PlaybookUX doesn’t provide a participant panel, I had to gather my own. The drawback was that the participants I recruited needed to know how to use the software itself, and that wasn’t as straightforward as I thought. When I published the test, a few participants immediately reached out to say they were struggling to complete some of the tasks due to the usability of the software, so I put together instructions and republished the test.

Although there were a few bumps at the start, the results were a success. I gathered a wealth of useful data to help me validate my assumptions and iterate ready for the MVP.

I collated the data, analysed it, validated the hypothesis and wrote an action plan for the next steps. Future steps include data-driven design work (A/B testing, heatmaps, screen recordings and so on) and further usability testing to iterate on user feedback.
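If you like to sanity-check your numbers in code, here is a rough sketch of how per-task results could be summarised once exported to a spreadsheet. The column names below are placeholders I have made up for illustration; they are not PlaybookUX’s actual export format.

```python
# Illustrative sketch only: assumes a CSV where each row is one participant's
# attempt at one task, with a pass/fail flag and a 1-5 difficulty rating
# from the follow-up question. Column names are hypothetical.
import pandas as pd

results = pd.read_csv("usability_results.csv")

summary = (
    results
    .groupby("task")
    .agg(
        completion_rate=("completed", "mean"),   # share of participants who succeeded
        avg_difficulty=("difficulty", "mean"),   # mean self-reported difficulty (1 = easy, 5 = hard)
        participants=("participant", "nunique"), # how many people attempted the task
    )
    .sort_values("completion_rate")
)

# Tasks with low completion or high difficulty are the first candidates for iteration.
print(summary)
```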

I then presented the research and results to the wider business.

Photo by Chris Montgomery on Unsplash

The key takeaways

  • Just because you know how to use the testing software doesn’t mean your participants do. Prepare an instruction sheet or manual for your participants on how to use the software (if you’re recruiting your own panel)
  • When asking questions, make sure you’re asking the participants exactly what you want to know, and be specific. There have been times when participants answered questions that didn’t relate to the subject matter
  • When creating prototypes, be sure to use as much real-life content as possible to gain more valuable information
  • Ask for help where possible. Doing it on your own may seem manageable, but there will be things you miss, such as testing the prototype, getting a member of your squad to proofread your script, or organising a panel to test
  • Collating data can be time-consuming, so have a plan and be clear about what it is you want to capture, and ask your squad members to help
  • Lastly, enjoy it!

Thank you for reading : )

If you enjoyed this, please share it and follow. You can also reach out to me on LinkedIn Simon Hoang.


Product Design Lead @Moneyfarm. Excited about user-centred design, and the impact it can have on people’s lives. I also like to code. ⌨️ simonhoang.com