How to efficiently conduct unmoderated testing

Based on the example of our cooperation with our client, Optio

Katarzyna Zarzycka
Properly Studio
10 min read · Jun 20, 2023


More and more companies are shifting their priorities toward improving the user experience of their customers. Whether it is a start-up or a company that has been around for years, user experience is no longer just the domain of designers. It is becoming a staple for any company with an online presence. This is probably why many established companies are turning to usability research to discover problems and improve their products and websites.

Although we designers are experts in our field, we are not the users of the products we create. So to avoid creating solutions based on assumptions, we have to make space to evaluate our designs.

This is where usability tests come in handy.

One of our clients who decided to do usability testing is Optio. Optio is a company that has been operating in the Norwegian market for 5 years and helps businesses incentivize employees, manage equity, stay compliant, report costs, and much more. The portal is divided into two sections. The first is the participant portal, whose target audience is employees of companies from different industries such as banking, healthcare, or physical engineering. The other is the Admin portal, which helps HR departments or CFOs manage the equity of their employees. Apart from managing employee equity, the Admin portal also has a feature to view and manage the company’s cap table. As you can imagine, this is a specific and complicated industry, so the portal must be easy to use for both admins and employees.

Usability testing is undoubtedly an indispensable tool for creating intuitive and easy-to-use products, which not only improves user experience but is also valuable to the business. A well-designed website means not only a better NPS score (and better customer retention) but also reduced development time and costs.

Imagine that you have designed a product, and after its release it turns out that users have difficulty finding key features, and the improvements you made are not even noticeable to them. Compared to that, usability tests are neither expensive nor a waste of time, right?

What are unmoderated usability tests?

Unmoderated testing allows users to test the functionality of a website or application without the direct assistance of a moderator. The user performs certain tasks directly on the platform or prototype, and their actions are tracked and recorded. Depending on the testing platform, these actions can be collected and summarized in a report, or recorded and saved as a video.

Why did we decide to go for unmoderated testing?

While working on the Optio design, we needed to test several functionalities several times to get reliable feedback, and this testing method allows us to test multiple users at once. Depending on how the recruitment of users goes, it is sometimes possible to test a dozen or so people in one day. It is particularly helpful when timeframes are tight and it could be hard to synchronize the schedules of the coordinator/designer and the participant.

Another advantage of unmoderated testing is the ability to record the participants’ processes and responses. The platform that we have been using, UserTesting, allows us to record the participant’s screen, voice, and face at the same time. The whole process is recorded, so it can be viewed at any time, as many times as we need. This is very helpful for catching the point at which users had trouble performing a task, spotting which elements of the interface were not quite clear to them, and observing their general behavior during the test. Based on that feedback, we can efficiently implement the necessary changes and adjustments.

Steps to conduct efficient unmoderated testing

Checklist for unmoderated usability testing
1. Defining the study goals

Testing objectives will vary depending on what we want to find out during testing. The study goal can be how users navigate the site or how they go through a certain functionality, allowing us to detect potential flaws in the interface or information architecture.

In the case of Optio, we were involved in both task-based testing, such as going through the process of completing an order or settling a financial instrument, and testing how users understand the information architecture.
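To keep tasks aligned with goals, it can help to write the plan down in a structured form. Here is a minimal sketch in Python; the goals and task IDs are invented for illustration, not taken from the actual Optio study:

```python
from dataclasses import dataclass, field

@dataclass
class StudyGoal:
    """One thing we want to learn from the study."""
    description: str
    tasks: list[str] = field(default_factory=list)  # IDs of tasks probing this goal

# Hypothetical plan: every goal should have at least one task attached.
plan = [
    StudyGoal("Can admins complete an order end to end?", ["complete-order"]),
    StudyGoal("Does navigation match users' expectations?", ["find-cap-table", "explore-dashboard"]),
]

# Quick sanity check before writing the test script.
for goal in plan:
    assert goal.tasks, f"Goal has no tasks attached: {goal.description}"
```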

2. Choosing the testing software

Choosing the right software largely depends on which combination of functionalities lets us researchers interpret the results as accurately as possible. Nowadays there are many platforms available on the Internet. Some of them have extensive reporting, such as time bars or heatmaps (e.g. Maze or Useberry), while others allow for session recordings (e.g. UserZoom or UserTesting).

I would recommend a tool with the possibility of recording sessions. As the unmoderated nature of the test deprives us of direct interaction with users, listening to and watching recordings, in addition to the dry data, allows us to observe the emotions of respondents and collect more accurate results. Very often during recorded sessions the user gives much broader feedback and elaborates on it, which is essential when working on the results.

Examples of usability testing tools

3. Creating tasks

Scenario

Tasks are activities that we will ask participants to do on the interface being tested.

Before proceeding with tasks, it is necessary to define and explain the context of the scenario, so that the person performing the test can understand why they are doing it.

It is important to remember that the tasks should match the study goals. These activities must be planned in such a way that they guide the user to the section of the site we are interested in.

Furthermore, the tasks should be descriptive enough to point the user to a specific action without telling them exactly how to accomplish it. When writing the tasks, we need to be careful with the wording so as not to give the user too many details, such as where to click or where to look.

In addition, it is important to give the participant some brief context so they can step into the role of a real user of the site and picture the situation even better.

Example: Imagine that you want to add a New Award document for your employee. Please follow the steps to complete adding this document.
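If you keep the test script versioned or want consistency across studies, tasks can also be written down in a structured form. A small sketch; the fields and the success criterion are my own illustration, not part of any testing platform’s API:

```python
from dataclasses import dataclass

@dataclass
class Task:
    context: str      # scenario framing shown before the instruction
    instruction: str  # what we ask the participant to do, without UI hints
    success: str      # how we judge completion when reviewing the session

# The success criterion below is hypothetical, for illustration only.
add_award_document = Task(
    context="Imagine that you want to add a New Award document for your employee.",
    instruction="Please follow the steps to complete adding this document.",
    success="Participant reaches the confirmation that the document was added.",
)
```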

Types of tasks

Whatever the nature of the testing, it is also worth remembering warm-up open-ended questions. Usually, these are questions about work experience or experience related to the industry of the particular product we are testing; this way we can additionally verify whether any problems during the test are due to the user's lack of domain knowledge or to the design itself.

After the introduction, we can proceed to the main part of the test, that is, the tasks on the prototype.

Of course, we have a wide selection of task types, so we need to choose the ones that best fit our project.

At the beginning, the user has a moment to look at the prototype, and this is the space for them to say what they see or what particularly caught their attention.

Example: Look at the screen and tell us what information caught your attention first, and why.

A common type of task is the “closed task”, in other words a quantitative question, which has a clear indication of whether the task succeeded or failed. It is ideal for checking the usability of a particular feature of the product, and the only possible outcomes are the completion of the task or its failure.

Example: Imagine that one of your employees has not yet accepted his awards. Please send a reminder to Cody Fisher and notify him about the new awards that are pending.
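Because a closed task has a binary outcome, its results are easy to quantify once the sessions are reviewed. A tiny sketch of that tallying step; the pass/fail data is invented for illustration:

```python
# Hypothetical pass/fail judgments noted while reviewing each recording.
results = {
    "send-reminder": [True, True, False, True, False, True, True, True],
}

for task_id, outcomes in results.items():
    passed = sum(outcomes)
    print(f"{task_id}: {passed}/{len(outcomes)} succeeded ({passed / len(outcomes):.0%})")
```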

The user performing a task

It is very important to remind users at this stage to say out loud what they are doing and comment on their actions and observations.

The other type is the open-ended, or qualitative, task. It is characterized by the participant's freedom of choice in solving the task. The result will still be success or failure, but the different ways of achieving the goal are what matter here. A crucial aspect is to observe how users navigate the site and how they explore particular functionalities.

Example: Imagine that you just started using the Optio portal to view and manage your equity. We would like to ask you to begin by exploring the Dashboard.

In addition to typical tasks where the user has to perform some action on the prototype, equally important are verbal or written response questions, which complement the tasks performed and are a perfect opportunity to collect broader feedback on the tested site.

Examples: What is your overall impression of the processes you performed? What bothered you while testing the prototype? Did you miss any information you needed on the prototype?

4. Pilot test

Any best-practice guide will tell you that before releasing the target number of tests, you first need to run a pilot test.

I fully agree with this, as experience has shown more than once that this is the way to detect errors that need to be corrected before the next sessions. The errors can be related to the script: unclear questions, or scenarios that are too long, too short, or too in-depth. There may also be problems with the prototype itself, such as buttons that do not work, a wrong link, or a link that is not available to people outside the organization. If we detect and correct these inaccuracies, we can count on reliable results from which we can draw valuable conclusions.
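Some of these prototype problems can even be caught mechanically before the pilot. A minimal sketch, assuming a shareable prototype link; the URL is a placeholder, and a successful HTTP check only proves the link is publicly reachable, not that every screen works:

```python
import urllib.request

PROTOTYPE_URL = "https://example.com/prototype/abc123"  # placeholder link

def link_is_public(url: str) -> bool:
    """Return True if the link opens without authentication from outside the org."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except Exception:
        return False

if not link_is_public(PROTOTYPE_URL):
    print("Fix the prototype link before inviting pilot participants.")
```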

Example of the task performed by the user

5. Participants

Before launching the test, it is necessary to define a target audience that corresponds to the needs of the test. Testing tools such as UserTesting have a large database of testers that we need to filter for our needs using a screener.

Screener questions are multiple-choice questions designed to eliminate testers who do not meet the needs of our test (for example, those who do not have enough knowledge about finance), so we don’t end up collecting insufficient or misleading feedback. Such participants could also have difficulty understanding the vocabulary of the field and might not be able to complete all the tasks.
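The accept/reject logic behind a screener is simple to express in code. A sketch of the idea; the question, options, and qualification rule are invented, not copied from the actual Optio screener:

```python
# Each answer option either qualifies or disqualifies a tester.
QUESTION = "How often do you work with employee equity or stock options?"
OPTIONS = {
    "Daily or weekly": True,
    "A few times a year": True,
    "Never": False,
}

def qualifies(answer: str) -> bool:
    """Accept only testers whose answer maps to a qualifying option."""
    return OPTIONS.get(answer, False)

print(qualifies("Never"))            # False -> screened out
print(qualifies("Daily or weekly"))  # True  -> invited to the test
```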

Part of the screener

6. Analyzing results

After the tests are over, it is time to look at the results.

When we use a tool with a recording option, we should watch the recordings while making notes.

Platforms such as UserTesting also prepare ready-to-use metrics, such as time spent on a task or the number of interactions, but this is only quantitative data. If we care about qualitative results in our research, we need to study the recordings carefully and organize the findings.

How we organize the results depends largely on us designers; the important thing is that they are displayed clearly and visibly enough that we can easily summarize the notes. Some designers/researchers use a spreadsheet in Excel for this purpose; others use whiteboard tools (e.g. Miro, FigJam) or ready-made templates.

In my work on the Optio test results, the first tests were reported in Excel, but after some time I found sticky notes more transparent. I used FigJam, where I could quickly and easily illustrate repeated responses and comments. The tool also gives the freedom to format and move elements, which is essential for me in the maze of data and which a spreadsheet does not offer.
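Whatever the medium, the underlying activity is the same: tagging observations and counting how often they repeat across sessions. A tiny sketch of that step; the tags are invented for illustration:

```python
from collections import Counter

# Hypothetical observation tags noted per session while reviewing recordings.
sessions = [
    ["missed-reminder-button", "liked-dashboard"],
    ["missed-reminder-button", "confused-by-cap-table"],
    ["missed-reminder-button"],
]

# Issues that recur across sessions are the strongest candidates for redesign.
tally = Counter(tag for session in sessions for tag in session)
for tag, count in tally.most_common():
    print(f"{tag}: seen in {count} of {len(sessions)} sessions")
```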

The main view of the UserTesting platform, with new results displayed

Once we have organized the results, we can summarize the strengths and weaknesses of our product and which elements caused problems for users. Knowing all this, we can write recommendations for further work on the product: what needs to be changed, what still needs work, and what should be left as is.

In the end, the best practice is to deliver a comprehensive report so that every person on the team can have an overview, from the purpose and process of the tests to the results and recommendations.

Example of noting the results using FigJam

Final thoughts

Unmoderated usability tests are undoubtedly worth considering: a very helpful tool when done well. In some respects they are superior to moderated tests, as they are faster to conduct and often cheaper, and the software tools that we work with can provide a large sample size from all over the world.

Of course, there are also some downsides, such as less control over the study and thus less room for error. Questions can be misunderstood, and the lack of a moderator makes it impossible to guide the participant back on track.

Another issue that researchers face is professional testers taking part in the study. Sometimes we have no control over this, especially when choosing platforms with their own user pool.

Nevertheless, if we meticulously plan the tests, we can minimize the risks mentioned above and obtain valuable results that will improve the experience of future users of the product we are designing.

In addition, testing reduces the time and cost of implementation, because we can detect errors before users encounter them on the live platform. That allows us to avoid making changes not only at the design level but also at the implementation level, which would involve many professionals and be much more expensive.
