How to analyse usability testing results
Usability testing involves participants completing set tasks using a digital product and providing feedback. It can be conducted moderated (with a facilitator present to support the process) or unmoderated (participants complete the test independently), either in person or remotely. Researchers or UX designers then review and analyse the results to identify opportunities to improve the usability of the digital product. In this article, we outline the process of analysing your usability testing results.
Understanding usability test results
Usability testing is an important research method within the UX design process. It helps to identify usability issues and gather real insights from users to enhance the user experience of a digital product. Results from usability testing can include quantitative metrics and qualitative feedback, depending on the type of testing and the questions asked. Following analysis, usability testing results should be used to identify recommendations on opportunities to improve the usability of the digital product.
Methods for analysing usability data
When analysing qualitative results at Make it Clear, we first pull results from all participants into a digital whiteboard tool such as Miro. This means we can see all of our results in one place, enabling easier comparisons and contrasts as well as a collaborative approach to analysis. We use a set format for this board and utilise the tagging functionality to support the categorisation of insights. The process of grouping insights and identifying themes is conducted by a multidisciplinary team of research and UX roles who have been involved in the testing process; this helps to ensure an unbiased approach to identifying recommendations. Once we have grouped findings, we review them in more detail, breaking them down into further themes where required and discussing what they mean, i.e. turning a finding into an insight.
Quantitative results most typically come from unmoderated usability testing. Unmoderated testing is usually conducted via a research platform such as UserTesting, which allows researchers to set up a usability testing study and participants to complete it when it suits them. These platforms will often perform some analysis of the quantitative data on your behalf, such as calculating time spent and creating averages.
Key metrics in usability testing
As previously mentioned, quantitative metrics are most commonly captured within unmoderated usability testing. This is partly because unmoderated testing is often deployed at a much larger scale than moderated testing, i.e. more participants are usually involved, meaning more reliable averages. Metrics often captured in this type of research include:
- Success rate
- Time on task
- User satisfaction
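To illustrate how these metrics are calculated, here is a minimal sketch in Python using entirely hypothetical session data: "success" marks task completion, "seconds" is time on task, and "satisfaction" is an assumed 1–5 post-task rating. Real research platforms compute these figures for you; this simply shows the underlying arithmetic.

```python
from statistics import mean, median

# Hypothetical results for one task across four participants.
sessions = [
    {"success": True,  "seconds": 42,  "satisfaction": 4},
    {"success": True,  "seconds": 65,  "satisfaction": 5},
    {"success": False, "seconds": 120, "satisfaction": 2},
    {"success": True,  "seconds": 58,  "satisfaction": 4},
]

# Success rate: proportion of participants who completed the task.
success_rate = sum(s["success"] for s in sessions) / len(sessions)

# Time on task: often reported for successful attempts only;
# the median is more robust to outliers than the mean.
completed_times = [s["seconds"] for s in sessions if s["success"]]
time_median = median(completed_times)

# User satisfaction: average rating across all participants.
satisfaction = mean(s["satisfaction"] for s in sessions)

print(f"Success rate: {success_rate:.0%}")        # 75%
print(f"Time on task (median): {time_median}s")   # 58s
print(f"Satisfaction: {satisfaction:.2f}/5")      # 3.75/5
```

With a larger participant pool, as in unmoderated studies, these averages become more stable, which is why quantitative metrics are usually reported from that type of testing.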
Creating a usability test report
Your usability testing report should have four sections: background, methodology, key findings, and recommendations. The background introduces the context of the testing, and the methodology provides an overview of how the research was conducted. The key focus of the report should be on the insights and recommendations. It helps to structure the insights by playing them back in easy-to-understand groupings, such as per task or per page. Providing a visual reference, such as a screenshot of the page in question or a short video clip from the prototype or testing session, is very useful in helping the reader understand exactly which part of the interface a theme references.
Read the full article at: https://makeitclear.com/how-to-analyse-usability-testing-results/