Why You Should Use Real, Relevant Data in Usability Testing
Recently at SEEK, we completed the fourth round of testing and research for a product I'm working on.
It has been the most insightful and fruitful series of research sessions I've ever run. There are a few reasons for that, including great attendance from our wider team, the unexpected insights we unearthed and, most importantly, refining our design to a point where we are confident we can build something our users will use, and use easily.
Testing with high-fidelity prototypes
The product we are building is part of our job ad posting flow. With thousands of job ads posted every day on SEEK, this flow serves a wide variety of employers across every industry imaginable.
We tested with high-fidelity prototypes built using a combination of Axure and HTML/JavaScript code.
Ensuring high relevance for every participant
As usual, we prepared every aspect of the testing sessions with great attention to detail: from participant recruitment and prototype building through to facilitator scripts and the recording setup.
However, the thing that made the biggest difference this time was that we used not just real data, but highly relevant data. For each of the 20 participants, we looked up their job posting history and made sure that every aspect of the job posting process in our test reflected the nature of the job ads they had posted recently. The product being tested was customised for each participant to display data that would be highly relevant to them.
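To give a sense of how lightweight this customisation can be on the prototype side, here is a minimal sketch in the spirit of our HTML/JavaScript prototypes. Everything in it is hypothetical: the data/ fixture path, the JSON shape and the CSS classes are illustrative, not our actual implementation. The idea is that the prototype reads a participant ID from its URL and loads a JSON fixture curated for that participant.

```js
// A hypothetical sketch, not our production setup: the participant ID comes
// from the prototype's URL (e.g. prototype.html?participant=p07), and each
// participant has a JSON fixture curated from their real posting history.

async function loadParticipantData() {
  const participantId =
    new URLSearchParams(window.location.search).get('participant') || 'default';

  // One curated fixture per participant, e.g. data/p07.json
  const response = await fetch(`data/${participantId}.json`);
  if (!response.ok) {
    throw new Error(`No fixture found for participant "${participantId}"`);
  }
  return response.json();
}

loadParticipantData().then((data) => {
  // Inject the participant's own details into the prototype screens
  document.querySelector('.company-name').textContent = data.companyName;
  document.querySelector('.recent-ads').innerHTML = data.recentAds
    .map((ad) => `<li>${ad.title} (${ad.location})</li>`)
    .join('');
});
```

With a setup along these lines, the facilitator only changes the URL between sessions; the prototype itself stays untouched, and the curation effort lives entirely in the data.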
The returns on investment were invaluable
Although customising the test data for each participant was a lot of work, the payoff was invaluable.
Our participants were highly engaged throughout their sessions. The difference in how much attention they paid to the finer details of the new product was huge compared with previous sessions I've facilitated and observed. Not having to say things like “just imagine it had the kind of data you would use” meant the sessions flowed much more smoothly. Participants were engrossed in the task, often stopping to really think and consider their options, as if they were posting a job ad for real.
The result was the kind of deep insight that simply isn't possible when you use relatively generic data in your prototypes, even if that data is ‘real’. We gathered the insights we expected to, and as a bonus received a vast amount of unprompted feedback that has helped us fine-tune the product and refine a number of smaller details.
A little extra effort, a whole lot of benefit
Is the effort of creating a custom prototype for each user you test with worth it? Do the deeper insights you gather justify the considerable additional work?
In our case, absolutely.
When building a complex, high value product, cutting corners on the data you use in your prototypes devalues the process of user research, ultimately producing shallower insights.
Not every product will require this level of prototype customisation, so understanding how complex your product is and exactly what you need to get out of your research is critical.
In design, using real data makes your designs more honest and confronts, upfront, the issues that will occur in production; Josh Puckett and Mark Jenkins have both written about this recently.
In research and testing, the relevance of the data to your participant plays a different role: it helps you, as the designer, gather the best possible insights, which in turn inform the best possible designs.
Credits
The great results described above would not have been possible without three people in particular: our Senior Researcher, Mimi Turner, who was integral in structuring the research sessions; Emma Haslip, our Structured Data Curator, who performed a huge amount of data analysis and curation to make sure we could present each participant with highly relevant data; and finally Misha Moroshko, one of our UI developers, who helped implement some of the trickier interactions in JavaScript.