Usability Test Reporting: A Lean Approach

During my time at the Bentley User Experience Center, usability test reports were long, detailed PowerPoint presentations full of screenshots and recommendations. They were well organized, but typically a bear to get through, and presenting them could easily take an hour and a half or more. As we started working with more clients who were taking a rapid, iterative approach to testing, we began streamlining our findings into a Word-based, outlined memo format that still includes verbatims, participant numbers, and other details.

At the Harvard Business Review, we’re trying to balance methodological rigor in our testing with a leaner way of reporting results, one that helps us push changes more rapidly. After some trial and error, we’ve created a reporting format that reads more like a memo but still hits all the important points.

Getting the data

Starting with a clear research goal helps inform the test plan.

For our first round of usability tests, we planned a relatively simple test based on the task of finding an article to read on hbr.org, then examining how people would work with that article, such as sending it to a colleague or saving it to read later. In addition to the tasks, we had people tell us about their job role and some of the challenges they face at work, both to frame the task and to gather data for personas down the road. Finally, since this was a benchmarking study, we made sure to test with a larger sample: 9 desktop and 9 mobile participants.

Gathering insights

One of the things I appreciate most about usertesting.com (and no, I’m not trying to sell you on it!) is that, even with a pretty rigorous screener, we got test results within 12 hours. Within a few hours of launch, I could watch video of people using our product, taking notes and creating highlight clips as I went. After watching all 18 videos, I had compiled a list of the major problems we needed to tackle, with examples, and had identified several things we seemed to be doing well.

Live whiteboarding while watching test videos helps align stakeholders on key recommendations.

While watching the videos, I outlined the key things I was seeing in the main Word document, adding examples as sub-points under each main bullet. I also clipped interesting moments and quotations as I watched, to work into highlight reels later. Once all the videos had been watched, I created a 2–5 minute highlight reel for each major issue, and another for the positive feedback.

Organizing the report

The report itself is formatted as a memo in Word. Findings are organized by theme, with positive notes first, followed by areas for improvement. The basic outline is as follows:

  1. Background: a quick sentence or two about why we ran the test.
  2. Study Logistics: a bulleted list with information about the participants, how they were screened, where they came from, etc. This includes a list of research objectives decided before the study was planned, and a copy of the test protocol, taken directly from the test plan on usertesting.com and presented as a simple numbered list.
  3. Key Findings: This is where the meat of the report is. In about a page and a half, we go through the key findings, starting with the top positive findings, followed by the most salient usability issues. Each usability issue includes a couple of examples and a link to the associated highlight reel, so stakeholders can watch users in action.
  4. Appendix: extra information that stakeholders might find interesting, or that might come up in discussion. For us, this included: a breakdown of participants; a list of screener questions; selected answers to the post-test questions; and the Net Promoter Score given by participants (a quick worked example of the calculation follows below).
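
If you haven’t calculated NPS before, the formula is simple: the percentage of promoters (ratings of 9–10 on the “how likely are you to recommend” question) minus the percentage of detractors (ratings of 0–6). Here’s a minimal sketch in Python; the ratings are made-up illustrative values, not data from our study.

    # Standard Net Promoter Score calculation: % promoters minus % detractors.
    # The ratings below are illustrative placeholders, not our study data.
    def net_promoter_score(ratings):
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return round(100 * (promoters - detractors) / len(ratings))

    sample_ratings = [10, 9, 8, 7, 9, 6, 10, 8, 5]
    print(net_promoter_score(sample_ratings))  # 22 (4 promoters, 2 detractors out of 9)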

Presenting the results

When the report is finished, we schedule a meeting with the tech, design and product teams, as well as any key stakeholders touched by the findings. The report is sent out prior to the meeting, so people get a chance to read it and watch the videos. The meeting is run much like a debrief; after a brief explanation of the test purpose and study logistics, we talk about the positive results and the key usability issues, and show highlight reels where needed to emphasize key points. There’s room for questions and discussion when talking about each finding, so stakeholders can dig into the points that are most salient to them.

Once the findings are presented, we open the floor to discuss a few of the key areas we want to focus on from a “fixing things” perspective, as well as the things we’re going to need to dig into with more research. Those recommendations are collected, expanded upon, and then added to the report as a post-meeting task. From there, I work with the project manager and product owner to identify the tasks needed and the stakeholders involved, and we start moving things through Jira (our project management system).
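
For what it’s worth, the “moving things through Jira” step can be as simple as filing one ticket per agreed-on finding, each linking back to the report memo and highlight reel. Here’s a rough sketch using Jira’s create-issue REST endpoint; the URL, project key, credentials, and ticket text are placeholders, and the exact fields depend on how your Jira instance is configured.

    import requests

    # Hypothetical example: filing one ticket per agreed-on usability finding.
    # URL, project key, and credentials are placeholders for illustration only.
    JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"
    AUTH = ("ux.researcher@example.com", "api-token-goes-here")

    issue = {
        "fields": {
            "project": {"key": "WEB"},          # placeholder project key
            "issuetype": {"name": "Task"},
            "summary": "Usability finding: <short issue title>",
            "description": "Link to the report memo and highlight reel goes here.",
        }
    }

    response = requests.post(JIRA_URL, json=issue, auth=AUTH)
    response.raise_for_status()
    print(response.json()["key"])  # e.g., "WEB-123"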

Extending the approach

Over the past six months, we’ve been able to extend this reporting approach to our longer, moderated studies. While the format changes a bit to accommodate insights that aren’t strictly usability-related, our basic approach still includes:

  1. Starting with a clear research goal,
  2. Focusing analysis and reporting on the top 10–15 usability issues and recommendations, and
  3. Working with product managers directly to prioritize needed changes in the ticket management system.

This has resulted in quicker turnaround on both usability studies and the related improvements.
