Quantifying the User Experience, Qualitatively

A practical guide to UX Benchmarking

Brooke Kao
Suitcase Words
5 min read · Jan 26, 2022


A common issue in the UX industry is proving our business value. Metrics like conversion rate and retention are king to so many companies, and UX certainly influences those factors. But how do you know it was UX, not a well-timed media buy? Or a sale?

Social media and CRM have analytics tools to prove the ROI on their efforts. But UX is frequently left behind, cobbling together results from A/B tests and shouting in the wind. Behavioral analytics tools can tell you a lot, but they’re also expensive.

Though Mad Men is pretty outdated at this point, one quote from Don Draper still rings true to me:

“If you don’t like what people are saying, change the conversation.”

One way of changing the conversation for UX is Benchmarking.

What is UX Benchmarking?

UX Benchmarking measures and compares the user experience of a site. Common ways to compare are by device (desktop vs. mobile), by industry competitor (T&N vs. Purple), or against historical data (this year vs. last year). We measure the UX through metrics, most typically ease of use. Here is a sample visual of what UX Benchmarking of ease of use looks like.
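To make that concrete, here is one hypothetical way a single benchmarked task could be structured in Python. Everything in it, including the task wording, scores, and comparison groups, is invented for illustration rather than taken from a real study.

```python
# Hypothetical structure for one benchmarked task. Every number below is
# made up; a real benchmark would pull averages from your usability tests.
benchmark_record = {
    "metric": "ease of use (1-5)",
    "task": "Find the estimated delivery dates of your items",
    "comparisons": {
        "device":     {"desktop": 4.6, "mobile": 4.1},   # device comparison
        "industry":   {"T&N": 4.6, "Purple": 4.3},       # competitor comparison
        "historical": {"2020": 4.2, "2021": 4.6},        # year-over-year comparison
    },
}
```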

Why we do it

No stakeholder can really argue with the fact that users need to be able to use the thing we’re building. We want to ensure we’re always creating best-in-class user experiences. UX benchmarking is a way to quantify whether we’re improving or regressing by measuring how easy or hard it is for users to complete key tasks on the site.

Most importantly: UX Benchmarking can act as an annual KPI for UX-ers. In 2020, we were able to raise the ease of use of certain pages on Tuft & Needle by 5%. People managers love data-driven results, and individual contributors love a good reason to get an “exceeds expectations” on their performance review. ;)

Sample of a usability KPI informed by UX Benchmarking

How to do it

UX Benchmarking should be done by eComm product teams quarterly or when major new features are released on the site. Here are the roles and responsibilities:

  1. Research, design, and product should collaborate on the key UX tasks they want to measure in their respective product teams. These tasks should be important enough to have an impact on conversion. For example: “Find and describe the differences in firmness between the mattresses,” “Add the item to your cart,” “Find the estimated delivery dates of your items.”
  2. Research will set up a usability test on UserTesting.com or another unmoderated usability testing platform.
  3. Participants will go through the tasks and then rate the ease of use of each task on a scale of 1 (very difficult) to 5 (very easy). It depends on the study, but 15–20 participants are typically sufficient to derive an average ease-of-use score for a task.
Graph showing users rating the ease of use of a task on a scale of 1–5
  4. Research will collect the data and create a UX Benchmark analysis (a rough sketch of this step follows below). This is the part where I shout out MeasuringU’s Quantifying the User Experience for providing helpful tools for the analysis.
Example benchmark analysis of ease of use, last year vs. this year
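As a rough sketch of that analysis step, assuming the 1–5 ratings have been exported from the testing platform: the average ease-of-use score and a t-based confidence interval per task are only a few lines of Python. The ratings and field names below are invented, not UserTesting.com’s actual export format.

```python
from statistics import mean, stdev
from scipy.stats import t

# Invented 1-5 ease-of-use ratings per task, one number per participant.
ratings_by_task = {
    "Find financing info": [5, 4, 2, 3, 5, 4, 1, 3, 4, 5, 2, 4, 3, 5, 4],
    "Add item to cart":    [5, 5, 4, 5, 5, 4, 5, 3, 5, 5, 4, 5, 5, 4, 5],
}

def summarize(ratings, confidence=0.95):
    """Return the mean ease-of-use score and a t-based confidence interval."""
    n = len(ratings)
    m = mean(ratings)
    margin = t.ppf((1 + confidence) / 2, df=n - 1) * stdev(ratings) / n ** 0.5
    return m, (m - margin, m + margin)

for task, ratings in ratings_by_task.items():
    score, (low, high) = summarize(ratings)
    print(f"{task}: {score:.2f} (95% CI {low:.2f}-{high:.2f}, n={len(ratings)})")
```

MeasuringU describes more rigorous methods for rating scales and completion rates, but a mean plus a confidence interval is a reasonable starting point for a benchmark chart.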

Ok we have the scores, now what?

The ease of use scores should never be the end-all, be-all of this exercise, especially given the inevitable margin of error on a low sample size. Qualitative data is essential to driving those “aha” UX moments. Remember those usability tests you set up? The videos of the users completing the tasks contain a wealth of evidence of what’s not working and why.

In this example, you can see users rating the difficulty of the task. In the interest of efficiency, you can just watch the videos of users who rated the task lower than a 5.
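Here is a tiny sketch of that triage, assuming each session is exported with the participant’s rating and a link to the recording; the field names and URLs are hypothetical, not the platform’s actual export format.

```python
# Invented session data: one entry per participant for a given task.
sessions = [
    {"participant": "P01", "rating": 5, "video": "https://example.com/p01"},
    {"participant": "P02", "rating": 2, "video": "https://example.com/p02"},
    {"participant": "P03", "rating": 4, "video": "https://example.com/p03"},
]

# Only queue up the recordings where the task was rated below a 5.
to_watch = [s for s in sessions if s["rating"] < 5]
for s in to_watch:
    print(f'{s["participant"]} rated it {s["rating"]}: {s["video"]}')
```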

By watching multiple users stumble through tasks, patterns will begin to emerge. This will help you compile your key themes when you create your report.

Benchmarking report that compares the ease of use of a Mattress Compare page across 4 different brands. Insights into what’s working and what’s not are annotated at the bottom.

But wait, how does all this = $$$?

Here’s Benchmarking in practice. Through previous user interviews, we learned that customers find it helpful to see payment methods, particularly financing, before checkout. However, we scored relatively low on the ease of finding both.

The graph shows ease of use scores for finding payment options and financing info. The leftmost teal bars show our site scored 4.2 and 2.5, respectively, significantly lower than the five other sites we tested.

One of our designers reviewed this information, watched the videos, and proposed an A/B experiment in which we display payment methods on the cart page.

Left shows the “A” variant of our cart page with only text describing financing info. Right shows the “B” variant, where we display logos of accepted payment methods.

The experiment was a huge success: We saw an increase of $13k in revenue per day!

But how did we do from an ease of use perspective? We ran Benchmarking again at the end of the year.

The left teal bar shows our score at the beginning of the year. The right yellow bar shows the end of the year. You can see that the ease of use of our payment options task went from 4.2 to 4.8, an increase of roughly 14%!
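If you keep the raw ratings from both rounds, the comparison itself is simple arithmetic, and a quick significance check can tell you whether the change is bigger than sampling noise. The sketch below uses invented samples whose means land on the 4.2 and 4.8 above, and Welch’s t-test as one common way to compare two benchmark rounds; it is not necessarily the exact method used in the original analysis.

```python
from statistics import mean
from scipy.stats import ttest_ind

# Invented 1-5 ratings for the same task from two benchmark rounds;
# their means work out to 4.2 and 4.8 respectively.
start_of_year = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4, 5, 4, 4, 5, 3]
end_of_year   = [5, 5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 4, 5, 5, 5]

m1, m2 = mean(start_of_year), mean(end_of_year)
change = (m2 - m1) / m1 * 100

# Welch's t-test: is the difference larger than sampling noise would suggest?
t_stat, p_value = ttest_ind(end_of_year, start_of_year, equal_var=False)

print(f"Ease of use: {m1:.1f} -> {m2:.1f} ({change:+.0f}%), p = {p_value:.3f}")
```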

Quite literally proving that good UX means good business. Try it out and let me know what you think.
