Measuring UX — A Brief Introduction to Attitudinal Metrics

Markéta Kučerová
Published in DESIGN KISK · 8 min read · Nov 23, 2020

So you designed a product, you did a few rounds of usability testing, you talked to the users and now you are ready to say which parts of the product work well and what should be improved next.

But wait, what if you need some quantitative data? Maybe the stakeholders want to see numbers rather than words, or you need something more measurable so you can track your progress over the long run. Either way, you will soon find yourself reading about UX metrics and how they can help you get just the data you want.

UX metrics are on the evaluative quantitative side of UX research: they help us validate our designs by collecting a large amount of data. They can tell us what is happening, but they alone won’t tell us why. Essentially, they allow us to measure and compare: measure task complexity, user satisfaction, or usability of the system, and compare the results between different designs or to designs of our competitors.

Behavioral vs. Attitudinal Metrics

Let’s see what you can measure in the first place. You can measure either the user’s behavior: how many users completed a task and how long it took them; or their attitude: did they find the task overly complex, did they feel confident using the system? Based on what you want to measure, you can choose one of many standard behavioral or attitudinal metrics, or even come up with your own.[1]

Examples of behavioral and attitudinal metrics. Adapted from [1]

For now, we will set behavioral metrics aside and have a closer look at attitudinal ones, which can be especially useful for planning a product strategy and for continuously tracking progress.

Measuring Attitudes

Attitudinal metrics consist of asking the user one or a few questions concerning their impression of the system. Basically, we ask the user if they liked our system, but we wrap it in carefully chosen questions in a particular order and with a particular answer scale. This way we can get more accurate, standardized, and repeatable results than if we just asked the obvious “Did you like our product?” question. We can also address particular aspects of the “liking”, such as loyalty (would the user recommend our product?), usability, and credibility (is your product trustworthy?).[2]

Depending on what we want to find out, these metrics can be used in different stages of the design process and the project’s lifecycle. In the design process, a questionnaire can be placed directly on the product’s website or can be handed to usability testing participants at the end of a testing session. In the project’s lifecycle, measuring should be conducted at the product launch, before planned product improvement, after the improvement, and also continuously in regular intervals to keep track of any changes.

Some attitudinal metrics can be used to benchmark our product against other products in the industry or our competitors. We can also compare the results against our own previous results to track progress over time, or we can compare the results of two competing solutions we designed to perform A/B testing. If we measure several features of our product at the same time, the results can help us decide which parts of our system need our attention or which direction the budget should go.

Knowing what metrics are good at, we should also be aware of their limitations. First of all, the metrics show how users feel about our product but not why. To find out more, we need to combine the results of our measurements with other research methods. The next limitation stems from the nature of measuring attitude: attitudinal metrics tell us how users feel about their actions, not how they objectively performed. For instance, a user may find a certain task easy to complete while failing to actually complete it — maybe they didn’t understand what the task was in the first place.

If we know some parts of our system are more favorable than others, it may be tempting to attach the metrics to the nicer parts. For example, buyers are likely to feel satisfied after their purchase, and less likely to be happy once they have to reach customer support because their purchase didn’t go as expected. To get an overall picture of the customer experience, it is essential to measure users’ attitudes at a variety of touchpoints with the system.

If you are a digital products user (and if not, how come you are reading Medium today?), you must have come across a website or two asking you about your impression of their product. Odds are it was NPS (Net Promoter Score), perhaps the most common attitudinal metric, which goes “On a scale of 0 to 10, how likely are you to recommend our product to a friend or colleague?” Or maybe you were asked “Did you find our website useful today?” and offered five emojis to choose from, seeing one of many variations of a CSAT (Customer Satisfaction) survey. Or maybe you encountered a longer questionnaire somewhere? Let’s have a closer look at some of the most common attitudinal metrics.

Net Promoter Score (NPS)

NPS is a customer experience metric rather than a UX one. The basic way to use it is to ask users how likely they are to recommend your product to other people on a scale of 0 to 10, where 0 is not at all likely and 10 is extremely likely. Based on their rating, users are classified as detractors (0–6), passives (7–8), and promoters (9–10). The total score is calculated by subtracting the percentage of detractors from the percentage of promoters, which gives a number between -100 and 100.

Net Promoter Score scale with 3 user groups: detractors, passives, and promoters.
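The calculation above can be sketched in a few lines of Python. The `nps` helper below is hypothetical (not part of any standard library) and uses the conventional cutoffs: detractors rate 0–6, passives 7–8, promoters 9–10.

```python
def nps(ratings):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) only
    affect the denominator. The result lies between -100 and 100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 respondents:
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 5, 3]))  # → 30.0
```

Note that the passives still count toward the total number of respondents, which is why ignoring them is not the same as dropping them from the data.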

Pros

  • Easy and quick to answer (from the client side)
  • Easy to implement and understand (from the product side)
  • You can use the score to benchmark your product to others in the industry
  • Resonates with the stakeholders

Cons

  • It asks about the probability of hypothetical future behavior
  • It ignores the fact that some people just do not recommend products to other people, not even if they like them. Or that they might not know any people who could find the product useful.
  • Users could perceive the question differently based on their language and cultural background.
  • Each user can have a different sense of what a 6 out of 10 means.[3]

NPS is one of the most commonly used standard metrics for customer experience. It’s easy to understand and implement but should not be used as a standalone metric. To find out where your product is failing and how to change it to gain more promoters, you need to combine NPS with other metrics or usability testing.

Customer Satisfaction (CSAT)

Similar to NPS, the Customer Satisfaction survey consists of only one question: How satisfied are you with our product? It can be found all over the internet, and it takes on many different forms. Respondents rate their satisfaction on a scale of 1 (very unsatisfied) to 5 (very satisfied), presented as numbers, text, or emojis. The overall CSAT score is the percentage of satisfied and very satisfied users among all respondents.

Example of Customer Satisfaction survey.
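Since CSAT is just a percentage of top-two-box answers, the scoring is even simpler than NPS. The `csat` helper below is a hypothetical sketch, assuming a 1–5 scale where 4 and 5 count as satisfied.

```python
def csat(ratings):
    """Compute CSAT as the percentage of respondents who chose
    4 (satisfied) or 5 (very satisfied) on a 1-5 scale."""
    if not ratings:
        raise ValueError("need at least one rating")
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

# 5 satisfied answers out of 8 respondents:
print(csat([5, 4, 4, 3, 2, 5, 1, 4]))  # → 62.5
```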

System Usability Scale (SUS)

Compared to NPS and CSAT, which are more customer experience metrics, SUS digs deeper into the actual user experience of a product. The questionnaire is composed of 10 statements, of which 5 are positively worded (e.g. I thought the system was easy to use) and 5 negatively (e.g. I found the system unnecessarily complex). The user answers on a scale of 1 (strongly disagree) to 5 (strongly agree). The overall score is computed by adding and subtracting the answers following specific rules (described with examples here), and the result always falls between 0 and 100.

SUS questionnaire. Adapted from [3]
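The “specific rules” are simple enough to sketch in Python. In standard SUS scoring, odd-numbered (positively worded) items contribute answer − 1, even-numbered (negatively worded) items contribute 5 − answer, and the sum (0–40) is multiplied by 2.5 to land on a 0–100 scale. The `sus` helper is a hypothetical name for this sketch.

```python
def sus(answers):
    """Score a SUS questionnaire: 10 answers on a 1-5 scale.

    Odd items (positive wording) contribute answer - 1; even items
    (negative wording) contribute 5 - answer. The summed contributions
    (0-40) are scaled by 2.5 to give a score between 0 and 100.
    """
    if len(answers) != 10:
        raise ValueError("SUS needs exactly 10 answers")
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5

# Maximally positive responses: 5 on odd items, 1 on even items.
print(sus([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A neutral respondent (all 3s) lands exactly at 50, which is one reason a SUS score should not be read as a percentage of satisfied users.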

Now, the cool thing about SUS is that you can benchmark your product against other products in your industry that all used the same 10 questions in the same order. It is an old metric (we’re talking 1986!), so it lets you put your score in context against both current and former competitors. You can also use NPS and CSAT scores as benchmarks, but a SUS score may be more interesting because it is the result of more than just one question.

Because of its length, the survey rarely pops up on websites. Instead, it can be used as part of a more in-depth usability test: after the user finishes working with the system, they are asked to fill in the SUS survey.

Usability Metric for User Experience (UMUX)

UMUX is often regarded as a shorter version of SUS. It consists of 4 statements (2 positive and 2 negative) with a 7-point answer scale of 1 (strongly disagree) to 7 (strongly agree). While SUS focuses more on perceived learnability and usability, UMUX measures usability by “assessing effectiveness, efficiency, and satisfaction”[4]. There is also UMUX-Lite, an even shorter version of UMUX which consists of the 2 positive statements.

UMUX questionnaire. Adapted from [4]
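UMUX scoring can be sketched the same way as SUS, just with 4 items on a 7-point scale. The sketch below assumes the usual alternating pattern (odd items positive, even items negative) and rescales the summed contributions (0–24) to 0–100; the `umux` name is hypothetical, so check the original instrument before relying on this.

```python
def umux(answers):
    """Score a UMUX questionnaire: 4 answers on a 1-7 scale.

    Assumes odd items are positively worded (contribution = answer - 1)
    and even items negatively worded (contribution = 7 - answer);
    the sum (0-24) is rescaled to a 0-100 score.
    """
    if len(answers) != 4:
        raise ValueError("UMUX needs exactly 4 answers")
    total = sum((a - 1) if i % 2 == 1 else (7 - a)
                for i, a in enumerate(answers, start=1))
    return 100 * total / 24

# Best possible responses: 7 on positive items, 1 on negative items.
print(umux([7, 1, 7, 1]))  # → 100.0
```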

Compared to SUS, UMUX is still fairly new (it was introduced in 2010 [5]), so there are fewer opportunities to benchmark your score against other companies. Nevertheless, UMUX (or UMUX-Lite) can be a good compromise between the 10-question SUS and single-question metrics.

Final Thoughts

In conclusion, attitudinal metrics can provide you with the numbers you need to back up your qualitative research. Their results are primarily valuable over the long run: they aid decision making and let you track trends and benchmark. There are several standard attitudinal metrics with a wide variety of modifications, from single-question surveys to more elaborate 10-question questionnaires. Which of them you put into action depends on your project’s and team’s particular needs.

Sources
[1] Meyer, Sandro. “The 7 Most Important UX KPIs and How to Measure Them | Testing Time.” Testing Time, 29 Jan. 2019, www.testingtime.com/en/blog/important-ux-kpis/. Accessed 23 Nov. 2020.

[2] Ratcliff, Christopher, and Kuldeep Kelkar. “What Metrics and KPIs Do the Experts Use to Measure UX Effectiveness?” UserZoom, www.userzoom.com/ux-library/what-metrics-and-kpis-do-the-experts-use-to-measure/. Accessed 23 Nov. 2020.

[3] Laubheimer, Page. “Beyond the NPS: Measuring Perceived Usability with the SUS, NASA-TLX, and the Single Ease Question After Tasks and Usability Tests.” Nielsen Norman Group, 11 Feb. 2018, www.nngroup.com/articles/measuring-perceived-usability/. Accessed 23 Nov. 2020.

[4] Valdespino, Anastacia. “UMUX (Usability Metric for User Experience).” Qualaroo Help & Support Center, 23 Jan. 2020, help.qualaroo.com/hc/en-us/articles/360039072752-UMUX-Usability-Metric-for-User-Experience-. Accessed 23 Nov. 2020.

[5] Sauro, Jeff. “MeasuringU: Measuring Usability: From the SUS to the UMUX-Lite.” MeasuringU, 10 Oct. 2017, measuringu.com/umux-lite/. Accessed 23 Nov. 2020.
