There isn’t a usability thermometer to tell you how usable your software or website is. Instead we rely on the impact of good and bad usability to assess the quality of the user experience. Assessing that impact starts by knowing and collecting these 10 metrics.
Here are 10 metrics you should be familiar with and ready to use in any usability evaluation.
1. Completion Rates: Often called the fundamental usability metric, or the gateway metric, completion rates are a simple measure of usability. They are typically recorded as a binary metric (1 = task success, 0 = task failure). If users cannot accomplish their goals, not much else matters.
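Because each attempt is coded 1 or 0, the completion rate is just the mean of the outcomes. A minimal sketch in Python; the outcomes below are invented for illustration, not data from any study:

```python
def completion_rate(outcomes):
    """Return the fraction of successful task attempts.

    outcomes: list of 1 (task success) and 0 (task failure).
    """
    return sum(outcomes) / len(outcomes)

# Hypothetical results for eight users attempting one task
outcomes = [1, 1, 0, 1, 0, 1, 1, 1]
print(f"Completion rate: {completion_rate(outcomes):.0%}")  # 6 of 8 successes
```

With small usability-test samples, it is worth reporting a confidence interval around this proportion rather than the point estimate alone.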
2. Usability Problems (UI Problems) encountered (with or without severity ratings): Describe each problem and note both how many and which users encountered it. Knowing the probability that a user will encounter a problem at each phase of development can become a key metric for measuring the impact and ROI of usability activities. Knowing which users encountered a problem allows you to better predict sample sizes, problem discovery rates and which problems are found by only a single user.
3. Task Time: Total task duration is the de facto measure of efficiency and productivity. Record how long it takes a user to complete a task in seconds and/or minutes. Start task times when users finish reading the task scenario and stop them when users have finished all actions (including reviewing).
4. Task Level Satisfaction: After users attempt a task, have them answer a few or just a single question about how difficult the task was. Task level satisfaction metrics will immediately flag a difficult task, especially when compared to a database of other tasks.
5. Test Level Satisfaction: At the conclusion of the usability test, have participants answer a few questions about their impression of the overall ease of use. For general software, hardware and mobile devices consider the System Usability Scale (SUS), for websites use the SUPR-Q.
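SUS scoring follows a fixed recipe: each of the 10 items is answered on a 1–5 scale; odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the summed contributions are multiplied by 2.5 to yield a 0–100 score. A sketch with hypothetical responses:

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire.

    responses: ten answers on a 1-5 scale, in questionnaire order.
    Odd items (positively worded) contribute score - 1;
    even items (negatively worded) contribute 5 - score.
    The sum is multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5

# Hypothetical responses from one participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))
```

A SUS score is not a percentage; it needs to be interpreted against normative data to judge whether a product scores above or below average.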
6. Errors: Record any unintended action, slip, mistake or omission a user makes while attempting a task. Record each instance of an error along with a description, for example, “user entered last name in the first name field.” You can later add severity ratings to errors or classify them into categories. Errors provide excellent diagnostic information and, if possible, should be mapped to UI problems. They are somewhat time-consuming to collect, as they usually require a moderator or someone to review recordings (although my friends at Webnographer have found a way to automate the collection).
7. Expectation: Users have an expectation about how difficult a task should be based on subtle cues in the task-scenario. Asking users how difficult they expect a task to be and comparing it to actual task difficulty ratings (from the same or different users) can be useful in diagnosing problem areas.
8. Page Views/Clicks: For websites and web applications, these fundamental tracking metrics might be the only thing you have access to without conducting your own studies. Clicks have been shown to correlate highly with time-on-task, which is probably a better measure of efficiency. The first click can be highly indicative of eventual task success or failure.
9. Conversion: Measuring whether users can sign-up or purchase a product is a measure of effectiveness. Conversion rates are a special kind of completion rate and are the essential metric in eCommerce. Conversion rates are also binary measures (1=converted, 0=not converted) and can be captured at all phases of the sales process from landing page, registration, checkout and purchase. It is often the combination of usability problems, errors and time that lead to lower conversion rates in shopping carts.
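Because conversion can be captured at each phase, it is useful to compute a rate per step as well as an overall rate. A sketch with invented funnel counts (the stage names and numbers are illustrative only):

```python
def step_rates(funnel):
    """Conversion rate at each step relative to the previous step.

    funnel: list of (stage_name, user_count) pairs in order.
    """
    return [n / prev for (_, prev), (_, n) in zip(funnel, funnel[1:])]

# Hypothetical counts at each phase of the sales process
funnel = [
    ("landing page", 1000),
    ("registration", 400),
    ("checkout", 150),
    ("purchase", 90),
]

for (stage, _), rate in zip(funnel[1:], step_rates(funnel)):
    print(f"{stage}: {rate:.1%} of previous step")
print(f"overall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
```

Looking at step-level rates, rather than only the overall rate, points you to the phase where usability problems, errors and time are doing the most damage.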
10. Single Usability Metric (SUM): There are times when it is easier to describe the usability of a system or task by combining metrics into a single score, for example, when comparing competing products or reporting on corporate dashboards. SUM is a standardized average of measures of effectiveness, efficiency and satisfaction, and is typically composed of 3 metrics: completion rates, task-level satisfaction and task time.
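The published SUM method standardizes each metric before averaging; the sketch below is only a loose illustration of the idea of a combined score. The target time and satisfaction scale are assumptions for the example, not part of the actual SUM specification:

```python
def sum_score(completion_rate, satisfaction, max_satisfaction,
              task_time, target_time):
    """Simplified SUM-style score: rescale each metric to 0-1 and average.

    completion_rate: proportion of successful attempts (0-1).
    satisfaction / max_satisfaction: mean task-level rating and scale maximum.
    task_time / target_time: observed mean time vs. an assumed target;
    finishing faster than the target is capped at 1.

    NOTE: the real SUM standardization uses specification limits and
    z-scores; this is a hypothetical simplification.
    """
    sat_component = satisfaction / max_satisfaction
    time_component = min(target_time / task_time, 1.0)
    return (completion_rate + sat_component + time_component) / 3

# Hypothetical task: 75% completion, 4.0/5 satisfaction, 80 s vs. 60 s target
score = sum_score(0.75, 4.0, 5.0, task_time=80, target_time=60)
print(f"Combined score: {score:.0%}")
```

Whatever the standardization, the point is the same: three comparable 0–1 components averaged into one number that is easy to put on a dashboard.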
About Jeff Sauro
Jeff is a Six-Sigma trained statistical analyst and pioneer in quantifying the user experience. He is the author of five books, including Customer Analytics for Dummies and Quantifying the User Experience, and over twenty peer-reviewed research articles. He has a PhD in Research Methods & Statistics, and is the Founding Principal of MeasuringU.
Originally published at measuringu.com.
Title Photo Courtesy of MilStan @ Flickr.com