Measuring the financial impact of UX in two enterprise organizations

Aaron Powers
athenahealth design
Apr 3, 2019

By Aaron Powers, JonDelina ‘JD’ Buckley, and Yookyoung Kim

Based on an article originally published on UXmatters, January 7th, 2019

Two enterprises in vastly different industries have used statistical models to compare the impact of usability on financial metrics: athenahealth, a healthcare-technology company, and a Fortune-500 human capital–management (HCM) company. If you want to adapt this model, here are some suggestions to think about as you embark upon your own UX measurement initiative at your organization.

How Does Usability Impact Business Metrics?

When people talk about the return on investment (ROI) of user experience efforts, the question they usually want to answer is: “How much does design impact the bottom line?” While it is possible to answer this question by calculating a detailed financial ROI, we’ve discovered a simpler but equally revealing approach. Modeling how users perceive your product and how their perceptions of its usability influence their actions can be easier than calculating ROI and more valuable in helping a company make future business decisions.

Models of how a company’s actions impact business metrics have been around for more than 25 years. The most popular of these is the Service-Profit Chain, which Len Schlesinger and his colleagues published in 1994. This model, shown in Figure 1, demonstrates how key factors of a business influence other factors. The statistical relationships described in the Service-Profit Chain have been validated by over 518 different studies. UX professionals focus primarily on the External Service Value block in the model, of which usability is a part.

The Service-Profit Chain

The foundation of user-centered design is the premise that delivering a high-quality, usable, satisfying user experience will have a significant impact on a company’s key performance metrics.

While most UX professionals agree with this assertion, very few companies have been able to demonstrate a statistically significant correlation between their products’ usability and the company’s performance.

We’ve reproduced a portion of the Service-Profit Chain and verified it using statistical models at two companies. The research designs differ slightly, but we’ve found overall statistical support for this model at both companies. We believe this model can provide value to other companies as well, as they strategize how best to demonstrate that UX teams and a user-centered design process can positively influence a company’s bottom line.

Modeling the Relationship Between Perception of Usability and Business Metrics at athenahealth

At athenahealth, beginning in January 2016, we conducted monthly perception surveys within the app. With more than 50,000 completed surveys over a two-year period, we combined our data with business metrics and established a clear, statistically significant, positive relationship between users’ perceptions of usability and two business metrics: retention and referrals.

Our basic model focused on the same areas as the Service-Profit Chain. Our goal was to see whether certain product characteristics, such as ease of use, can impact the business. However, as the Service-Profit Chain shows, this is not a direct relationship. The product has many aspects, all of which together create users’ perception of the product.

We asked questions about both the ease of use and the reliability of the product. Our users comment on both of these frequently, so we hypothesized that they were drivers of users’ overall satisfaction with the product. The statistical model supported this: users’ perceptions of individual aspects of the product, such as ease of use and reliability, predict their overall satisfaction with it.

We validated this model statistically, using several regression models, such as multiple linear regression and logistic regression, combined through mediation analysis. The results of all of these models were statistically significant, with p < .000001, in part because of the large size of the underlying datasets.
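
To make the shape of this analysis concrete, here is a minimal sketch in Python using pandas and statsmodels. The data file and column names (ease_of_use, reliability, satisfaction, retained) are hypothetical stand-ins rather than athenahealth’s actual variables, and the mediation check shown is a simple Baron-and-Kenny-style comparison, not the full analysis we ran.

```python
# A minimal sketch of a two-step, mediation-style analysis; the CSV file and
# column names are hypothetical stand-ins for survey data joined to business metrics.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_with_business_metrics.csv")

# Step 1: do perceptions of ease of use and reliability predict overall satisfaction?
step1 = smf.ols("satisfaction ~ ease_of_use + reliability", data=df).fit()
print(step1.summary())

# Step 2: does overall satisfaction predict a binary business outcome such as retention?
step2 = smf.logit("retained ~ satisfaction", data=df).fit()
print(step2.summary())

# Simple mediation check (Baron and Kenny style): the direct effect of ease of use
# on retention should shrink once satisfaction, the mediator, enters the model.
direct = smf.logit("retained ~ ease_of_use", data=df).fit()
mediated = smf.logit("retained ~ ease_of_use + satisfaction", data=df).fit()
print("Direct effect of ease of use:", direct.params["ease_of_use"])
print("Effect after controlling for satisfaction:", mediated.params["ease_of_use"])
```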

athenahealth’s validated model, correlating UX metrics to company KPIs. Specific numbers, including correlation coefficients, R-squared, and odds ratios, have been replaced with an X.

To see the relationship between user experience and business outcomes such as retention and referrals, it’s critical to reduce noise by modeling the relationship in multiple steps. For example, individual users’ ratings of ease of use would not directly predict referrals in a statistical model, because so much more goes into someone’s decision to make a referral than ease of use alone. Breaking the model into multiple steps helps isolate and remove some of the noise arising from all the other factors the model doesn’t include, such as customer support.

At athenahealth, we’ve used this model as a foundation to help support strategic decision making regarding user experience — for example, to prioritize where to invest more effort, to validate related metrics, to understand what users perceive as the company’s biggest opportunities, and more.

Measuring the Value of a UCD Process for an HCM and Payroll Company

In an earlier article on UXmatters, we (JD and co-authors) discussed our team’s UX measurement initiative for an enterprise HCM company’s payroll-compliance application, which began with identifying our users’ top tasks. We then established a baseline for the current user experience by benchmarking at regular intervals, measuring differences in our users’ attitudes, behaviors, and processes as they attempted to accomplish those top tasks. Surprisingly, task satisfaction and perceived time were not statistically correlated with overall satisfaction and overall usability. In other words, for each individual task, task time and task satisfaction were less significant factors in predicting users’ overall satisfaction with the product.

Collecting several measurable metrics, both after each task and at the end of every benchmark study, has yielded informative insights over the course of several studies. Our multimetric approach has allowed us to examine both task-level and overall metrics and to gauge the impact of subsequent design iterations and releases on the user experience:

A sample of the metrics used by an HCM company’s UX measurement program

A common question we’ve heard when presenting our approach, both within and outside our organization, is: Why collect so many UX metrics? Shouldn’t one or two be enough? We’ve come to think of our efforts to measure our software’s user experience as similar to measuring a virtual experience economy. Much like economic indicators, our indices of UX metrics can sometimes be leading, lagging, or coincident indicators of our users’ experience, as follows:

  • Leading indicators — Metrics such as task success/failure and task ease/perceived difficulty can be early indicators. For example, they might reveal how and why specific features fail to deliver the improvements we intended for users’ top tasks.
  • Lagging indicators — Metrics such as Net Promoter Score (NPS), overall satisfaction, the System Usability Scale (SUS), and the Standardized User Experience Percentile Rank Questionnaire (SUPR-Q), including its trust and credibility factors, may help us better understand users’ sense of increasing value or frustration with an experience over time.
  • Coincident indicators — Metrics such as task satisfaction and time on task provide immediate measures of our users’ actual versus perceived experience.

Just as with economic indicators, compiling several UX metrics into indices lets us minimize some of the volatility and confusion associated with individual indicators and provides a more reliable measure.
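
As an illustration of what such an index might look like in practice, here is a minimal sketch in Python. The file name, metric columns, and equal weighting are hypothetical choices for illustration, not the weighting our program actually uses.

```python
# A minimal sketch of compiling several UX metrics into a single composite index.
# The CSV file and column names are hypothetical; weights here are equal, which is
# one simple choice among many.
import pandas as pd

df = pd.read_csv("benchmark_results.csv")  # one row per participant per study
metrics = ["sus", "overall_satisfaction", "task_ease", "task_success_rate"]

# Standardize each metric (z-score) so they share a comparable scale,
# then average them into one composite UX index per participant.
z = (df[metrics] - df[metrics].mean()) / df[metrics].std()
df["ux_index"] = z.mean(axis=1)

# Tracking the mean index per benchmark wave smooths out the volatility of
# any single metric, much like a composite economic indicator.
print(df.groupby("benchmark_wave")["ux_index"].mean())
```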

Initially, we hypothesized that measurable connections between our UX team’s efforts and our company’s key performance indicators (KPIs) would show up as correlations between similar UX and business metrics. We assumed that standard UX metrics, such as improvements in task time and satisfaction, would have the greatest influence on customer KPIs such as NPS and customer-support contacts. However, while improvements in our users’ experience do seem to impact our company’s performance metrics, we’ve discovered that the connection is more complex than we originally anticipated.

While we were running a series of statistical analyses, including linear regression, logistic regression, and analysis of variance, to uncover significant correlations among numerous metrics, a surprising model emerged. What did it reveal? We found a measurable correlation between a high-quality user experience and customer referrals. We were excited to discover that task-level metrics such as task success and task ease had the strongest correlation with overall UX metrics such as SUS and overall satisfaction, which in turn correlated with our product’s NPS.
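
A minimal sketch of this kind of correlation search, in Python with pandas and SciPy, is shown below. The file and column names are hypothetical, and a real analysis would also need to account for sample size and multiple comparisons.

```python
# A minimal sketch of searching for correlations between task-level and overall
# UX metrics; the CSV file and column names are hypothetical stand-ins.
import pandas as pd
from scipy import stats

df = pd.read_csv("benchmark_results.csv")  # one row per participant

# Pairwise Pearson correlations between task-level and overall metrics.
cols = ["task_success", "task_ease", "sus", "overall_satisfaction", "nps"]
print(df[cols].corr())

# Significance test for one link in the chain: task ease vs. SUS.
r, p = stats.pearsonr(df["task_ease"], df["sus"])
print(f"task ease vs. SUS: r={r:.2f}, p={p:.4f}")
```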

As we’ve continued to search our data for statistically significant connections between task-level and overall UX metrics and company KPIs, across several comparison studies, we’ve started to think of these connections in terms of the following model:

A proposed model, correlating UX metrics to company KPIs

To summarize, new releases of the user experience first had to better support users’ ability to complete their top tasks successfully and end to end. Second, if those top tasks felt easier to complete, users were more likely to rate the experience as more satisfying and learnable. Finally, if an experience met these first two conditions, users were more likely to give the product a higher NPS rating.

Research suggests a strong relationship between NPS, revenue, and profits. However, especially at enterprise organizations, where the quality of customer service can play a significant role in the end-to-end user experience, service quality can dramatically impact NPS. While our UX team has uncovered a statistically significant correlation between the task-level user experience and the NPS for our post-task product experience, we’ll need to conduct additional research to validate suggested correlations between the NPS for the user experience and our corporate NPS, and, therefore, revenue and profits.

Adapting This Model to Your Business

Many companies could adapt the model that we’ve described to their needs — perhaps by generalizing it slightly or simplifying it. Our four-step model breaks down the way users think and make decisions into measurements of four key areas, allowing you to use statistical modeling to explain how user experience impacts business metrics.

Our UX-Revenue Chain model

Both of our enterprise companies have found the insights that have surfaced encouraging, not only for our respective UX teams but also for the potential strategic opportunities they have revealed across the entire organization. We’re calling our collective discoveries and the model we’ve developed the UX-Revenue Chain. It has inspired discussions across organizational silos about the importance of establishing who a product’s primary users are, along with their top tasks and workflows, as an essential design and business-strategy approach.

Given both the benefits and challenges inherent in developing a UX measurement initiative and achieving the expected results, it is important to answer a few initial questions before attempting to follow this approach. The answers to the following questions can help you to determine whether your enterprise — or even a consumer organization — is ready to undertake this kind of initiative:

  • What is your organization’s level of UX maturity? What is its data maturity?
  • Does your executive leadership support a UX measurement initiative? How strong is that support?
  • Keeping in mind the longitudinal nature of this research, is your executive leadership willing to provide the time and budget for tools and resources to support this kind of strategic initiative?
  • Does your UX team currently have the skills, tools, and resources for this kind of endeavor?
  • Does the UX team have access to reliable raw KPI data, both past and current, to support analysis?
  • Does leadership understand how to best utilize the insights from this kind of model in support of strategic decision making?
  • What is the best approach for communicating results across your organization to support actionable implementation and strategic execution?

While considering some of these questions before you even embark upon your UX measurement journey can seem overwhelming, it is essential that you gain alignment and support — both early and at regular intervals — to ensure the ongoing success of the effort.

One final note: Jeff Sauro, founding principal of the quantitative research firm MeasuringU, states: “Making a case for ROI is a good thing to help justify methods that should help the user and ultimately the organization’s bottom line. But don’t overstate or oversell your case. Understand the limits of your data. Both the metrics and methods affect the strength of your case for a return on investment.”

References

Derfuss, Klaus, Jens Hogreve, Anja Iseke, and Tonnjes Eller. “The Service-Profit Chain: A Meta-Analytic Test of a Comprehensive Theoretical Framework.” Journal of Marketing, May 2017.

Heskett, James L., Thomas O. Jones, Gary W. Loveman, W. Earl Sasser, Jr., and Leonard A. Schlesinger. “Putting the Service-Profit Chain to Work.” Harvard Business Review, July-August 2008.

Sauro, Jeff. “The One Number You Need to Grow (A Replication).” MeasuringU, December 2018.

Sauro, Jeff. “10 Metrics to Track the ROI of UX Efforts.” MeasuringU, September 1, 2015.
