Your CX and UX Metrics Are Myopic

Debbie Levitt
Published in R Before D
5 min read · Nov 30, 2020


You can barely throw a moldy orange without hitting someone who is currently writing or talking about using quantitative metrics to measure CX or UX success.

To some, CX/UX success is found in customer satisfaction surveys, NPS® scores (surveys asking whether you’d recommend that company), and task-related scores like how quickly users could accomplish something (and whether we made that time longer or shorter, as the business might want). We also look at error rates, trust measurements, results of A/B tests, and more surveys trying to learn how “usable” your product or system is.


We end up with a lot of scores. Some are meaningless vanity metrics, and some are actionable, especially if you use them to spin up qualitative research to learn hows, whys, tasks, and problems.

But what I want to talk about today is where metrics go wrong: teams look at too short a time span and at too narrow a set of behaviors and outcomes.

Metrics are too often examined and celebrated in isolation. It’s time to connect them.

Easiest Example: eCommerce

Take online shopping. Many companies have desired metrics like:

  • AOV (average order value, aka let’s get people to spend more in one purchase)
  • Conversion rate (what percentage of shoppers visiting the site become buyers)
  • Repeat customer rate (how often are they buying from us)
  • CLV (customer lifetime value, aka how much has this customer spent with us and how much do we predict they’ll spend in the future)
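All four of these are plain arithmetic over order data. Here’s a minimal sketch of how they relate (the order log, visitor count, and field names are made up for illustration; real CLV models also forecast future spend):

```python
from collections import defaultdict

# Hypothetical order log: (customer_id, order_total), plus a visitor count.
orders = [("c1", 40.0), ("c2", 25.0), ("c1", 60.0), ("c3", 35.0)]
visitors = 100  # site visitors in the same period

# AOV: total revenue divided by number of orders
aov = sum(total for _, total in orders) / len(orders)

# Conversion rate: unique buyers divided by visitors
buyers = {cid for cid, _ in orders}
conversion_rate = len(buyers) / visitors

# Repeat customer rate: share of buyers with more than one order
order_counts = defaultdict(int)
for cid, _ in orders:
    order_counts[cid] += 1
repeat_rate = sum(1 for n in order_counts.values() if n > 1) / len(buyers)

# Naive CLV: historical spend per customer (no forecasting)
clv = {cid: sum(t for c, t in orders if c == cid) for cid in buyers}

print(aov, conversion_rate, repeat_rate, clv)
```

Note how each number describes a different slice of buyer behavior; none of them alone tells you whether customers are happy.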

Most of these metrics focus on proving that we created behavior change in users. Did they spend more, did they shop again, did they buy more this time? Did the business get more of what the business wanted?

The problem is that with business goals in mind, we imagine the transaction ends when we achieve those metrics or KPIs. The shopper spent more; we have success. But did we declare “mission accomplished” too early?

Imagine an A/B test where we hope B will increase our conversion rate and get more shoppers to check out and buy. Let’s say we have the sad industry standard of a 3% conversion rate, and we’re hoping for a still sad but better 5% conversion rate.

Success! We made enough changes on the site to now have a 5% conversion rate. Most teams would celebrate that and move on to another project. But…

You have to look at the longer arc and at more data. The transaction doesn’t end when the shopper checks out. Declaring victory there is myopic and ignores what happens throughout the rest of the process.

Follow the B variant over time to see what other metrics or behaviors change.

Did you increase the conversion rate, but decrease the repeat customer rate because customers weren’t happy and were less likely to buy again or more in the future?

Did you increase the conversion rate, but customer service is seeing more complaints and people are leaving bad ratings online?

This is why metrics must be watched over time, and combined with each other to see the full effects of changes and public experiments.
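One way to make “watched over time, and combined with each other” concrete is to snapshot several metrics per variant and flag any that regressed against the baseline, even while conversion is up. A rough sketch, with made-up numbers, metric names, and thresholds:

```python
# Hypothetical weekly snapshots for the B variant vs. the A baseline.
baseline_a = {"conversion": 0.03, "complaint_rate": 0.05, "repeat_rate": 0.20}
weekly_b = [
    {"week": 1, "conversion": 0.05, "complaint_rate": 0.05, "repeat_rate": 0.20},
    {"week": 6, "conversion": 0.05, "complaint_rate": 0.09, "repeat_rate": 0.14},
]

def regressions(snapshot, baseline):
    """Return the metrics where B is worse than A: more complaints, fewer repeats."""
    bad = []
    if snapshot["complaint_rate"] > baseline["complaint_rate"]:
        bad.append("complaint_rate")
    if snapshot["repeat_rate"] < baseline["repeat_rate"]:
        bad.append("repeat_rate")
    return bad

for snap in weekly_b:
    lifted = snap["conversion"] > baseline_a["conversion"]
    print(f"week {snap['week']}: conversion lift={lifted}, regressions={regressions(snap, baseline_a)}")
```

In this sketch, week 1 looks like a clean win; by week 6 the same conversion lift is coupled with rising complaints and falling repeat purchases, which is exactly the pattern a single-metric celebration misses.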

Imagine You’re a Platform

You don’t sell items, but you allow people to sell items. Think eBay, Amazon third-party selling, Etsy, Poshmark, Mercari, etc. You want to see shoppers buying more things (and hopefully higher-priced things) because, as the platform, you take a cut of each sale.

Let’s say you are making changes to some of the pages and flows on the site in the hope of getting more shoppers to buy (conversion rate). You show more structured metadata and less of the seller’s prose description. Or the description is moved down the page to deprioritize it, and other things, like reviews or “You Might Also Like,” are moved up.

This makes shoppers less likely to read the description, and therefore less likely to understand what they’re buying. That means that even if B seems like a success, there might be unintended negative consequences when we look at the larger picture.

Consider these (example) outcomes:

  • A (our current site) converts at 3%. 5% of customers have complaints after they buy. 3% of buyers leave bad ratings for their seller or item. 0.5% of customers end up returning the product for a refund.
  • B (the new variant) converts at 5%, but our changes made it less likely that the buyer read the item description. This made buyers more likely to form unrealistic expectations about what they bought, which leads to more disappointment with the reality of what they got. 10% of customers complain after they buy. 6% leave bad ratings for their seller or item. 2% return the product for a refund.
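Scaling those example rates to a common base of 1,000 visitors makes the trade-off concrete. A small sketch using the numbers above (it assumes the complaint, rating, and return rates are per buyer, not per visitor):

```python
# The example rates for each variant, taken from the scenario above.
variants = {
    "A": {"conversion": 0.03, "complaints": 0.05, "bad_ratings": 0.03, "returns": 0.005},
    "B": {"conversion": 0.05, "complaints": 0.10, "bad_ratings": 0.06, "returns": 0.02},
}

def per_1000_visitors(rates):
    """Convert per-buyer rates into absolute counts per 1,000 visitors."""
    buyers = 1000 * rates["conversion"]
    return {
        "orders": buyers,
        "complaints": buyers * rates["complaints"],
        "bad_ratings": buyers * rates["bad_ratings"],
        "returns": buyers * rates["returns"],
    }

for name, rates in variants.items():
    print(name, per_1000_visitors(rates))
```

Per 1,000 visitors, B produces roughly 67% more orders than A, but complaints more than triple and returns grow more than sixfold. That downstream cost is exactly what the conversion number alone hides.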

Should we still celebrate B because our conversion rate went up? Is B the right choice to replace A? Are the sellers on your platform celebrating B? They are getting more questions, more disputes, more returns, and lower ratings. Is B a success?

B should not be considered a success.

Many companies see that B made more sales or created higher conversion rates; they see that this matches their goals, and they declare it a success. The platform will make more money because more sales were made. Everybody slaps each other on the back and moves on to the next project. Business goals achieved!

But in the larger context of the seller and buyer experience, B sounds like a downgrade. This is where looking at metrics in too limited a way can let us down or lead us down the wrong path.

Therefore, as you are working on creating CX/UX metrics, success criteria, OKRs, or the like, consider the full arc of the customer experience. You got shoppers to buy more, but are shoppers happy? Are they using the product more? More complaints? More customer service time and effort to calm and help people? More returns? Lower ratings?

Stop looking at isolated metrics in a vacuum.

Your B variant or live experiment may have shifted many things, not just conversion rate, and we should be smart enough to see and analyze all of it.

If metrics are about measuring behavior change, then before you declare something a success, consider all of the behaviors that might be changed, for better or worse, along the full arc of the transaction or experience.


“The Mary Poppins of CX & UX.” CX and UX Strategist, Researcher, Architect, Speaker, Trainer. Algorithms suck, so pls follow me on Patreon.com/cxcc