Metrics Versus Experience

Julie Zhuo
The Year of the Looking Glass
Jul 6, 2016

Some decades ago, if you wanted to build a great experience, you’d take a deep breath, close your eyes, and mutter a prayer to that oracle of answers: your gut.

But as our ancestors learned to harness the power of fire, so did we learn to harness the power of measurement and analytics. No longer did we grope in the darkness, wondering, “Is anyone actually reading those e-mails I send out every two days?” or “How many people tried out that new feature we launched?” Now we simply dive into our treasure trove of numbers and emerge with the answer.

Alas, despite this new-found light, existential shadows remain. I hear them debated in the halls of the office, muttered over drinks after work, typed IN ALL CAPS on errant comment threads:

  • Are we just doing this for the metrics?
  • How can we balance driving up numbers and doing something meaningful?
  • And my personal favorite: Are you that data-driven, or do you actually care about the user experience?

Whoa there! Charged words and hot accusations!

Want to have a productive conversation about metrics and good experiences? Here’s what I’ve learned.

First off, don’t frame stuff as “Metrics Versus Experience.”

Besides evoking big-budget movies that pit two superheroes with passionate fanbases against each other, framing things as “metrics versus experience” is entirely the wrong way to start the conversation.

It’s like saying “carbs versus eating healthy.” You don’t lead with that if you want to start with credibility in a discussion about nutrition.

Being able to measure stuff gives you insight into what people are doing within your product. Unless you like living under a rock, having more information is a good thing. Sure, you need to be able to sort through what information is important and what isn’t, but arguing that the whole concept of having more information is bad is not really a defensible position. Metrics is not the villain.

Furthermore, if you do something people find valuable, that should be entirely in line with making your success metrics go up. You can’t really argue that you made something better if nothing ever changes with how people use your product. Conversely, if you make a change and people start to use your product less, I don’t care what you did, but the evidence is fairly clear that you messed something up.

Finally, the third reason metrics are so valuable is that they help rally a team around something clear and tangible that they can hold themselves accountable to. From a purely logistical perspective, it’s hard to get 50 people to execute against a mission like “create a super-amazing experience!” Sure, everyone might pump their fists in the air after your empowering speech and yell, “Yes! A Super-Amazing Experience! That’s what we’re after!” But when it’s Monday and Team A shows up all excited and says “Check this out! We made a super-amazing experience!” and Team B’s reaction is “Um, no, that is actually crap,” what happens? Who is right? How do you consistently and clearly define what a “super-amazing experience” is?

One way to solve this is hierarchy. You can designate a specific individual (or a ladder of individuals) at the company to be the judge of what is or isn’t high quality. If you’d prefer less hierarchy, another method is to define a measurable goal: “A super high-quality experience means that 50% of people who try out this feature will come back and use it again within the week.” Now, Team A, Team B, and everyone else knows precisely what they’re shooting for, and how close they are to it from day to day.
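To make that concrete, here is a minimal sketch of how a goal like that could be checked, assuming a hypothetical event log of (user, timestamp) feature usages; the names, numbers, and one-week window are illustrative, not a prescription:

```python
# A minimal sketch of the "50% return within a week" goal, assuming a
# hypothetical event log of (user_id, timestamp) pairs for feature usage.
from datetime import datetime, timedelta

def week_one_retention(feature_events, window=timedelta(days=7)):
    """Fraction of users who, after first trying the feature,
    used it again within `window` of that first use."""
    first_use = {}    # user_id -> timestamp of first use
    returned = set()  # users who came back within the window
    for user_id, ts in sorted(feature_events, key=lambda e: e[1]):
        if user_id not in first_use:
            first_use[user_id] = ts
        elif ts - first_use[user_id] <= window:
            returned.add(user_id)
    return len(returned) / len(first_use) if first_use else 0.0

# Example with made-up data: did we clear the 50% bar?
events = [
    ("ann", datetime(2016, 7, 1)), ("ann", datetime(2016, 7, 3)),
    ("bob", datetime(2016, 7, 1)),                                  # never returned
    ("cat", datetime(2016, 7, 2)), ("cat", datetime(2016, 7, 10)),  # returned too late
]
print(week_one_retention(events))  # 1 of 3 users returned in time -> ~0.33
```

The point of a definition like this is that Team A and Team B can run the same calculation and get the same answer, instead of arguing about whose experience is super-amazing.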

So, to summarize, metrics are useful, and at a high level, metrics and user experience are not locked in an eternal struggle against each other. Make sure you don’t say things that imply that they are.

Certainly, bad things have been done in the name of “improving metrics.”

Just as you can eat too many jelly-filled donuts and give carbs a bad name, so too can metrics be used to justify poor decisions. This can happen because not everything you can measure is worth measuring or affecting. This can also happen because you don’t get the full story by looking at a single metric. Often, you need a suite of metrics to get a really good picture of what’s actually happening.

If you happen to pick the wrong thing to measure and try to move, you may end up doing something that’s actually harmful to people’s experiences. Observe some examples:

a) “Originally, the click-through rate of this story was 2%. When we made this change, the click-through rate went up to 5%. Hooray!”

What’s the problem? Click-through rate on a story does not tell us enough about whether the experience was actually better. What if I changed all the links on my site to “click here to earn $250”? Hell yeah, click-through rate would go up! But eventually, people will realize that I’m not actually giving them $250, and then they’ll get pissed. They’ll stop clicking on my links, uninstall my app, and give me a 1-star review in the app store while writing FUUUUUUU — in all caps. My business fails, and my life sucks. The end.

b) “People used to spend 5 minutes on my app. Now, after launching my latest feature, they only spend 3 minutes. Uh-oh.”

What’s the problem? Is time spent on your app actually an important metric for you to track? It depends. If you are a content destination, then likely yes — the reason you exist is to give people great things to read/watch/listen to, so the more time they spend, the more likely it is that they’re finding what you offer valuable. If your app is a utility, like helping people sign digital documents, then no. You are better off tracking some other metric, like the number of times your app is actually used to help people sign documents. In fact, people probably love the fact that now they can sign their documents faster, which over time should correlate with more usage of your service.

c) “Originally, more people were using our cat-photo-meme app in Illinois, but now we have more people using it in Ohio.”

What’s the problem? Cool story, bro. Doesn’t matter, and isn’t a metric you need to pay attention to or try to affect.

Certainly, there are important things we can’t easily or accurately measure.

If we could read users’ minds, then we could in theory design the perfect experience for them. Unfortunately, we’re not all Jean Greys, so we make do with what we can measure to take educated guesses as to what people care about. In this day and age, what we can measure has its limits, and it’s important to always remember that. Simply looking at what people are doing in your product can’t tell you:

  • the degree to which people love, hate, or are indifferent to your product or any of its specific features
  • whether a change increases or decreases people’s trust in your product over time
  • how simple and easy to use your product is perceived to be
  • how people see your product versus other similar products in the market
  • what things people most want changed, added, or fixed
  • how people will want to use your product as time passes

Some of these things you can try to get at with qualitative research or surveys and polls, but none of those methods are perfect (remember the Brexit prediction polls?). And even in the cases where you can accurately measure broad sentiment (like brand trust), it’s hard to know what effect specific changes had (for example, did that logo or visual redesign improve people’s perceptions of my brand?).

Not being able to accurately measure the above means that there are instances where metrics fail us. Observe the following examples:

  1. Understanding the cost of complexity: each time you add a new feature to your app, it’s likely that the metrics you are tracking will turn up positive (after all, nobody used X before, and now more people use X, and people don’t seem to be using Y or Z any less, so overall this feels like a win). However, if you keep adding features, at some point, you’ll end up with what’s perceived as a cluttered and bloated product. Then, suddenly, some shiny new competitor will gain fast traction because everybody’s like “I love Q! It’s just so simple.” The paradox of choice and the costs of cognitive complexity are real. We just haven’t figured out how to accurately measure them yet.
  2. Understanding the power of brand: when Apple or Nike comes out with a new product, lots of people are inclined to buy it, even without doing their research, because they’ve had an awesome experience with that brand in the past. The same would not be true if some new upstart called Pear or Sike came on the market with an equivalent product. At a high level, we all know and understand this. However, it’s hard to quantify the power of brand and turn it into a number that can be tracked every day. It’s hard to know how all the thousands of decisions a company makes impact that brand, and what the costs and benefits are in weighing those tradeoffs.
  3. The power of big bets: no metric can tell you what the bold strokes needed to win the future are. Imagine 2008, when smartphones were just starting to emerge. If you looked at the metrics for your website, you would have seen a tiny sliver of traffic coming from smartphones. You may have concluded, very practically, that you shouldn’t invest too much in building for mobile since it was such a small part of your audience. Today, we recognize the vision and foresight of those who did bet big on mobile and reaped huge rewards. No examination of current behavior can accurately tell you which way you need to leap. Strategic, long-term planning still requires much of the same thing it always did: trusting your gut.

Some rules of thumb for good metrics hygiene:

These are some of my biggest learnings in my quest to become more and more disciplined about the tactics of good goal setting and measurement.

  • To assess for product-market fit, look at retention. Do not look at the sheer number of people using your product or feature (which can be skewed by things like how aggressively you promote it). Retention best correlates with whether your product is valuable because it tells you whether people who tried it liked it enough to return and use it again.
  • To optimize for growth, understand your funnel. In order for people to become regular users of your product, they have to pass through a bunch of hurdles. First, they have to be aware of your product. Second, they have to be interested enough to check it out. Third, they have to convert (download an app, fill out a form, confirm e-mail, etc.). Fourth, they have to do enough within your product to understand why it might be valuable in their lives. Fifth, they have to remember to come back. At each of these steps, you will lose people. If you can track and measure what that rate of loss is, you can then start to figure out where to focus your efforts to make your funnel less leaky (see the first sketch after this list).
  • Figure out which metrics are truly important, and focus on those. It’s tempting to get into the state where you track everything (because you can), and you have a dashboard filled with numbers that all feel like they should be green. Recognize that most things don’t matter, and that only a small handful actually do. Don’t waste time talking about the unimportant stuff, and don’t sweat letting some of the less important metrics go up or down.
  • To figure out the best metric to track, use the magic-wand technique. Ask yourself: “If I could wave a magic wand and know anything about my users in the world, what would I most want to know to tell me whether my app will be successful?” Even if your answer is not something you can actually measure (“Is my app suggesting recommendations that my users find valuable?”), it is a helpful starting point to work from. (“Okay… so I can’t ask every user if the recs were valuable… but if they were valuable, I’d probably see them saving or sharing recs more, and they’d probably spend more time reading recs, and… etc., etc.”)
  • Don’t just accept a metrics goal without understanding it. I can’t emphasize this enough: the goals you and your team agree to will be hugely impactful to your work, so make sure you buy into them. Do not accept metric goals at face value. Ask why. Ponder whether or not they make sense, and what behaviors they will incentivize. Are there situations where something will feel like a good decision but the metric doesn’t move? Conversely, are there situations where you could imagine the metric going up a lot but not be convinced that the product is actually better? If so, would another metric (or set of metrics) do a better job of tracking what actually matters?
  • View data skeptically by suggesting countermetrics. If the data is showing you what look like good results, ask yourself: “What else can I look at to convince me that these results aren’t as good as they seem?” These are called countermetrics, and every success metric should have some (see the second sketch after this list). (For example, don’t look at click-through rate without looking at the number of fast bounces back, don’t look at the sales numbers of a product without looking at how many returns or cancellations there are, etc.) It’s much better to be paranoid about interpreting data so you can quickly catch your mistakes and adjust your strategy. Don’t fall into the trap of confirmation bias where you’re just looking for signals that prove your intuitions are right.
  • Use qualitative research to get at the why. Quantitative data that tells you what people did is best paired with qualitative research that gives you insight into how people felt. Conduct usability testing, utilize focus groups, and run surveys to get at the why behind the behavior you’re seeing.
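To illustrate the funnel point above, here is a hedged sketch of tracking the rate of loss at each step; the step names and counts are invented for illustration, and your real funnel will have its own stages and data source:

```python
# A hypothetical five-step funnel (aware -> interested -> converted ->
# activated -> returned), with made-up counts, to show where people drop off.
funnel = [
    ("aware",      100_000),
    ("interested",  40_000),
    ("converted",   12_000),
    ("activated",    6_000),
    ("returned",     2_400),
]

# Pass-through rate from each step to the next.
for (step, count), (prev_step, prev_count) in zip(funnel[1:], funnel):
    rate = count / prev_count
    print(f"{prev_step:>10} -> {step:<10} {rate:6.1%} pass-through "
          f"({prev_count - count:,} people lost)")

# The leakiest step is usually the best place to focus your efforts.
worst = min(zip(funnel[1:], funnel), key=lambda pair: pair[0][1] / pair[1][1])
print("Leakiest step:", worst[1][0], "->", worst[0][0])
```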
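And for the countermetrics point, a small sketch of pairing the click-through-rate example from earlier with a fast-bounce check; the threshold and the numbers are made up:

```python
# Pair a success metric (click-through rate) with a countermetric
# (how many of those clicks bounce straight back). All numbers and the
# 30% bounce threshold here are hypothetical.
def evaluate_change(clicks, impressions, fast_bounces, max_bounce_rate=0.30):
    ctr = clicks / impressions
    bounce_rate = fast_bounces / clicks if clicks else 0.0
    healthy = bounce_rate <= max_bounce_rate
    return ctr, bounce_rate, healthy

ctr, bounce_rate, healthy = evaluate_change(
    clicks=5_000, impressions=100_000, fast_bounces=3_500)
print(f"CTR {ctr:.1%}, fast-bounce rate {bounce_rate:.1%}: "
      f"{'looks healthy' if healthy else 'the click-through gain is suspect'}")
```

A 5% click-through rate looks like a win on its own; a 70% fast-bounce rate is the countermetric telling you people aren’t finding what they clicked for.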

Now go forth and wield your understanding of data to make better experiences for people everywhere.

Interested in asking a question or following along for more advice? Sign up for my weekly letter.
