User metrics on product teams


User metrics are a weird and interesting thing in the Valley. There’s been a sort of push for more data-driven decision making — for putting things in ways that you can compare with charts and inequality signs. It feels STEM-y. It’s the world of big data where modeling is king. But somewhere along the line, metrics teams got lumped into business development teams, or were tasked with complex machine learning work. And that’s a shame.

When thinking about user metrics, I like to compare them to a really complex (or not) Rube Goldberg machine. You know the kind where a bunch of balls roll through the machine, turning levers, tripping flags and bouncing down cowbells: plink-plink-plink. Your product is the machine. Your potential users are the balls, ready to flow through. The catch: without user metrics, your entire machine is inside a big opaque wooden box. You have the designers’ blueprints; you know how the engineering team built it. Market research tells you what the users look like and how many there are. But you don’t know what actually happens inside.

User metrics offer little peepholes into the box, so you can see how the balls are actually traveling through. Maybe it’s a counter to see how often something lights up; maybe it’s sampling balls that go through a certain path. Maybe it’s just a count of how many times the ball hits the cash register bell. Cha-ching! Cha-ching!

In this sense, metrics are a great thing. Obviously. The more metrics you have, the better visibility you have into what’s going on. As your engineering team adjusts ramps and tunes dials, it seems only natural to want to see how those new features cause balls to go in different directions. So that’s where we are today: More metrics, more metrics, more metrics!

And it seems so simple: just drill more holes in the box and stick some counters in them. Or drop a probe in and remove every 1,000th ball for surveying. You just find someone who “has a good head for numbers” to build a dashboard or run some statistics. But working with user data isn’t like working with server load or revenue models or machine learning. And so you end up attaching numbers to decisions, calling it “data-driven”, and not actually learning how your system is and isn’t working. When you make measurement an afterthought, or let anyone with a drill bit do it, you often end up doing the easiest or quickest thing, not the best thing.
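To ground the metaphor: the “probe” is usually just sampled event logging. Here’s a minimal sketch, assuming a stream of user events; the log_event name and the event shape are hypothetical, and a real system would write to a logging pipeline rather than stdout.

```python
import hashlib

SAMPLE_RATE = 1000  # keep roughly one user in a thousand

def sampled(user_id: str) -> bool:
    """Deterministically keep ~1/SAMPLE_RATE of users by hashing the ID."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % SAMPLE_RATE == 0

def log_event(user_id: str, event: str) -> None:
    # Hypothetical sink; swap in your real logging pipeline here.
    if sampled(user_id):
        print(f"{user_id}\t{event}")

# Roughly five of these 5,000 users should land in the sample.
for i in range(5000):
    log_event(f"user-{i}", "clicked_checkout")
```

Sampling by user rather than by event is a deliberate choice: it keeps each sampled ball’s entire path through the machine, instead of scattering disconnected plinks across your logs.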

Fundamentally, there are three basic rules for picking what to measure: measure things that you can affect, measure what matters, and measure something you will trust.

Give your teams metrics they can influence. If you have a design team working on a feature, they should be able to see the immediate effect of that feature. If they can’t tell whether their product has any impact, they can’t fix it when it doesn’t. This is probably the biggest mistake made with things like A/B tests. If you run an A/B test on something like a grid layout, don’t measure its output with something way down the line like retention rate. Even if the needle does move, it’s impossible to know what happened. Instead, put some measures closer to the source: click-through rate or time spent on page.
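To make “closer to the source” concrete, here’s a minimal sketch of comparing click-through rates between two variants with a two-proportion z-test. The function name and counts are made up for illustration; a real experiment pipeline would also plan sample sizes and guard against peeking.

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Compare the click-through rates of two variants."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both variants share one CTR.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts from one day of the grid-layout experiment.
p_a, p_b, z, p = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"CTR A={p_a:.3f}, CTR B={p_b:.3f}, z={z:.2f}, p={p:.3f}")
```

The point isn’t the statistics; it’s that clicks and views are things the design team can watch move the day the variant ships, while retention won’t budge for weeks.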

At the global level, it’s just as important to measure what matters. If you care about the total number of users, measure that. If you care about revenue, track revenue. Don’t assume that one directly relates to the other. If you really want your company to change the world, measure how much you’ve terraformed.

Once you get your data, trust your data. A lot of work might have gone into a feature or product that turns out to be a net negative. There might be pressure from a VP or a lead engineer to ship the product to justify its cost, and that pressure can translate into a list of excuses for why the data is meaningless. A good data-driven organization doesn’t let that happen; a bad one justifies the error by inventing new and fancier-sounding measures. The worst situation is when you put in the work to measure something and then someone paints over the dial because they don’t like what it says.

If your organization is even a little “data-driven”, you probably have someone who thinks about these things as a side role. However, there are numerous pitfalls, and rigor falls by the wayside when things get busy. Experienced user metrics teams know all the tricks for getting the knowledge you want when you need it. They help you understand how users interact with your product and let you drive product decisions with a user focus. That lets you experiment with and optimize your product with confidence that you’ll be able to use the results.

When they do their jobs well, your product team gets dedicated insight into how your product works, and you can make something quite awesome!
