Analyze This!

Product development work should be data-driven — or, to be more precise, data-informed. We live in an age of science, which means that we should use data to resolve arguments, even if we don’t go as far as to make love with it.

But there are two fundamental data-driven approaches to product development. The first is experimentation that takes place after development, typically in the form of A/B tests. The second is analysis, which takes place before development. How do we decide which approach is more appropriate?

Optimize for the expected return on investment.

In a talk on web science that I gave a few years ago, I compared a past where the cost of experiments was high, so you had to be careful in choosing which ones to perform, with a present where experiments are cheap, so you should perform as many of them as you can. In this oversimplified view, there’s no need to analyze when you can experiment.

But reality isn’t so simple. Not all experiments are cheap, and even the cost of cheap experiments can add up. Sometimes analysis is much cheaper than experimentation. Deciding whether to turn to analysis or experimentation should be based on optimizing for the expected return on investment.
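
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every number and variable name below (analysis_cost, p_analysis_catches_dud, and so on) is a hypothetical placeholder invented for illustration, and the model deliberately ignores real-world complications such as an analysis mistakenly killing a good idea.

```python
# Back-of-the-envelope comparison of "experiment directly" vs. "analyze first".
# Every number below is a hypothetical placeholder; plug in your own estimates.

experiment_cost = 10.0          # person-days to build, ship, and run an A/B test
analysis_cost = 1.0             # person-days for a quick up-front analysis
p_idea_is_good = 0.3            # prior probability that the idea actually wins
value_if_shipped = 50.0         # payoff, in person-day equivalents, if it wins
p_analysis_catches_dud = 0.6    # chance the analysis rules out a losing idea

# Option 1: skip analysis and go straight to the experiment.
ev_experiment_only = p_idea_is_good * value_if_shipped - experiment_cost

# Option 2: analyze first, and only run the experiment if the analysis
# doesn't rule the idea out. (Simplification: we assume the analysis never
# kills a genuinely good idea.)
p_proceed_to_experiment = (
    p_idea_is_good + (1 - p_idea_is_good) * (1 - p_analysis_catches_dud)
)
ev_analyze_first = (
    p_idea_is_good * value_if_shipped
    - analysis_cost
    - p_proceed_to_experiment * experiment_cost
)

print(f"Expected value, experiment only: {ev_experiment_only:.1f} person-days")
print(f"Expected value, analyze first:   {ev_analyze_first:.1f} person-days")
```

With these particular numbers, analyzing first comes out ahead; make the experiment cheap enough, or the analysis weak enough, and the comparison flips, which is exactly why it’s worth doing the arithmetic rather than following a blanket rule.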

You can’t afford not to do analysis.

I’ve seen lots of organizations where everyone is so busy developing products that no one can make time for analysis. The last thing a product manager wants to do is allocate time to an activity that doesn’t lead to anything being shipped.

There’s nothing wrong with everyone being busy, but neglecting analysis leads to a lot of wasted effort. Specifically, a lack of analysis leads to many failed experiments: A/B tests that show negative or neutral results.

We shouldn’t expect all A/B tests to show positive results — if they did, that would mean we’re being too conservative in what we choose to test. Still, an A/B test that shows a negative or neutral result should make us ask ourselves: could we have anticipated this result through an analysis that would have cost us less than we invested in product development and experimentation? If so, why didn’t we?

Neglecting analysis makes us more susceptible to cognitive biases.

Beyond the objective risks of over-investing in product development efforts that lead to failure, we have to confront a more insidious risk: the sunk-cost fallacy. The more effort we invest in something, the less willing we are to write off that investment.

Once we’ve invested in developing something to the point of A/B testing it, we are highly motivated to see the test succeed. So motivated that we may be tempted to cut corners in our testing methodology. There are lots of ways to convince ourselves that a test is positive, if we are determined to do so. Or we may keep making tweaks and running additional tests, convinced that success is just around the corner.

In contrast, we’ve invested less at the analysis stage, and thus are less vulnerable to cognitive biases. Analysis is the best time to rule out efforts that are destined to fail, since the objective and psychological costs are minimal.

Opportunity analysis: not just for solutions, but also for problems.

The other advantage of analysis over experimentation is that it allows us to explore whether a problem is even worth solving, before we invest in developing solutions for it. Sizing a problem offers us a best-case upper bound on the impact of any potential solution. If that bound isn’t compelling, we can decide to prioritize other problems.
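
To show what such sizing can look like, here is a hypothetical example in the same back-of-the-envelope spirit: a checkout-abandonment problem whose segment names and figures are entirely invented for illustration.

```python
# Hypothetical opportunity sizing: a best-case upper bound on the impact of
# fixing a checkout-abandonment problem. Every figure is invented; substitute
# your own measurements before trusting the answer.

monthly_visitors = 1_000_000
frac_reaching_checkout = 0.10        # share of visitors who reach checkout
checkout_abandon_rate = 0.40         # share of those who then abandon
frac_abandonment_addressable = 0.25  # best case: share our fix could remove
value_per_recovered_order = 30.0     # average order value, in dollars

best_case_monthly_impact = (
    monthly_visitors
    * frac_reaching_checkout
    * checkout_abandon_rate
    * frac_abandonment_addressable
    * value_per_recovered_order
)

print(f"Best-case upper bound: ${best_case_monthly_impact:,.0f} per month")
```

If even this deliberately optimistic bound doesn’t cover the cost of building and testing a fix, the problem can be deprioritized before anyone writes production code.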

In general, opportunity analysis should drive the product roadmap. Lots of people — product managers, engineers, customers, CEOs — are full of ideas about what’s wrong with the product, as well as ideas about how to fix it. Some of these may be exactly the problems we should work on! But we should always start with an opportunity analysis to size the problems, establishing preliminary — and optimistic — estimates for return on investment.

But don’t overanalyze.

Hopefully you’re convinced by now that analysis is important. But don’t overdo it! It’s easy to get so caught up in analysis that you end up doing as much work as you would have by going straight to experimentation, or even more.

As a rule of thumb, you should be willing to invest a day in analysis before deciding to invest a week in development and experimentation. If analysis seems as expensive as experimentation, you’re doing it wrong. Remember that analysis is a heuristic, so it’s okay to simplify your models. Save your rigor for when you run the A/B test.

Indeed, keeping analysis lightweight is a good way to avoid the excuse that you don’t have enough resources to invest in it.

Analysis and experimentation: two great tastes that taste great together!

Hopefully I’ve succeeded in convincing you of the value of analysis. But none of what I’ve said takes away from the importance of experimentation. Analysis should come first, but experimentation should always be the final arbiter to determine whether a change is actually an improvement.

So, go forth and be data-driven — combining analysis and experimentation.