Metrics are useless on day one of a startup

Manuel Küblböck · Published in Manuel's musings · 3 min read · May 27, 2012

In the it-agile Startup March we tried to make decisions based solely on hard facts. No more guessing or subjective opinions, just objective numbers. This appealed a lot to the mathematician in me. Most of the other participants and I had just read Eric Ries’ book The Lean Startup. We wanted to use his process of Innovation Accounting to make decisions based on validated learning derived from metrics, i.e. automatically measured customer behavior. Hello, wonderful new world of objective decision making!

Context: We tried to solve the problem of online (distributed, asynchronous) discussions that don’t come to a decision.

Unfortunately, here is what really happened. As far as I can tell, we made four major mistakes:

  • First and foremost, we tried to use metrics too early on. Metrics are absolutely useless before you can drive a statistically relevant number of users to your site. What we retreated to after a while, and what we should have done in the first place, was to talk to potential customers. Steve Blank distinguishes the two steps Customer Discovery and Customer Validation in his Customer Development Model. We tried to skip ahead to step two, where you test your “ability to scale against a larger number of customers with another round of tests, that are larger in scale and more rigorous and quantitative”. Here metrics make sense. In contrast, he suggests getting out of the building and talking to customers in step one.
  • We didn’t define specific success / failure criteria for our experiments. Because we didn’t articulate our expectations up front, we always had a hard time interpreting the results and ended up making subjective decisions after all. Starting an experiment without specific success / failure criteria is usually a waste of time. All experiments we started this way ended up in the ‘indecisive’ column on our board. (The first sketch after this list shows what defining criteria up front could look like.)
  • We expected metrics to tell us what to do. In the best case, metrics will tell you whether something is working or not. That is, if you defined specific success / failure criteria before the experiment, as described above. But metrics can’t tell you why something did or didn’t work, or how to fix it in the latter case.
  • We didn’t understand how the data was collected. There are several great tools out there that can help you collect usage data for your site and condense it into actionable metrics. The problem is that if you don’t fully understand what exactly is measured, the measurement is not very trustworthy, especially if you are trying to compare numbers from several tools. For example, a ‘unique visitor’ is defined differently by almost every tool I know. I suggest measuring the few metrics you need yourself. Only then can, and should, you be sure you fully understand them. Usually, this can be done with a few simple DB queries (see the second sketch after this list).
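
To make the second mistake concrete, here is a minimal sketch of what defining success / failure criteria up front could have looked like for us. The hypothesis, metric name, and thresholds are all made up for illustration; the point is only that the verdict is pinned down before any data comes in:

```python
# Hypothetical experiment definition: thresholds are written down
# BEFORE the experiment starts, so the outcome can't be argued away later.
EXPERIMENT = {
    "hypothesis": "A visible deadline makes discussions reach a decision",
    "metric": "discussions_decided_within_7_days_pct",
    "success_threshold": 30.0,  # >= 30% decided within a week: validated
    "failure_threshold": 10.0,  # <= 10% decided within a week: invalidated
}

def evaluate(measured_pct: float) -> str:
    """Map a measured value onto a pre-registered verdict."""
    if measured_pct >= EXPERIMENT["success_threshold"]:
        return "validated"
    if measured_pct <= EXPERIMENT["failure_threshold"]:
        return "invalidated"
    # Anything in between is exactly the 'indecisive' column from the text.
    return "indecisive"

print(evaluate(12.5))  # -> "indecisive"
```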
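
And for the last mistake, here is the kind of simple DB query I mean. The schema is a hypothetical bare-bones page-view log, not what any particular tool uses, but it shows the benefit of measuring yourself: you decide that a ‘unique visitor’ means one distinct visitor ID per day, and the query says exactly that, with no tool-specific definition hidden underneath:

```python
import sqlite3

# Hypothetical page-view log; any web framework can append to a table like this.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE page_views (
        visitor_id TEXT NOT NULL,  -- e.g. a first-party cookie value
        url        TEXT NOT NULL,
        viewed_at  TEXT NOT NULL   -- ISO-8601 timestamp
    )
""")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?, ?)",
    [
        ("alice", "/",        "2012-05-27T09:00:00"),
        ("alice", "/pricing", "2012-05-27T09:05:00"),  # same visitor, same day
        ("bob",   "/",        "2012-05-27T11:30:00"),
        ("alice", "/",        "2012-05-28T08:00:00"),  # new day, counts again
    ],
)

# 'Unique visitors per day', defined explicitly: distinct visitor_id per date.
rows = conn.execute("""
    SELECT date(viewed_at) AS day, COUNT(DISTINCT visitor_id) AS uniques
    FROM page_views
    GROUP BY day
    ORDER BY day
""").fetchall()
print(rows)  # [('2012-05-27', 2), ('2012-05-28', 1)]
```

Swap SQLite for whatever database your site already writes to; the query stays the same.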

To be fair, all of the above mistakes are described in Eric’s book. It’s just that sometimes you have to make the mistake yourself to fully comprehend its consequences. So go ahead and see for yourself (if you must).

Photo by Chandra Marsono
Photo by Veri Ivanova
