Cheap, Fast, Measurable, Dead

What Lean Enterprises get wrong about MVPs

James O'Brien
13 Antipatterns
5 min read · Nov 8, 2017


Minimum Viable Product is becoming a discredited term. If you’re familiar with the Gartner Hype Cycle you’ll know that we’re in, or slipping into, the Trough of Disillusionment. Organisations are realising that rebranding their shonky alphas and cheaped-out v1 releases as MVPs doesn’t magically confer the customer engagement they were promised in Lean Enterprise. They’ve discovered it’s harder than it looks to convince a kid their Christmas present of socks is a Minimum Viable Superhero Outfit.

Design Thinking and UX specialists have tried to push back against this with the Minimum Delightful Product, but for me that’s solving a related problem, not the causal one. MVPs as I have all too commonly seen them practised have a fatal flaw: there is too much invested in them for failure to be an option. By all means, your MVP should be given every chance to prove the hypothesis. But if you’re using an MVP in the build-measure-learn cycle and you lack the four key attributes that allow for the potential of learning from failure, not only will you never learn from it, you’ll teach yourself things that aren’t really true.

Those four key attributes? Cheap. Fast. Measurable. Dead.

Above all, your MVP needs to be cheap. Ideally, cheap enough that it can come from Business As Usual and discretionary spending. Why? The moment you need someone to sign off on a budget item for it, that person has accountability for it. They have a stake in its success, and most businesses still don’t count “it failed but we know why” as the success it is. The accountable person is now in direct conflict with your MVP aims and will be looking for something they can report back as proof of their competence, even if it’s wildly at variance with the experiment’s goals.

The accountable person is now in direct conflict with your MVP aims

This is less realistic for some organisations than others, I recognise. But if you do have to get authorisation, do whatever you can to keep the cost low, keep the accountability light, and get the budget holder’s buy-in on the prospect of learning rather than profiting. (There is also a whole rant that could go here about innovation arising from autonomy, and how businesses should fund it.)

Next, your MVP needs to be fast. Not least because time is money (see the previous point), but also because the longer your experiment takes to prep and run, the more likely you’ll only ever run one iteration of it. Businesses run to the calendar; if you take the best part of a financial year creating a business case for your MVP then the business will expect to see returns in the following year (and along the way you’ll have overthought your hypothesis, bloated the spec, ballooned out the success cases and fallen right back into cargo-cult MVPing).

The longer your experiment takes to prep and run, the more likely you’ll only ever run one iteration of it

The beauty of Build-Measure-Learn is iteration. Small learnings build on other small learnings to make a rock-solid base of insight. Obviously the faster you can iterate within a given time frame, the more solid that base becomes. Often what hurts the speed of the cycle in larger orgs is the perceived need to deploy on legacy systems, pass full-fat compliance regimes, and protect the brand. But we’ll address that complication shortly.

All the speed in the world teaches you nothing if the results aren’t measurable. I don’t just mean that the analytics tools are present. Unless you have pre-agreed success metrics, and you stick to them, you’re running a risk.

In Communicating The UX Vision, Martina and I talk about the difference between extrinsic motivation (the need for your budget holder to report success up the chain) and intrinsic motivation (the desire that we all have to do a good job). In the flurry of data that even a relatively simple MVP will generate, it’s trivial to cherry-pick a figure that shows some positive side to the experiment, and that validation pings our intrinsic motivation. But it’s only part of the learning. What if it tempts you into growth-hacking that measure alone? What if reporting it offhandedly to the business causes them to latch on to it?

It’s trivial to cherry-pick a figure that shows some positive side to the experiment

There are any number of reasons for people to find success which is not really there. Agreeing on what success looks like before you begin is the only way to keep everyone on the straight and narrow.
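
If it helps to see how little “pre-agreed” has to mean in practice, here is a minimal sketch in Python. The metric names, thresholds and figures are all invented for illustration; the point is that the criteria are written down and frozen before launch, and the experiment is judged against all of them together or not at all:

```python
# Minimal sketch of pre-registered success criteria, frozen before launch.
# Every metric name and threshold here is illustrative, not prescriptive.

from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    metric: str       # measure agreed with stakeholders before the experiment
    threshold: float  # value it must reach for the hypothesis to hold

# Agreed BEFORE launch; immutable afterwards.
SUCCESS_CRITERIA = [
    Criterion("signup_rate", 0.05),    # e.g. 5% of visitors click sign-up
    Criterion("return_visits", 0.10),  # e.g. 10% come back within a week
]

def hypothesis_holds(results: dict[str, float]) -> bool:
    """Succeed only if EVERY pre-agreed criterion is met.
    Any other figure in `results` is context, never success."""
    return all(results.get(c.metric, 0.0) >= c.threshold
               for c in SUCCESS_CRITERIA)

# After the experiment, feed in the measured figures.
print(hypothesis_holds({"signup_rate": 0.07,
                        "return_visits": 0.04,
                        "time_on_page": 210.0}))  # False
```

With the criteria fixed up front, a flattering stray figure like time_on_page can’t be quietly promoted to “success” after the fact.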

Finally, when the insight’s been extracted, your MVP needs to be dead. Kill it and walk on to the next step without remorse. This may sound like a waste of money and effort, but if it feels like a waste then you screwed up on the cheap and fast bits. You didn’t know when you started this experiment whether it would succeed or fail, and if it’s robust enough to be carried into production then you didn’t truly allow for the potential of failure. You got lucky this time, is all.

If it feels like a waste then you screwed up on the cheap and fast bits

If you build to kill, you remove your dependencies on legacy org technology and your exposure to brand risk. You can spin up a cheap site, a 404 page, a mailing-list signup or a human-powered “chatbot” under a placeholder brand, just for the sake of getting to the answers.
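
To give a sense of how small “build to kill” can be, here is a hypothetical fake-door sketch using Flask. The placeholder brand, copy and routes are all invented; the entire “product” is a pitch page whose sign-up button simply logs the click, because the click itself is the measurement:

```python
# Hypothetical fake-door test: a placeholder-brand landing page whose
# sign-up "door" leads nowhere. Brand, copy and routes are invented.

import logging
from flask import Flask

app = Flask(__name__)
logging.basicConfig(filename="interest.log", level=logging.INFO)

@app.route("/")
def landing():
    # The pitch for the product we haven't built yet.
    return ('<h1>Sockdrop</h1>'
            '<p>Fresh socks through your letterbox, every Monday.</p>'
            '<a href="/signup">Sign me up</a>')

@app.route("/signup")
def signup():
    # Nothing behind the door; logging the click IS the experiment.
    logging.info("signup clicked")
    return "<p>We're not quite ready yet. Check back soon!</p>"

if __name__ == "__main__":
    app.run(port=8080)
```

When the experiment is done, the log file is the insight and the whole site can be deleted without ceremony.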

How did we get here? MVPs have evolved from the experiments they should be, into first releases sometimes even with their own Profit & Loss accounting. I think it’s an error of wording. A great deal of attention is directed to the word Viable in MVP, because it’s always fun to argue what makes a product viable (when there are no pin heads around to count angels on). I think the place it’s gone wrong is in a different word. Product. Product is a special term in most companies, imbued already with meaning, structures, and procedures. MVP has been viewed through the lens that considers a Product to have marketing, operations, a P&L and so on. Minimum Delightful Product is a laudable attempt to deliver the best customer experience from this over-extended definition, but in my opinion it’s solving for the wrong problem.

What if, instead, we considered what it means to make a Minimum Viable Proof? With this terminology, the emphasis is squarely back on the cheapest way to prove or disprove the hypothesis, with none of the loaded language to bloat the concept. Without the need to build a whole viable Product we can drop all the end-to-end complexity and start thinking about cheap, fast, killable techniques like 404 tests, InVision prototypes and Wizard-of-Oz scenarios again.

When you come to define your next MVP, don’t ask yourself “what’s the least my customers will put up with?” Ask instead, “how can I prove this hypothesis in the cheapest, fastest, most measurable, most killable way?” Then use the resulting insight to ensure your first release does contain the utility and delight that make for happy customers.

James O'Brien

Freelance UX, Product and Culture consultant operating out of London.