The Hardest Part of Startup MVPs

Alex Devkar
3 min read · Feb 8, 2015

The lean startup and minimum viable product (MVP) methodology is a powerful way to get your startup off the ground. But it isn’t easy. Here are the two biggest challenges I ran into.

We’re not really testing hypotheses

The canonical example of an MVP test is putting up a mock landing page for a non-existent product to see if lots of people click ‘purchase’ or sign up for a mailing list. In theory, this tells us whether there is demand for the product before we build it.

What does this experiment really test? Sure, there is some weak signal about whether people want the product. But primarily what we’re testing is our ability to drive traffic to a landing page and entice people to give up their email addresses. It’s an important skill, and some people are very good at it irrespective of the product.

The point is that even the simplest examples of MVPs are not well-defined, scientific tests. We build an MVP and hope people love it. Invariably the feedback and data will be open to interpretation. There won’t be any statistical conclusions about whether a hypothesis is true or not.

An MVP is more accurately described as an unstructured search for feedback. It is a process that is supplemented by that feedback but driven by our intuition at every turn. There’s nothing wrong with that. We just need to recognize that we’re not going to get clear answers.

We misinterpret and overweight feedback

We build a feature, test it with a few hundred people, and end up with a set of feedback. Now what?

Interpreting feedback is more art than science. Imagine you get a mix of comments like these:

  • “Perfect.” - person who immediately becomes hooked
  • “Great idea. Love it!” - person who never uses it again
  • “I could see other people wanting this.” - person who uses it occasionally
  • “It would be better if it did X also.” - person who rarely uses it
  • “I wasn’t sure what to do.” - person who never uses it

It’s easy to see whatever you want in this feedback. If our intuition was to add X, Y and Z to improve the product, we’ll see justification for it. Is that the right next step? Maybe, maybe not.

Written and verbal feedback is misleading. Humans are bad at identifying precisely why they use or buy something. (See conjoint analysis studies, which uncover hidden reasons people pick products that the buyers themselves don’t realize.) Beware adding features because some people said they’d use the product if you did.

Usage data, week after week and month after month, from happy users is the only thing we can trust. Other kinds of feedback are prone to misinterpretation.

The MVP methodology is aimed at learning quickly. It does just that, and I wouldn’t attempt the early stages of a startup without it. The challenge is that what we “learn” at each step isn’t obvious and, in fact, might be wrong. We have to continuously reflect and trust our intuition.
