The Myth of Minimum Viable Product


Let’s clear up what a minimum viable product (what the insiders call an MVP) should and shouldn’t be.

To give a little context: MVP didn’t originate in the startup world, but it has gained traction there, allowing small companies with limited resources to get to market quickly and prove out product ideas.

The goal of MVP is to get a barebones version of your product out to market for real customers to buy, use, and break. Then you clean up the failures, fix the weak points, and most importantly, figure out where your customers find value and where they don’t.

Make no mistake, I’m a big believer in the MVP concept. In fact, I’d call myself a die-hard zealot. That said, I’ve definitely seen the concept abused, especially over the last few years, and I’m noticing the definition of MVP has been altered slightly. It’s becoming a means to an end. And that’s dangerous.

An MVP is indeed meant to be the quickest version of the product you can put out to market. But when I use the word “quickest” here, that reduction in time to market is achieved by a reduction in feature set, not a reduction in quality.

Lately, I’m seeing the pendulum swing away from quality too often.

To illustrate what an MVP should look like, let’s take a semi-fictional example that’s easy to follow. If we were building Uber from scratch, before any kind of ride-hailing app existed, it would be a massive and prohibitively expensive undertaking to build the technical and operational infrastructure needed to support a ride-hailing app service. How much money? Don’t know. Never been done. But a lot.

Assume we will need tens of millions of dollars to prove Uber will work as intended. Our startup is not going to get anywhere near that kind of seed money unless we can prove the model works and people will eventually turn Uber into a verb. It’s a chicken-and-egg problem.

Unless we turn to an MVP.

Now, I don’t know if this is how it was actually done. I wasn’t there. But for the purposes of our example, let’s say it went down like this:

The MVP might be a simple app with a big button to hail a ride. The customer’s location is pinpointed automatically with their mobile device. Then maybe we send an email with all the ride and rider details to a 24/7 support team, maybe the founding team, who calls or texts a group of test drivers (friends) and arranges the ride. When the ride is complete, maybe the payment is taken with a manual card swipe.

This is not the frictionless, anytime/anywhere service that made Uber disruptive (and we’re also assuming the hundreds of small-to-large issues that arise from that use case are yet to be discovered). The point is this would be how we strip Uber down to its MVP, and that stripping down is done by de-automating any part of the process that isn’t necessary to prove out the idea.

In this fake Uber example, automatically pinpointing and acting on the customer’s location is likely the critical test case. If we don’t have that, our version of Uber doesn’t deserve to exist. We can manually hire the drivers and bootstrap how we weave them into the process. We can be wrong when determining the price before accepting the ride and eat the difference. We can overpay to accept payment in the app.
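The de-automation described above can be sketched in a few lines. This is a hypothetical illustration, not how Uber was actually built: the names (`pinpoint_location`, `request_ride`, the support email address) are invented for the example. The only automated step is the critical one, reading the device’s location; everything downstream is deliberately reduced to an email a human acts on.

```python
from dataclasses import dataclass


@dataclass
class RideRequest:
    rider_name: str
    lat: float
    lon: float


def pinpoint_location(device_fix: tuple) -> tuple:
    # The one automated, make-or-break piece of the MVP:
    # trust the device's GPS fix as-is.
    return device_fix


def request_ride(rider_name: str, device_fix: tuple) -> str:
    # Everything past this point is deliberately manual ("de-automated"):
    # a human reads this email, texts a driver, and swipes the card later.
    lat, lon = pinpoint_location(device_fix)
    request = RideRequest(rider_name, lat, lon)
    return (
        f"To: support@example-mvp.test\n"
        f"Subject: Ride needed for {request.rider_name}\n"
        f"Body: Pickup at ({request.lat}, {request.lon}). "
        f"Text a driver and arrange the ride."
    )


print(request_ride("Alice", (35.78, -78.64)))
```

The shape of the code mirrors the argument: automating location is cheap and essential, so it stays; dispatch and payment are expensive to automate and unnecessary for proving the idea, so they become a person reading an inbox.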

That’s why we do that MVP — to test the viability of the most important aspects of the thing that’s going to make the product worth the time and resources needed to get to the billion-dollar story. Once we have data that says: “This works. People will use it. It’s cost-effective. We can scale without falling over,” we can harden those important aspects, then raise our money and go for broke. So to speak.

So that’s an MVP. Here’s where the definition is getting sketchy.

Too many times today we see an MVP released to market that tries to do too much at once and does it all poorly. This makes many messes, including but not limited to:

  1. We release garbage to market and immediately create a lousy customer experience. Customers can live with a product that doesn’t quite do everything they want. They will not accept a product that does the most important thing poorly or not at all. If we don’t have the critical use case right and robust, we shouldn’t release the MVP.
  2. We sacrifice important things like security, privacy, even physical safety thinking we don’t need that yet. Beyond the moral and ethical implications, it’s something that will kill the product and maybe the company if the exposure is too great. You can never put your customers at risk, no matter how small that risk may seem. It’s the equivalent of never starting a company without a lawyer, because if you’re doing startup right, you have no idea what kind of trouble lies ahead until you’re neck deep in it.
  3. We defeat the purpose of releasing the MVP in the first place. At most, our release should be A/B testing a few features, one we know the customers want and some we think the customers want. If our customers are ignoring any feature, we need to know why. And if why is “it didn’t work,” then the test becomes moot. It also becomes costly as we rush fixes to market to retest our hypothesis.

In our Uber example, all those things that weren’t automated had to work correctly, every time. We need to make sure that no matter what the guts look like to make the ride happen, the customer gets to where they need to go quickly, safely, and inexpensively, with little headache for them, which means a lot of headache for us.

Now, you can question whether or not Uber grew right, especially in the areas of driver and rider safety, driver economics, and industry and government backlash. I’m not going to champion Uber’s method, but let me add some nuance. While we can’t release crap to market and expect to learn anything, we can’t be 100% bulletproof either. Our MVP should break, especially at the limits of performance or edge cases. The more time we spend tinkering and quality-assurancing and alpha testing and layering and lobbying, the less time we have to learn and iterate.

And, to be honest, the more time we give to Lyft.

I’m not saying it’s right. I’m saying it’s the way it is.

Thankfully, this is where data and best practices come in. We should never be completely comfortable releasing a product or a version to market. We should never be surprised when something goes terribly wrong, and we should never be more than a few clicks away from finding the source and stopping the bleeding.

Being able to change and tweak on the fly is the hallmark of a great early startup. Customers might even tell us we’re doing it 90% wrong and 10% right, but if we can capitalize on that 10%, then we might have a winner. Our MVP should have the hooks and the model for us to be able to make those changes quickly and easily.

This means an MVP should never lack a feedback loop, an easily modifiable architecture, and a kill switch.
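A kill switch and a feedback loop can both be as simple as remotely togglable feature flags. The sketch below is a minimal, hypothetical version: the flag names and the JSON snapshot stand in for whatever config service a real team would use, and the key design choice is that an unknown flag defaults to off, so you are always one config change away from stopping the bleeding.

```python
import json


def load_flags(raw_json: str) -> dict:
    # In production this would be fetched from a remote config service;
    # here we parse a local JSON snapshot standing in for it.
    return json.loads(raw_json)


def is_enabled(flags: dict, feature: str) -> bool:
    # Default to OFF: a missing or unknown flag behaves like a kill switch,
    # so flipping one value (or deleting it) disables the feature everywhere.
    return bool(flags.get(feature, False))


# Hypothetical flags for the fake Uber MVP.
snapshot = json.dumps({"auto_pricing": True, "in_app_payment": False})
flags = load_flags(snapshot)

if is_enabled(flags, "auto_pricing"):
    print("quoting price automatically")
else:
    print("falling back to manual pricing")
```

The same mechanism doubles as the A/B hook from earlier: turning a flag on for some riders and off for others is the cheapest way to learn which features customers actually use.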

So let’s put it all together. An MVP needs a small feature set, high quality, low automation, and maximum flexibility. Once these things come together, the only thing stopping us from launch is fear itself.

And in startup, fear is good.