Preparing To Sell Your Minimum Viable Product

Joe Procopio
Jun 13

Launching a successful Minimum Viable Product is only the beginning. Selling and scaling an MVP is harder and riskier.

If our product doesn’t emerge from MVP and quickly generate enough revenue to push us into the growth phase, our startup will stagnate. However, there are several challenges we’ll face right out of the MVP gate.

  1. The company could crash under the weight of laborious processes that aren’t automated quickly enough to meet new demand, especially unexpected demand.
  2. The product could fail under the load of unanticipated use cases from all these new customers.
  3. The MVP launch likely produced a number of false positives. Just because we went from selling one unit to 100 units doesn’t mean we can take the same path from 100 units to 1,000,000 units.
  4. Oh, and finally, margin will immediately evaporate once we start to scale.

Let’s tackle these one at a time.

The conventional wisdom in startups is to come out of the MVP launch strong and immediately put all our efforts into accelerating sales. Everyone will want this, including (and maybe especially) our investors. It’s hard to argue against adding zeroes to the top line.

But that’s one of the primary reasons startups blow it at the growth stage. We can get away with selling a fair amount of a product that requires a bunch of scrambling and duct tape behind the scenes. A fair amount. And the lure of revenue drives us to push forward until something breaks.

The longer we wait on stabilizing our MVP, the greater the chance that a larger number of things will break and impact a growing number of customers. The losses will pile up quickly.

So we need to divide and conquer. While we fire the sales rocket, we’ll also need to automate almost every process we left manual.

We should dedicate a team to this, even if that team is only one person. And that team should have only one task: Making the processes around the core feature set more robust. They shouldn’t be adding features or tweaking features or even fixing bugs.

Then, when the product is stable and we can serve a thousand customers as easily as we served one customer, we can redirect all our efforts into chasing a million customers.

If we did our MVP right, the product is going to struggle or even fail under certain use cases at the extremes of what we believe our customers want. Struggling is OK at this point; failure is not.

For use cases where the product struggles, we’ll be applying band-aids and apologizing and hoping like hell that the automation we’re adding as we stabilize makes those problems go away.

But we’ll need a separate effort, probably another dedicated team, to determine where the failures are happening and limit the usage at those extremes. There are a few ways to accomplish this:

Limit the availability of the feature: With this solution, we put rules in place to limit usage at the point just before the feature starts to fail. We can limit by time period (e.g. only during business hours), by frequency (e.g. only once a day per user), by location (e.g. only within 50 miles), or by some other dimension that will cap the product’s usage.

This isn’t my favorite method as it can create a lot of friction and result in a poor customer experience.
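Still, for concreteness, here’s a minimal sketch of what an availability gate might look like. The function name, the in-memory log, and the specific limits are all hypothetical; the real thresholds should sit just below the point where the feature starts to fail.

```python
from datetime import datetime, timedelta

# Hypothetical in-memory usage log (user_id -> timestamps).
# A real product would keep this in a database or cache.
usage_log: dict[str, list[datetime]] = {}

BUSINESS_HOURS = range(9, 17)  # assumed 9am-5pm window
MAX_USES_PER_DAY = 1           # assumed once-a-day-per-user cap

def can_use_feature(user_id: str, now: datetime | None = None) -> bool:
    """Gate a fragile feature behind simple availability rules."""
    now = now or datetime.now()

    # Rule 1: limit by time period (only during business hours).
    if now.hour not in BUSINESS_HOURS:
        return False

    # Rule 2: limit by frequency (only once a day per user).
    recent = [t for t in usage_log.get(user_id, [])
              if now - t < timedelta(days=1)]
    if len(recent) >= MAX_USES_PER_DAY:
        return False

    usage_log.setdefault(user_id, []).append(now)
    return True
```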

Limit the depth of the feature: Here, we look for ways to reduce the load on the product as we approach the extremes. In some cases, this is what we’d term throttling — like how you get blazing fast mobile data speeds until you reach a limit, then things start to slow down.

While throttling connotes a negative response, it’s actually not a bad method to control usage at the extremes. Limiting depth allows us to serve the vast majority of our customers exactly the way they expect, and still handle the outliers. We just need to make sure we’re transparent about where those depth limits are and what happens when they get hit.
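Here’s one way that kind of depth limit might look in code; the soft limit and the throttled page size are invented for illustration.

```python
SOFT_LIMIT = 100          # assumed daily requests per user at full depth
THROTTLED_PAGE_SIZE = 10  # assumed reduced result size past the limit

def results_to_serve(requests_today: int, requested: int) -> int:
    """Serve full depth for most users; cap depth for outliers."""
    if requests_today <= SOFT_LIMIT:
        return requested                        # the vast majority: full service
    return min(requested, THROTTLED_PAGE_SIZE)  # outliers: degraded, not denied
```

The design point is that the outlier still gets served, just more shallowly, which is what separates throttling from a hard availability rule.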

Upcharge for the feature: Exactly what it sounds like, this is essentially an artificial limit on both the availability and the depth of the feature, without adding rules or throttling.

This is obviously the most elegant solution, but we risk losing a certain amount of business by pricing customers out.
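For concreteness, here’s a hypothetical sketch of an upcharge gate, with invented tiers and prices. Nothing is denied or throttled; heavy usage just carries a cost.

```python
# Hypothetical tiers and prices, purely for illustration.
TIERS = {
    "basic": {"included_per_day": 10,  "price_per_extra": 0.50},
    "pro":   {"included_per_day": 500, "price_per_extra": 0.10},
}

def charge_for_use(tier: str, uses_today: int) -> float:
    """Free inside the tier's daily allowance; priced beyond it."""
    plan = TIERS[tier]
    if uses_today < plan["included_per_day"]:
        return 0.0
    return plan["price_per_extra"]
```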

Kill the feature or part of it: If none of those options are palatable or possible, we’ll need to shake out the feature elements that are too early, used by too few customers, or causing too many problems to be worth keeping.

Here’s a bit of a secret. We don’t learn much from our first wave of customers, but we can learn a lot from our second wave of customers. This is because of false positives: distortions in the data that we can’t see because the data is too new and the sample size is too small.

What we don’t want to do is take this first set of customer actions as gospel, plan around it, and then get all disappointed when the results don’t repeat themselves. I’ve done this more times than I can count.

On top of that, there’s also something I call early adopter noise in that first wave of data. As an example: We’ve all given false positives as customers. We get a cool new thing soon after it hits the market, and then we use it a lot until we realize it isn’t as great as we thought it was. Then we put it down and we never use it again.

This will happen with our product too.

So if we want an honest assessment of where our customers find true value in our product, we need to give them time to break the product in, get into a rhythm with it, if you will. Then we’ll get the data we need from the second wave.
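One way to see that second-wave signal is a simple cohort comparison. The retention figures below are invented, but they show the shape to look for: the first wave spikes and drops off, the second wave settles into a rhythm.

```python
# Hypothetical cohort data: share of each wave active in week 1 vs. week 8.
cohorts = {
    "first_wave":  {"week_1": 0.90, "week_8": 0.25},  # novelty spike, then drop-off
    "second_wave": {"week_1": 0.60, "week_8": 0.45},  # steadier, more honest signal
}

for name, usage in cohorts.items():
    retention = usage["week_8"] / usage["week_1"]
    print(f"{name}: {retention:.0%} of week-1 users still active in week 8")
```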

Once we have that data, the strategy looks a lot like it does as we’re stabilizing the product. We learn from that second wave of customers, and we apply those learnings as we accelerate sales to a million customers.

Even as we start to automate and cure some of the headaches from the MVP, we’ll likely start to see diminishing returns from that automation until we get a critical mass of customers using the product. It seems like a chicken-and-egg problem: You need more customers to financially support the company to keep improving a product to support more customers.

The answers are in the margins.

Use the margin, the difference between the additional revenue from new customers and the additional cost of delivering the now-more-robust product to them, as the guide to determine which lever we need to pull: accelerating sales or stabilizing the product.

But be aware that the math is a little tricky.

It’s not as simple as “if margins rise, do this; if they fall, do that.” We need to know why the margins are rising or falling, and we can figure this out as we bring more automation online and as we onboard new waves of customers. We can also make some rudimentary guesses as to how quickly a fix will increase those margins. In other words, if the cost of the automation now outweighs the incremental increase in margin, we delay the fix.
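A back-of-the-envelope version of that rule, with made-up numbers: compute the incremental margin a fix is expected to recover, and compare it to what the fix costs over some horizon.

```python
def incremental_margin(added_revenue: float, added_cost: float) -> float:
    """The margin above: additional revenue from new customers minus
    the additional cost of delivering the product to them."""
    return added_revenue - added_cost

def worth_fixing_now(fix_cost: float, monthly_margin_gain: float,
                     horizon_months: int = 12) -> bool:
    """Delay the fix if its cost outweighs the margin it recovers."""
    return monthly_margin_gain * horizon_months > fix_cost

# Invented numbers: a fix expected to add $2,000/month of revenue capacity
# while adding $500/month of delivery cost, against a $12,000 build.
gain = incremental_margin(added_revenue=2_000, added_cost=500)  # $1,500/month
print(worth_fixing_now(fix_cost=12_000, monthly_margin_gain=gain))
# True: $18,000 recovered over a year outweighs the $12,000 cost
```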

Margin expansion and contraction are a natural part of growth, because we’re not automating our product processes to go from serving one customer to 100 customers or even a million customers. We’re automating to go from one to infinity (or close).

So we need to be able to financially survive the period where the margin is catching up to the spend. And what we ultimately want is for those margins to stabilize as we stabilize the product.

That last part is actually the overarching goal of the whole preparation process. With all of these measures in place, we’ll be taking a stable product and a stable model to the market. This will allow us to plan and execute our way through the growth phase instead of guessing and stumbling. If we do that, then we’ve got a much better shot at success.
