MVP examples from the real world

I get asked all the time — what are some MVP examples (or experiments) that I can get inspiration from?

What’s an MVP?

First and foremost, the biggest problem I see when developing an MVP is that the MVP gets confused with a beta.

A beta product is an early release of your product, designed so customers can play with it and the business can see what happens, get feedback, etc.

An MVP is an experiential prototype the business uses to answer a specific question.

After the first few rounds of testing, the MVP might become a beta. But I often see that people’s first instinct is to create the first draft of their product right away, when they would be better served by starting with a true MVP.

MVP Example #1 — Experiential prototyping

Levaté is one of the teams in my summer startup accelerator at the University of Oklahoma. Here’s an image they currently have on the front page of their website —

What you’re looking at is a quintessential beta product. It looks, acts and feels much like the final product will look, act and feel.

To get to this point, though, the Levaté team used human-centered design and agile production principles to run a series of experiments, or MVPs. One thing the engineering team wanted to know was whether a wheelchair user preferred to be lifted up from the seat of the wheelchair or to have the entire wheelchair lifted, passenger included.

The MVPs they used to test this question looked nothing like this product. In the first test, the team placed stacks of paper underneath the butt of the wheelchair user, lifting that person up from the seat. The team found out that the users did not like the sensation of being far from the wheels, which they tend to grip for support and balance.

In the second test, the team lifted the entire wheelchair up onto wooden pallets, to replicate the experience of being off the ground but still in the wheelchair. The users reported they preferred this feeling, but only up to a maximum height of 12 inches.

Teams using experiential prototyping to test hypotheses can move much more quickly than teams building fully functioning betas. That lets the MVP teams figure out what customers or users want faster and more reliably.

MVP Example #2 — The smoke test

One of the first questions any new product or service faces is: does anyone even want this? In other words, is it worth the time, money and effort it’ll take to develop? An auxiliary question is whether customers want the product or service in the form the business is imagining.

Most businesses try to answer this question with a Build It and They Will Come strategy. The business rationalizes a solution in a boardroom, removed from any contact with real customers, and is often shocked to find that what it imagined as a perfect solution isn’t well received in the real world.

The best solution is to use a smoke test, or pre-sale. A smoke test uses a landing-page tool to build a variety of landing pages and then directs paid web traffic at them. The goal is to see if anyone signs up to buy something. It’s very similar to Kickstarter, where businesses try to raise money for projects they want to work on.

Both are doing the same thing — verifying that there is customer demand or interest in a product before building it.

Another variant on this is the cold call or cold email. Here you make a pitch to a potential customer by phone or email and measure the response rate. One of my summer startup teams, Project Xip, used this method when they were considering a pivot away from their marketplace parking app, where private sellers of parking spots could find buyers. They couldn’t find anyone interested in using that app, so they thought about another application in the parking space: measuring the usage of university parking lots. They ran the test by sending cold emails to 20 parking directors at universities within a day’s drive of OU, and within 24 hours they had 8 responses. That’s a 40% response rate, good enough to validate that there is enough interest in the product to keep moving forward. (Note: the product had not yet been built at this point.)
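The go/no-go arithmetic behind a cold-email test is worth making explicit. Here is a minimal Python sketch using the numbers from the Project Xip story; the 10% threshold is my own illustrative assumption, not a benchmark the team used:

```python
# Cold-email smoke test tally (numbers from the Project Xip example).
emails_sent = 20
responses = 8

response_rate = responses / emails_sent  # 8 / 20 = 0.40
print(f"Response rate: {response_rate:.0%}")  # 40%

# The threshold below is a hypothetical bar; the key discipline is
# picking it BEFORE running the test, so the result can't be
# rationalized after the fact.
THRESHOLD = 0.10
if response_rate >= THRESHOLD:
    print("Signal strong enough to keep moving forward")
else:
    print("Weak signal: rethink the pitch or the market")
```

The point is less the code than the pre-commitment: decide what counts as success before you send the first email.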

A similar tactic that works well for existing businesses is to ask customers to pay for feature enhancements they want to prioritize. So all feature requests come into a central repository and the development team works on them in the priority the company deems best. But any feature can be bumped to the top of the line if the user is willing to pay for it. This system quickly highlights what features the user really cares about, and which ones are just nice to have.
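That bump-to-the-top behavior can be modeled as a simple sort key over the request backlog. This is a hypothetical sketch, not any particular company's system; the class and field names are my own:

```python
from dataclasses import dataclass, field
from itertools import count

_order = count()  # preserves first-come order among equal-priority requests

@dataclass
class FeatureRequest:
    name: str
    paid: bool = False  # has a user paid to prioritize this feature?
    seq: int = field(default_factory=lambda: next(_order))

def prioritize(requests):
    # Paid requests jump the line; ties keep submission order.
    return sorted(requests, key=lambda r: (not r.paid, r.seq))

backlog = [
    FeatureRequest("dark mode"),
    FeatureRequest("CSV export", paid=True),
    FeatureRequest("SSO login"),
]
for r in prioritize(backlog):
    print(r.name)  # CSV export first, then the unpaid requests in order
```

The sort key is the whole design: willingness to pay acts as the primary signal, and everything else stays in the order the company deems best.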

Levaté used a variation of this pre-order approach, building a landing page form on Google Drive to take beta user signups. Within a few weeks of opening the form they had already collected over 150 interested users, complete with contact info. That is a strong indication of customer interest in the product. Not quite the gold standard of Kickstarter, but pretty close.

MVP Example #3 — The paid pilot

Driven Analytics is another business going through the accelerator this summer. Their product is a physical device that plugs into a car’s dashboard to track usage data; the device also has Bluetooth, GPS and a cellular data connection. They want to sell it as a service to car dealerships to help with customer satisfaction and retention.

Driven Analytics really needs to test two things at this stage of its development: will businesses pay for any of its proposed features, and what should those features look like? The best MVP for this is the paid pilot.

For Driven Analytics, a paid pilot means getting a car dealership to pay a meaningful amount of money to test out the devices and the data they produce over a specified period of time. As it turns out, two dealerships signed up within the first week of pitching them! That answers the “will anyone pay for this” question in the affirmative. Now comes the task of figuring out exactly what the dealerships need from the device and how they would use the data it collects. Suffice it to say, within the first week of selling the device and its benefits, the value prop of the business changed pretty dramatically; actually selling a pilot gets a startup very valuable feedback.

MVP Example #4 — Little Bets

Another common question I get is how to start using lean startup or experiential prototyping today. It’s all well and good to talk about it in a classroom, but people want practical tips for translating that into real work.

I’ve borrowed the name of this section from Peter Sims’ excellent book — Little Bets. And the Mine Fellows in Tulsa, OK recently applied its principles in a project for United Way. The challenge was to bring new people and projects into the social entrepreneurial ecosystem in Tulsa. The United Way had a special pot of money called the New Venture Grant that was earmarked for exciting new initiatives. But it was only the existing nonprofits that were applying for its funding.

The fellows first came up with the idea of holding a pitch competition for the money and giving the cash away in large blocks to the best ideas. But this raises some questions: where will new entrepreneurs come from, and how will they get prepared to use the cash? And even more importantly, how will the United Way check for quality in the projects? In short, this is a very typical launch project with a lot of risk and uncertainty.

I proposed a different approach. Rather than creating an elaborate system for checking whether the new entrepreneurs were using their cash appropriately, and rather than giving away large chunks of cash all at once, the pitch competition should be stripped way down to give away small amounts of money, say $1,000 at a time. Instead of handing $100k to 2 or 3 projects, $10–20k could be given away in $1k increments. The recipients would then have 3–6 months to do something with the money; I think of that “something” as a lean startup-style experiment. After that time, the entrepreneurs would come back to the Fellows, discuss what they accomplished, and be considered for a bigger grant.

And that is the essence of little bets. Rather than placing a few big bets on some very risky propositions, it is better to start with lots of little bets, and then double down on the ones that seem to be working. This exact same methodology would work well in any enterprise. If there’s a challenging problem, full of risk and uncertainty, the best bet is to let a lot of different people and ideas have a run at it. If you keep the experiments small, low in capital and short in duration, then the risk of a huge failure goes way down. And when the results of all those tests come back, the company can use data, instead of logic, to decide where to continue investing in innovation.
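The allocation logic above can be sketched as a toy model. The dollar amounts follow the example in the text; the traction scores, team names, and cutoff are hypothetical stand-ins for what each team would report back after 3–6 months:

```python
# A toy model of the little-bets allocation described above.
SMALL_BET = 1_000
POT = 10_000

# Stage 1: many small bets instead of 2-3 big ones.
n_bets = POT // SMALL_BET  # 10 little bets

# Stage 2: stand-in "traction" scores reported after 3-6 months.
# In reality these are observed results, not numbers you invent.
traction = {"team-a": 0.9, "team-b": 0.2, "team-c": 0.7, "team-d": 0.1}

# Stage 3: double down only on the bets that seem to be working.
CUTOFF = 0.6  # hypothetical bar for a follow-on grant
winners = [team for team, score in traction.items() if score >= CUTOFF]
print(n_bets, winners)
```

The structure matters more than the numbers: cheap parallel experiments first, then concentrated follow-on investment driven by the data that comes back.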

Originally published at on July 7, 2014.