Minimum Viable Product to Prediction: A Learning Cycle
An MVP is not just a product; it is a process of getting through the build-measure-learn feedback loop with the minimum number of product iterations and experiments. Every product starts with an idea, and teams build an MVP while wrangling over which features to include and which to leave out.
However, many startups treat an MVP as a product with half (or more) of the features blended in, or simply as a quicker way to take the product to market.
This explains the perception.
So, what would an MVP process look like?
This is where you end up with a mediocre product full of flaws and bugs, which will hurt your reputation and, most importantly, your customers.
Instead, the MVP should be a strategy and process of idea generation, prototyping, presentation, data collection, analysis, and learning. The goal is to minimize the total time spent on iterations while working toward product-market fit.
Many startups approach this by asking how many customers will sign up for the free trial and treating that as enough information. But this can lead to a lot of waste: how many features do you actually need to appeal to your early adopters?
Every extra feature is a form of waste; each one delays the launch and comes with a potential loss of learning and cycle time.
So, when you build something and don't attract enough early adopters, your options boil down to these:
1. Retry: Change the features in some way, or leave out the non-essential ones, and try again.
2. Move: Push ahead in some direction with the product or feature without validating your hypothesis.
3. Fail: Quit without having learned much. Try another idea and repeat the loop.
Learning is essential to the process, unless you're scratching a clear itch of your own or building something for a quick flip.
The testing phase:
After analyzing some MVP testing campaigns, I came across three common ways to run the experiments:
1. Landing page
2. Interviews or surveys
3. Product video
While running these experiments, we will try to answer these questions:
1. What are you trying to learn with this particular MVP?
2. What data are you collecting about your experiment?
3. What determines the success or failure of this experiment?
Let's boil it down. Pretend you have an idea for a mobile app or any software product, and you decide to build a landing page to test the value proposition and learn. Landing pages are a popular form of MVP because they are the easiest thing to build and offer good ROI for the time and money spent on testing your current assumptions.
You can also test whether customers understand your idea, collect metrics on the best channels (AdWords, social media, community launches) for attracting users, and see whether users are willing to sign up.
A landing page could present a limited version of your product, a paper prototype that gets early adopters talking about your product, or the "magic algorithm" intended to solve the customer's need.
In the second case, interviews or surveys, you can write questions to test whether customers (now respondents) would be happy with the planned features. Most survey forms can be integrated with your product page, which even lets you validate whether people are willing to pay.
A good starting point is the people in your network; from there, reach out to communities. Asking questions isn't easy, and most startups hate it, but days spent doing this can save you months of development time and effort.
When we talk about product videos, it would be unfair not to mention the Dropbox MVP test. Following Lean methodology, Drew Houston (the CEO of Dropbox) released a four-minute video highlighting only the main deployable features of Dropbox, and surprisingly 70,000 people signed up within days.
This approach appeals to many startups (you won't see a single launch without an explainer video), and videos are an easy way to convey the intangible aspects of how your product will work, rather than relying on static words and pictures alone.
The measurable metrics are:
1. Number of visitors
2. Number of people who watched the video at least halfway (Wistia does this job well)
3. Click-through rate
4. Sentiment (feedback)
5. Number of sign-ups
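As a rough sketch of how these metrics roll up, the visitor, click, and sign-up counts combine into simple funnel conversion rates. The numbers below are made-up illustrations, not real campaign data:

```python
# Sketch: computing simple funnel metrics from hypothetical landing-page data.
# All figures are made-up examples for illustration only.

visitors = 2000          # unique visitors to the landing page
video_half_views = 600   # watched the explainer video at least halfway
clicks = 300             # clicked the call-to-action
signups = 90             # completed the sign-up form

click_through_rate = clicks / visitors          # CTA clicks per visitor
signup_conversion = signups / visitors          # visitors who signed up
video_engagement = video_half_views / visitors  # visitors engaged with the video

print(f"CTR:              {click_through_rate:.1%}")   # 15.0%
print(f"Sign-up rate:     {signup_conversion:.1%}")    # 4.5%
print(f"Video engagement: {video_engagement:.1%}")     # 30.0%
```

Tracking these rates per channel (AdWords vs. social media vs. community launch) tells you not just whether people sign up, but which channel is worth spending on.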
When you are done validating your assumptions, you should be able to answer: "Is this the outcome I wanted to see?" If the answer is a clear no, you can be confident there was a problem with the basic premise of the idea or with your product positioning.
In that case, you need to design other experiments to get the desired outcome. However, if the answer is yes, or better than you expected, you can continue with confidence.
Feel free to share your thoughts or say hello on Twitter: mihir shah