The Ultimate Step-By-Step Guide to Validating Your Startup Idea, Part Two
Just getting started? Read Part One first to get the full framework!
Now that we understand how to find our problem-person fit, we can start designing experiments to test our hypotheses. At Launch Academy, we define a startup as a human institution that conducts a series of experiments in search of a repeatable and scalable business model. In this guide, you will learn how to design a lean startup experiment and create a test plan.
So why do we experiment?
- To minimize the risk of investing too many resources into an unvalidated business model
- To gain insight into the market, the customer, and the problem so we can come up with potential solutions
Remember, your solution should be designed based on your discovery and learning. The results from your experiment will tell you what your customers are looking for.
Ash Maurya once said, “The true product of an entrepreneur is not the solution, but a working business model. The real job of an entrepreneur is to systematically de-risk that business model over time.”
And we do that by conducting a series of experiments.
Step 4: Develop a product hypothesis
To put it simply, a hypothesis is an assumption that can be clearly proven wrong. For example:
“I believe restaurant owners will use our lightweight video resume app at least twice a month to hire servers quickly and they will convert to paid subscriptions after a 30-day unpaid trial because our product helps them hire 50% faster.”
This is called a product hypothesis because you are testing your assumption about whether your intended audience will use your product. Let’s dissect our product hypothesis into a simpler formula.
Product Hypothesis = I believe [target market] will [do this repeatable action/use this solution], which will [result in expected measurable outcome] for [this reason]
A good product hypothesis:
- is falsifiable, which means it can clearly be proven wrong
- is written down
- contains metrics that can be tested and measured
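The formula above can be sketched as a small data structure so that each assumption is written down and testable. This is an illustrative sketch only; the class and field names are our own, not part of the framework.

```python
from dataclasses import dataclass


@dataclass
class ProductHypothesis:
    """I believe [target market] will [do this repeatable action],
    which will [result in a measurable outcome] for [this reason]."""
    target_market: str
    repeatable_action: str
    expected_outcome: str  # must contain a metric you can measure
    reason: str

    def statement(self) -> str:
        # Render the hypothesis in the article's formula
        return (f"I believe {self.target_market} will {self.repeatable_action}, "
                f"which will {self.expected_outcome} because {self.reason}.")


# The video resume app example from above:
hypothesis = ProductHypothesis(
    target_market="restaurant owners",
    repeatable_action="use our video resume app at least twice a month",
    expected_outcome="convert them to paid subscriptions after a 30-day trial",
    reason="our product helps them hire 50% faster",
)
print(hypothesis.statement())
```

Writing the hypothesis down as structured fields makes it harder to argue from opinion: every disagreement maps to a field you can test.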
Takeaway: Your opinion is a hypothesis for research. Don’t waste time arguing with your co-founder(s) on the basis of opinions, because you’re only trading assumptions. Agree on the hypothesis you are willing to test, then collect meaningful data to prove it right or wrong.
Step 5: Design your minimum viable product (MVP) or test
In every experiment, there is a test, or in our case a minimum viable product (MVP).
“The minimum viable product or MVP is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort”- Eric Ries, The Lean Startup
The key thing to highlight here is validated learning. Validated learning is what propels you forward with your decision making. After each experiment, you should have a good idea of what your next step is.
Remember, MVP is a strategy, and not a one-time thing. You can have multiple tests or MVPs to get to the validated learning you need to proceed to the next experiment.
There are two categories of tests: low fidelity and high fidelity. Low fidelity tests are typically low cost in terms of both time and money and are relatively easy to set up. High fidelity tests require more work and time investment. We won’t go over every possible test here, but we will cover four key MVP strategies.
Concierge MVP
A concierge MVP is when you deliver your product or service as a highly customized service to selected customers. Say you want to deliver bookkeeping services but you don’t have a product yet. You could hire a few bookkeepers to perform the services for your clients and, through this experience, learn how to productize your service. This strategy helps you identify which parts of your service can be automated through technology.
Wizard of Oz MVP
The Wizard of Oz MVP refers to the illusion you give your customers that they’re experiencing the full benefits of a complete product, while in reality you are manually making the magic work behind the curtain. When Zappos first started, the founder went to local shoe shops to take photos of shoes so he could put them online. When orders came in, he went back to the shoe shop and purchased the pair that was ordered, then handled payments, shipping, and returns manually. His customers, however, experienced a flawless e-commerce experience: they came to his site, saw the pair of shoes they wanted, and got it delivered to them. They were happy customers. Zappos went on to generate over $1 billion in annual revenue and was acquired by Amazon in 2009 for $1.2 billion.
Single Feature MVP
A single feature MVP involves building your product to solve one specific problem your customers are having. It’s usually a tool with a single feature. Some exist as Chrome extensions, others as WordPress plugins or widgets. This is an incredibly powerful way to start because you focus on solving one very specific problem for a very specific niche group better than anyone else. Chances are your early adopters will give you valuable insight into how your product should eventually evolve into a platform.
Piecemeal MVP
In this day and age, when APIs are so accessible to developers, you can literally build your product by piecing together existing products. Groupon, in its early stages, was a combination of WordPress, Apple Mail, and an AppleScript that generated PDFs manually as orders were received from the website. Most products these days are built on top of existing products or services via their APIs.
Christopher Blank wrote a great guide on 15 ways to test your MVP here. Find out which strategy is most suitable for your stage.
Takeaway: An MVP is a strategy. Most of the time, your MVP will not function like the product you envisioned. However, it will help you gather valuable insight into your target customers and what their needs are.
Step 6: Define your expected metrics
To test the validity of your product hypothesis you must first establish the expected metrics that you are going to measure. In the video resume app example, we’re going to measure:
- The frequency of app usage per user
- The ratio of servers hired per month using the app to a benchmark average of servers hired per month without it
- The percentage of users who convert into paying customers after the 30-day trial period
Metrics keep us honest so we stay objective about our hypothesis. It’s easy to lie to ourselves about how great our ideas are, but if the data shows something different, we must decide whether to pivot or keep going.
Takeaway: The more experiments you run, the better you will get at establishing expected metrics. These metrics keep you grounded and focused on what is actually attainable.
Step 7: Compare expected metrics with observed metrics and make a decision
When you compare your expected metrics with your observed metrics, you will encounter one of the following scenarios:
Scenario 1: My observed metrics are nowhere near my expected metrics
In this scenario, your initial hypothesis is clearly invalidated. However, this may be a good opportunity to learn why that is. Perhaps your intended audience is not the right audience for the product. Perhaps you will uncover a new set of pain points from your customers. Whatever the case may be, you will have gained the insight you need to either pivot or quit.
Scenario 2: My observed metrics fell short of my expected metrics by 40% or more
In this scenario, you must assess why your metrics fell short. For example, if you were expecting 1000 users to behave a certain way and only 600 did, you must investigate why. Did you set unrealistic expected metrics? Was the idea invalidated? Were you targeting the wrong audience? Do you need a bigger sample size? These are all questions you should be asking yourself before making a call. Chances are you have gained new insight into the type of persona you should be targeting and will require more discovery with this specific customer segment.
Scenario 3: My observed metrics meet or exceed my expected metrics
Here, you have a positive signal that your users are behaving exactly as, or better than, you anticipated. This could be the result of establishing realistic and obtainable metrics, but it could also reflect low expectations, so the key is to find a balance and define what’s reasonable. When you receive a positive signal, replicate the experiment and roll it out to a bigger sample size so that by the end of the experiment, you are confident that a certain percentage of your users will convert into customers and behave the way you want them to.
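The three scenarios can be sketched as a simple decision helper. The 40% shortfall threshold comes from Scenario 2 and the meet-or-exceed case from Scenario 3; the 80% cutoff for “nowhere near” is our own illustrative assumption, since the article does not pin that boundary down.

```python
def classify_result(expected: float, observed: float) -> str:
    """Sort an observed metric against its expected value into the
    three scenarios. The 0.8 cutoff is an illustrative assumption."""
    if observed >= expected:
        # Scenario 3: positive signal
        return "scenario 3: positive signal, replicate at a bigger sample size"
    shortfall = (expected - observed) / expected  # fraction short of target
    if shortfall >= 0.8:
        # Scenario 1: nowhere near the target
        return "scenario 1: hypothesis invalidated, pivot or quit"
    if shortfall >= 0.4:
        # Scenario 2: fell 40% or more short
        return "scenario 2: fell 40%+ short, investigate before deciding"
    return "close miss: re-examine expected metrics and sample size"


# The example from Scenario 2: expected 1000 users, observed 600.
print(classify_result(1000, 600))
```

In practice the boundaries between scenarios are a judgment call, which is exactly the point of the takeaway below: the numbers frame the decision, they don’t make it for you.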
Depending on the nature of your startup, here are some examples for what positive signals may look like:
- Marketplace: When you have at least 50 transactions between a buyer and a seller over a period of one week
- SaaS: When you have at least 20 beta customers (paying or not paying)
- E-commerce: When you have at least 50 customers buying your product in a month
- User-generated platform: When you see at least 20 pieces of user-generated content posted every day
- Mobile App/Game: When you see 1,000 users who come back and use the app at least once a week
Takeaway: A scientific approach can only take you so far. Many factors influence the results of an experiment, and there is never enough time or money to prove everything out one by one. This is where your experience and knowledge come in to give you the confidence you need to make a decision.
Last words of wisdom
A great experiment is one that is additive. By executing this framework, you will produce more accurate data that will inform better business decisions. The learning you accumulate from each experiment will add up and eventually guide you to a problem-solution fit. Then you will have a solid foundation to work towards a product-market fit, but that’s a discussion for another day.
If you are looking to learn more about this framework, we’ve produced a free 10 chapter email course that covers these topics in far more detail. Click here to check it out.
In Part 3, I will discuss building towards product-market fit.
Alex Chuang is the Co-founder and Chief Strategy Officer at Launch Academy, Vancouver’s leading tech incubator. Alex is a serial entrepreneur, UX designer and growth hacker.