How to build products like a scientist
Just like the beginning of every scientific discovery, every decision and step we take at a startup is based on hypotheses. Often it’s the founders’ belief that the product is needed in the market, but once you get past the initial discovery and launch, you enter survive-and-thrive mode. The stakes are higher, and you can’t rely on the founders’ hunches alone to chart the best course forward. A better approach is to treat each activity, each marketing campaign, and each new product feature as an opportunity to learn, and to get better at learning the things that move the bottom line.
That’s where a rigorous scientific approach comes in handy. A scientist is as interested in hypotheses that turn out to be wrong as in those that turn out to be right. They run an experiment to learn, rather than tweaking the parameters until they somehow get a result that fits their hypothesis. In other words, as much as we want a winning outcome, our approach to everything we do at the startup is to turn our efforts into learning opportunities.
When building a product, a significant part of the effort must go into writing down the assumptions you are making about the product or a feature. Usually, these assumptions are some variation of “Our users will love this feature and use it so much that they will come back to the product and even pay for it, leading to a positive business outcome for us”.
Once the assumption/hypothesis is noted, we craft an experiment to validate our beliefs.
Here’s what makes for great product experimentation:
1. Hypothesis: A good hypothesis lends itself to true-or-false results, e.g. if we increase our outreach by 3 times, we will increase our user signups by at least 2 times.
2. Experiment Design: Good experiment design enables us to conduct conclusive experiments leading to clear data in support of, or against, our beliefs. Below are 5 aspects that make for a good experiment:
🧪 Variables (increase outreach messages, while everything else remains as is)
🎭 (Optional) A/B testing (have a group of users who get proportionally fewer (or none) of the new messages, to see if the effect is real)
📈 Metrics to record (# of messages sent, # of user signups, # of connections accepted, # of new visitors to a website)
⏳ Time period (a reasonable amount of time to run the experiment, e.g. 6 weeks)
🔚 Conclusion criteria (after 6 weeks, the results should tell us whether our hypothesis held, i.e. whether increasing outreach had a proportional impact on user signups).
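To make the conclusion criteria concrete: if you ran the optional A/B test above, you can check whether the difference in signup rates between the two groups is statistically significant rather than noise. Here’s a minimal sketch using a standard two-proportion z-test; the numbers are hypothetical, not from a real experiment.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion (signup) rates.

    conv_a / n_a: signups and total users in the control group
    conv_b / n_b: signups and total users in the treatment group
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: control kept the old outreach volume,
# treatment received 3x the messages.
z, p = two_proportion_z_test(conv_a=40, n_a=1000, conv_b=90, n_b=1000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

If the p-value comes in below your chosen threshold (0.05 is conventional), the signup difference is unlikely to be chance and the hypothesis is supported; otherwise, treat the result as inconclusive and move to the next experiment.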
If the results show no difference, a negative impact, or a positive impact that isn’t significant, you now know which other parameters could potentially be tweaked, i.e. it’s time for the next experiment.
How are you experimenting at your startup? Do you think there are areas this approach doesn’t apply to? Let me know! You know where to find me ;)