Hypothesis-Driven Development

I’ll bet that you can’t explain why you’re working on whatever it is you’re working on right now. Scary, right? The why behind our work is perhaps the most important question we can answer. Without a clear understanding of how the task at hand advances your goals, you’re taking a gamble, and if you come up short, you’ll have wasted time and energy.

Hypothesis-Driven Development (HDD) is a methodology that helps us avoid this gamble and approach product development with more certainty. Borrowing from the Scientific Method, it prescribes a process that, when followed, forces clarity about why we’re doing the work we’re doing.

First, form a hypothesis. For example, “releasing feature x will lift signups by 2x,” or “patching bug y will increase daily actives by 5%.” If you’re like me, and your gut instinct informs a lot of your decisions, you’ll naturally want to skip this step. “Doing z will make things better” is an all-too-easy go-to hypothesis. Don’t skip this step.
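One lightweight way to keep yourself honest here is to write the hypothesis down as structured data rather than a sentence, so that it is forced to name a change, a metric, a baseline, and a target. A minimal sketch (the field names and numbers are invented for illustration, not part of any framework):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A falsifiable product hypothesis: a change, a metric, and an expected effect."""
    change: str      # the work you plan to do
    metric: str      # the number you expect the work to move
    baseline: float  # the metric's value today
    target: float    # the value that would confirm the hypothesis

    def is_falsifiable(self) -> bool:
        # "Doing z will make things better" fails this check:
        # it names no metric and predicts no specific movement.
        return bool(self.metric) and self.target != self.baseline

# "Releasing feature x will lift signups by 2x."
h = Hypothesis(
    change="release feature x",
    metric="weekly_signups",
    baseline=120.0,
    target=240.0,
)
print(h.is_falsifiable())  # → True
```

If you can’t fill in the metric and target fields, you don’t have a hypothesis yet; you have a hunch.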

Now, challenge the assumptions of your hypothesis. For example, the hypothesis “moving the submit button above the fold will increase form submissions” makes an assumption that people aren’t submitting the form because they aren’t seeing the submit button. But what if you knew that everyone who saw that form did scroll down and did see the submit button? That insight would invalidate your initial hypothesis, saving you a bunch of time and energy! Meaningfully challenging the assumptions of your hypothesis will force you to write queries, build dashboards, and extract reports. In doing all that, you’ll become more intimate with your domain, application code, instrumentation, and data, all of which will pay off in the long run. Challenging the assumptions of your hypothesis will also force you to talk to your customers, because some questions, like “how did that make you feel?”, simply can’t be answered programmatically.
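Checking an assumption like the one above often amounts to one query over your event data. A sketch, assuming hypothetical event names like `form_viewed` and `submit_button_seen` (your instrumentation will differ, and in practice this would likely be SQL against your analytics tables):

```python
# Each event: (user_id, event_name). A tiny hand-written sample log
# stands in for a real analytics table here.
events = [
    ("u1", "form_viewed"), ("u1", "submit_button_seen"),
    ("u2", "form_viewed"), ("u2", "submit_button_seen"),
    ("u3", "form_viewed"),  # u3 never scrolled down to the button
]

viewed = {u for u, e in events if e == "form_viewed"}
saw_button = {u for u, e in events if e == "submit_button_seen"}

# What share of people who saw the form also saw the submit button?
share = len(viewed & saw_button) / len(viewed)
print(f"{share:.0%} of form viewers saw the submit button")  # → 67% ...

# If this share were already ~100%, the "they aren't seeing the button"
# assumption would be wrong, and moving the button above the fold
# would be unlikely to help.
```

The query is trivial; the point is that it answers the assumption directly instead of arguing about it.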

Finally, measure the results. You’ve vetted your hypothesis by unpacking the assumptions baked into it, and now you’ve done some work too (e.g., fixed a bug, released a feature). Now it’s time to validate your hypothesis, which should be easy because it requires the same measurements, calculations, and queries you already set up to defend your hypothesis in the first place. Like Test-Driven Development (TDD), where you write the test for a feature first, then build the feature to make the test pass, HDD forces you to take your measurements first, then check back later to confirm the hypothesis held. And, just like with TDD, you’ll always have an eye on the changes you’ve made, making sure that the hard work isn’t undone by another change down the line.
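Because the measurement was set up to defend the hypothesis, validating it afterwards is just rerunning the same calculation on a new window. A sketch of that before/after check, with invented numbers:

```python
def lift(before: float, after: float) -> float:
    """Relative change in a metric between two measurement windows."""
    return (after - before) / before

# Hypothesis: "patching bug y will increase daily actives by 5%."
baseline_dau = 2000.0  # measured before the fix, with the query you set up earlier
current_dau = 2130.0   # measured after the fix, with the exact same query

observed = lift(baseline_dau, current_dau)
print(f"observed lift: {observed:.1%}")  # → observed lift: 6.5%

confirmed = observed >= 0.05
print(confirmed)  # → True
```

Keeping that check around, like a passing test, is what lets you notice when a later change quietly erodes the metric.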

HDD isn’t a new idea. It’s just an adaptation of the Scientific Method to product development. But it’s a really helpful way to approach problem solving even beyond product development.

I teach front-end web development part-time at General Assembly. In class, my students will call me over to look at an issue with their web app. They’ll make a small change to a function, reload the browser, and expect the whole program to work. “It should work” is not a great hypothesis. Instead, I encourage them to construct a less interesting but much more helpful hypothesis: “when I click this button, function x should be called.” With software we’re often surprised when things work, so a hypothesis could even be, “when I add this line of code, I should get an error.”
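A hypothesis that narrow is also mechanically checkable. A sketch in Python rather than in-browser JavaScript, with a made-up handler, but the shape is the same: assert the one specific thing you believe, not “it should work”:

```python
calls = []

def on_click():
    """The handler we believe is wired to the button."""
    calls.append("on_click")

def click_button(handler):
    # Stand-in for the browser dispatching a click event to the handler.
    handler()

# Hypothesis: "when I click this button, on_click should be called."
click_button(on_click)
print(calls)  # → ['on_click']

# If calls were empty, the wiring would be the bug -- not the handler's body.
```

One narrow hypothesis like this either passes or points you at the exact layer that’s broken.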

So the next time you sit down to work on something, practice HDD. Form a hypothesis about the work you’re about to do. Then challenge the assumptions you’re making about how the work you’re doing will help you achieve your goals. Then, when you’re done, measure the results of your work. It definitely adds some overhead to each task, but be flexible with it. Some tasks require more due diligence than others.