Using an Assumption Stack in product development
During a product’s development life cycle, its developers make a lot of assumptions, understandably so. There are principles we understand to be true, ideas we think are true, and biases we want to be true. All of these mix together in how a product is made, and I believe that, to start with, that is totally fine. However, for a product to survive and thrive, the people developing and managing it will need to validate many of these assumptions. That’s where an Assumption Stack comes in: it can help product development by mitigating those risks. I got this concept and exercise from Laura Klein’s book Build Better Products (a highly recommended reference book and writer), with minor adjustments to align with the tools we use.
What it is
Let us define what an Assumption Stack is: a collection of ideas or beliefs that we assume to be true and that we either have used or plan to use as factors in building a product or parts of it. It is not uncommon for assumptions to be built on top of one another, putting your product’s success in even riskier territory. It is also important to note that an assumption can be true at one point in time but turn false because of technological or social change. For example, there was a time when you needed to know HTML and CSS and the ins and outs of setting up a web server to have a website; that is no longer the case.
The purpose of an Assumption Stack is to build a process that tests each assumption in a “scientific” way, proving it correct or not, so we can make the necessary adjustments to the product’s development. Simply put, it is a way to cover our bases.
How do we use it
This post won’t go into detail on the actual exercises that help your team identify, categorize, and prioritize your product’s assumptions, or the specific attributes used in the process. We’ll touch on them lightly; for more information, refer to chapters 9 (Identify Assumptions Better) and 10 (Validate Assumptions Better) of the Build Better Products book.
We started by identifying all the core “problem” assumptions we made in the ideation phase of our product. Problem assumptions are problems we believe our users are experiencing and that we would like to solve for them through our product. Next, we related the core “solution” assumptions we formulated to those problem assumptions, and then the “implementation” assumptions we used to develop features. This step let us daisy-chain assumptions, revealing which ones are built on top of others, and it also helped set the order in which we tackle things. In Jira, we simply linked them together, marking each assumption as “blocked by” the assumption beneath it.
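To make the chaining concrete, here is a minimal sketch in Python (the assumption texts and helper function are hypothetical illustrations; our real stack is just linked Jira tickets) of how problem, solution, and implementation assumptions stack through “blocked by” links:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One assumption ticket; blocked_by points at the assumption it is built on."""
    summary: str
    kind: str  # "problem", "solution", or "implementation"
    blocked_by: "Assumption | None" = None

# Hypothetical chain: each layer is built on top of the one below it.
problem = Assumption("Non-coders struggle to launch a website", "problem")
solution = Assumption("A no-code builder removes that struggle", "solution",
                      blocked_by=problem)
implementation = Assumption("Users will discover the builder from the dashboard",
                            "implementation", blocked_by=solution)

def validation_order(top: "Assumption") -> list["Assumption"]:
    """Walk the 'blocked by' links to see what the top assumption rests on."""
    chain = []
    node = top
    while node is not None:
        chain.append(node)
        node = node.blocked_by
    return chain

for a in validation_order(implementation):
    print(f"{a.kind}: {a.summary}")
```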
After that, we followed the prioritization guideline detailed in the book’s exercise: plot your assumptions in a grid, placing each one based on how likely it is to fail and, if it does, how bad the consequences are. Ultimately, all assumptions that have a good chance of failing with catastrophic consequences for your product should be dealt with first.
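As a rough illustration of that grid (a sketch with made-up scores; in practice we placed tickets on the grid by hand), reducing each axis to a number lets the riskiest assumptions sort to the top:

```python
# Hypothetical 1-5 scores for each assumption:
# (summary, likelihood the assumption fails, severity if it does)
assumptions = [
    ("People want websites without writing code", 2, 5),
    ("Users will discover the builder from the dashboard", 4, 4),
    ("Our onboarding emails get opened", 3, 2),
]

# Assumptions that are both likely to fail and catastrophic float to the top.
for summary, likelihood, severity in sorted(
    assumptions, key=lambda a: a[1] * a[2], reverse=True
):
    print(f"risk={likelihood * severity:>2}  {summary}")
```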
Detailing assumptions and hypotheses
In our assumption tickets, we detailed what the assumption is along with a hypothesis, a falsifiable statement that we then work to prove true or false. We also added, as prescribed by Laura’s exercise, attributes that help our team tackle the task: how long it would take to validate, which metrics to look out for, and so on.
After creating the hypothesis (or hypotheses), we select the different ways we can prove it true or false and create subtasks on the ticket, which we label as experiments. These experiments span different UX testing methods: qualitative interviews, A/B testing, and so on. We also include old-school desk research, which we believe suffices in some scenarios, e.g., confirming that the number of websites being created is increasing each year. With our implementation of this process in Jira, we can see the sub-dependencies (experiments) of an assumption that need to be done before declaring the assumption true or false.
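Continuing the earlier sketch (again in Python, with hypothetical field names and numbers; the real structure is just a Jira ticket with subtasks), an assumption ticket with its hypothesis, attributes, and experiment subtasks might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    method: str  # e.g. "qualitative interview", "A/B test", "desk research"
    resolved: bool = False
    supports_hypothesis: bool = False  # set once the experiment resolves

@dataclass
class AssumptionTicket:
    summary: str
    hypothesis: str          # a falsifiable statement
    weeks_to_validate: int   # attribute from the exercise
    metrics: list[str]
    experiments: list[Experiment] = field(default_factory=list)

    def verdict(self) -> str:
        """Only declare the assumption true or false once every experiment resolves."""
        if not self.experiments or not all(e.resolved for e in self.experiments):
            return "open"
        if all(e.supports_hypothesis for e in self.experiments):
            return "validated"
        return "invalidated"

ticket = AssumptionTicket(
    summary="Users will discover the builder from the dashboard",
    hypothesis="At least 60% of new users open the builder in their first session",
    weeks_to_validate=2,
    metrics=["builder opens per new session"],
    experiments=[Experiment("A/B test"), Experiment("qualitative interview")],
)
print(ticket.verdict())  # "open" until both experiments resolve
```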
Remember that validated assumptions can change
Ideas, truths, and facts can change over time, which can turn assumptions, validated or not, untrue again. We took this into consideration in our implementation by marking resolved tickets as “To Revisit” and setting an approximate date, in months, for when to review the assumption again.
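A tiny sketch of that revisit rule (the six-month default is a hypothetical stand-in; in our setup it is just a date field on the Jira ticket):

```python
from datetime import date, timedelta

def revisit_date(resolved_on: date, months: int = 6) -> date:
    """Approximate 'review in N months' by counting months as 30-day blocks."""
    return resolved_on + timedelta(days=30 * months)

print(revisit_date(date(2020, 1, 15)))  # roughly six months after resolution
```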
How big or small should you go?
We have no fixed method for determining how big or small an assumption to tackle; we usually let the assumption itself drive that decision based on its level of priority: how screwed are we if this happens, and how likely is it to happen? If an assumption unravels into more and more falsifiable statements, we just go with it until the results from the experiments we run satisfy their hypotheses.
In closing
I am personally risk-averse and like dotting my i’s and crossing my t’s, so a system like the Assumption Stack for product management is very much welcome. It does add extra work, but I think that extra work is necessary to be more confident that the product we’re developing, and my work on it, will still be around in the future.