Or how to have a rational conversation about impact.
Opportunity Sizing is the act of giving a numeric value or range of values to the potential impact of some course of action:
- “If we build feature X, we will make $ZZ MM more in yearly revenue.”
- “If we can lower the cost of this drug by $XX, we will save ZZ million lives.”
- “If we can switch to electric vehicles, we will prevent ZZ million tons of carbon emissions.”
We regularly do Opportunity Sizing with our clients to help them figure out a product roadmap. The aforementioned examples were purposefully broad to help you understand that you can apply the same principles to any domain. We’ll focus on digital product development, since that’s the domain we typically operate in, and use examples from e-commerce since it’s well understood and approachable.
In this blog post we’re going to cover the basics of opportunity sizing, talk about why it’s important, and address some important objections to the practice.
The funny thing is you are probably already opportunity sizing, albeit in an implicit or intuitive way. Every business or organization makes decisions. At some point, someone decided that option X was the most impactful choice. Most organizations have some process they follow around making these decisions, but rarely does it involve doing math. The best and most common approach we’ve seen is teams assigning “t-shirt sizes” to potential impact. Even these “quantifications” mostly come down to the intuitions of decision makers or the ability of stakeholders to evangelize theirs.
The trouble with intuition-driven decision making is that even the intuitions of trained experts are frequently wrong or underperform compared to simple statistical models [1]. Intuitions can be easily swayed by a number of subtle biases, like the order in which information becomes available, the tendency to disregard or downplay competing evidence, or our intrinsic desire to pattern-match a new decision to our prior experience. Different stakeholders often have competing or inconsistent intuitions, and even the perspective of a single stakeholder can be noisy or change irrationally over time [2]. Living under this sort of regime can feel arbitrary and often tumultuous as different perspectives gain or lose favor. Maybe it works if you’re Steve Jobs, but most of us aren’t, and he didn’t seem like a great deal of fun to work with anyway.
A New Hope
So that whole scheme sounds like a bummer. What’s the alternative? Let’s look at an example.
One of our clients has an e-commerce site where they sell clothing. They have a neat page, which we’ll call the Reorder Page, where you can explore previously purchased products, and then mix and match them to buy something new:
Think the “Clueless closet”, except not quite as sophisticated as peak 90s technology. Undoubtedly a fun project, but was this really the right thing for a small team to invest in?
First, let’s understand what impact this page has on the business. Here’s a simple model: traffic arrives on the page, some of it converts to purchasing, spending some average order value:
Traffic * Conversion Rate * Average Order Value (AOV) = Revenue ($)
This may seem overly simplistic but we’ve seen models like this come pretty close to accurately predicting the revenue for an entire company.
If this project is successful, either users become more likely to purchase (i.e. Conversion Rate rises) or they are likely to spend more money (i.e. AOV goes up). Let’s focus on the first scenario. We can model this by saying that we’ll make conversion on this page X% better (we’ll call this the Lift):
Traffic * (Conversion Rate + Conversion Rate*Lift) * AOV = Revenue ($)
If we want to isolate the value of our improvements, we can subtract the baseline revenue (Traffic * Conversion Rate * AOV) from this; after some simple algebra:
Traffic * Conversion Rate * Lift * AOV = Incremental Revenue ($)
We can apply this same formula to any page in our theoretical e-commerce experience to understand the potential impact of UX improvements. Let’s say there’s a separate proposal to make improvements to the Home Page. Should we work on the Home Page or the Reorder Page? Here are some numbers we can use to round out our example:
These are fake numbers but they were engineered to reflect trends we might expect on a typical e-commerce site. Most users encounter the Home Page, while very few users ever visit the Reorder Page. Users visiting the Reorder Page are probably quite engaged (they have, after all, purchased before), so they will convert at a higher rate and spend more money. The 5% Lift is our best guess for how much we could improve the UX on these pages.
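The comparison is easy to script. Here’s a minimal sketch in Python; the figures are illustrative stand-ins (not the original table’s values), chosen to mirror the trends described above:

```python
# Incremental revenue model: Traffic * Conversion Rate * Lift * AOV.
# Illustrative stand-in figures: the Home Page gets huge traffic, while the
# Reorder Page gets little traffic but converts better and has a higher AOV.
def incremental_revenue(traffic, conversion_rate, lift, aov):
    return traffic * conversion_rate * lift * aov

home = incremental_revenue(traffic=1_200_000, conversion_rate=0.025, lift=0.05, aov=50)
reorder = incremental_revenue(traffic=5_000, conversion_rate=0.10, lift=0.05, aov=100)

print(f"Home Page:    ${home:,.0f}")     # $75,000
print(f"Reorder Page: ${reorder:,.0f}")  # $2,500
print(f"Ratio: {home / reorder:.0f}X")   # 30X
```

The same four-cell formula works in any spreadsheet; the point is that swapping in your own figures takes seconds.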
According to this simple exercise, working on the Home Page has 30X more potential than working on the Reorder Page.
Lies, Damn Lies, and Opportunity Sizing
Just because we’re using numbers doesn’t mean this analysis is objectively true. There will always be uncertainty, interpretation, and judgment. The difference is now we can be explicit about it:
Most of the figures we used were ostensibly facts, but the Lift figures were a guess. As we share and discuss this analysis, we can highlight that this is a data point we’re not 100% certain about. In this way, Opportunity Sizing is more than just a tool for numerical reasoning: it’s a framework we can use to have a principled conversation about impact.
If we feel particularly uncertain about a figure, we could prioritize getting more data. For instance, if this Lift figure is contentious, we could try reviewing the results of previous experiments. Have we ever shipped a 5% win? If not, perhaps this figure is a bit ambitious. If we’ve shipped 10–15% wins with some regularity, in similar circumstances, perhaps it’s too low.
Depending on the figure you have questions about, you could find more data by consulting your logs, data warehouse, industry research, surveying users, asking colleagues in the field, etc. The degree to which you want to spend effort getting better data depends on how uncertain you are, or how much the figure impacts your final opportunity size.
One of the great things about making a simple model is that you can easily play with the figures in a spreadsheet, or even model a range of options. For instance, here are two different scenarios for the Reorder Page:
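Sweeping a few Lift values in code works just as well as a spreadsheet. Again, the Reorder Page figures below are illustrative stand-ins, not the original table’s values:

```python
# Sweep a couple of Lift scenarios for the Reorder Page.
# Illustrative stand-in figures for Traffic, Conversion Rate, and AOV.
TRAFFIC, CONVERSION_RATE, AOV = 5_000, 0.10, 100

for lift in (0.05, 0.10):
    incremental = TRAFFIC * CONVERSION_RATE * lift * AOV
    print(f"{lift:.0%} lift -> ${incremental:,.0f} incremental revenue")
# 5% lift  -> $2,500 incremental revenue
# 10% lift -> $5,000 incremental revenue
```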
The Good, the Bad, and the Average
On that note, let’s say you really love the Reorder Page idea, and you think the 5%, and even the 10%, were really uncharitable. What if we made it 2X (100%) better?
This is a really striking conclusion. Even if we make a HUGE improvement to the Reorder Page, it still wouldn’t match comparatively marginal improvements on the Home Page.
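The arithmetic makes the point starkly (using the same illustrative stand-in figures, not the original table’s values):

```python
# Even a 100% (2X) Lift on the Reorder Page trails a modest 5% Lift on the
# Home Page. Figures are illustrative stand-ins.
home_5pct = 1_200_000 * 0.025 * 0.05 * 50  # a 5% lift on huge traffic
reorder_2x = 5_000 * 0.10 * 1.00 * 100     # a 100% lift on tiny traffic

print(f"Home Page at 5%:      ${home_5pct:,.0f}")
print(f"Reorder Page at 100%: ${reorder_2x:,.0f}")
print(reorder_2x < home_5pct)  # True
```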
If you do this sort of analysis across all your ideas, you will likely find that some ideas simply have no hope, no matter how much you futz with the numbers, some look really promising, and the vast majority sit somewhere in between:
Opportunity Sizing isn’t so much about making a precise forecast. It’s an estimate, usually done quickly, so it’s destined to be off by some margin. It’s really about creating separation between ideas so it’s easier to make decisions. Ideally you should focus on the great ideas, and if you have extra bandwidth, take a gamble on some average projects.
Recipe for Success
Let’s think about why the Reorder Page doesn’t have much potential. Imagine for a moment if more users visited this page:
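Since the model is linear in Traffic, scaling the audience scales the opportunity directly. A sketch with the same illustrative stand-in figures, imagining a hypothetical 20X jump in traffic:

```python
# Hold Conversion Rate, Lift, and AOV fixed and vary only Traffic.
# Figures are illustrative stand-ins; the 20X jump is hypothetical.
CONVERSION_RATE, LIFT, AOV = 0.10, 0.05, 100

for traffic in (5_000, 100_000):
    incremental = traffic * CONVERSION_RATE * LIFT * AOV
    print(f"{traffic:>7,} visitors -> ${incremental:,.0f} incremental revenue")
#   5,000 visitors -> $2,500 incremental revenue
# 100,000 visitors -> $50,000 incremental revenue
```

(In practice, incremental visitors would likely convert at a lower rate than today’s determined ones; see the footnote on Selection Bias below.)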
Suddenly, this opportunity looks much more promising. Our bottleneck is traffic — no matter how much better we make the experience, there simply isn’t a big-enough audience to make it worthwhile.
Maybe instead of focusing on UX improvements, we can experiment with driving more traffic there (this is tricky; see [3]), or better yet, take this idea of driving repeat purchases and embed it throughout the mainline commerce experience so it has a bigger audience. We could, for instance:
- Make it easy to re-order from the Order History Page.
- Email users that purchased something a while ago, ideally including products they may want to re-order.
- Feature previously purchased items on the Home Page, Cart, or Product Details Pages.
- Remind users they purchased an item on the Product Details Page.
In this way, Opportunity Sizing helps us understand the key ingredients to success, and potentially re-engineer ideas for impact. Often we’ll see teams focused on very specific, tactical ideas like “let’s fix the Reorder Page”. Going through this exercise helps them to zoom out and think about the bigger picture.
If you’re going to do this sort of thing it’s important to get everyone on the same page and supportive of the process. To that end, let’s discuss some possible objections to Opportunity Sizing.
The first, and most common, objection is the idea that you can’t quantify everything, or can’t distill everything down to a single number. First, we should say, you would be surprised at the number of complex and philosophical ideas that people, especially governments and insurance companies, have quantified: the “value of a statistical life” used in safety regulation, for instance.
That being said, we agree. To re-frame the conversation a bit, we would say not everything you do should be about numbers. Some things you should do just because you believe in them. For instance, we are huge believers that the details matter, especially when it comes to a clear, smooth, and professional UX. Most of this “polish” would never register in an Opportunity Sizing analysis, but over time, and across an experience, the little bits start adding up.
Every organization has goals, be it selling more clothing or saving more children from malnutrition. If your organization wants to actually affect that outcome, it’s probably measuring it in some way. We’re not suggesting every single initiative on your roadmap should be tied to that singular number. We’re suggesting that you spend enough time on the things that do affect that number, so you can confidently make space on your roadmap for the things you want to do for other reasons. In other words, maybe spend 80% of your time on the high-impact, numbers-driven projects, and spend 20% of your time on polish.
A related objection is that being driven by numbers can often have unintended consequences that are actively harmful to users or society at large. As the world becomes increasingly skeptical of the data-driven tech platforms that dominate our daily lives, it’s hard to argue with this perspective. This is a complex subject that we hope to write about more deeply some day, but for starters, we think the problem here isn’t necessarily the use of numbers, so much as the near-singular focus on one number (i.e. money) or a small set of myopic numbers (e.g. time-on-site, retweets, likes, replies, etc). If we’re concerned about our impact on users’ lives, and we should be, then we should try to measure it, gauge the potential impact of new projects on it, and take that potential impact into account in our decision-making process. Nothing is stopping us from using data for good (except maybe capitalism).
The last objection people usually have is “I’m bad at math!”. We’ve seen all sorts of folks do Opportunity Sizing, be it Sales, Marketing, Support, Design, Engineering, etc. If you can do simple math with two numbers, and you’ve ever written a formula in a spreadsheet, you can do this. It’s easier to do as a group, which can be a fun and educational way for a team to create a shared roadmap. There are obviously more complex modeling choices that require more advanced skills. But you’d be surprised by how far you can get with simple techniques.
We hope that was fun and approachable. In future posts, we’ll go over more complex examples, talk about rules of thumb, and share lessons learned after doing this across a number of companies. If you want help with Opportunity Sizing, or just want to learn more about Related Works, say hello at email@example.com.
Acknowledgements & Further Reading
This blog post and our perspective on Opportunity Sizing are deeply indebted to the work of Dan McKinley. Datadriven.club is Dan’s seminal take on the same subject matter. I had the good fortune of working with Dan, and a lot of his good ideas rubbed off on me. Thankfully, flannel wasn’t one of them.
It’s also partially inspired by the book How to Measure Anything by Douglas Hubbard, which is akin to a crash course in quantitative decision analysis. We love Hubbard’s work and methodology, but we’re trying to take some of these ideas and make them a bit more approachable for the average human. If most folks are operating in an intuition-driven culture, introducing even the simplest modeling is a giant leap forward.
[1] — Meehl, P. E. (1954). Clinical vs. statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press.
[2] — Goldberg, L. R. (1970). Man versus model of man: A rationale, plus some evidence, for a method of improving on clinical inferences. Psychological Bulletin, 73, 422–432.
[3] — We’re talking about driving traffic to highlight the bottleneck in this opportunity. If you were to actually drive more traffic to the Reorder Page, it would almost certainly convert at a lower rate. The visitors that are currently finding the page are fairly determined, since they found an odd corner in the experience, and thus probably have a great deal more engagement and purchase intent. The incremental visitors you would receive from increased exposure are likely to have less intent. This is a form of Selection Bias, which we’ll delve into more deeply in future posts.