When faced with a collection of potential features, design ideas, or research projects to prioritize, how do you move forward? Choosing which design effort the team will work on is especially difficult when every stakeholder, executive, and team member has a different opinion or favorite item they’d like to see tackled first.
I’ve seen a lot of prioritization schemes in my career. The one I favor most was taught to me by product manager extraordinaire, Bruce McCarthy.
What makes Bruce’s formula so great isn’t only that it delivers a straightforward approach to identifying top priorities. It’s the collaborative way we arrive at those top priorities. Implementing the formula rewards teams that use their UX research strategically.
I love this. It gives me tingles just thinking about it.
Bruce’s simple formula looks like this:
(Value ÷ Effort) x Confidence = Priority
What Bruce’s formula says is this: we want to prioritize work that offers large value while taking a small amount of effort, assuming we’re confident in both our estimates of value and effort. With this formula, we can put what might otherwise seem like apples and oranges (or worse, apples and orangutans) on a single scale, where the highest results are what we should work on first.
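As a quick sketch, the formula is simple enough to express in a few lines of code (the function name here is my own, not Bruce’s):

```python
def priority(value: float, effort: float, confidence: float) -> float:
    """Bruce McCarthy's prioritization formula:
    (Value / Effort) x Confidence = Priority.

    Higher value raises priority, higher effort lowers it,
    and low confidence discounts the whole estimate.
    """
    return (value / effort) * confidence
```

Computing this number for every item in the backlog puts them all on that single scale.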
How might we calculate Value?
To fill out Bruce’s formula, we need to arrive at a number for the first variable, Value. This number represents how much value we’ll produce when this item is delivered. As UX design leaders, we, of course, want to start with value to our users. Will this item provide a great solution to a challenging problem our users are facing? Does it get us closer to our vision of the ideal user experience? Or, will it be something they don’t really care about?
The beauty of Bruce’s formula is we can make this as simple or detailed as we’d like. A simple way to represent Value is 1, 2, or 3, for low, medium, or high. If we think the item is a critical solution to a big problem, we give it a 3. If it’s something users won’t care too much about, we give it a 1.
If we want to get more rigorous, we could estimate cost savings or how much revenue we might generate from implementing this idea. We could add what we believe is the value to the business.
Whatever we arrive at will be fine, as long as we arrive at every item’s Value using the same process. The rule is simple: the higher the number, the more valuable this item is.
How might we calculate Effort?
Next up, how much Effort might this take? Here, we can start with the effort to implement.
We can use a similar 1, 2, or 3 scale, representing whether it will be easy, medium-difficulty, or really hard to implement. Alternatively, we could use a more rigorous calculation such as the number of people multiplied by the number of weeks to complete the project. We could even use the dollars the organization will spend on it.
For more detail, we can add in other costs, such as how much effort it will take for our users to switch over. (This is especially important in products where new functionality is disruptive to habits our users have already formed.)
If we’re implementing this item to attract new customers, we can add in the effort to sell. Plus, we shouldn’t forget the effort to support the feature once it is released.
Like Value, we can adapt the amount of detail we consider for Effort however we want, as long as the higher the number, the more effort we believe this will take.
Dividing Value by Effort gives us a quick look at how the items rank. A design idea that provides a great solution (Value = 3) and will have an easy implementation (Effort = 1) resolves to 3÷1 or 3. Meanwhile, another idea that has a medium value (2) and medium effort (2) will resolve to 2÷2 or 1. The design idea with a 3 is a higher priority than the one that came out a 1.
How might we calculate Confidence?
(Value ÷ Effort) is a great start, but where are all these numbers coming from? That’s where Confidence comes in.
Bruce wisely puts this on a scale from zero to one. If the Value and Effort numbers are a complete guess, we’d give them a zero. If we’re absolutely sure we know the Value and Effort are correct, then we’ll give them a 1.
We might use this scale for each variable:
- Complete guess = 0.0
- We’ve found a little evidence = 0.25
- We’re fairly sure = 0.5
- We’ve done a ton of research and are very sure we’re right = 0.75
- We’re absolutely, incontrovertibly sure = 1.0
We can rate our confidence in both Value and Effort separately. If we’ve talked with a ton of customers who all told us basically the same thing, we can give Value-Confidence a 0.75. If we’ve worked with developers on a technical proof of concept that showed it can be done but didn’t explore edge cases, we can give Effort-Confidence a 0.5. By averaging them, we get (0.75 + 0.5) ÷ 2 or a Confidence of 0.625.
Using Bruce’s whole formula, we can see how this plays out.
(Value ÷ Effort) x Confidence = Priority
(3 ÷ 1) x 0.625 = 1.875
1.875 is our calculated priority for this design item. By itself, it’s a meaningless number. However, when we calculate other design item priorities the same way, we can see what we should work on first.
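Putting it all together, a small sketch can score and rank a whole backlog at once. The item names and numbers below are hypothetical, just to show the comparison:

```python
# Each tuple: (item, value, effort, confidence).
# Value and Effort use the 1-3 scale; Confidence is 0.0-1.0,
# averaged from the Value- and Effort-Confidence ratings.
items = [
    ("Onboarding redesign", 3, 1, 0.625),
    ("Dark mode",           2, 2, 0.75),
    ("Bulk export",         1, 3, 0.5),
]

def priority(value, effort, confidence):
    # (Value / Effort) x Confidence = Priority
    return (value / effort) * confidence

# Sort highest priority first -- that's what we work on.
ranked = sorted(items, key=lambda i: priority(*i[1:]), reverse=True)
for name, value, effort, conf in ranked:
    print(f"{name}: {priority(value, effort, conf):.3f}")
```

Each score is meaningless on its own; the ranking across items is what tells us where to start.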
The biggest benefit? The discussion while rating.
It’s great that Bruce’s formula gives us a clear calculation to determine our highest priority items. However, what I love most is how it gets everyone talking about what should go into calculating Value, Effort, and Confidence.
We involve stakeholders, executives, and other key team members in coming up with each number. The numbers can mean whatever we want them to mean. Yet, to fill out the formula, we have to have an essential discussion about what these numbers mean.
We discuss the evidence we’ve collected for Value, hopefully from research we’ve conducted with our users. We discuss the evidence we’ve collected for Effort, hopefully from experimentation, iteration, and prototyping projects that give us real insight into what it will take to deliver. We discuss our Confidence based on how much evidence we have versus how much we’ve had to guess.
Bonus: The reward from solid research.
The Confidence number penalizes us when we’ve wrongly guessed the value or effort. It rewards teams that invest in research to collect data that becomes the basis of our confidence. We’re not happy with a low Confidence number? Ok, let’s do more research and boost it.
Any process that pushes our teams to appreciate the contribution of research is a winner in my book. Bruce’s formula does just that.
This is how we make research a strategic ingredient in delivering better-designed products and services. (Ooh! There are those tingles again.)
UX Strategy with Jared Spool
This article was originally published in our new UX Strategy with Jared Spool newsletter. If you’re passionate about driving your organization to deliver better-designed products and services, you’ll want to subscribe.
The only way to deliver great products and services is to have a solid understanding of the value we’re providing our customers and users. And that comes from having a strong research capability embedded within our organization. In our 2-day, intensive Creating a UX Strategy Playbook workshop, I will work directly with you and your team leaders to put together an action plan for strengthening your team’s research muscles.
We cap each workshop at about 24 attendees. This gives me (Jared) plenty of time to work directly with you on the ideal strategy for your team. Spots fill up quickly, so don’t wait. Learn how your team can deliver better-designed products and services.