Priority Score: A Simple Product Prioritisation Technique
A while back, I was given the objective of researching and proposing a prioritisation technique at Trade Me Jobs.
Accepting the challenge, I ended up creating Priority Score: a simple way of prioritising user stories and ideas when you have no idea where to start.
Let me step back a little bit and explain how I got here.
Exploring existing prioritisation techniques
I started my research by exploring what already existed out there for product prioritisation.
I reviewed four different systems:
- Weighted Shortest Job First (WSJF)
- KUNGFU (a prioritisation technique invented by another team at Trade Me; it stands for Key actions, Users, Negativity, Gold, Friends, and Us)
- Weighted matrix
- A priority scoring system that my previous workplace (Kiwibank) used.
My goal was to find a magic formula that would suit my team and our work environment. But all the systems that I reviewed had one or more of the problems below:
- Too many inputs were required. This made the system too complicated or time-consuming to use. Consequently, people didn’t bother providing all the inputs and the results were incorrect.
- Some input values were too hard to figure out up-front, e.g., the potential revenue a feature would generate, or how many hours per week an issue takes to resolve. This put people off using the prioritisation system completely.
- The system’s recommendations were overridden by human judgement anyway, i.e., there was no trust in the system and product people believed that they knew better.
- The system itself was more complex than what my team needed for our backlog.
Definition of a good prioritisation technique
When I couldn’t find anything I liked, I attempted to create my own. But I had strict requirements for what would make a successful prioritisation technique:
- Be able to quantify or score the priority, i.e., the system will spit out a number that will allow us to directly compare features or user stories against each other.
- Easy for anyone to use.
- Requires a minimal number of inputs.
- Reflects the real-life decision-making process.
- Focused on solving problems.
- Product team buys into the technique, i.e., agrees to trust the system.
I believed that a prioritisation technique that met all the requirements above would be worth adopting for our team.
After a bit of playing around, I invented the Priority Score.
Priority Score
The Priority Score is a simple technique and concept. It requires only three inputs:
- Impact — How big is this problem?
- Likelihood — How often does this problem occur?
- Cost — How much effort will it take to resolve this problem?
Priority Score was influenced by my own experience in prioritisation. One of my requirements for my new technique was that it “reflects the real-life decision-making process”. So, I asked myself how I usually approached prioritisation. That’s when I realised that I often relied on these three simple factors to keep the backlog prioritised.
I’m a huge fan of processes. So, naturally, I looked to systemise Priority Score.
This is my magic formula:
Priority Score = Value Score + Effort Score
Priority Score (PS) can be any number between 2 and 10.
- 10 means that it has the highest priority and that it’s a no-brainer to pursue.
- 2 means that it has the lowest priority and that you should consider not doing it at all.
- PS is the result of adding the Value Score and Effort Score together.
Value Score (VS) can be any number between 1 and 5.
- 5 means that it has the most value for your business or users.
- 1 means that it has the least value.
Effort Score (ES) can be any number between 1 and 5.
- 5 means that it has the smallest effort to do the work.*
- 1 means that it has the largest effort to do the work.
*The scale of the Effort Score often trips people up. “Why does the highest score equal the smallest effort, and vice versa?”, they ask. This is the general rule: A high score is a good thing in my Priority Score universe. And a small effort is a good thing in anyone’s universe. So, that’s why 5/5 = smallest effort.
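So, for example, a user story with a Value Score of 4 and an Effort Score of 3 ends up with a Priority Score of 7 out of a possible 10.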
Value Score
The Value Score determines, as the name suggests, how valuable the user story is. It’s calculated using a simple grid that crosses how likely the problem is against how big it is (there’s a sketch of one possible grid at the end of this section).
- Firstly, ask yourself how likely the problem is to happen, or how often it has been happening in the past. Does it happen all the time? Just sometimes? Or hardly ever?
- Secondly, ask yourself how big the problem would be if it did ever occur. Would it be a huge problem? A moderate problem? Or just something minor or trivial?
My colleagues have commented on how unscientific and oversimplified this approach seemed. My advice to them was to think of how they’d describe the problem to someone else in a conversation.
For example, would you be saying, “Bob, we’ve got this huge problem! But it only comes up once in a blue moon”?
Or would you be saying, “Sally, I’ve got this issue that comes up for our customers all the time, but it’s such a minor one”?
Using plain English and your natural thinking process is the key to coming up with a Value Score.
Remember the other prioritisation systems I evaluated, and how nobody wanted to use them because of their complicated inputs? This is exactly what I’m trying to avoid with the Value Score.
Now, just because you’re using simple phrases, it doesn’t mean you can’t back them up with hard data. Depending on the problem, you might still want to dig around for some numbers on how often it occurs. For example:
- If you’ve got a problem that consistently impacts your customers every week, you might classify that as “happens all the time”.
- If it’s a problem that surfaces once or twice a year, you might classify that as “hardly ever happens”.
Every industry and company is different. So, it’s up to you to decide how to translate the quantitative data to a Value Score.
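If it helps to make the grid concrete, here’s a minimal sketch of what that likelihood-versus-impact lookup could look like in code. The cell values below are just one plausible way to fill it in, not the definitive version; translate your own data and judgement into whatever numbers suit your context:

```python
# An illustrative Value Score grid: (likelihood, impact) -> a score from 1 to 5.
# These cell values are a sketch only; fill in your own grid.
VALUE_GRID = {
    ("all the time", "huge"): 5,
    ("all the time", "moderate"): 4,
    ("all the time", "minor"): 3,
    ("sometimes", "huge"): 4,
    ("sometimes", "moderate"): 3,
    ("sometimes", "minor"): 2,
    ("hardly ever", "huge"): 3,
    ("hardly ever", "moderate"): 2,
    ("hardly ever", "minor"): 1,
}

def value_score(likelihood: str, impact: str) -> int:
    """Look up the Value Score for a problem's likelihood and impact."""
    return VALUE_GRID[(likelihood, impact)]
```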
Effort Score
You shouldn’t spend too much effort on calculating the Effort Score (no pun intended… if you can even call it that).
Required:
- One senior developer.
- The Effort Score table (a simple mapping from a small, medium, or large effort to a score; there’s a sketch below).
Steps:
- Ask your developer if a piece of work is going to be small, medium, or large effort to deliver. You don’t need any more details than that at this point. They shouldn’t spend too long investigating it. They shouldn’t give you any story points or time frames either.
- Using the estimate that the developer gives you, calculate the Effort Score from the table.
- Done!
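For those who, like me, enjoy codifying things, the Effort Score table boils down to a tiny lookup. The mapping below is an illustrative sketch (remember, a small effort gets the highest score); tweak it if your team wants finer granularity:

```python
# An illustrative Effort Score table: the smaller the effort, the higher the score.
EFFORT_TABLE = {
    "small": 5,
    "medium": 3,
    "large": 1,
}

def effort_score(estimate: str) -> int:
    """Convert a rough small/medium/large estimate into an Effort Score."""
    return EFFORT_TABLE[estimate]
```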
Now that you’ve got both your Value Score and Effort Score, add the two scores together. That’s your Priority Score.
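Putting the two sketches together, the whole calculation fits in a few lines (the inputs here are hypothetical):

```python
def priority_score(likelihood: str, impact: str, estimate: str) -> int:
    """Priority Score = Value Score + Effort Score, anywhere from 2 to 10."""
    return value_score(likelihood, impact) + effort_score(estimate)

# A hypothetical problem: it comes up all the time, it's moderately painful,
# and it's a small job to fix.
print(priority_score("all the time", "moderate", "small"))  # 4 + 5 = 9
```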
Other things to keep in mind
Before you make up your mind on whether Priority Score is for you, there are a few other things for you to consider.
- This isn’t extra work. Using the Priority Score might seem like additional work, but you’re probably already doing it in your head. Just make your thoughts consistent and transparent by adding Priority Scores to your user stories.
- Priority Score doesn’t suit every situation. Sometimes, it’s OK to ignore the Priority Score — e.g., user stories that are grouped for a specific project, or user stories that need to be delivered in a certain order due to technical dependencies. Priority Score is most handy when you don’t know where to start with your backlog.
- Don’t stop using your common sense. A critical bug that’s a no-brainer to expedite can override your Priority Score.
- Your roadmap is a different beast. Quite often I get asked whether the Priority Score could be used for roadmap prioritisation. The short, disappointing answer is that it can’t; that’s not what the Priority Score was designed for. Roadmap prioritisation is one of my future challenges to tackle.
What did you think?
I’d love to hear your feedback on Priority Score. Or any other advice you have on product prioritisation.