Backlog Prioritization

Peter Zalman · Published in Enterprise UX · Jan 15, 2020 · 5 min read

Backlog prioritization is an essential activity for any high-performing team designing and developing successful digital products.

We now use Scrum as a standard Agile framework to organize work in cross-functional teams, from Design to Development, QA, and Ops. We limit the unknowns and uncertainty of our deliverables, be it a piece of code, a research synthesis, or a prototype screen, by breaking them down into smaller parts (Stories).

Not surprisingly, there is always more work to be done than we can process. So how do we decide what comes next?

There are a thousand no’s for every yes - Jonathan Ive.

The practice

During Scrum training, we hear about the urgency of aligning priorities with business objectives and then prioritizing both the Product and Sprint Backlogs. Yet there is a surprising lack of resources with step-by-step advice on how to apply an empirical model to Backlog prioritization. The Scrum Guide defaults to a magic-wand solution (“there is an ordered list of Stories”) and does not provide any how-tos.

Higher ordered Product Backlog items are usually clearer and more detailed than lower ordered ones — Scrum Guide.

I want to share one specific story of applying the RICE empirical framework to Backlog prioritization as a one-time collaborative activity. There are many more models to choose from, such as Kano, MoSCoW, or WSJF. I found RICE a convenient option for a workshop session, and it shares several patterns common to all the other prioritization techniques.

I will use the RICE workshop to highlight these common patterns as numbered Learnings. They can help you build your intuition and become mindful of biases when prioritizing anything, from holiday plans to design deliverables. I also want to note that an expert in any role flipping a Jira field on individual items to High, Medium, or Low is not backlog prioritization.

Activity

Prioritization is not a one-time-only activity and needs to be deeply embedded in the product team's DNA and routine. However, life is not always perfect. I want to share one story of a product team that was flooded with user and stakeholder feedback after the initial product launch. I was the Design Lead on the team and had already compiled a decent list of future innovations synthesized from ongoing user research. We had it all: Defects, Stories, Future Improvement ideas, Appreciation messages, Go-to-hell messages, surprising discoveries, and well-known issues too.

To make sense out of this, we invited the team to a collaborative workshop to decide what to do next.

Objectives

It was important for the team to set a baseline objective that we would use as the central theme of the relative prioritization. You can focus on revenue, competitive advantage, or a specific business goal you want to achieve.

Our central theme was user feedback. After the launch, we needed to make sure we heard and acted on all the insights we received from our existing users. Together with quantitative metrics such as CES and NPS scores, we had collected more than 300 open-ended answers and feedback points from surveys and support channels.

We started with affinity diagramming and grouped the user feedback into main themes that would later become our OKRs for the next quarter. We ended up with three distinct areas where we wanted to invest our effort next.

Learning #1: Always start prioritization with insights from Users or Business. For a collaborative session, avoid a dry start such as “let’s prioritize this list of things.” Prioritization is also a great opportunity to refine the categories — Objectives, OKRs, Epics.

Criteria

RICE stands for Reach, Impact, Confidence, and Effort, and we divided our collaborative session into these individual blocks. At the beginning of each block, I introduced the criterion with a clear rationale and a couple of example scenarios. For example, the Reach section opened with the headline:

How many users will this feature impact over a single month?

We also narrowed each dimension down to a relative scale, e.g., S, M, L, Moonshot for Effort. The main challenge was reaching consensus. We used the Planning Poker technique: each participant estimated a value, and I moderated the follow-up discussion where we worked towards a group consensus by explaining our standpoints.
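For the scoring to work later, each relative label needs a numeric equivalent that the whole team agrees on. The mapping below is a minimal sketch with illustrative numbers, not the exact values our team used:

```python
# Hypothetical scale-to-number mappings for RICE estimation.
# The exact numbers matter less than everyone estimating against
# the same shared scale for each criterion.

REACH = {"handful": 50, "team": 200, "most users": 1000}    # users per month
IMPACT = {"minimal": 0.25, "low": 0.5, "medium": 1.0, "high": 2.0, "massive": 3.0}
CONFIDENCE = {"low": 0.5, "medium": 0.8, "high": 1.0}        # expressed as a fraction
EFFORT = {"S": 1, "M": 3, "L": 8, "Moonshot": 20}            # relative person-weeks
```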

The primary hurdle was keeping the discussion focused only on the criterion we were currently assessing. For example, I often heard:

“I think this Story is very important for a group of users as big as {X} (Reach), but it will take us {Y} weeks to develop it (Effort)”

My role as a moderator was to remind the team to separate the criteria.

“We are now evaluating only for Reach; let’s look at how we can limit the uncertainty there. Let’s park your note about the Effort for the following criteria, and we will get there soon.”

Learning #2: The key to successful empirical prioritization is to separate the criteria. Discussions quickly become abstract when people react to a verbal matrix of multiple dimensions that cannot be compared relative to one another.

Wrap up

From RICE Score to Backlog. Working only on high-priority items is not always the best idea.

At the beginning of the activity, we set the Objectives for our OKRs based on user feedback. During the Planning Poker prioritization, we categorized each item under one of the Objectives. After transferring the individual RICE criteria and OKRs into an Excel file, we were able to calculate the final RICE score used to order the Stories within each Objective. These Objectives later became Epics, and we left the workshop with a prioritized Product Backlog.
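For readers curious about the arithmetic behind the spreadsheet, the standard RICE formula is (Reach × Impact × Confidence) / Effort, and the ordering within each Objective follows from it. The sketch below is illustrative only; the story names and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    objective: str     # the OKR / Epic the item was categorized under
    reach: float       # users affected per month
    impact: float      # relative impact (e.g., 0.25 .. 3)
    confidence: float  # 0.0 .. 1.0
    effort: float      # relative effort (e.g., person-weeks)

    @property
    def rice(self) -> float:
        # Standard RICE score: (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Invented example items, grouped under hypothetical Objectives.
stories = [
    Story("Bulk export", "Reduce support load", 1000, 1.0, 0.8, 3),
    Story("Saved filters", "Reduce support load", 500, 0.5, 1.0, 1),
    Story("SSO login", "Enterprise readiness", 200, 2.0, 0.8, 8),
]

# Order Stories within each Objective by RICE score, highest first.
for objective in sorted({s.objective for s in stories}):
    ranked = sorted((s for s in stories if s.objective == objective),
                    key=lambda s: s.rice, reverse=True)
    print(objective)
    for s in ranked:
        print(f"  {s.title}: RICE = {s.rice:.0f}")
```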

Learning #3: Always agree on the follow-up activity. Ordering the Backlog is an ongoing, iterative process, and a prioritization workshop is only one of the available inputs. The workshop model is the right choice for important product milestones, but you should aspire to order and prioritize smaller batches more often.

Conclusions

Breaking down complex problems into smaller items and prioritizing team efforts is a tough job. There are a lot of biases surrounding the decision-making process, and it is tempting to focus on smart new ideas first over difficult challenges that impact product objectives and goals.

In the Enterprise environment, any decision is a consensus and a compromise made within tight constraints. Building trust and long-lasting relationships with the broad group of stakeholders who influence the product requires transparency and the use of objective, empirical methods.

“Prioritizing has been removed in favour of ordering as the term for organizing the Product Backlog” — Scrum.org

Prioritization is not the only variable for ordering a Product Backlog, and sometimes low-priority items can win an important battle on the market or internally. Alternatively, you can decide to wrap up complete Epics first. Without a repeatable framework applied to Backlog ordering, many team discussions quickly become abstract and subjective.

Learning #4: Prioritization is not the only variable for Backlog ordering. Using empirical prioritization can help you build trust in your decision-making process and often leads to surprising insights.

Resources

RICE: Simple prioritization for product managers by Intercom

Ordered Not Prioritized by Scrum.org

Understanding the KANO model by Jared M. Spool for UIE

Learn How to Cluster and Bundle Ideas and Facts
