Automated backlog prioritization

Tomas Nosek
Apr 14 · 8 min read


Prioritizing a backlog is a task requiring a lot of contextual information. If your tasks are similar (the same part of a product or one type of activity), it’s easier. But not everyone has been dealt those cards. In our Customer Education team, we decided to prioritize according to a calculated priority score. What led to it, and what are the benefits and drawbacks?

How Did We Get Here?

For seven years, we wrote documentation within the development teams’ sprints. We managed the delivery in sprints without significant problems, so we wanted more. It’s now been four years since we started focusing on educational opportunities rather than development ones. We established our own backlog, where I, as the team leader, prioritized all requests and our own plans.

However, it was not all sunshine and rainbows. Our “educational backlog” had multiple shortcomings compared to a typical Scrum backlog maintained by a Product Owner for a dev team. (If you don’t care about those, feel free to scroll down to the solution in the Transparency Is Power section.)

All four mentioned shortcomings in one picture

1. Requests from different parts of the company

Our education team often works based on internal requests. For example:

  • The dev team requests documenting a new feature
  • A support engineer asks for adjustments based on support tickets

These tasks usually come with contexts that don’t connect to each other. One task relates to activity A, while another refers to activity B. Which one has a higher priority, activity A or B?

Unless there’s a super-clear company-wide priority structure, it’s hell to find out. It meant assuming the priorities or delegating the decision up the corporate ladder, which didn’t feel right for everyday tasks.

2. Requests’ hefty contexts

As the backlog’s Product Owner, I needed to understand every task, including its ins and outs, so that I could later prioritize new requests among the older ones. This usually meant remembering the structure of linked tasks and information.

Besides remembering the inter-team priorities from the previous point, this added the contexts of the teams creating the requests. My brain just didn’t have enough space to cover it all. I was forgetting details and sometimes wasn’t sure which tasks should have what priority.

Luckily, a slightly mismatched position in the backlog isn’t the worst thing. Typically, no one cares, but it still left me with an unpleasant feeling.

3. Cutting in line

Even if you create a codified, bullet-proof process, someone will always want to get something done more quickly or won’t know about the process. That’s true for backlog prioritization as well. So, requesters came straight to me.

Since I did the prioritization manually, why couldn’t I prioritize their request higher than the others?

This added much more complexity to the thinking process. I reserved one timeslot weekly to prioritize, loading all my backlog memories and thoughts into active memory. When someone asked at a different time, it was tough to evaluate the request quickly and give a suitable estimate.

4. Intuition can make people unhappy

When requesters asked how the prioritization worked, I explained it to them. But in the end, it was based on my intuition. When their task wasn’t prioritized high enough, they weren’t happy and sometimes complained that the process wasn’t transparent.

I had been doing it for years and knew what I was doing and why, so I could handle that. But the little jabs nagged me. It also made it harder to substitute for me when I was on vacation or sick.

Transparency Is Power

So, I was thinking about how to fix these problematic areas. My idea was to set up a transparent prioritization matrix. It would help me address all four problems mentioned above:

  1. I wouldn’t compare tasks directly. By putting values into transparently disclosed categories, I’m breaking a complex problem down into smaller ones.
  2. Since I input values into categories, I don’t need to remember the contexts of older requests. Even though tasks are different, they apply to customers the same way within these categories.
  3. When someone wants to skip the line, I can simply tell them it’s up to the system to calculate the priority.
  4. The calculation is transparent, so I can get feedback on individual parts of the process, and people other than me can do the prioritization.

Our Product Managers at the time worked with the RICE scoring model (Reach, Impact, Confidence, Effort), so I started at the same place. RICE is transparent and basically divides all the benefits by the effort needed.

What Benefits and Efforts Are There?

Over those four years, we’ve iterated over our calculation multiple times. However, it was always about benefits vs. effort. At first, these were the benefits and potential boosts to the priority:

  • Educational impact — On a scale of 0–3, how much of a problem is it, or would it be if we didn’t do it? This way, bugs or new features were automatically prioritized more.
  • Affected user base — On the same scale, how broad is the reach of the materials? We estimated our monthly active users.
  • Strategy alignment — On the same scale, how does the task align with the current strategic goals of our team and the company? This way, we could boost the work we wanted to do at a particular time due to an overarching theme.
  • Experimental — To support agile working, 10x thinking, failing fast, and other similar thoughts, we had a scale of 0–2 that boosted tasks that did something outside the box and were evaluable soon.

On the other side, the effort was Complexity: the estimate of the effort we put into the task. It used an agile Fibonacci-ish sequence (0.1, 0.5, 1, 2, 3, 5, 8, 13, 21, 40, 100), where the number roughly translated to person-days. The 0.1s and 0.5s got considerably larger boosts than the others to favor quick tasks with quick wins.

We used one more special field: Due date. If a task had a due date, it bubbled up through the backlog the closer it got to the deadline. This was calculated dynamically: it took the work estimate and gave the task a turbo boost once the remaining time shrank to the time needed to fulfill it.
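To make the mechanics concrete, here is a minimal Python sketch of that first benefits-vs-effort calculation. The weight values, boost factors, and function name are my illustrative assumptions, not the exact numbers we used:

```python
from datetime import date

def v1_priority(impact, reach, strategy, experimental,
                complexity_days, due=None, today=None,
                weights=(3, 3, 2, 1)):
    """Sketch of the first scoring version: weighted benefits divided
    by effort, with a quick-win boost and a due-date turbo boost.
    All weights and multipliers here are illustrative assumptions."""
    benefits = (impact * weights[0] + reach * weights[1]
                + strategy * weights[2] + experimental * weights[3])
    score = benefits / complexity_days
    # Quick wins (0.1 and 0.5 person-days) got an extra boost.
    if complexity_days <= 0.5:
        score *= 1.5
    # Due-date turbo boost: once the remaining days shrink to the
    # work estimate, the task has to start now.
    if due is not None:
        days_left = (due - (today or date.today())).days
        if days_left <= complexity_days:
            score *= 10
    return score
```

A half-day task with the same benefit values as a one-day task ends up roughly three times higher, which is exactly the quick-win behavior described above.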

Each attribute had its weight in the final formula, so we could tweak it easily when needed. After a few weeks, I let team members rotate in assigning these values, except for the work estimate; that one we all voted on together during our grooming sessions with planning poker.

Over time, two problems emerged:

  • Team members sometimes weren’t sure how to fill in some of the prioritization fields. This led us to define the values more precisely, which soon backfired: whenever something didn’t fit the list, the request needed a discussion, which prolonged the administration time.
  • The team started getting numb to the numbers, and half of the tasks got twos for the attributes with the 0–3 scale. This devalued the whole prioritization system, and adjustments were necessary.

What Do We Do Now?

The last big bang to our process came a year ago when we introduced the third main version of prioritization. Ultimately, we decided to narrow it down to just three attributes:

  • Urgency — How fast we want a task to be finished. It uses the “full” Fibonacci sequence. Currently, it doesn’t reflect the due date to simplify the calculation process.
  • Educational impact — What value the goal of the task has for the customers. It also uses the “full” Fibonacci sequence.
  • Complexity — The good old work estimate in person-days as described before. This one uses the agile Fibonacci-like sequence.

While using both the true Fibonacci sequence and the Fibonacci-like one may seem like a historical misstep, it’s actually on purpose.

Different Fibonaccis have their purpose

Complexity is tied to specific days, and people can always differentiate between numbers of workdays. As estimates grow, the ability to differentiate decreases, so the increasing but capped agile sequence (0.1, 0.5, 1, 2, 3, 5, 8, 13, 21, 40, 100) keeps making sense; we’re unlikely ever to have a task worth 100+ person-days (our maximum has been 21 so far, if I remember correctly).

On the other hand, urgency and impact are the typical candidates suffering from normalization and numbness. That’s why we chose the infinite, standard Fibonacci sequence for those (1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, etc.). Without a cap, there’s no middle value to fall back to. Values may still inflate over time, but that’s fine: with no cap, they all inflate together, and the relative order is what matters.
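As a quick illustration (a hypothetical sketch, not part of our tooling), an uncapped Fibonacci scale preserves the relative order of tasks even when every rating drifts upward a step:

```python
def fib_scale(n):
    """First n values of the uncapped Fibonacci scale (1, 2, 3, 5, 8, ...)."""
    seq = [1, 2]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

scale = fib_scale(8)  # [1, 2, 3, 5, 8, 13, 21, 34]

# Hypothetical ratings, stored as indices into the scale.
tasks = {"task A": 1, "task B": 3, "task C": 5}

before = sorted(tasks, key=lambda t: scale[tasks[t]])
# Everyone drifts one step up the scale over time...
after = sorted(tasks, key=lambda t: scale[tasks[t] + 1])
# ...the absolute numbers inflate, but the sorted order is identical.
```

The absolute scores mean less and less, but the backlog order, which is the only thing the sort cares about, stays intact.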

The formula

Each of the categories has its weight. If one of the attributes increases disproportionally, we can adjust the weight and fix it quickly.

We also simplified the formula. Instead of dividing everything by the effort, we add a boost for easy tasks. The formula now is:

urgency*urgency_weight + impact*impact_weight + 1/complexity*complexity_weight

In our case, this translates to:

urgency*12 + impact*8 + 1/complexity*7
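For readers who’d rather see it as code, the same formula fits in a few lines of Python (the function name is mine, the weights are the ones above):

```python
def priority(urgency, impact, complexity,
             urgency_weight=12, impact_weight=8, complexity_weight=7):
    """Our current score: weighted urgency and impact, plus an
    inverse-complexity boost that favors easy tasks."""
    return (urgency * urgency_weight
            + impact * impact_weight
            + 1 / complexity * complexity_weight)

# A 0.5-day quick win gets +14 from the complexity term,
# while a 21-day task gets only about +0.33.
```

Because the complexity term is added rather than divided into the whole score, a huge estimate can no longer drag an urgent, high-impact task to the bottom of the backlog; it merely loses the quick-win bonus.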

On a Technical Note

But what about the tools, you ask? At the beginning, all of this was an Excel spreadsheet. I know, I know. It’s not fancy. It’s not cool.

But spreadsheets are the best for playing with formulas and calculations. It’s simple to adjust the formulas, and you can quickly and easily see the impact of your changes.

Our original prioritization spreadsheet with a formula field highlighted

We’ve been using the cloud version of Atlassian Jira for task tracking. Back in 2019, there was no built-in automation, so I wrote a utility working with Jira’s REST API.

However, Jira now supports automation, so for about a year, the prioritization has been fully operational within Jira. Even a non-technical person can adjust it.

Our prioritization is now so simple it fits one screenshot:

Our current prioritization with Jira Automation

If you’d like to reuse our Jira formula, I’m adding it as text:

{{#=}}{{Urgency}} * 12 + {{Educational impact}} * 8 + 1/{{Story Points}} * 7{{/}}

The important part is to sort the backlog by the prioritization value. Our backlog JQL sub-filter is:

ORDER BY "Total prio number" DESC

Looking back

Within four years, we’ve had three major formulas for prioritizing our issues, which may sound inefficient or inaccurate. Yet it has decreased the time spent on prioritization significantly.

And the benefits weren’t only mine. I’ve recently passed the team leadership to a new team leader, and with this prioritization automation in place, the transition was simpler.

Thanks for reading the article. If you’d like to get notified when I publish something new, follow me on LinkedIn or add my RSS feed to your reader.

I’m glad for any comments, tips, or questions. Share how you prioritize your work! :)



Tomas Nosek

Customer Education and Consulting team leader, occasional blogger, a movie person, comfortable traveler. Find all my articles at