Building a balanced Product Team backlog

Rico Surridge
9 min read · Feb 7, 2022


There have been a few times when Product Teams (or as we call them, Squads) have come to me unsure of what to work on next. On a few occasions, this has been because the software engineers in the team have run out of tickets created by the Product Manager or Product Designer. Naturally, this sets off alarm bells and tells me that the team isn’t thinking about the full gamut of inputs that make up a good backlog. It isn’t always this way round, of course; sometimes a team will have very strong technical leadership and it will be other areas of the backlog that are waning.

Note: this article is about digital Product Teams and their delivery backlogs. It isn’t coupled to any particular framework; it loosely assumes that the team is using an agile methodology, such as Scrum or Kanban, and writing segmented, iteratively deliverable tickets (user stories and acceptance criteria).

First things first: it’s important that the team has a vision, a strategy and a clear set of near-term objectives or goals. I’m not going to go into detail on these in this article, but without them there is no guiding light against which a backlog can be built. What’s important is that the strategy is robust and inclusive of both the product vision and the technology vision.

It’s all too easy to think of a backlog as a list of feature tickets designed to deliver a particular piece of functionality. It is, of course, this in part, but it’s also so much more. When a team takes on accountability for a product, they also take on the responsibility for maintaining it and ensuring it continues to drive value. With this in mind, a good Product Team backlog should be made up of several streams of different types of tickets. The development of these tickets should be led, and then collaboratively refined, by different areas of the team.

These streams should include (as a minimum):

High priority incidents — It probably goes without saying, but you should plan for the worst. That’s not to say the backlog should have pre-planned time dedicated to incident tickets; we often don’t know an incident will occur until it does (see the section on monitoring below). But it should be assumed that if a high priority incident lands in an area owned by the team, it will be triaged and responded to effectively. Having a team define its own severity ratings and put its own SLAs in place can be a good way to manage this and keep itself honest. It is important to make sure that when these incidents do occur, they get their own tickets in the team’s backlog (not just in a separate, central incident backlog). By doing so, you ensure the team can track and understand how they’re spending their time and how high priority incidents impact their overall capacity/velocity. Managing this at a team level automatically ensures ownership in an area; after all, who wants to be called out for writing brittle code or designing a poor UX knowing they’re also the ones who’ll have to fix it?
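
To make this concrete, here’s a minimal sketch (in Python, with entirely hypothetical names and thresholds) of how a team might capture its own severity ratings and SLAs as a small, version-controlled structure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Severity:
    """A team-defined severity level and its response SLAs (values illustrative)."""
    name: str
    description: str
    respond_within_minutes: int  # time to acknowledge and begin triage
    resolve_within_hours: int    # target time to resolution or mitigation

# Hypothetical ratings -- every team should define and own its own.
SEVERITIES = {
    "sev1": Severity("sev1", "Revenue-impacting outage for most users", 15, 4),
    "sev2": Severity("sev2", "Degraded experience for a significant segment", 60, 24),
    "sev3": Severity("sev3", "Minor defect with a known workaround", 240, 72),
}
```

Writing these down, however informally, gives the team something concrete to hold itself to when an incident ticket lands in the backlog.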

Big bets & innovation — Of course, the team is here to deliver new (sometimes innovative) features and functionality. We think of these as the Big Bets, the features that require a degree of upfront investment in order to get to the point where we can meaningfully assess what value they might drive. We want to do this in as lean a way as possible. This might mean they never go beyond a prototype, or it might mean several weeks (maybe even a couple of months) of dedicated development. It’s important to call out the Big Bets and be transparent about them; they often require a level of considered investment by a business and, in my experience, tend to work best when backed by a business sponsor.

Optimisation tests — A team shouldn’t only work on the Big Bets, though. Sometimes the greatest value can be derived from tweaking something we already have, and any good Product Team should always be running a series of optimisation tests against existing areas of their product. There’s no magic number that I’ve come across, but I like to see Product Teams kicking off 2–3 small optimisation tests every fortnight. I’ve heard of Product Teams that use a temporary vanity metric for test frequency to help get them started on this journey (e.g. aim to complete 12 A/B tests a quarter), so it’s not a one-size-fits-all suggestion. I do think, however, that optimisation tests are an integral part of successfully balancing the backlog.
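
For illustration, variant assignment for a small optimisation test can be as simple as a deterministic hash bucket. This is a sketch rather than a recommendation of any particular experimentation platform, and the experiment name is hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant by hashing user and experiment IDs.

    The same user always lands in the same bucket for a given experiment,
    which keeps the test stable across sessions.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Example: split traffic for a hypothetical checkout-button test.
print(assign_variant("user-42", "checkout-button-colour"))
```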

Side note on Analytics — I’m not calling out analytics as a separate stream, and you’ll note that it’s not directly represented in the diagram above; however, it’s so important that it’s worth its own mention here. Every Product Team should be thinking about how they measure the value and success of what they’re doing. This isn’t one person’s job and it isn’t a separate ticket; it’s a foundational piece of acceptance criteria on everything that gets implemented. Product Teams should be asking themselves, “how will we know if this has been a success?”. Every member of the team should be looking at, and making sense of, the gathered analytics to inform the decisions they make as a team. This becomes a virtuous circle: the more time the software engineers spend analysing the data, the more likely they are to ensure that the tracking is accurate and relevant when they implement it.
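
As a sketch of what “measurement as acceptance criteria” might look like in practice (the event name and client below are hypothetical stand-ins for whatever analytics pipeline the team actually uses):

```python
import json
from datetime import datetime, timezone

def track(event_name: str, properties: dict) -> None:
    """Stand-in for the team's real analytics client (an SDK or warehouse call)."""
    payload = {"event": event_name,
               "at": datetime.now(timezone.utc).isoformat(),
               **properties}
    print(json.dumps(payload))  # in practice, sent to the analytics pipeline

# Acceptance criterion for a hypothetical "saved searches" feature:
# "how will we know if this has been a success?" -> searches saved per active user.
track("saved_search_created", {"user_id": "user-42", "source": "results_page"})
```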

Engineering excellence & technology innovation — Good code, healthy codebases, managed access control and the pace at which quality experiences reach live production environments don’t happen by accident. Product Teams need to invest in modern engineering practices. At any given moment, many practices will be well established (e.g. Continuous Integration and Deployment) while others will be emerging, and both require time and investment. Implicit in this is a well-maintained test suite. It is all too easy for the tests to be an afterthought, to become bloated and slow to run, or worse still outdated and ineffective. The technology leadership should be supporting and working with Product Teams to ensure that engineering excellence is more than simply a consideration, but rather a committed stream of ongoing work. This needs to happen horizontally, as many of these areas do, to ensure that teams don’t create divergent, clashing approaches.
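
As one illustrative guard-rail against a bloated suite, a team could give its tests an agreed time budget and fail the build when it’s breached. This sketch assumes a Python project with pytest installed, and the budget figure is hypothetical:

```python
import subprocess
import sys
import time

BUDGET_SECONDS = 300  # illustrative: the team's agreed ceiling for a full run

start = time.monotonic()
result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
elapsed = time.monotonic() - start

if elapsed > BUDGET_SECONDS:
    print(f"Test suite took {elapsed:.0f}s, over the {BUDGET_SECONDS}s budget")
    sys.exit(1)  # fail the build even if the tests themselves passed
sys.exit(result.returncode)
```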

Performance & monitoring — No digital experience can compete, particularly if it wants to rank somewhere meaningful in search results, if it doesn’t treat performance as a critical stream of work. Measuring and continuously monitoring page load performance, and using frameworks like Google Lighthouse to assess the quality of service, has well-documented benefits for users and the business alike. A more performant experience has been shown time and again to be a stickier experience that sees greater retention and conversion. Monitoring and alerting is a discipline in its own right, though, and arguably should have its own separate row on the diagram. You don’t want your customers to be the ones telling you something isn’t working as intended. Good monitoring should be in place and maintained across your services, with threshold-based alerts that will tell you the moment things start to go wrong. Better still are leading indicators that point towards a potential future issue, so it can be resolved before it ever hits. Operational resilience is so important and requires dedicated investment.
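
Here’s a minimal sketch of what threshold-based checks with leading indicators might look like. The metric names and numbers are hypothetical, and in practice most teams would lean on dedicated tooling such as Prometheus or Grafana rather than hand-rolling this:

```python
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str
    warn_at: float  # leading indicator: flag before users are affected
    page_at: float  # alert: wake someone up

THRESHOLDS = [
    Threshold("p95_page_load_seconds", warn_at=2.5, page_at=4.0),
    Threshold("error_rate_percent", warn_at=0.5, page_at=2.0),
]

def evaluate(samples: dict) -> list[str]:
    """Compare the latest metric samples against thresholds and return alerts."""
    alerts = []
    for t in THRESHOLDS:
        value = samples.get(t.metric)
        if value is None:
            continue
        if value >= t.page_at:
            alerts.append(f"PAGE: {t.metric}={value}")
        elif value >= t.warn_at:
            alerts.append(f"WARN: {t.metric}={value} (leading indicator)")
    return alerts

print(evaluate({"p95_page_load_seconds": 2.8, "error_rate_percent": 0.1}))
```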

Maintenance — Product Teams are responsible for the full lifecycle of their experiences, from cradle to grave, as we say. Ensuring the experience is well maintained, that an appropriate level of bug fixing is taking place, and that third-party libraries are kept up to date is essential if an experience is going to remain credible in the eyes of its users. Maintenance time isn’t a luxury, it’s the bedrock of a reliable experience.
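
On the third-party libraries point, even something as small as regularly surfacing outdated dependencies helps. A sketch for a Python project (pip’s `--outdated --format=json` output provides the fields used below):

```python
import json
import subprocess
import sys

# Ask pip which installed packages have newer releases available.
out = subprocess.run(
    [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
).stdout

for pkg in json.loads(out):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```

Running something like this on a schedule, and turning the output into maintenance tickets, keeps the work visible in the backlog rather than invisible and deferred.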

Paying down Tech Debt — Tech Debt goes hand-in-hand with technology innovation; however, it’s important to emphasise the difference. Technical debt is inevitable and important. It isn’t always a bad thing, as we know, and is often a key consideration when delivering a new feature. Indeed, if you really don’t have any Tech Debt at all, you’re probably over-investing too early in your product development and not moving quickly enough (unless you’re building an air traffic control system, of course!). We make trade-offs all the time in order to find a balance of quality and expediency. We don’t want to over-invest in something until we know it’s going to drive value, and as such carrying a certain level of Tech Debt is to be encouraged. However, we also need to pay that debt down and take a strategic approach to ensuring the areas of the experience that will exist for a meaningful period of time are well built and in line with the technology strategy. Spending time refactoring and improving the code of features that have already been delivered is essential (and explaining why this matters to business stakeholders is an important part of the engineering leader’s role in the team).

Decommissioning — There comes a time in every product’s lifecycle when we need to make the call to pivot or decommission. If a product isn’t consistently meeting the thresholds we have defined for success and the hypothesis hasn’t rung true, it’s time to either try something different, perhaps taking some of what we have and using it to pivot in a new direction, or to simply sunset and fully decommission the thing. Switching things off makes for a better product experience because, as we know, “simplicity is the ultimate sophistication” (Leonardo da Vinci); or, in the words of one of my heroes from yesteryear, “simplify, then add lightness” (Colin Chapman). It’s critical not to leave mediocre experiences hanging around in perpetuity. Today’s forgotten product is tomorrow’s web of interconnected technical headaches, and you don’t want to spend several weeks or months untangling that web, or your team will become demotivated and you’ll see attrition. Continually assess and switch off/decommission as you go.

Spikes, knowledge transfer and up-skilling — I’ve bundled these together, but in many respects they’re separate. It’s important to acknowledge that things take time, and that this also applies to learning and understanding. Technology doesn’t stand still for very long, and neither do consumer trends or competitive markets. Most teams create “spike” tickets: timeboxed briefs designed to build a better understanding of a particular technology or of how something might be implemented. In my experience, not enough teams put in tickets to ensure knowledge is transferred across the team, nor do they give themselves permission to carve out time for learning. This becomes even more important as a team takes on full operational responsibility for their product out of hours.

In summary
A Product Team’s delivery backlog doesn’t come from a single source and isn’t one person’s responsibility. The backlog is complicated so that the product itself can be simple and reliable, and can deliver value to its users effectively. This is one area where routine team retrospectives, or wash-ups, are an important practice that sees learnings fed back into the backlog so that it can continually improve.

One question I often get asked is: how much time should be apportioned to each of the areas above? Every team will be on its own journey here, and every product will be at a different stage of its lifecycle. What I can say is: don’t spend too much or too little time on any one of the above. The key is to find a healthy balance of all of them at any given moment in time. There are well-established backlog management tools that make it possible to retrospectively review commitment to each area of a backlog, as the sketch below illustrates. It can be useful to then compare this against the team’s perception of the backlog and how they’re really spending their time. Ensuring work is laddered up into Epics where appropriate will help keep the backlog ordered. The bias of where a Product Team spends its time will ebb and flow, but ultimately the team itself should be building a balanced backlog and steering its own ship.
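
As a sketch of that retrospective review (the ticket IDs and stream labels are hypothetical), tallying completed tickets by stream makes the actual split visible:

```python
from collections import Counter

# Completed tickets tagged with the backlog stream they belonged to.
completed = [
    ("ABC-101", "big-bet"), ("ABC-102", "optimisation"), ("ABC-103", "incident"),
    ("ABC-104", "maintenance"), ("ABC-105", "tech-debt"), ("ABC-106", "big-bet"),
]

counts = Counter(stream for _, stream in completed)
total = sum(counts.values())
for stream, n in counts.most_common():
    print(f"{stream:>12}: {n / total:.0%}")
```

Comparing a printout like this against the team’s gut feel of where their time went is often where the most useful retrospective conversations start.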

All thoughts are my own.

