Prioritising your product backlog by risk, impact, evidence and value

Craig Strong
The Lean Product Lifecycle
8 min read · Oct 25, 2019

The following post is based on content from The Lean Product Lifecycle book, taken from the GROW stage, which is where you have more known-knowns and are scaling your business or product. To learn more, you can order your copy directly from Amazon here. If you would like to subscribe for updates, you can join our mailing list at https://www.leanproductlifecycle.com/

No matter where your product is in its lifecycle, its environment, its competition or the changes ahead, risks and uncertainty will exist and continuously change. To help you adapt, and to recognise and apply new learnings as challenges emerge, we recommend regularly facilitating a prioritisation and roadmap review on a learning cycle. These reviews should reflect your continuously improving knowledge. If you are undertaking these reviews regularly and there is little change to the detail and/or priority of the backlog, this might be a signal that your feedback and validation loops are not operating as they should.

There are many techniques you can use to figure out the next most valuable thing to work on. Whichever you choose, some important points to consider when reviewing and evaluating your priorities are:

1. Evidence and data will reduce prioritisation subjectivity. Contribute learnings from previous experiments or outcomes, and surface examples containing antilogs (things that have failed in the past) and analogs (similar things that have worked in the past). Remember that internal validation has limited to no value. There is no substitute for direct customer feedback.

2. Re-state the problem you are solving, for whom and why. Over time, assumptions creep in and can blur the focus or goals previously set. It is beneficial to recap the goals and expected outcomes, both to help re-prioritise existing goals and to give new ideas something to be prioritised against.

3. Ensure you have a cross-section of representatives. If you’re organising a group-facilitated session, ensure you have a small but decisive group which contains a spread of stakeholders and/or functional roles who are impacted by the changes.

4. Ensure you understand and are aligned on the strategic objectives. When we engage in our day-to-day activities, many opportunities for pivoting surface. Although all learnings should be captured, it is important to ensure the correlation with strategy is clear, as this may affect stakeholder value and continued sponsorship. This also helps align efforts to wider objectives.

Prioritising Risky Assumptions

Through the Idea-Explore-Validate phases explained in The Lean Product Lifecycle book, we share how to identify and experiment with risky assumptions. This acts as a useful foundation to collect evidence and de-risk such assumptions, obtaining higher confidence in the potential outcome whilst minimising the effort to learn. These techniques are particularly useful when referencing risk as a confidence indicator while planning your product roadmap.

To help surface new risks and assumptions and act as a foundation for exploration, the following six steps provide a useful exercise to help your team learn and prioritise what to do next.

1. Business/Operating model review — Review your business model canvas and call out any untested introductions, opportunities, risks and changes, as well as new learnings. For instance, to grow to the next level you may introduce a new channel or key relationship, and may want to test the value of this before committing. You can also review any new inputs and learnings around your business model as a whole at this point.

2. Learning reflection — Reflect on your growth hypotheses and learnings to date. Then review your proposed growth hypotheses for the next period (1–3 months). It’s recommended you reflect on your previously defined success criteria and take a data-driven, evidence-based approach to any reviews to avoid subjectivity. Future hypotheses should not ignore or contradict learnings from historic activities.

3. Capture outstanding assumptions — With your teams, individually list assumptions, goals and risks. These could be new assumptions from new knowledge or derivatives of those previously known.

4. Group visibility — Group all the results and list them on a board for everyone to see and input into. Knowledge needs to be shared and accessible to the whole group for healthier and more diverse contributions.

5. Scoring — Score each item by the risk of it failing and the impact to the business if the assumption is not true. It is recommended that you establish and define a few examples for each scoring reference, to help keep scores consistent relative to the scale. The impact rating should be informed by how dependent your business model projections are on the item’s success. For instance, if you will cease to operate when regulatory needs aren’t met, or you will put continued sponsorship at risk, then the impact should be scored 10 out of 10.

6. Refine — Discuss the items as a team and refine any scores through conversation. Be careful that refinement isn’t influenced by the hierarchy in the room. Ratings should be consistent and objective.

By multiplying the two scores together you get a single value per item, which lets you stack rank the items and sets a base for prioritisation, as shown in the following table. This fairly quick way to numerically quantify some of the risks helps you further explore and prioritise your backlog for experimentation. It will also provoke interesting conversations as a team, surfacing further insights whilst improving team alignment.

Note that risk and uncertainty alone should not be the deciding factor. You may have a low risk of failing (1) with a high impact (10), giving a score of 10; legislation is one such example. That score of 10 wouldn’t be high compared with a risk of 3 and an impact of 5, which results in a score of 15.

There are many aspects to consider regarding the risk of failing, such as time to completion. Continuing the legislative example, if the remaining time to complete these tasks decreases, the risk of failing increases as a multiplier; so an increase to 2 doubles the score to 20, and so on.
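As a rough sketch of how this scoring works in practice, the items, scores and deadline multiplier below are illustrative assumptions rather than figures from the book:

```python
# Illustrative only: the assumptions and scores below are made-up examples.
# Each item gets a risk-of-failing score and a business-impact score (1-10);
# multiplying them gives a single uncertainty score to stack rank against.

items = [
    # (assumption, risk_of_failing, impact_if_not_true)
    ("New paid channel acquires customers profitably", 7, 8),
    ("Regulatory requirements are met before launch", 1, 10),
    ("Existing users will adopt the premium tier", 3, 5),
]

def uncertainty_score(risk, impact, time_pressure=1):
    """Risk x impact, optionally amplified as a deadline approaches.

    time_pressure is an assumed multiplier (e.g. 2 when the remaining time
    to complete the work halves), mirroring the legislative example above.
    """
    return risk * impact * time_pressure

for name, risk, impact in sorted(items, key=lambda i: uncertainty_score(i[1], i[2]), reverse=True):
    print(f"{uncertainty_score(risk, impact):>3}  {name}")
```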

What the uncertainty score is particularly useful for is highlighting the combined high-value items, allowing you to get ahead, de-risk the item and reduce the chance of failure in line with its impact. This list is dynamic and should be reviewed regularly, with findings further explored and managed. You can use this rating as an exercise to explore items and tease out priorities and assumptions.

Next Steps

Taking this further, it’s sometimes useful to incorporate other dimensions such as the cost of delay or weighted shortest job first (WSJF). This is useful where you have a lot of ideas channelled through a bottleneck, competing for resources, and you need a logical, prioritised path of value.

When your business model has an increased level of certainty, you will be able to call upon more evidence and lean more towards operational efficiency. Weighted shortest job first (WSJF), which divides the cost of delay by the job size, can be beneficial as a prioritisation technique, especially when considering the effort versus the returned value. It’s common to see priorities change when this is introduced, which generally reveals a more optimal path of value and return on investment.
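A minimal sketch of that calculation follows; the features, cost-of-delay and job-size numbers are illustrative assumptions:

```python
# Illustrative only: the features, cost-of-delay and job-size values are made up.
# WSJF = cost of delay / job size; higher scores are worked on first.

features = [
    # (feature, cost_of_delay, job_size)
    ("Self-serve onboarding", 13, 5),
    ("Usage-based billing", 20, 8),
    ("Single sign-on", 8, 3),
]

def wsjf(cost_of_delay, job_size):
    return cost_of_delay / job_size

for name, cod, size in sorted(features, key=lambda f: wsjf(f[1], f[2]), reverse=True):
    print(f"WSJF {wsjf(cod, size):.2f}  {name}")
```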

When taking a hypothesis-driven approach to calculating or estimating WSJF, we would strongly advise that the values are informed by evidence. When working with teams, particularly those in larger enterprises with mature products, we advocate using evidence as a multiplier. This is particularly important if you’re operating within a complex or complicated domain space where discovery and emergence are expected. Evidence gives WSJF calculations a higher degree of certainty, removing some subjectivity from the scoring. Experiments and research contribute to the collection of evidence, supported by clear hypothesis success criteria.

To support the inclusion of evidence in a more structured format, we recommend that you establish an agreed weighting for the evidence provided, using a confidence indicator. The more evidence and data provided, the more confident you should be in your projected outcomes, which will help you understand risk and value.
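One way to make that agreed weighting explicit is a shared scale mapping the strength of evidence behind an item to a confidence multiplier. The levels and values below are an assumed example for illustration, not the book's sample scale:

```python
# Illustrative only: an assumed evidence-to-confidence mapping, agreed by the
# team before any scores are applied.
CONFIDENCE = {
    "opinion only":               0.1,  # no supporting evidence yet
    "internal validation":        0.3,  # limited value on its own
    "qualitative customer input": 0.5,  # interviews, support conversations
    "experiment results":         0.8,  # measured against success criteria
    "validated in market":        1.0,  # observed real usage or revenue
}

def confidence_weighted(score, evidence_level):
    """Scale a value, impact or WSJF score by the evidence behind it."""
    return score * CONFIDENCE[evidence_level]

print(confidence_weighted(2.5, "experiment results"))  # 2.0
```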

If something is deemed high value for growth, then the supporting evidence should be strong wherever an investment of resources and people’s time and effort is expected.

You should extend and contextualise your scoring system for your business and environment. It’s important that this is explicit and shared before any scores are applied (a sample is available in the book). An example of a combined rating is shown below:

It’s important to note, when using evidence as a multiplier, that this doesn’t become one-dimensional. As you collect evidence, that evidence should simultaneously inform the value and impact, so that all the values stay current. If you don’t consider this, you run the risk that the sheer volume of evidence collected, rather than what it tells you, skews your priorities. The main purpose of using evidence as a multiplier is to tease out the distinction between an assumption and an assertion: asserted facts not supported by evidence are, through that lack of evidence, still assumptions.

The confidence value shown in the above table, which combines the weighting and the evidence level, helps teams tackle the features with the most likely outcomes. So in the above example, you would change the priority expressed by WSJF alone from feature order A, B, C to B, C, A. You may challenge this by stating that A’s weighting impacts the business to a greater degree, but the confidence indicator offsets the assumption with evidence and allows you to consider its likelihood of achievement. Investing resources on the basis of desired outcomes and effort cost alone presents a high risk to achieving those outcomes. Where this is challenged because the weighting is significant, we would recommend experimentation to gather evidence, re-adjusting the scoring accordingly. Experiments should be well constructed and objective, as the goal isn’t to confirm a bias. A larger pool of evidence demonstrating that the assumption may not achieve the desired outcome should reduce the score.
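A hypothetical set of numbers shows the reordering described above; every value here is an assumption chosen only to illustrate the mechanism. Feature A wins on raw WSJF, but once each score is multiplied by an evidence-based confidence indicator the order becomes B, C, A:

```python
# Illustrative only: made-up numbers showing how a confidence multiplier can
# reorder a backlog that was ranked by WSJF alone.

features = {
    # name: (cost_of_delay, job_size, confidence 0-1 based on evidence)
    "A": (10, 2, 0.2),  # highest weighting, largely an assumption
    "B": (9, 2, 0.8),   # slightly lower weighting, strong evidence
    "C": (8, 2, 0.5),   # moderate evidence
}

def wsjf(cod, size):
    return cod / size

def confident_wsjf(cod, size, confidence):
    return wsjf(cod, size) * confidence

by_wsjf = sorted(features, key=lambda k: wsjf(*features[k][:2]), reverse=True)
by_confidence = sorted(features, key=lambda k: confident_wsjf(*features[k]), reverse=True)

print("WSJF alone:     ", by_wsjf)        # ['A', 'B', 'C']
print("With confidence:", by_confidence)  # ['B', 'C', 'A']
```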

Remember, where there is high value and high impact but low evidence, this should trigger experiments to capture evidence. As the evidence is collected and informs or reaffirms the impact and value presumption, the assumption moves closer to a likely fact, which should begin to push the idea closer to productisation. Product backlogs are best guesses, based on limited knowledge at a point in time, about how to achieve desired outcomes.

Written by Craig Strong

Product Enterprise global practice lead at AWS, helping companies innovate and grow through people, operations and product. Author of Lean Product Lifecycle.