RICE and BREaD

Arunkumar Jayaprakasam
5 min read · Apr 9, 2020


First off, excuse the ‘cheesy’ title: considering the time spent managing food during these COVID-19 times, I could not help myself.

Prioritizing the list of items for a software release is one of the key PLM challenges, as there are various factors to juggle: revenue impact, deadlines for the release roll-out, effort involved, inter-team dependencies, technical risk associated with new features, and so on. In this blog I want to share some experiences using RICE, a prioritization framework for product management (in a B2B context, where the release cycle is on the order of ~3 months), and an alternative proposal.

Why a product management prioritization framework?

With a framework in place, a PLM can avoid some common pitfalls, namely:

  • It counters ‘recency bias’: the tendency to include items related to the most recent hot topic, while other items with potentially more revenue impact get ignored.
  • It counters PLM bias, which can happen especially when the PLM is technically strong and prioritizes favourite new/innovative ideas without sufficient backing.
  • Most importantly, it is a tool to align PLM, Sales and Engineering on the list of items for the next release in an objective fashion, rather than based on personal choice.

What is RICE?

RICE stands for Reach, Impact, Confidence and Effort, the four factors used in the prioritization framework.

1. Reach: This is a measure of the number of customers this feature affects (say in the next quarter).

2. Impact: This quantifies how important this feature is (for the target customers), or how much revenue impact this feature is expected to have. Typically a relative scale of, say, 1–3 is used to grade different features.

3. Confidence: This reflects the probability of the above estimates being correct: 100% indicates high confidence in the estimate, 80% medium confidence, and so on.

4. Effort: The estimated effort for development plus testing of this feature, typically estimated in person-months.

Then the RICE score for each feature is computed as:

RICE = (Reach * Impact * Confidence) / Effort

The features are then sorted by score (highest first), and the top-ranked items are taken, subject to the available bandwidth to meet the target deadline.
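The scoring-and-selection loop above can be sketched in a few lines of Python. The feature names, factor values and capacity here are all illustrative, not from the article:

```python
# Hypothetical feature list; the fields mirror the four RICE factors.
features = [
    # reach = customers affected, impact on a 1-3 scale,
    # confidence as a probability, effort in person-months
    {"name": "SSO integration", "reach": 40, "impact": 3, "confidence": 1.0, "effort": 4},
    {"name": "Dark mode",       "reach": 70, "impact": 1, "confidence": 0.8, "effort": 2},
    {"name": "Bulk export",     "reach": 25, "impact": 2, "confidence": 0.8, "effort": 1},
]

def rice_score(f):
    # RICE = (Reach * Impact * Confidence) / Effort
    return f["reach"] * f["impact"] * f["confidence"] / f["effort"]

# Sort highest score first, then fill the release up to available capacity.
ranked = sorted(features, key=rice_score, reverse=True)

capacity = 6  # person-months available this release (illustrative)
release, used = [], 0
for f in ranked:
    if used + f["effort"] <= capacity:
        release.append(f["name"])
        used += f["effort"]
```

Note that this greedy fill simply skips an item that does not fit and keeps scanning, which already hints at the capacity nuances discussed later in the post.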

Experiences with RICE

This framework is very useful: it provides a structured way to make the selection instead of a largely gut-feel-based approach. It gives insight into the ranking of different items and gives engineering a development order, which is desirable. But there were a few issues as well, in our (B2B) context:

  • There were some items that simply needed to be present in the release, irrespective of the revenue/effort factors. So there was a tendency to play around with the numbers to ensure those features landed at the top.
  • Reach is not a particularly valid criterion in B2B: due to customer asymmetry, a few customers may have a much larger share of revenue (and thereby say) than others.
  • Getting precise effort estimates is a challenge as well, and when an estimate arrives late it can change the order of the items significantly.
  • Everything is an estimate anyway, so Confidence was mostly a drag on the process; the tendency was simply to set it to 100%.

BREaD — an alternative proposal

Considering the above, below is an alternative set of metrics for the prioritization framework.

1. Blocker: This is proposed to be a binary (0/1) field that categorizes some features as blockers: important for various reasons, including but not limited to managing the relationship with a key customer, addressing key frustrations/UX issues (not necessarily with direct revenue impact, say due to customer stickiness), or an absolute need for a sales win whose current-quarter revenue impact may be minimal but which is expected to bring more revenue in the longer run. It is suggested to keep the number of blockers within 5–10% of the total features under consideration for a release.

2. Revenue/Effort: Instead of Reach, it is better to estimate the ‘potential’ revenue impact of the feature: among the target set of customers, estimate the revenue impact of this feature and take a cumulative measure across them. If simpler, this could be a relative scale as well. For Effort we propose just a relative scale of, say, 1–4 to avoid a potentially lengthy estimation process. Revenue/Effort is then normalized onto a 0–9 scale.

3. Delay: Then there are those items that get relatively low-to-medium attention: they have been requested but repeatedly de-prioritized in favour of more attractive items. Over time, following the classic Kano model, today’s delighters become tomorrow’s hygiene features. Similarly, what may be an acceptable limitation today often grows into a considerably limiting operational need. So this is proposed as a relative measure of 0–9 reflecting the accumulated delay since the time of request (which could be months or release cycles since the original need).

So with the above, the ‘BREaD’ score for each feature is the 3-digit number formed from the three metrics: Blocker as the hundreds digit, Revenue/Effort as the tens digit, and Delay as the units digit.

Then, similar to RICE, the features are sorted by ‘BREaD’ score (highest first) and items are picked until the DEV capacity for the release is used up.

Related Points

Some other related issues during prioritization are:

  • Effort is a cumulative metric, aggregating the effort of different teams with different capacities. So while picking items, some team’s capacity may be exhausted early, in which case we may need to pick lower-ranked items (that do not affect that team), find ways to allocate more resources to that team, etc.
  • Dependent items also need to be managed: sometimes, for technical reasons, a lower-ranked feature may need to be developed to enable a higher-ranked one. Such cases are handled by assigning the dependency the same score as the higher-priority item and picking them together.
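The dependency rule from the second bullet can be sketched as a simple score-promotion step before sorting. The feature names, scores and dependency map here are hypothetical:

```python
# Illustrative scores (e.g. BREaD or RICE) keyed by feature name.
scores = {"A": 90, "B": 70, "C": 40}

# Feature A technically requires the lower-ranked feature C.
depends_on = {"A": ["C"]}

# Promote each dependency to its dependent's score (if higher), so the
# pair sorts together and gets picked together.
for feat, deps in depends_on.items():
    for dep in deps:
        scores[dep] = max(scores[dep], scores[feat])

ranked = sorted(scores, key=scores.get, reverse=True)
```

Because Python’s sort is stable, the dependent item and its promoted dependency end up adjacent in the ranking, which is exactly the “pick them together” behaviour described above. (For chains of dependencies, the promotion would need to be repeated until scores stop changing.)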
