RICE Scoring: A Framework to Resolve the Product Prioritization Conundrum

Digvijay Singh
5 min read · Jul 7, 2020


Product road mapping is a herculean task for PMs and product teams because it demands activities such as:

  • Brainstorming new ideas.
  • Conducting extensive market and user research to find the right product-market fit.
  • Collecting, refining, and synthesizing feedback.

By carrying out these activities correctly and iteratively, we end up with a solid product roadmap full of good ideas, features, and initiatives that allow organizations and product teams to achieve their North Star metric, goal, or product vision.

Building a product roadmap with an adequate number of features is one thing; the order in which those features are built is another, and this is where prioritization comes into play. An appropriate prioritization technique addresses the critical question of relevance, helping you build the right features at the right time so that your product or offering has a longer shelf life in the market.

In this article, we will explore the RICE scoring model, a prioritization framework designed to assist product teams in determining which ideas, features, and initiatives should be prioritized first.

About the RICE Scoring Model & How It Works

This prioritization model was created by Intercom to improve its own decision-making process. Product teams can use it to evaluate competing ideas, features, and initiatives by scoring each one against four factors (Reach, Impact, Confidence, and Effort) and applying the following formula to generate a RICE score.

Formula to calculate the RICE score:

RICE Score = (Reach × Impact × Confidence) ÷ Effort
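
To make the arithmetic concrete, here is a minimal sketch of the formula in Python (the function name and the example numbers are my own for illustration, not part of Intercom's tooling):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach * Impact * Confidence) / Effort.

    reach      -- estimated users/events affected per time period
    impact     -- multiplier on the 0.25-3 scale described below
    confidence -- percentage expressed as a decimal, e.g. 0.8 for 80%
    effort     -- estimated person-months of work
    """
    return (reach * impact * confidence) / effort

# Example: 500 users reached, high impact (2), 80% confidence, 2 person-months of effort
print(rice_score(500, 2, 0.8, 2))  # 400.0
```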

Reach

Reach, as the name implies, estimates how many users a feature or initiative will reach in a given time period. To determine reach, make sure your estimates are supported by real measurements from your product metrics rather than numbers pulled out of thin air.

Let’s look at some examples to understand this better:

  • We hope to reach 500 new users by the end of this quarter with this new feature X. In this case, your reach score would be 500.
  • The new landing page can help us attract 1,000 new page visits, and with a 30% conversion rate, we can get 300 new signups on our website this month. Your reach score would be 300 in this case.

Impact

Impact can be measured through a quantitative objective like:

How many trial users would become paying customers if this subscription plan/model were introduced to them?

OR a qualitative objective such as:

We need to increase customer delight by localizing our app in order to positively impact our non-English-speaking users.

Because impact is difficult to quantify precisely, we can estimate its scale using a five-tiered scoring system: 3 for “massive impact,” 2 for “high,” 1 for “medium,” 0.5 for “low,” and 0.25 for “minimal.”

Choosing an impact multiplier on a scale from 0.25 to 3 may seem unscientific, but it is preferable to the alternative of deciding purely on gut feeling, which rarely works out.
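
If you track these tiers in a script or spreadsheet, a simple lookup keeps the multipliers consistent across features. A small sketch (the constant name is my own):

```python
# Five-tier impact scale used in the RICE model
IMPACT_SCALE = {
    "massive": 3,
    "high": 2,
    "medium": 1,
    "low": 0.5,
    "minimal": 0.25,
}

impact = IMPACT_SCALE["high"]  # a feature judged to have high impact -> 2
```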

Confidence

Confidence is a controlling factor that tempers enthusiasm for exciting but under-supported features, especially when you believe a certain feature will have a significant impact but the supporting data is lacking. To assign a level of confidence to one feature over another, give each feature a percentage score: 100% for “high confidence,” 80% for “medium confidence,” and 50% for “low confidence.”

If anything is less than 50%, consider that you are shooting in the dark, with a high chance of missing the target.

Let’s look at a few examples to understand this better:

  • Feature A: The supporting metrics indicate that we can reach X number of users, and our user research indicates that Y% of users will benefit, and we have a design, engineering, and QA estimate for releasing this feature by the end of this month. In this case, our confidence level would be 100 percent.
  • Feature B: We have supporting metrics for reach showing that we can get exposure to X number of users, but we are unsure about the impact. We also have only a ballpark effort estimate for this feature from the design, engineering, and QA teams. In this case, our confidence level would be 80 percent.
  • Feature C: The supporting metrics show that our reach and impact score are not what we expected during the initial planning, and the estimated design, engineering, and QA efforts are also significantly higher than our initial ballpark figure. In this case, our confidence level is 50%.

Effort

For a greater competitive advantage, features and initiatives should be completed and launched with as little time and effort as possible. To calculate effort, estimate how much work the product team (marketing, design, engineering, QA, etc.) will need to complete the feature or initiative. Effort is estimated in person-months, and that number is your effort score (see the conversion sketch after the examples below).

Let’s look at a few examples below:

  • Feature A: For rolling out this feature, we need about a half-week of planning, about two weeks of design work, 2–3 weeks of engineering time, and about a week of QA and bug fixing. Here, our effort score would be 2 person-months.
  • Feature B: For rolling out this feature, we will need 2–3 weeks of planning, about 2 weeks of design efforts, a month of engineering time, and more than a week of QA and bug fixing. Here, our effort score would be 4 person-months.
  • Feature C: For rolling out this feature, we need a half-week of planning, no design efforts because we can reuse the same design components and layout, 1–2 weeks of engineering time, and less than a half-week of QA and bug fixing. Here, our effort score would be 1 person-month.
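
One way to arrive at such a number is to total the estimated weeks across disciplines and convert them into person-months. A rough sketch, assuming about four working weeks per person-month and rounding up to keep estimates coarse (the helper name and the exact week figures for Feature A are my interpretation of the estimates above):

```python
import math

def effort_in_person_months(weeks_by_discipline: dict[str, float]) -> int:
    """Convert per-discipline week estimates into a coarse person-month effort score."""
    total_weeks = sum(weeks_by_discipline.values())
    return math.ceil(total_weeks / 4)  # ~4 working weeks per person-month, rounded up

# Feature A above: ~0.5 weeks planning, ~2 weeks design, ~2.5 weeks engineering, ~1 week QA
print(effort_in_person_months({"planning": 0.5, "design": 2, "engineering": 2.5, "qa": 1}))  # 2
```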

Now Let’s Calculate & Measure!

Once we have estimated all four factors and run them through the formula, we have our resulting RICE scores, which represent “total impact per time worked.”

To understand how the formula works, see the spreadsheet snapshot below, where we have calculated RICE scores for different features so that you can compare them, followed by a scripted version of the same calculation.

RICE Scoring Template
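
If you prefer code to a spreadsheet, the same comparison can be scripted. In the sketch below, the confidence and effort values echo the earlier examples, while the reach and impact numbers are hypothetical, chosen purely to show how the resulting scores rank:

```python
def rice_score(reach, impact, confidence, effort):
    # Same formula as above: (Reach * Impact * Confidence) / Effort
    return (reach * impact * confidence) / effort

# (reach per quarter, impact on the 0.25-3 scale, confidence as a decimal, effort in person-months)
features = {
    "Feature A": (500, 2, 1.0, 2),
    "Feature B": (1200, 1, 0.8, 4),
    "Feature C": (300, 0.5, 0.5, 1),
}

# Rank features from highest to lowest RICE score
for name, inputs in sorted(features.items(), key=lambda kv: rice_score(*kv[1]), reverse=True):
    print(f"{name}: RICE = {rice_score(*inputs):.0f}")
# Feature A: RICE = 500
# Feature B: RICE = 240
# Feature C: RICE = 75
```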

Conclusion

The feature priority determined by the RICE score does not have to be strictly adhered to: you may want to proceed with a lower-priority feature if it is a dependency for a feature or initiative that is already in progress. Another example: if a regional compliance or regulatory requirement blocks expansion into a certain market, the compliance feature becomes “table stakes,” because addressing it unlocks that new market regardless of its RICE score.

The overall goal of the RICE scoring model is to make better-informed decisions, reduce personal bias in decision-making, and, finally, help defend priorities to key stakeholders such as executive leadership.

Please check out my previous article on MoSCoW prioritization, which discusses how that technique can streamline your feature prioritization process while developing an MVP.

For professional connections, you can add me on LinkedIn.
