RICE Scoring


What is RICE scoring?

The RICE scoring model was introduced by Intercom a few years ago and has since been widely adopted by product managers and product owners to prioritize feature releases and projects. The framework is built on four key criteria that form the acronym “RICE”, and it helps you avoid bias towards the features and projects you personally prefer.

The acronym stands for the four criteria:

  • Reach: how many users an initiative will affect within a set time period

  • Impact: how much the initiative will benefit each affected user

  • Confidence: how sure you are about your reach and impact estimates

  • Effort: how much work the initiative will require from your team

Why and when should you use RICE scoring?

The RICE scoring model is one of the best frameworks for a product team trying to work out the order in which to tackle initiatives or features. It is time-saving and consistent, and its objective scoring is a great help when deriving the importance of an initiative. Used correctly, it lets the team evaluate which features are crucial and which are not.

How does RICE work? A formula for success

A RICE score is calculated by dividing overall value by how much effort is needed to get there. This formula provides a standardized score, which allows us to objectively decide which items to prioritize. We will break down each of the factors to get a good understanding of their importance and how they work.
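The formula is:

RICE score = (Reach × Impact × Confidence) ÷ Effort

Reach, impact, and confidence multiply together to give the total expected value of an initiative; dividing by effort turns that into value per unit of work.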

Reach

Reach is the number of users or paying customers that would be directly affected by a feature within a set time period.

The unit depends on what you measure: it could be customers per month or, for event-based features, transactions or actions per month.

In practice, that can mean the number of people who would interact with a certain feature in a month, or the reduction in churn in the month following a new feature’s release.

Examples:

  • New onboarding: This feature will reach an estimated 3,000 new users per month.

  • Google Calendar integration: This feature needs to be activated right after onboarding. Therefore, if 3,000 users finish onboarding and 80% choose to turn it on, the reach is 3,000 × 0.8 = 2,400 users per month (see the sketch after this list).

  • Slack bot: Every user who uses this feature every month will experience the upgrade. The reach is 1,000 customers per quarter.
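As a quick sketch of that reach arithmetic (the variable names are illustrative, not from the original):

```python
# Reach estimate for the Google Calendar integration example:
# users finishing onboarding, times the share expected to enable it.
onboarded_users_per_month = 3000
activation_rate = 0.8  # 80% choose to turn the integration on

reach_per_month = onboarded_users_per_month * activation_rate
print(reach_per_month)  # 2400.0 users per month
```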

Impact

Impact is the overall contribution of a feature to your product, reflected in the benefit your users get from it. Depending on your use case, it can also mean how much a feature will increase your conversion rate. Measuring that benefit precisely is hard, so there are several scales to choose from, with Intercom’s being a widely adopted standard.

There are numerous ways of determining impact. Some key questions to consider are: will this feature greatly improve our conversion rates? Will it help retain users? Does it improve the ease of use significantly? Perhaps it’ll make users have a eureka moment and realize this is exactly what they need.

The impact scale involves estimation, but this is much better than gut feeling. Here’s an example with airfocus T-shirt sizes:

  • New onboarding: It will have a large impact on conversion rate, so the impact score is 1 (XL).

  • Slack bot: It will have a rather low impact on users, so the impact score is 0.25 (S).

  • Google Calendar integration: In terms of impact, it sits somewhere in between. The impact score is 0.5 (M).
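For reference, here is a minimal sketch of the T-shirt-to-score mapping used above. Only the three sizes that appear in these examples are listed, so treat it as illustrative rather than a complete scale:

```python
# Impact scores for the airfocus T-shirt sizes used in the examples.
# Illustrative only: sizes not shown in this guide are omitted.
IMPACT_SCORES = {
    "S": 0.25,   # rather low impact (Slack bot)
    "M": 0.5,    # somewhere in between (Google Calendar integration)
    "XL": 1.0,   # large impact on conversion rate (new onboarding)
}
```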

Confidence

This metric accounts for the confidence you have in your estimates. Sometimes you believe a project could have a large impact but simply lack the data to back your assumptions up. In other words: how confident are you about your reach and impact scores, and how much data do you have to back them up?

We use a percentage scale for confidence. Always ask yourself: how well can my data support my estimates? Typically, 100% represents high confidence, medium equals 80%, and low is 50%; anything below that would be a shot in the dark.

Let’s look at our example:

  • New onboarding: We have researched users extensively for impact, running live tests, we have exact numbers for reach, and the team has provided an effort estimate. This feature gets a 90% confidence score.

  • Google Calendar integration: We have data to back the reach and effort, but we’re still sceptical about the impact. This project gets a 70% confidence score.

  • Slack bot: The reach and impact are rather ambiguous, and the effort is our most accurate criterion. This project gets a 50% confidence score.

Effort

This represents the amount of work required from your team to build a feature or finish a project. Depending on the use case, the unit could be person-months or project-hours.

Keep in mind that estimates are made in whole numbers and that effort counts against a project, since it is the denominator in the formula.

You can determine effort quite simply by asking: how much time will a feature require from all of our team members?

Here’s a person-month example:

  • Slack bot: It’s a simple project requiring only a few days of planning, two weeks of design, and a few days of coding. We’ll give it a score of 2 person-months.

  • Google Calendar integration: It’ll take a week of planning, 3-4 weeks of dev team time, and 1-2 weeks of design. We’ll assign it a score of 5 person-months.

  • New onboarding: Planning this project will take several weeks, with at least a month of engineering plus extensive design time. Therefore, the effort score will be 6 person-months.

Calculating the RICE score

A RICE score should be easy to calculate with your team. Once you’ve determined the four factor scores, plug them into the formula to get a final score and compare your projects.
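Here is a minimal sketch that plugs the three example projects from this chapter into the formula (all numbers are taken from the sections above; the helper function is illustrative):

```python
# RICE score = (reach * impact * confidence) / effort.
# For strictly comparable scores, reach should use the same time
# period everywhere; note that the Slack bot figure above is per
# quarter while the others are per month.
projects = {
    # name: (reach, impact, confidence, effort in person-months)
    "New onboarding": (3000, 1.0, 0.9, 6),
    "Google Calendar integration": (2400, 0.5, 0.7, 5),
    "Slack bot": (1000, 0.25, 0.5, 2),
}

def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

for name, factors in sorted(projects.items(),
                            key=lambda item: -rice_score(*item[1])):
    print(f"{name}: {rice_score(*factors):g}")

# New onboarding: 450
# Google Calendar integration: 168
# Slack bot: 62.5
```

In this example, the new onboarding flow scores highest, so it would be the first initiative to tackle.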

What do you do with the score?

So, you’ve got your scores, and now you know which initiatives to prioritize first: those that scored the highest. This will help you back your decisions with data and defend them to other stakeholders.

💬 Don’t forget:

There are externalities and criteria that RICE fails to consider that might influence what you work on first, such as dependencies or the availability of key personnel; or you may simply prefer to start with a particular project for other reasons. But RICE will let you see the trade-offs of making such decisions.

Your RICE scores are best visualized on a Value vs. Effort chart. It will provide a quick overview of your best projects, low-value items you should cut, quick wins, and valuable but time-consuming projects so that you can assess them against each other.
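As a rough sketch of such a chart in code (a minimal matplotlib example; value here is reach × impact × confidence, using the example projects above):

```python
import matplotlib.pyplot as plt

# Value = reach * impact * confidence; effort is in person-months.
projects = {
    "New onboarding": (3000 * 1.0 * 0.9, 6),
    "Google Calendar integration": (2400 * 0.5 * 0.7, 5),
    "Slack bot": (1000 * 0.25 * 0.5, 2),
}

fig, ax = plt.subplots()
for name, (value, effort) in projects.items():
    ax.scatter(effort, value)
    ax.annotate(name, (effort, value))
ax.set_xlabel("Effort (person-months)")
ax.set_ylabel("Value (reach x impact x confidence)")
ax.set_title("Value vs. Effort")
plt.show()
```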

Why we love it

Bang-for-your-buck perspective: it lets teams weigh how much their effort is worth relative to the overall value delivered, which is exactly what we want to maximize.

Paints a comprehensive picture: the criteria cover the factors with the biggest impact on the product and its users, aligning vision with initiatives.

Easy visualization: scores can be plotted on a value vs. effort chart for quick overview and decision-making.

Reduced impact of bias: each factor is quantified, and the confidence score makes explicit how much data backs it.

Based mostly on metrics: as the product progresses through its lifecycle, the scores can be revisited and refined.

A few downsides...

Lack of accuracy: sometimes evaluating the reach or future impact of a project can be difficult.

Dependencies are not taken into account: a low-scoring item may need to be built before a higher-scoring one that depends on it.

Blind to anything outside its four criteria: factors the model does not measure simply don’t show up in the score.
