The ICE Scoring Model is an agile prioritization tool that assesses projects, ideas, and features against three measures: Impact, Confidence, and Ease. Unlike weighted scoring systems, the ICE method uses only these three parameters and gives each a relative score from 1 to 10 (1 being low, 10 being high). Simply multiply all three scores together, and you’ve got your ICE result.
As you’d expect, the project, idea or feature with the highest ICE score should be prioritized first.
The ICE methodology (not the same as RICE) is a quick and easy way of deciding where to focus your energies, but it suffers a little from being highly subjective.
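To make the arithmetic concrete, here is a minimal sketch of the ICE calculation in Python; the function name and example scores are illustrative, not part of any standard tool:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three 1-10 scores to get the ICE result."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ICE scores must be between 1 and 10")
    return impact * confidence * ease


# A feature scored 8 for impact, 6 for confidence, and 7 for ease
print(ice_score(8, 6, 7))  # 336
```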
Agile environments are fast-paced, and during short development cycles decisions must be made quickly. Decision makers need to consider as much information as possible without overwhelming themselves. This is a delicate balance to maintain, but the ICE Scoring Model does a fair job of giving teams a snapshot of the most valuable activities.
The I, C, and E in the ICE formula stand for Impact, Confidence, and Ease, and these are the three metrics on which projects are scored.
Impact refers to the potential for a project to support the main business objective. For example, if your focus is on getting more people to use your app, anything that could encourage a boost in sign-ups would have a high impact score.
Confidence tempers the Impact score a little by making you consider how confident you are that this impact would be realized.
Ease is somewhat self-explanatory: how easy is the project or test to complete? There are no fixed parameters here; all the scoring is done relative to the other ideas on the table.
Here is an example with three possible projects scored using the ICE method.
Project 1 would have a huge impact, and we are confident of this. However, it is complex and time-consuming, so it scores low for ease, which drags the overall ICE score way down.
Project 2 has a middling potential impact, but we are not confident we can actually achieve it, so its overall score is also low despite a high ease value.
Project 3 is low impact, but it carries high confidence and ease scores, meaning it can be completed quickly and is very likely to deliver a positive result.
So, under the ICE model, we should prioritize Project 3 and then, once it has been delivered, return to the table to decide on the next project.
Without a scoring method like ICE, we might have been tempted to go straight for the alluring impact of Project 1. But after scoring all three projects together, we can see that other activities should be done first.
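As a rough illustration, here is how the three projects above might score; the specific numbers are hypothetical and were chosen only to match the descriptions:

```python
# Hypothetical (impact, confidence, ease) scores matching the descriptions above
projects = {
    "Project 1": (9, 8, 2),  # huge impact, high confidence, hard to deliver
    "Project 2": (5, 3, 8),  # middling impact, low confidence, easy to do
    "Project 3": (3, 9, 9),  # low impact, high confidence, very easy
}

for name, (impact, confidence, ease) in projects.items():
    print(name, impact * confidence * ease)
# Project 1 144, Project 2 120, Project 3 243 -> Project 3 comes out on top
```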
Sean Ellis, CEO of GrowthHackers, is widely credited with inventing the ICE Scoring Model. Ellis is an angel investor with significant experience helping businesses grow.
He owes his success, in part, to divergent thinking followed by quick prioritization.
The ICE Scoring Model gave him a way to do that: sifting quickly through a long list of ideas with a good degree of accuracy.
The ICE methodology is a quick and easy way to decide which projects should take priority. Because it is so simple, you might expect the model to be an oversimplification.
But because the values are multiplied together, each of the three criteria has an equally large influence over the end result. A one-point change in any single score shifts the final result proportionally: moving Confidence from 7 to 8, for instance, raises the total by over 14% whatever the other two scores are (9 × 7 × 6 = 378 versus 9 × 8 × 6 = 432). This means the final ICE scores give a reasonably sharp picture of how your projects rank relative to each other.
This assumes the values are accurate though, which is where the ICE Scoring Model hits somewhat shaky ground.
The main drawback of the ICE model is how subjective the scoring is. Different people using the ICE formula at the same time could assign wildly different values for Impact, Confidence, and Ease to the same project or feature.
For example, a business owner could view adding a new feature as very easy, but the head of the development team will know more about the coding hours required. People may also score their own ideas more favorably than they score others’.
ICE decision-making starts with assigning a score from 1 to 10 to each category: impact, confidence, and ease. To reduce subjectivity in ICE prioritization, it’s a good idea to get input from multiple stakeholders.
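One simple way to blend input from multiple stakeholders, sketched below, is to average each person’s Impact, Confidence, and Ease scores before multiplying. Averaging is our assumption here; the ICE model itself doesn’t prescribe how to combine opinions:

```python
from statistics import mean

# Each stakeholder submits (impact, confidence, ease) on the 1-10 scale.
stakeholder_scores = [
    (8, 5, 3),  # e.g. business owner
    (6, 4, 2),  # e.g. development lead
    (7, 5, 4),  # e.g. product manager
]

# Average each criterion across stakeholders, then multiply as usual.
impact, confidence, ease = (mean(column) for column in zip(*stakeholder_scores))
print(round(impact * confidence * ease, 1))  # blended ICE score (98.0 here)
```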
Consider the ICE prioritization of two competing feature requests.
The first feature request is a change to the UI. The request gets an impact score of 8 because it will likely improve the product. However, there are doubts because some users don’t like change, so it gets a lower confidence score of 5. Finally, the UI change will be slow and difficult to implement, so ease gets the lowest score of 3. The total ICE score is 8 × 5 × 3 = 120.
The second feature adds the last login time to the home screen. It has a lower impact score of 4, but there is more certainty of improvement, so it gets a confidence score of 8 and an ease score of 9. With a final ICE score of 4 × 8 × 9 = 288, the ICE model pushes the second feature request to the top of the list.
ICE decision-making can be helpful for agile teams due to its simplicity and speed.
It ditches tons of documentation and allows new requirements to be integrated and prioritized throughout the development process.
Product managers can quickly prioritize between new features, enhancements, and bug fixes, without hindering the rapid progress of MVPs and product releases.
One of the best ways to use ICE in agile is for backlog management. The prioritization gives the development team the information they need to pull work from the backlog list as time allows, focusing on the most critical updates first.
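A backlog ranked by ICE score might look something like the sketch below; the items and scores are illustrative (the first two reuse the feature requests from the earlier example, the third is made up):

```python
# Each backlog item carries its three 1-10 scores; rank by the ICE product.
backlog = [
    {"item": "UI change",            "impact": 8, "confidence": 5, "ease": 3},
    {"item": "Show last login time", "impact": 4, "confidence": 8, "ease": 9},
    {"item": "Fix checkout crash",   "impact": 9, "confidence": 9, "ease": 6},
]

backlog.sort(key=lambda i: i["impact"] * i["confidence"] * i["ease"], reverse=True)

for entry in backlog:
    score = entry["impact"] * entry["confidence"] * entry["ease"]
    print(entry["item"], score)
# Fix checkout crash 486, Show last login time 288, UI change 120
```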
The ICE Scoring Model really comes into its own when decisions need to be made quickly and, crucially, where a ‘good enough’ level of accuracy is acceptable. This tends to be in fast-paced, high-velocity development environments and during high-growth stages.
Sean Ellis’s team at GrowthHackers defines ICE as a “minimum viable prioritization framework”: minimum viable meaning you can get what you need with the least amount of effort.
If you choose to adopt the ICE Scoring Model, then over time you will develop a keener sense of what will have a meaningful impact, a more honed confidence instinct, and a better grasp on how easy it is to get certain things done. This will go some way towards removing the disadvantages of the model, and help you make better decisions faster.
When used in the right context, ICE scoring can be a highly effective way of prioritizing.
The last thing anyone needs is an overly complicated prioritization system that causes more problems than it solves. ICE is a fast and simple way to assign value to an item, helping you spend less time prioritizing and more time actually building the product.
Even with all the data we could ever need to make a decision, sometimes it’s just about gut feeling. The ICE Scoring Model takes that into consideration, allowing you to make data-driven decisions that you truly believe in.
The ICE Scoring Model is extremely flexible. It works for relative prioritization when you’re comparing just a few things, and equally well for ranking a longer list, like your whole backlog.
As with all things, there are cons to go alongside the pros of the ICE Scoring Model.
Because we’re factoring in Confidence, a subjective judgment with no hard data behind it, the results of ICE scoring can be skewed by personal bias. Different people scoring the same project or feature at the same time can arrive at wildly different values for Impact, Confidence, and Ease.
Using the ICE Scoring Model can lead you down the path of least resistance. While tackling many small tasks may feel like progress, you may ignore larger but crucial pieces of the product puzzle. This can be disastrous when working with dependencies.
As much as we try to avoid silos, they can occur naturally. Essential information may also be withheld for various reasons, meaning ICE scoring can end up relying on partial information to make big decisions.
While ICE is a great scoring tool, it’s not suited to every situation. It’s good to have a couple of alternatives to ensure you make the right decision. Let’s look at some other scoring tools that product managers can use to make prioritization decisions with confidence.
The RICE framework is a product prioritization method used in agile development. It stands for Reach, Impact, Confidence, and Effort. It helps teams evaluate and compare potential features or projects by estimating their potential impact, the likelihood of success, the effort required, and the potential audience reach.
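For reference, RICE is typically calculated by multiplying Reach, Impact, and Confidence and then dividing by Effort, so that costly work lowers the score; a minimal sketch with made-up numbers:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE: reach * impact * confidence, divided by effort."""
    return (reach * impact * confidence) / effort


# e.g. 500 users reached per quarter, impact 2, 80% confidence, 4 person-months of effort
print(rice_score(500, 2, 0.8, 4))  # 200.0
```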
MoSCoW prioritization is a technique for prioritizing features or requirements in software development based on their importance. It stands for Must have, Should have, Could have, and Won't have. It helps teams focus on delivering the most important features first while deprioritizing the less important ones, reducing the risk of project failure.
Opportunity Scoring is a method for evaluating and prioritizing business opportunities. It involves assigning scores to each opportunity based on its potential value, the effort required, the level of competition, and the market size. This helps companies focus on the opportunities with the highest possible return on investment and the best chance of success.
The Kano Model is a customer satisfaction framework used to classify product features into different categories based on their impact on customer satisfaction. It involves categorizing features into basic, performance, and delight categories based on how they meet customer needs. This helps companies understand which features are critical to customer satisfaction and how to improve their products to meet customer expectations.