This chapter will navigate you through seven of the most popular prioritization frameworks:
Weighted scoring
Value vs. Effort
RICE
The Kano model
Story mapping
MoSCoW
Priority Poker
The weighted scoring decision matrix is a powerful quantitative technique. It evaluates a set of choices (such as ideas or projects) against a set of criteria you need to take into account (strategic fit, revenue increase, risks, costs, hours/days required). It is also known as a “weighted decision matrix model”.
This is often referred to as a "prioritization matrix", which is an umbrella term that’s also used to describe the Value vs Effort model. This crossover can often be a source of confusion. But don’t worry, we’ll be sure to be very clear about what we are talking about.
There are two main types of decision matrix: weighted and unweighted. The unweighted decision matrix assumes all criteria have the same importance while the weighted one applies different weights.
Weighted scoring and its decision matrix technique are not only widely applicable, but also one of the best ways to tackle important and complex decisions.
The weighted decision matrix is particularly useful when you have:
Many choices (such as different features, projects, and campaigns).
Multiple decision criteria to consider (such as strategic fit, costs, risk, and customer value) with similar or varying levels of importance.
It's exceptionally powerful when you have to choose between multiple promising options and need to consider many criteria, or when you need to allocate limited resources to multiple options.
By extensively evaluating your choices and quantifying the process, you'll be able to greatly reduce (and in many cases remove) emotion and guesswork from the decision process. This enables rational and objective decisions every time.
A dedicated prioritization tool like airfocus (shown below) allows you to set this up in minutes, but you can find how to recreate it with your own tools in our prioritization guide.
1. List different choices
Start by listing all the decision choices as rows. Don't forget any relevant choices, since these rows will form the foundation of your decision matrix.
In another VOOM Video App example they are:
Google calendar integration
2. Determine influencing criteria
Brainstorm what criteria will affect those decisions (this could be things like strategic fit, revenue increase, costs, project hours, and risk of failure, for example). List these criteria as columns.
Positive criteria usually represent your current product or business goals.
Using costs and/or project hours (or something similar) is a good starting point for negative criteria.
Sometimes deciding whether to add criteria can be a bit of a trade-off:
Having fewer criteria makes the prioritization process easier and less time-consuming.
Leaving criteria out, however, makes your model completely blind to that type of impact.
3. Weigh your criteria
Weigh each of the criteria in the columns using a number (the weight) to assess their importance and impact on your decision. Establish a clear and consistent rating scale (for example, 1 to 5, from insignificant to greatest impact). This helps to calculate the relative importance of each criterion.
4. Rate each choice for each criterion
Evaluate your different choices against the criteria. Using your defined rating scale (in our case, 1 through 5), rate each choice against each criterion individually. For example, if you think your mobile app has tremendous business value, give it a 5. Keep in mind: the ratings for different choices don't need to differ. Equal ratings are perfectly acceptable.
For each of these values, you have to make sure that higher values represent more preferable options. For example, a high ROI should lead to a high Business Value score because a great ROI is beneficial to your business. On the flip side, high development costs should result in a low Costs Value because high costs are negative.
Using a dedicated prioritization tool allows you to combine different value types like 1-5, any given amount of money (like $500 USD), or T-shirt sizes (S, M, L, XL) as well as scoring directions.
5. Calculate the weighted scores
Multiply each of the choice ratings by their corresponding weight.
6. Calculate the total scores
Sum up each of the choices and compare the total scores.
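The six steps above are easy to automate once the table is in place. Below is a minimal sketch; the criteria, weights, and ratings are hypothetical illustrations (not figures from this guide), and, per step 4, every rating is assumed to already point in the "higher is better" direction (so high costs get a low costs rating).

```python
# Hypothetical criteria weights (step 3) on a 1-5 importance scale.
criteria_weights = {"strategic_fit": 5, "revenue_increase": 4, "costs": 2}

# Hypothetical ratings (step 4) for each choice against each criterion,
# all oriented so that higher is better.
choices = {
    "Google calendar integration": {"strategic_fit": 4, "revenue_increase": 3, "costs": 2},
    "Mobile app": {"strategic_fit": 5, "revenue_increase": 5, "costs": 1},
}

def total_score(ratings, weights):
    # Step 5: multiply each rating by its criterion weight;
    # step 6: sum the weighted scores into a total.
    return sum(weights[c] * r for c, r in ratings.items())

ranked = sorted(choices, key=lambda name: total_score(choices[name], criteria_weights), reverse=True)
for name in ranked:
    print(name, total_score(choices[name], criteria_weights))
```

The choice with the highest total lands at the top of `ranked`, which is the order you'd prioritize in.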
Now you know how to get started with a weighted decision matrix. Before you go ahead, check out these three essential tips to avoid common pitfalls:
1. Remove all unnecessary choices
Before you start creating your weighted decision matrix, identify what sort of criteria you think a winning choice requires. Does it need to meet a minimum number of attributes? Does it need to align with a certain goal? By doing this, you will quickly eliminate unnecessary options. Removing all unnecessary items and criteria is a step towards simpler prioritization. Ultimately, this saves time and yields better results.
2. Rate each criterion separately
When considering each criterion, be sure to isolate it from all the others on the list. This will help you make an objective decision by putting that one criterion into perspective, and you'll be able to produce a more unbiased score without being influenced by other factors.
3. Keep the decision matrix up to date
External realities (like a new competitor, for example), as well as internal goals and considerations (such as budget cuts), can change quickly. So, watch out for any new factors and update your decision matrix accordingly.
It creates transparency and agreement about the importance of each prioritization factor in the decision-making process.
It's one of the most comprehensive methods of comparing numerous initiatives, thanks to its linear layout.
It vastly reduces emotional bias, as it is based on objective metrics that affect the viability of the feature in question.
Could be subject to inherent bias: criteria weights can be under- or overestimated relative to other criteria.
Blind to externalities: it doesn't consider changing internal and external factors (such as new market entrants).
Dependencies are not considered: this can be problematic, as dependencies are an important consideration when prioritizing.
Value vs. Effort allows teams to assess their initiatives based on how much value they will bring and how difficult they’ll be to implement. This method has the advantage of visualization as the team will plot their items in a quadrant to decide how to distribute and prioritize initiatives.
In order to be more objective, the placement of the items could be defined quantitatively by a decision matrix (see weighted scoring).
Your team creates a prioritization matrix with Business Value as the Y-axis, and Effort as the X-axis. You then break the matrix down into four quadrants, as shown in the example below: high value & low effort; high value & high effort; low value & low effort; low value & high effort. From this starting point, your team will be able to plot each initiative in the relevant quadrant.
The Value Vs. Effort model is sometimes called a “prioritization matrix”, which is another term for decision matrix. Don’t get confused.
Items in the high value & low effort quadrant will be deemed top priority, as these are considered quick wins. On the other end of the spectrum are the low value & high effort items, which are likely candidates to cut, as they are difficult to implement and promise low business value.
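The quadrant split is simple to express in code. This sketch assumes value and effort have already been scored on a 1-10 scale; the midpoint threshold and the example items are hypothetical illustrations.

```python
# Classify an item into one of the four Value vs. Effort quadrants.
def quadrant(value, effort, midpoint=5):
    v = "high value" if value >= midpoint else "low value"
    e = "low effort" if effort < midpoint else "high effort"
    return f"{v} & {e}"

# Hypothetical (value, effort) scores for a few items.
items = {"Slack bot": (3, 2), "New onboarding": (9, 7), "Dark mode": (8, 3)}
for name, (value, effort) in items.items():
    print(name, "->", quadrant(value, effort))
# "high value & low effort" items are the quick wins;
# "low value & high effort" items are the likely cuts.
```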
Due to its flexibility, intuitive implementation, and objective approach, it can be applied to numerous prioritization cases. This is particularly useful when you are timeboxed or have very limited resources, as well as when your team is developing a new product, or if you’d simply like to remove bias and have a more objective approach to initiatives that your team might feel strongly about.
Oftentimes we reject an initiative because of the effort it entails and assume it's not worth it, or champion one simply because the team is keen. But once its business value is laid out on the quadrant, we might well reconsider.
In order to place your items on the chart, you first need to assess each item you want to prioritize against the following questions:
How much value will the item bring? Both on a business level, and directly to the user.
How much effort is required to build it?
Let’s delve into the subcategories:
While considering value, we must ask ourselves: what does value mean for our business? And what does it mean to our user personas?
This requires you to estimate how much value particular initiatives can yield for the company. This value can be determined by factors such as whether an initiative will generate new revenue, increase customer lifetime value, acquire new users, retain existing ones or reduce churn, among others. Another thing to consider is the impact on brand awareness.
This describes the value each initiative will bring to your user. You should consider their pain points and how far it goes to reduce them. Is the market demanding this feature? Will it improve your users’ efficiency (or other similar metric)? Will it benefit a large number of users or only a small group?
How do you measure the effort required? This question can only really be answered on a case-by-case basis. For most product teams, it could be as simple as estimating the total number of developer hours a certain initiative will require. However, it often also involves a combination of other categories, such as risk and cost.
Some of the most common considerations to score effort are:
Overall resource hours needed (person-days, person-months)
Overall operational costs
Risks (risk of failure, unanticipated perceived value upon delivery)
Costs (internal or buying external goods and services)
Which subcategories you use is up to each team to determine, depending on their resources and priorities.
Now that you’ve plotted your initiatives on the different quadrants of the matrix, it is time to decide which to include in your roadmap, and in what order.
This is how each quadrant will help you prioritize them:
Instead of placing your items on the quadrants manually, you can back your ratings with weighted scoring to make more objective decisions, and then place them on your “map”. This allows you to assign a specific weight to each subcategory depending on what's most important to your business right now. Using a dedicated prioritization tool, this can be done in just a few clicks.
Extremely flexible and intuitive: applicable across any type of product, organization, and industry, since the value and effort criteria can take on a range of metrics.
Can be done qualitatively or quantitatively.
Resource allocation: enables you to focus on items that will have the largest impact based on your goals and effort.
Having inputs from different stakeholders or team members allows us to standardize how we prioritize different initiatives.
Reaching common ground on how to prioritize our initiatives allows for stakeholder buy-in.
Could be subject to systematic error: usually introduced by those estimating how much value or effort each initiative represents, which may skew scores too high or too low.
Best for small teams: it's hard to implement for teams with large pipelines, as well as for large teams.
The RICE scoring model was introduced by Intercom a few years ago and has been widely adopted and used by product managers and product owners to prioritize feature releases and projects. This framework is structured into four key criteria that form the acronym “RICE”. The RICE prioritization framework helps avoid bias towards features and projects you personally prefer.
The acronym consists of four criteria (reach, impact, confidence, effort):
The RICE scoring model is one of the best frameworks if your product team is trying to work out in which order they should prioritize initiatives or features. It is a time-saving and consistent framework, and its objective scoring can be a great help when a product team is trying to derive the importance of an initiative. When used correctly, it allows the team to evaluate which features are crucial and which are not.
A RICE score is calculated by dividing overall value by how much effort is needed to get there. This formula provides a standardized score, which allows us to objectively decide which items to prioritize. We will break down each of the factors to get a good understanding of their importance and how they work.
Reach represents the number of users or paying customers that would be directly affected by this feature during a set time period.
This could be customers per month or, in the case of events, transactions or actions per month.
That can mean things like the number of people that would interact with a certain feature in a month or a reduction in churn over a month following the release of a new feature.
New onboarding: This feature will reach an estimated 3000 new users per month
Google Calendar integration: This feature needs to be activated right after onboarding. Therefore, if 3,000 users finish the onboarding and 80% choose to turn it on: 3,000 × 0.8 = 2,400 users per month.
Slack bot: Every user who uses this feature will experience the upgrade. The reach is 1,000 customers per quarter.
Impact is defined by the overall contribution of a certain feature to your product, reflected by the benefit your users will get from the said feature. Depending on your use case, it can also mean how much a feature will increase your conversion rate. Measuring how much benefit your users get from said feature can be hard, so there are several scales to choose from, with Intercom’s being a widely adopted standard.
There are numerous ways of determining impact. Some key questions to consider are: will this feature greatly improve our conversion rates? Will it help retain users? Does it improve the ease of use significantly? Perhaps it’ll make users have a eureka moment and realize this is exactly what they need.
The impact scale involves estimation, but this is much better than gut feeling. Here’s an example with airfocus T-shirt sizes:
New onboarding: it will have a large impact on conversion rate, therefore the impact score is 1 (XL).
Slack bot: It will have a rather low impact on users, so, the impact score is 0.25 (S).
Google calendar integration: in terms of impact, it is somewhere in-between. The impact score is 0.5 (M).
This metric accounts for the confidence you have in the estimations you made. Sometimes you believe a project could have a large impact but simply lack the data to back your assumptions up. In other words, how confident are you about your reach and impact scores? How much data do you have to back your scores up?
We use a percentage scale for confidence. Always ask yourself: how extensively can my data support my estimates? Typically 100% will represent “high confidence”, Medium equals 80%, and Low is 50% because anything below that would be a shot in the dark.
Let’s look at our example:
: We have heavily researched users for impact, conducting live-tests, and have exact numbers for reach, with an effort estimate from the team. This feature gets a 90% confidence score.
Google calendar integration: I have data to back the reach and effort, but I'm still sceptical about the impact. This project gets a 70% confidence score.
: The reach and impact may be rather ambiguous, and the effort may be our most accurate criterion. This project gets a 50% confidence score.
This represents the amount of work that is required from your team to build a feature or finish a project. Depending on the use case, the value type could be person-months or project-hours.
Keep in mind that we make our estimates in whole numbers and that effort is a negative factor, being the denominator in the formula.
You can determine effort quite simply by asking: how much time will a feature require from all of our team members?
Here’s a person-month example:
: It’s a simple project requiring only a few days of planning, two weeks of design, and a few days of coding. We’ll give it a score of 2 person-months.
Google calendar integration: It’ll take a week of planning, 3-4 weeks of dev team time, and 1-2 weeks of design. We’ll assign it a score of 5 person-months.
: Planning this project will take several weeks, with at least 1 month of engineering, plus extensive design time. Therefore, the effort score will be 6 person-months.
A RICE score should be easy to calculate with your team. Once you’ve determined your scores, plug them into the formula to get a final score, and compare your projects.
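The formula itself is one line: RICE = (Reach × Impact × Confidence) / Effort. In the sketch below, the Google calendar integration row reuses the chapter's example figures (reach 2,400/month, impact 0.5, confidence 70%, effort 5 person-months); the confidence and effort values for the other two features are assumed placeholders, since the chapter does not label them explicitly.

```python
# RICE score = (Reach x Impact x Confidence) / Effort.
def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

features = {
    "Google calendar integration": (2400, 0.5, 0.70, 5),  # chapter's figures
    "New onboarding": (3000, 1.0, 0.90, 6),   # assumed confidence/effort
    "Slack bot": (1000, 0.25, 0.50, 2),       # assumed confidence/effort
}

# Rank features from highest to lowest RICE score.
for name, args in sorted(features.items(), key=lambda kv: rice(*kv[1]), reverse=True):
    print(f"{name}: {rice(*args):.1f}")
```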
So, you’ve got your scores and now you know which initiatives to prioritize first based on those that scored the highest. This will help you back your decisions with information and data and defend these decisions to other stakeholders.
There are externalities and criteria that RICE fails to consider that might influence what you work on first, such as dependencies or the availability of key personnel; perhaps you simply prefer to start with a certain project for other reasons. But RICE will allow you to see the trade-offs of making such decisions.
Your RICE scores are best visualized on a Value vs. Effort chart. It will provide a quick overview of your best projects, low-value items you should cut, quick wins, and valuable but time-consuming projects so that you can assess them against each other.
Bang-for-your-buck perspective: it allows teams to determine how much their effort is worth relative to the overall value, which is exactly what we'd like to maximize.
Paints a big, comprehensive picture: the criteria are based on the factors that have the biggest impact on the product and user, aligning vision and initiatives.
Compatible visualization: as it can be plotted on a value vs effort chart for quick visualization and decision making.
Reduced impact of bias: due to quantification and confidence on how much data backs our factors.
Based mostly on metrics: as the product progresses through its lifecycle we can continue making further improvements.
Lack of accuracy: sometimes evaluating the reach or future impact of a project can be difficult.
Dependencies are not taken into account: the model fails to consider that a low-scoring item may need to take precedence over a high-scoring one.
Blind to criteria that are not considered.
The Kano model is a framework used to prioritize features on the roadmap based on how likely they are to satisfy or delight users. Your team should pull together a list of features to be considered and plot them on a chart that visualizes satisfaction versus functionality. This points towards how desired or needed a feature is, or whether users are simply indifferent to it.
Use it whenever your team is considering the list of features to work on for your next release and would like to figure out which mix is best. This also results in the best allocation of limited resources and time.
Kano allows you to work out the right combination of:
Minimum basic features that must be included
Performance features to start working or investing in
Which delight features will impress users the most
Here’s where things get interesting; to identify how satisfied or even delighted users will be with a product, we have to consider the two dimensions (or plotted axes), satisfaction versus functionality. How will users react to the level of functionality of our features?
Let’s break it down:
Satisfaction (Y-axis):
The vertical axis measures the level of satisfaction, and it ranges from frustration (or complete dissatisfaction), to delight (or complete satisfaction).
Please note: This doesn’t always work as a linear scale, as you’ll come to learn in the following section, and it’s impossible to always stay at the top of the scale.
Functionality (X-axis): Also called investment or sophistication by some, this represents how well we’ve implemented a certain feature, how much has been invested in its development, or how much of a particular feature the user gets. It ranges from None to Best (or Very Well Done).
Looking at the graph we can quickly identify and classify the features into four different buckets:
Basic Features: also known as must-be features. These are expected by your users but they won’t satisfy them more. Without them, they won’t even consider your product. For example, we expect our email to be able to import or look up contacts, or a messenger app to send messages. If they don’t have this or the feature doesn’t work, users will simply go elsewhere.
We expect these features to be there and work, and therefore we see that as our product team puts more effort or money into making them more functional, our satisfaction grows. Yet it will never reach the positive quadrant.
Once it reaches its maximum potential, the team can stop investing effort into it.
Performance features (one-dimensional): The more of these we give users, the more satisfied they’ll be, therefore moving in a linear direction. As we increase functionality, so does our investment. Examples could be storage space on your cloud service or faster internet from your provider.
Attractive features: also known as delighters. These are pleasant surprises that the user isn’t expecting, but as the name suggests, once introduced they immediately generate excitement. Introducing these features can be what differentiates you. Think about the time Apple introduced Apple Pay from your iPhone, or the first time you were able to collaborate on Google Docs.
These are the kind of features that make you go, “Wow! How cool!”, and if plotted on the graph, it’s easy to see how the slightest increase of functionality will rapidly increase satisfaction.
Indifferent features: certain features simply don’t make much of a difference. The user feels indifferent towards their presence or absence, and they therefore don’t have an impact on satisfaction. In other words, you should avoid working on these.
Categories change over time in a dynamic environment, just like user expectations. Our users will change their perception of product attributes in the future. What now seems to be a game-changer to you, might become a standard or expected a year from now. That’s why Attractive features will eventually transform into Performance and Basic features in time.
In order to discover customer insights about your product’s features, we must deploy a Kano questionnaire followed by an evaluation of the different combinations.
The questionnaire consists of questions about the feature we’d like to assess, and the questions are termed functional or dysfunctional forms:
If you have this feature, how do you feel?
If you don’t have the feature, how do you feel?
The possible answers to these questions are:
I like it
I expect it
I am neutral
I can tolerate it
I dislike it
In practice, you should also consider adding an extra question, asking how important they consider a certain feature to be.
This answer allows you to distinguish which features are most important to users. It’s a tool to differentiate big from small features and the impact they have on your user’s view on the product.
The beauty of the Kano model is that when we pair up our functional and dysfunctional answers, we uncover how much a feature is wanted, needed or if our users are indifferent to it.
We use an evaluation table to uncover in which categories our features fit, by pairing functional answers with dysfunctional ones in rows and columns.
Before we proceed: You’ll notice that there are two new categories in the table - Questionable and Reverse.
Questionable suggests that the respondent didn’t quite understand the question or the feature being described.
Reverse suggests that the user wants the opposite of what we propose.
First, you need to fully understand the table and what each pairing means in terms of categories.
Questionable: the functional and dysfunctional answers contradict each other, so these pairings appear in a diagonal pattern across the table.
Performance features: these are features that users like having and dislike not having. The more performance features, the better.
Must-be (basic): Users dislike not having them, and when present they range from tolerating to expecting them.
Attractive: Users like having features they don’t expect to have. The dysfunctional range will therefore go from ‘expect it’, to ‘tolerate it’.
Indifferent features: Will always be in the middle of the table (as in the graph), as users are always neutral or can tolerate them, in both their functional and dysfunctional answers.
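The pairings above can be captured directly as a lookup table. The sketch below encodes one commonly used variant of the Kano evaluation table (A = Attractive, P = Performance, M = Must-be, I = Indifferent, R = Reverse, Q = Questionable); teams sometimes adjust individual edge cells, so treat it as an illustration rather than the only valid layout.

```python
# Kano evaluation table: functional answer (outer key) paired with
# dysfunctional answer (inner key) yields a category.
EVALUATION_TABLE = {
    "like":     {"like": "Q", "expect": "A", "neutral": "A", "tolerate": "A", "dislike": "P"},
    "expect":   {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "neutral":  {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "tolerate": {"like": "R", "expect": "I", "neutral": "I", "tolerate": "I", "dislike": "M"},
    "dislike":  {"like": "R", "expect": "R", "neutral": "R", "tolerate": "R", "dislike": "Q"},
}

def classify(functional, dysfunctional):
    # Look up the category for one respondent's answer pair.
    return EVALUATION_TABLE[functional][dysfunctional]
```

For example, a respondent who likes having the feature and dislikes not having it yields `classify("like", "dislike")`, a Performance feature.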
Now you know how each pairing is categorized you’ll want to get all your answers together and organize your data to see where the features go. The two approaches to organizing your data are called discrete and continuous analysis.
If you are new to this and don’t have much time, we recommend using the simpler approach: discrete.
The discrete approach gathers all of your respondents’ answers from the evaluation table. You should then count the total responses per category or demographic, and designate a feature’s category based on the most recurrent response. This will allow you to also place them on the quadrant.
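The discrete tally boils down to a frequency count per feature. In this sketch, the feature names and response lists are hypothetical; each response is assumed to be a category already derived from the evaluation table.

```python
from collections import Counter

# Hypothetical survey results: per feature, the category each
# respondent's answer pair mapped to in the evaluation table.
responses = {
    "Export to PDF": ["M", "M", "I", "M", "P"],
    "Dark mode": ["A", "A", "P", "A", "I"],
}

def discrete_category(categories):
    # Designate the feature's category as the most recurrent response.
    return Counter(categories).most_common(1)[0][0]

for feature, cats in responses.items():
    print(feature, "->", discrete_category(cats))
```

You could also split `responses` by demographic before tallying, as the text suggests, to see whether different audiences categorize the same feature differently.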
In order to prioritize your results, use the following order:
You’ll end up with a table like this:
It highlights market fit:
fully customer-centric, it allows immediate identification of product advantages and weaknesses through its features.
It tailors a product to the needs of current and target users.
This allows for predictions about features and audiences based on expectations.
Subject to inherent limitations caused by survey delivery.
It solely focuses on customer opinion: this means it fails to account for respondents' level of knowledge about the product and individual bias.
Prone to delayed time-to-market: delays are due to surveying, data collection, and processing time.
Story mapping is a widely used method of ordering user stories along different dimensions to provide a big picture of how they fit within the overall user experience.
This map arranges the essential steps of the customer journey on the horizontal axis in chronological order, guided by how the user would perform said tasks in their interaction with the product.
The vertical axis describes criticality or necessity, so different user stories are arranged vertically, top to bottom, by importance. Differentiating stories by importance is what enables you to generate strategic release plans.
The beauty of doing this is that after you’ve completed your story map, you’ll be able to visualize all the possible ways in which your users could interact with your product.
This allows you to map the flow of their behavior as they progress from their first interaction through to the completion of their objective with your product.
This type of backlog organization has quite a few advantages when it comes to prioritization and execution of your product. Firstly, it is a visual tool that really allows stakeholders, your development team, and clients, to get a full picture of how users are interacting with the product. This creates common ground for those who often get caught up in their own details. It happens to the best of us! This big picture will help identify issues or gaps you might have previously overlooked.
Secondly, when it comes to prioritization, this framework provides teams with valuable insights as to how to release product iterations with increasing sophistication. By defining these, the team is empowered to complete and deliver end-to-end versions more quickly, allowing you to rapidly validate concepts.
Last but not least; you can apply it to any stage of your product life cycle.
The MoSCoW method is one of the most popular prioritization techniques for establishing what is most important to clients and stakeholders. By using this method, stakeholders can better understand the importance of different features in a release. It is extremely quick and simple to apply as a prioritization solution, classifying features into four priority buckets: Must Have, Should Have, Could Have, and Won’t Have.
Let’s break them down:
The MoSCoW model sets your initiatives in order of priority and can therefore be applied to any phase of the product life cycle. However, it is most applicable to product and market launches, particularly for early-stage products and MVPs.
This is a good method to get the whole organization involved in the prioritization process, which in turn creates a broader set of perspectives by getting different departments involved.
Effort is another reason why you’d want to apply this method. Using it will enable your team to quantify the amount of effort allocated to each feature or initiative, resulting in the right combination of features per release.
In order to run the MoSCoW method smoothly, your product team and stakeholders need to decide on the objectives and factors that will be decisive to the criteria. This will be immediately followed by reaching a consensus on what initiatives or features you’d like to select.
Setting these ground rules is of extreme importance, in particular how to settle disagreements, as these can become serious bottlenecks down the line.
Lastly, define how much effort should be split between the Must-Haves, Should-Haves, and Could-Haves. This typically varies by team and project, but a rule of thumb suggests that you should dedicate about 20% of your total effort to Could-Haves.
Now your team is ready to sit down and discuss your initiatives. Let’s look into them:
The category name doesn’t come as a surprise: these features are the lifeblood of your product or release and are non-negotiable. Without them, your release is likely to fail. You should reach an agreement on how much time and effort you spend on your Must-Haves: you should focus on them, but shouldn’t allocate more than 60% of overall effort.
Ask your team:
Will this project work without this feature?
What happens if we release without it?
What’s the simplest way to accomplish this?
Sitting just under the Must-Haves in importance, these are still highly important to the product, but not crucial; the product will still manage to function without them. On the other hand, you wouldn’t want to leave them out, as they generate a significant amount of value.
To put it into perspective, you should include them in the release, but you could schedule them for a future release without having a negative effect on the current one.
These are nice-to-have initiatives, meaning that they are not necessary. They add to the product and generate value to the user, but they are not exactly a core component or function of your product. There would be no repercussions if you were to leave them out.
These initiatives are still important to take into account. You should always identify them as it’ll help the team decide what will not be included in the scope, thereby allowing them to prioritize other initiatives. They prevent you from wasting resources that your team needs for this release.
Lastly, the subgrouping of these is also beneficial. Perhaps there are Won’t-Have initiatives that will not be included within this scope, but could be included in the future, and others that simply won’t be included at all.
Why we love it
Gets business-side stakeholders involved in the feature prioritization process
Powerful and simple way to prioritize with timeboxes
Draws heavily on the expert opinion of the team, on both the technical and business sides
A few downsides...
Often prone to bias from managers worried that their initiatives will fall into “should” or “could”, which is exacerbated by bad KPIs.
Deciding where your initiatives will fall often becomes a never-ending discussion when team members have different levels of familiarity with the product.
Priority Poker is an airfocus feature created to prioritize features and initiatives with a group of people in the most collaborative and time-efficient manner. It allows you to decide which new projects to start, or to estimate effort across a number of criteria of your choosing.
Priority Poker can be done during a live session or in your own time.
To make the best possible product decisions, product managers often need to incorporate stakeholders’ wisdom and expertise.
However, most of the time, teams and stakeholders have diverging opinions.
If you do eventually gather everyone into a meeting, you risk endless discussions without a constructive conclusion, which means delaying development: the worst nightmare for all product managers and decision-makers.
Unlike traditional “democratic” methods of prioritizing, where the team collaboratively discusses the rating of items one by one, each player rates items based on their own judgment. But just like a game of poker, no one gets to see the ratings until it’s time to unveil them. Once the results are in, you can use the rating average, or start a discussion where estimations differ tremendously.
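The tallying step just described can be sketched in a few lines: average the revealed ratings and flag items where estimates diverge widely for discussion. The player names, scores, and the spread threshold below are illustrative assumptions, not part of the Priority Poker feature:

```python
# Minimal sketch of the "reveal and tally" step: each player rates an
# item independently; after the reveal we average the scores and flag
# wide disagreement for discussion rather than averaging it away.
from statistics import mean, stdev

def tally(ratings, spread_threshold=2.0):
    """ratings: dict of player -> score for a single item."""
    scores = list(ratings.values())
    avg = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    needs_discussion = spread > spread_threshold
    return avg, needs_discussion

# One low outlier widens the spread, so this item gets discussed.
avg, discuss = tally({"Alice": 8, "Bob": 7, "Carol": 2})
```

The design point is the second return value: a plain average would hide the fact that Carol sees the item very differently, which is exactly the disagreement the game is meant to surface.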
Priority Poker can be used in cross-functional teams and can be adapted to various projects. It guarantees alignment while getting selected stakeholders involved in the prioritization process. So if you are looking to save time, and collaborate in a way that includes everyone’s expertise when it comes to prioritizing items, this might be your go-to tool.
No matter how large your team might be, or how many remote stakeholders you have, it can be done completely remotely in a time-efficient manner.
Priority Poker is very flexible, you can change the different levels of priority and even define criteria based on the teammates’ expertise and involvement within the project.
Here’s a snapshot of what Priority Poker is perfect for:
Anything product-related, such as prioritizing features, initiatives and backlog items.
Agile product management or scrum to execute sprint planning sessions and estimate efforts.
Any design-related task such as choosing personas, user journeys and scenarios.
Testing phases to prioritize bugs and fixes.
The owner can play the game in two ways:
Asynchronous: Players do the criteria ratings in their own time (for example, the owner invites players on Monday and asks them to complete the ratings by Friday).
Real-time: Players join a live game where the owner controls the flow by selecting one item to prioritize at a time (owner and players play item by item during a live prioritization meeting).
The game owner can invite an unlimited number of players, manage them, and even set criteria permissions. Players can join the game on mobile or desktop.
Each player rates the items based on their own judgment, but no one (other than the owner) gets to see the ratings until it’s time to unveil them.
It resolves the logistical hoops of prioritizing in large teams. It ensures prioritization in a productive and efficient manner, greatly reducing decision-making time while involving and aligning everyone.
It allows you to invite experts to your prioritization workflow, be they team members, customers or other external stakeholders to make the best possible decisions.
It enables you to give everyone a say which maximizes buy-in from your team.
It allows you to beat the HiPPO effect and tackles the issue of people influencing each other, with extroverts typically overshadowing introverts.
Remote teams are as much part of the conversation as in-house ones.
Learn more about Priority Poker here.
1. Letting the loudest or highest-paid person’s gut feeling dictate priorities
The HiPPO (highest paid person’s opinion) effect - along with that of the loudest person in the room - threatens this process.
We tend to default to decisions that please our seniors, but that causes more harm than good. How do we say no to this overpowering voice?
Of course, their experience can help, but keep in mind that opinions always carry personal bias.
Studies even suggest a correlation between successful projects and junior managers who are more open to welcoming employee opinions and having their assumptions challenged.
Jeff Bezos once said: “Customer feedback flattens corporate hierarchies.” Your superior may not agree with your opinion, but they will have a hard time arguing against priorities that come from a systematic scoring of customer feedback and market intelligence.
2. Lack of external (customer) inputs
Your team might have rounded up some great features for this release, but if they lack customer validation, how will you know they’re desirable?
Always validate your internal assumptions with customer feedback and/or other feedback sources and incorporate it into your decision process.
3. Losing the overview of dependencies
Prioritizing an item to complete it early only works when all preceding cross-team deliverables have been delivered on time.
You might be working on the next big item, but if other departments fail to deliver their preliminary work on time, your ambitious launch date will be delayed. As your product grows, so will your dependencies, and keeping track is a crucial part of it.
Modern product management tools can help here by making it easy to keep track of your dependencies visually. Visualize all your dependencies and make them a critical input to your prioritization process.
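The dependency pitfall above can be caught with a simple sanity check: before committing to a priority order, verify that no item is scheduled ahead of its prerequisites. The item names and dependencies below are made up for illustration:

```python
# Hypothetical dependency sanity check: flag any item scheduled
# before the prerequisites it depends on.

def schedule_violations(order, deps):
    """order: items in planned delivery order; deps: item -> list of prerequisites."""
    position = {item: i for i, item in enumerate(order)}
    violations = []
    for item, prerequisites in deps.items():
        for prereq in prerequisites:
            if position[prereq] > position[item]:
                violations.append((item, prereq))
    return violations

order = ["checkout", "payments-api", "user-auth"]
deps = {"checkout": ["payments-api", "user-auth"]}

# "checkout" is planned before both features it depends on:
print(schedule_violations(order, deps))
# → [('checkout', 'payments-api'), ('checkout', 'user-auth')]
```

A non-empty result means the ambitious item at the top of the list will slip unless the cross-team prerequisites are reprioritized first.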
4. Striving to be a me-too product
You can source valuable intelligence from your competitors, but your product should seek to have a unique selling proposition and prove its value.
This can only be achieved through solid research, getting to know your clients, and coming up with innovative ideas. Always strive to set the pace.
5. Building based on the highest bidder
You might be tempted to prioritize features based on a high paying client. But keep in mind that these should fit your overall business and user goals.
You can download the complete guide for an in-depth explanation of how to use these frameworks, their benefits and drawbacks, and the major prioritization mistakes to look out for.
Learn how to make prioritization a simple process and build products that stand out. Learn more about how to source insight, choose the right prioritization framework, and much more.