Opportunity scoring shines a spotlight on the features that users deem most important and most satisfying. By applying the results, teams can dedicate more time and resources to those features demanding improvement, creating opportunities to enhance the user experience, increase satisfaction, boost retention, and attract newcomers.
Customers purchase a product because of its ability to solve their problems, but if they — or the users they bought it for — find it falls short of expectations, they may switch to a different provider.
Opportunity scoring provides a crucial chance to reduce this risk and work towards greater ROI.
Opportunity scoring (as product teams use it today) comes from Tony Ulwick, an innovation expert and founder of Strategyn. Specifically, opportunity scoring is derived from the outcome-driven-innovation (ODI) strategy he developed in the 1990s.
The user-centric process of opportunity scoring also has ties to the Jobs To Be Done methodology — another of Ulwick’s inventions. With JTBD, the user’s goals and desired outcomes are key to maximizing a product’s value — if a product empowers a user to achieve the jobs they desire, it’ll have value in their lives.
Opportunity scoring can be used as an additional layer on top of JTBD, as it enables teams to learn first-hand what their customers want to achieve, instead of expecting the audience to describe the theoretical features they want to see implemented.
Product teams can undertake opportunity scoring by gathering feedback from participants within the target audience. A survey would be distributed, in which users are asked to rate a number of product features based on importance, before rating their own satisfaction level with each one.
Scoring can be as basic as choosing a number between 1 and 5 — and teams should strive to make the process simple for participants. When users identify those features and outcomes which satisfy them most, they’re telling product teams where to focus their creative energies and funds.
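The aggregation step can be sketched in a few lines. This is a minimal example, not a prescribed implementation: the feature names and the list-of-dicts survey format are hypothetical, and the only assumption is that each participant rates every feature's importance and satisfaction on the 1–5 scale described above.

```python
from statistics import mean

# Hypothetical raw survey responses: one record per participant per feature,
# with 1-5 ratings for importance and satisfaction.
responses = [
    {"feature": "search", "importance": 5, "satisfaction": 2},
    {"feature": "search", "importance": 4, "satisfaction": 3},
    {"feature": "export", "importance": 2, "satisfaction": 4},
    {"feature": "export", "importance": 1, "satisfaction": 5},
]

def aggregate(responses):
    """Average the 1-5 importance and satisfaction ratings per feature."""
    by_feature = {}
    for r in responses:
        by_feature.setdefault(r["feature"], []).append(r)
    return {
        name: {
            "importance": mean(r["importance"] for r in rs),
            "satisfaction": mean(r["satisfaction"] for r in rs),
        }
        for name, rs in by_feature.items()
    }

print(aggregate(responses))
```

The averages make the pattern easy to read: a feature like the hypothetical "search" above (important but unsatisfying) is exactly the kind of signal opportunity scoring is designed to surface.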
Any feature that receives a high importance rating but a low satisfaction rating should be treated as a prime candidate for improvement. The time, money, and resources invested in enhancing (or implementing) these important features are likely to bring good ROI. That said, product teams must weigh the cost of seizing an opportunity or filling a gap before starting work.
Considerable time and investment may be required to improve certain features, but if they’re low on the priority list, product managers will need to determine how worthwhile such commitments might be in the long run.
If a feature has a low satisfaction rating and is regarded as largely unimportant by most users, teams might decide to leave it as is (if it’s not doing any harm) or remove it altogether. While this could seem counterproductive, enhancing more important features will still lead to a better product overall.
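The prioritization logic described above can be summarized as a simple classification over the averaged ratings. This is an illustrative sketch only: the threshold of 3 (the midpoint of a 1–5 scale) and the bucket names are assumptions, and real teams would pick cut-offs suited to their own data.

```python
def classify(importance, satisfaction, threshold=3):
    """Bucket a feature by its average 1-5 importance/satisfaction ratings.

    The threshold is illustrative (midpoint of a 1-5 scale); teams should
    choose cut-offs appropriate to their own survey results.
    """
    if importance >= threshold and satisfaction < threshold:
        return "invest"            # important but unsatisfying: top opportunity
    if importance >= threshold:
        return "maintain"          # important and already satisfying
    if satisfaction >= threshold:
        return "leave as is"       # unimportant but doing no harm
    return "consider removing"     # unimportant and unsatisfying

print(classify(5, 2))  # an important, poorly rated feature
```

Writing the rules down this way makes the trade-off explicit: only the "invest" bucket justifies significant spend, while the bottom two buckets free up resources for it.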
In the standard opportunity scoring process, teams weight the ratings assigned to a feature's importance and satisfaction equally. However, when using a more faithful version of Tony Ulwick's ODI method, surveys may ask users to score their most important desired outcomes rather than features.
With Ulwick's ODI method, importance effectively carries double the weight of satisfaction wherever an outcome falls short, because the focus is on the desired outcome rather than the feature itself. This is why an in-depth awareness of user goals and motivations is critical.
The following ODI formula can be used to recognize the most valuable opportunities for improving a product:
opportunity = importance + max(importance - satisfaction, 0)
When importance exceeds satisfaction, this simplifies to:

opportunity = importance + (importance - satisfaction)

The max() term exists only to stop over-served outcomes (those rated more satisfying than important) from dragging the score below the importance rating.
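The formula is simple enough to express directly. The sketch below assumes both ratings share the same numeric scale; the example values are arbitrary.

```python
def opportunity(importance, satisfaction):
    """Ulwick's ODI opportunity score.

    The max() clamp stops over-served outcomes (satisfaction above
    importance) from reducing the score below the importance rating.
    """
    return importance + max(importance - satisfaction, 0)

# An important, poorly satisfied outcome scores high:
print(opportunity(9, 3))   # 9 + (9 - 3) = 15
# An over-served outcome scores only its importance:
print(opportunity(4, 8))   # 4 + 0 = 4
```

Ranking outcomes by this score puts the biggest importance-satisfaction gaps at the top of the priority list.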
As with the standard model, those features or user outcomes that afford the best opportunity to boost satisfaction should be given priority.
Opportunity scoring shows product teams which features or outcomes must be prioritized when developing or overhauling a product. This helps to ensure their efforts align with user preferences and goals, as opposed to developers assuming they know what users want or expect of the product.
They can tailor features to users based on simple feedback, reducing the risk and expense of devoting months to elements that are either unimportant or unsatisfactory (or both).
Product teams can concentrate on innovations that resolve existing problems and provide users with features that help them achieve their desired outcomes efficiently. For example, an email automation platform might prioritize helping users create personalized campaigns that drive conversions.
The more closely aligned a product is with its target audience’s requirements, objectives, and expectations, the higher their satisfaction will be. Opportunity scoring allows product teams to respond to user feedback and make genuinely necessary changes, tailoring a product for greater ROI.
Ultimately, opportunity scoring is about listening and taking action, rather than guessing or telling users what they want. Opportunity scoring can be repeated across different products, marketplaces, and user types in the future. Companies can keep refining products based on user scoring to ensure the most important, beneficial features receive attention before others.