Prioritising Features

February 2, 2026

Compare RICE, ICE, MoSCoW, and Kano models to choose the right prioritisation framework for your team size, data availability, and stakeholder alignment needs.

RICE Score: Formula and Application

RICE scoring quantifies feature priority through four variables: Reach (users affected per month), Impact (effect on the target metric, rated 0.25 for minimal to 3 for massive), Confidence (expressed as a decimal representing certainty in the estimates), and Effort (in person-weeks). The formula is (Reach × Impact × Confidence) ÷ Effort. A feature with 2,000 monthly users affected, 2.0 impact, 70 percent confidence, and 3 weeks of effort scores roughly 933. A smaller feature affecting 500 users at 3.0 impact, 90 percent confidence, and 1 week of effort scores 1,350, about 45 percent higher despite a quarter of the reach.
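The formula is a one-liner; a minimal sketch, using the two example features from the paragraph above:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) ÷ Effort."""
    return reach * impact * confidence / effort

# The two features described above.
big = rice_score(reach=2000, impact=2.0, confidence=0.7, effort=3)   # ≈ 933.3
small = rice_score(reach=500, impact=3.0, confidence=0.9, effort=1)  # 1350.0
```

The smaller feature wins despite a quarter of the reach, because effort sits in the denominator.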

The discipline RICE enforces is explicit honesty about confidence. Founders routinely assign 100 percent confidence to features because they believe in them, which mathematically inflates every score and negates the framework's value. Using a realistic confidence level — 60 to 80 percent for most estimates without direct user research backing — forces each item to compete on its actual probability of impact rather than the team's enthusiasm level. Productboard, Linear, and Jira all support custom numeric fields for RICE scoring; setting up the formula in the backlog tool takes under 30 minutes and produces a sorted list that replaces hours of roadmap debate.

ICE Framework for Fast Decisions

ICE scores features on three dimensions: Impact, Confidence, and Ease — each rated on a 1-to-10 scale and multiplied together. A feature scoring Impact 8, Confidence 7, Ease 6 produces an ICE score of 336. The calculation is faster than RICE because it does not require a Reach data point, which makes ICE the right choice for teams without reliable analytics or for decisions that need to be made in a meeting rather than after a data pull. ICE trades some precision for speed and is particularly effective for prioritising experiments and growth initiatives where the Reach of a proposed change is genuinely unknown.
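A minimal sketch of the calculation, reproducing the worked example above:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE = Impact × Confidence × Ease, each rated on a 1-to-10 scale."""
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    return impact * confidence * ease

ice_score(8, 7, 6)  # 336
```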

The weakness of ICE is that Ease conflates two distinct variables: the effort required to build the feature and the certainty that it can be built as planned. A 10-point Ease score for a complex but well-understood engineering task is legitimate; the same score for a task with significant architectural unknowns inflates the ICE score and incorrectly prioritises a high-risk item. Teams using ICE should define Ease explicitly as "estimated implementation straightforwardness given current knowledge" rather than leaving it as an intuitive rating, which prevents the optimistic scoring that undermines the framework's discriminating power.

MoSCoW for Stakeholder Alignment

MoSCoW assigns each feature to one of four categories: Must have (the product fails without this), Should have (important but not launch-blocking), Could have (nice to include if time and budget allow), and Won't have this cycle (explicitly excluded from the current scope). The primary value of MoSCoW over other frameworks is not its scoring precision — it has none — but its forcing function for naming what is out of scope. Stakeholders who must agree on what the product "Won't have" are forced to confront trade-offs rather than accumulate feature requests without consequence.

The most common MoSCoW failure is overloading the Must have category. When every stakeholder's priority item is classified as Must have, the list of must-have features fills two sprints and the Should and Could categories are empty — the framework's value is destroyed because no trade-offs were made. The discipline that makes MoSCoW work is a hard cap: no more than 60 percent of the total identified features may be classified as Must have, and every Must have requires an explicit justification of what specifically fails if the feature is absent. This cap forces the difficult conversation that MoSCoW exists to facilitate.
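The hard cap is easy to check automatically. A sketch, assuming each backlog item carries one of the four category labels:

```python
from collections import Counter

def within_must_have_cap(labels: list[str], cap: float = 0.6) -> bool:
    """Check the hard cap: at most `cap` of all items may be 'Must'."""
    return Counter(labels)["Must"] <= cap * len(labels)

within_must_have_cap(["Must", "Must", "Should", "Could", "Wont"])  # True: 2 of 5 is 40%
within_must_have_cap(["Must", "Must", "Must", "Should"])           # False: 3 of 4 is 75%
```

A failing check is the trigger for the justification conversation, not a reason to silently reclassify items.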

Which Framework for Which Situation

RICE is the right choice when you have reliable product analytics, a backlog with more than 20 candidate features, and time for a structured prioritisation exercise. It produces a defensible sorted backlog that any team member can explain to a stakeholder because every score has a quantitative basis. ICE is the right choice for small teams without clean analytics data, for rapid experiment prioritisation, and for teams that need to make daily scope decisions without a full scoring exercise for each one.

MoSCoW is the right choice for release planning with external stakeholders — clients, investors, or executives — who need to understand and agree on scope boundaries. It produces a conversation rather than a number, which is exactly what cross-functional alignment requires. The Kano model supplements all three frameworks for customer-facing feature decisions: it distinguishes between basic expectations (whose absence causes complaints but whose presence is taken for granted) and delighters (whose presence surprises positively but whose absence causes no complaint). Features that are basic expectations must be built to avoid dissatisfaction; delighters build differentiation. Kano prevents teams from investing disproportionate effort in basic expectations at the expense of the differentiated features that actually drive acquisition and retention.
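The Kano distinction above can be sketched as a coarse two-signal classifier. This is a simplification of the full Kano questionnaire, which pairs a functional and a dysfunctional question per feature; the two boolean inputs here stand in for whatever survey signals you collect:

```python
def kano_category(complaints_if_absent: bool, delight_if_present: bool) -> str:
    """Coarse Kano bucket from two survey signals (a simplification of the
    full functional/dysfunctional Kano questionnaire)."""
    if complaints_if_absent and delight_if_present:
        return "performance"        # more is better, less is worse
    if complaints_if_absent:
        return "basic expectation"  # absence causes complaints, presence is taken for granted
    if delight_if_present:
        return "delighter"          # presence surprises positively, absence goes unnoticed
    return "indifferent"
```

Basic expectations get built to the minimum bar; investment beyond that flows to performance features and delighters.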

Frequently Asked Questions

What is the RICE scoring formula? RICE = (Reach × Impact × Confidence) ÷ Effort. Reach is monthly users affected, Impact is rated 0.25 to 3, Confidence is expressed as a decimal (80% = 0.8), and Effort is in person-weeks. The result is a number that can be compared directly across all backlog items.

When should I use ICE instead of RICE? Use ICE when you lack reliable Reach data from analytics, when decisions need to be made quickly without a data pull, or when prioritising growth experiments where the audience size of a proposed change is genuinely unknown.

What is the biggest mistake teams make with MoSCoW? Classifying too many features as Must have. When everything is must-have, no trade-offs are made and the framework adds no value. Apply a hard cap: no more than 60 percent of features may be Must have, and each requires an explicit justification.

What is the Kano model and when is it useful? The Kano model distinguishes basic expectations (must have to avoid dissatisfaction) from delighters (surprising positives whose absence causes no complaint). It is most useful for customer-facing product decisions to ensure differentiated features receive investment rather than only table-stakes requirements.

Which tools support RICE scoring in a backlog? Productboard, Linear, and Jira all support custom numeric fields that can be configured with the RICE formula. Setup takes under 30 minutes and produces a sorted backlog list that replaces subjective roadmap debates.
