
Feature Prioritization for Indie Hackers: The RICE Framework

A simple scoring model to prioritize features and experiments without bias

Introduction

Prioritization is the most underrated skill for indie founders. Without a systematic approach, it’s easy to fall into the “shiny feature” trap: building what excites you instead of what moves the needle for your business.

RICE (Reach, Impact, Confidence, Effort) is a lightweight prioritization framework developed by Intercom that helps you make data-driven decisions about which features, improvements, and experiments to build next. Unlike gut-feeling decisions, RICE gives you a quantitative score that’s easy to explain and compare across your product roadmap.

This framework is particularly valuable for indie hackers and small teams because:

  • It requires no complex tools or spreadsheets
  • It forces you to think critically about assumptions
  • It reveals hidden biases in your decision-making
  • It’s adaptable to any product or service

What is RICE?

RICE is an acronym for four key variables in the prioritization formula:

Reach

Definition: The number of people who will be affected by this feature or experiment within your chosen time period (typically a quarter or 3 months).

How to estimate it:

  • Count active users impacted by the change
  • Consider user segments (e.g., “200 enterprise customers” vs “15,000 free-tier users”)
  • Be specific: avoid vague estimates like “many” or “most”
  • Time-bound it: “How many in the next 3 months?”

Example: If you’re adding dark mode and 40% of your 10,000 monthly active users have requested it, your reach is approximately 4,000 users.

Impact

Definition: How significantly this change will affect each individual user. It measures the magnitude of the effect on the user’s satisfaction, productivity, or value gained.

How to estimate it (use a scale):

  • 3 = Massive impact (transforms user experience, solves critical pain point)
  • 2 = High impact (significantly improves workflow or satisfaction)
  • 1 = Medium impact (noticeable improvement but not game-changing)
  • 0.5 = Low impact (nice-to-have, minor convenience)
  • 0.25 = Minimal impact (edge case, very small effect)

Example: A faster search feature might be 2 (high impact) because it saves time on a frequent task. A new color theme might be 0.5 (low impact) because it’s aesthetic but doesn’t change functionality.

Confidence

Definition: How confident you are in your Reach, Impact, and Effort estimates, expressed as a percentage (0-100%).

Confidence levels:

  • 100% = Certain (based on data, past experiments, or strong evidence)
  • 80% = High confidence (educated guess with reasonable assumptions)
  • 50% = Medium confidence (uncertain, some assumptions may be wrong)
  • 25% = Low confidence (very speculative, lots of unknowns)

Why it matters: A feature with high reach/impact but low confidence (e.g., 25%) gets heavily discounted. This prevents you from over-investing in risky bets.

Example: If you surveyed users and 80% said they want feature X, your confidence might be 80%. If a cofounder just mentioned they’d like it, confidence might be 25%.

Effort

Definition: The total amount of work required to build and ship the feature, measured in person-weeks.

How to estimate it:

  • 1 person-week = 40 hours of focused work
  • Include design, development, testing, and deployment
  • Account for your team’s velocity and context-switching
  • Be realistic: add buffer for unknowns

Example:

  • Tweaking a button color: 0.25 person-weeks
  • Adding a new filter to your dashboard: 1 person-week
  • Rebuilding your authentication system: 4 person-weeks

The RICE Formula

RICE Score = (Reach × Impact × Confidence) / Effort

How it works:

  • The numerator (Reach × Impact × Confidence) represents the expected value of the feature
  • The denominator (Effort) normalizes for how much work it takes
  • Higher scores are better: they identify features that deliver more impact for less work

Example calculation:

  • Reach: 5,000 users
  • Impact: 2 (high)
  • Confidence: 80% (0.8)
  • Effort: 2 person-weeks

RICE Score = (5,000 × 2 × 0.8) / 2 = 8,000 / 2 = 4,000

Compare this to another feature:

  • Reach: 500 users
  • Impact: 3 (massive)
  • Confidence: 40% (0.4)
  • Effort: 3 person-weeks

RICE Score = (500 × 3 × 0.4) / 3 = 600 / 3 = 200

The first feature scores 4,000 versus 200. Prioritize it first: despite its lower per-user impact, it reaches far more users with acceptable confidence.
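If you want to script the arithmetic instead of doing it by hand, the formula is a one-liner. Here is a minimal Python sketch (the function name is just illustrative; the inputs are the two examples above):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    confidence is a fraction (0.8 for 80%); effort is in person-weeks.
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-weeks")
    return (reach * impact * confidence) / effort

print(rice_score(5_000, 2, 0.8, 2))  # first feature above: 4000.0
print(rice_score(500, 3, 0.4, 3))    # second feature above: 200.0
```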


How to Use RICE: Step-by-Step Guide

Step 1: List Your Features and Experiments

Create a backlog of everything you’re considering:

  • New features your users are requesting
  • Product improvements or optimizations
  • Experiments to test hypotheses
  • Technical debt or infrastructure improvements
  • Bug fixes affecting many users

Pro tip: Limit this to 10-20 items per planning session. Too many options dilute focus.

Step 2: Estimate Reach, Impact, Confidence, and Effort

For each item, have 1-2 team members estimate independently, then discuss:

| Feature | Reach | Impact | Confidence | Effort (person-weeks) | RICE Score |
|---|---|---|---|---|---|
| Dark mode | 4,000 | 1 | 80% | 1.5 | 2,133 |
| Advanced search filters | 2,000 | 2 | 70% | 2 | 1,400 |
| Mobile app redesign | 1,500 | 3 | 50% | 8 | 281 |
| Fix login bug | 500 | 3 | 100% | 0.5 | 3,000 |
| API rate limit increase | 200 | 2 | 90% | 1 | 360 |

Step 3: Calculate RICE Scores

Use a spreadsheet, simple tool, or even pen and paper. The math is straightforward.
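If you would rather script it, here is a short Python sketch that scores the example backlog from Step 2 and prints it ranked, highest first (the estimates are copied from the table above; nothing else is assumed):

```python
# Score and rank a small backlog (estimates copied from the Step 2 table).
backlog = [
    # (feature, reach, impact, confidence, effort in person-weeks)
    ("Dark mode",               4_000, 1, 0.80, 1.5),
    ("Advanced search filters", 2_000, 2, 0.70, 2),
    ("Mobile app redesign",     1_500, 3, 0.50, 8),
    ("Fix login bug",             500, 3, 1.00, 0.5),
    ("API rate limit increase",   200, 2, 0.90, 1),
]

scored = [
    (name, (reach * impact * confidence) / effort)
    for name, reach, impact, confidence, effort in backlog
]

# Highest RICE score first; this is the ranking you review in Step 4.
for name, score in sorted(scored, key=lambda row: row[1], reverse=True):
    print(f"{name:<26} {score:>7,.0f}")
```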

Step 4: Rank and Review

Sort by RICE score (highest first). But don’t blindly follow the ranking:

  • Sanity check: Does the ranking match your intuition? If not, dig deeper.
  • Strategic alignment: Do top-scoring items align with your business goals?
  • Risk management: Are you balancing quick wins with bigger bets?
  • Dependencies: Do some items depend on others?

Step 5: Commit to Your Top Items

Pick the top 3โ€“5 items for your next sprint, build window, or quarter. Document assumptions so you can revisit them later.


Practical Examples

Example 1: SaaS Product

Scenario: You run a project management tool with 5,000 active users.

| Feature | Description | Reach | Impact | Confidence | Effort | RICE Score | Rank |
|---|---|---|---|---|---|---|---|
| Zapier integration | Connect to 500+ apps | 1,200 | 2 | 60% | 3 | 480 | 3 |
| Bulk task import | CSV upload for teams | 800 | 2 | 85% | 1 | 1,360 | 2 |
| Recurring tasks | Auto-create tasks on schedule | 2,500 | 2 | 75% | 2 | 1,875 | 1 |
| Dark mode | Evening users | 1,500 | 0.5 | 90% | 1 | 675 | 4 |

Decision: Start with recurring tasks (1,875), then bulk import (1,360).

Example 2: Consumer App

Scenario: You have a photo editing app with 50,000 monthly active users.

| Feature | Reach | Impact | Confidence | Effort | RICE Score | Notes |
|---|---|---|---|---|---|---|
| New filter pack | 15,000 | 1 | 70% | 2 | 5,250 | Users actively request it |
| Social sharing | 5,000 | 2 | 40% | 3 | 1,333 | Unproven whether users will share |
| Undo/redo | 30,000 | 2 | 100% | 1 | 60,000 | Critical pain point, high confidence |

Decision: Build undo/redo first (60,000). It’s a fundamental feature with high confidence.


Advanced Tips

Use RICE for Experiments, Not Just Features

RICE isn’t just for product features. Use it for:

  • A/B tests (change button color, pricing tier)
  • Marketing experiments (new landing page, email campaign)
  • Growth initiatives (referral program, affiliate partnerships)

Experiments often have lower effort and help you reduce uncertainty.

Update Scores with Real Data

After shipping a feature or running an experiment, measure actual impact:

  • How many users actually used it? (Real Reach)
  • Did it improve retention, revenue, or satisfaction? (Real Impact)
  • How much effort did it actually take? (Real Effort)

Use these learnings to calibrate future estimates.
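One lightweight way to calibrate is to keep each shipped feature’s original estimates next to the measured numbers and look at the ratios. A small Python sketch with hypothetical figures (none of these numbers come from the examples above):

```python
# Estimated vs. actual RICE inputs for one shipped feature (hypothetical numbers).
estimated = {"reach": 5_000, "impact": 2, "effort_weeks": 2.0}
actual    = {"reach": 3_200, "impact": 1, "effort_weeks": 3.5}

for key in estimated:
    ratio = actual[key] / estimated[key]
    print(f"{key:<13} estimated {estimated[key]:>7}  actual {actual[key]:>7}  ratio {ratio:.2f}")

# Reach or Impact ratios well below 1, or Effort ratios well above 1, are a signal
# to lower the Confidence you assign to similar bets next quarter.
```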

Account for Your Team’s Velocity

Once you know how much you can realistically ship in a given period:

  • Tight timeline: Prefer features with lower effort (quicker wins)
  • Longer runway: You can tackle more ambitious projects (higher reach/impact)

Beware of Anchoring Bias

When estimating, avoid anchoring on the first number mentioned. Encourage independent estimates before discussing.

Consider Seasonal Factors

If your product has seasonal patterns:

  • Base Reach estimates on the period when the feature will have the most impact
  • A summer feature might affect more users in June-August
  • A holiday feature might affect users in Nov-Dec

Common Pitfalls to Avoid

1. Overconfidence

Don’t assume 100% confidence unless you have hard data. Most estimates should be 50-80%.

2. Underestimating Effort

Features almost always take longer than expected. Add a buffer (e.g., if you estimate 2 person-weeks, plan for 2.5).

3. Ignoring Unknown Unknowns

If you’re entering new territory, lower your confidence score. Don’t estimate blindly.

4. Treating RICE as Gospel

It’s a framework, not a law. Use it to inform decisions, not replace judgment. Strategic bets with lower RICE scores might still be worth pursuing.

5. Not Revisiting Assumptions

Plan to re-score every quarter. Market changes, user feedback, and team capacity evolve.


Tools and Resources

Spreadsheet templates:

  • Google Sheets RICE calculator (search “RICE scoring template”)
  • Excel pivot tables for ranking
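
If your backlog already lives in a spreadsheet, you can also export it as CSV and let a few lines of Python fill in the score column. A sketch that assumes columns named feature, reach, impact, confidence (as a fraction), and effort (person-weeks); the file and column names are just an example:

```python
import csv

# Read backlog.csv, add a rice_score column, and write a ranked copy.
with open("backlog.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    row["rice_score"] = round(
        float(row["reach"]) * float(row["impact"]) * float(row["confidence"])
        / float(row["effort"]),
        1,
    )

rows.sort(key=lambda r: r["rice_score"], reverse=True)

with open("backlog_scored.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```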

Related prioritization methods:

  • WSJF (Weighted Shortest Job First): Divides cost of delay by job size
  • MoSCoW: Categorizes as Must, Should, Could, Won’t (simpler, less quantitative)
  • Kano Model: Prioritizes by feature type (basic, performance, delighter)

Final Thoughts

RICE helps you escape the “shiny feature” trap and make more objective decisions about your roadmap. It won’t make decision-making effortless, but it will make it intentional and defensible.

The real power of RICE isn’t the formula; it’s forcing yourself to articulate assumptions, challenge biases, and think critically about trade-offs.

Action: Score your top 10 feature ideas using RICE this week. Document your estimates and revisit them in 3 months. Compare estimated vs. actual impact. Iterate.


Quick Reference Card

RICE Score = (Reach × Impact × Confidence) / Effort

Reach: # people affected in next 3 months
Impact: Scale from 0.25 (minimal) to 3 (massive)
Confidence: % certainty in your estimates
Effort: Person-weeks required

Higher score = higher priority
