Why Rating Velocity Matters More Than Your Average Star Rating
Most indie developers check their star rating obsessively. They celebrate when it ticks from 4.6 to 4.7. They panic when a single 1-star review drags it down a tenth of a point. And they completely miss the metric that actually determines where their app ranks in search results.
That metric is rating velocity -- the rate at which your app receives new ratings over time. Not the cumulative average. Not the review text. Not the total count. The speed at which fresh ratings arrive, week after week.
This distinction matters because it changes how you allocate your time. If you think average rating drives ranking, you spend energy trying to suppress bad reviews. If you understand that velocity drives ranking, you spend energy building systems that generate a steady stream of ratings from satisfied users. One approach is defensive. The other compounds.
If you are still building out your foundational ASO strategy, know that rating velocity is one of the most overlooked factors in the entire discipline.
How Apple's Search Algorithm Actually Works
When a user types a query into the App Store search bar, Apple's algorithm performs two distinct operations in sequence.
Step 1: Relevance filtering. The algorithm scans indexed metadata -- your title, subtitle, keyword field, and to some extent your description and category -- to determine which apps are relevant to the search query. If you have not placed the right keywords in the right fields, your app does not make it past this gate. It simply does not appear. This is the step most ASO guides focus on, and it is important. But it is only half the picture.
Step 2: Ranking among relevant results. Once the algorithm has a pool of relevant apps, it must decide the order. The app in position 1 gets 10-20x more taps than the app in position 8. The factors at this stage include download velocity, engagement metrics, and -- critically -- rating signals. This is where rating velocity dominates.
The mistake most founders make is spending all their energy on step 1 (keyword optimization) while ignoring step 2 entirely. You can have perfect keyword coverage and still sit at position 15 in search results because your rating signals are weak compared to competitors.
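Apple does not publish its ranking formula, but the two-step structure is easy to picture as a toy model. Everything in the sketch below -- the keyword matching, the score weights -- is an illustrative assumption, not Apple's actual algorithm; it exists only to show why an app can pass step 1 and still lose step 2.

```swift
// Toy model of the two-step search described above. The weights and the
// scoring blend are illustrative assumptions, not Apple's (unpublished) math.
struct AppListing {
    let name: String
    let indexedKeywords: Set<String>   // title, subtitle, keyword field
    let downloadVelocity: Double       // recent downloads per week
    let ratingVelocity: Double         // new ratings per week
    let averageRating: Double          // 1.0...5.0
}

func searchResults(for query: String, in catalog: [AppListing]) -> [AppListing] {
    let terms = Set(query.lowercased().split(separator: " ").map(String.init))

    // Step 1: relevance filtering -- apps whose indexed metadata does not
    // match the query never reach the ranking step at all.
    let relevant = catalog.filter { !$0.indexedKeywords.isDisjoint(with: terms) }

    // Step 2: ranking among relevant results -- a blend of engagement
    // signals, with fresh rating signals weighted heavily.
    return relevant.sorted { score($0) > score($1) }
}

private func score(_ app: AppListing) -> Double {
    // Hypothetical weights, chosen only to illustrate the claim that
    // rating velocity can outweigh raw download volume and average rating.
    return 0.5 * app.ratingVelocity
        + 0.3 * app.downloadVelocity
        + 0.2 * app.averageRating
}
```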
It Is Not About Reviews, And It Is Not About Your Average
There are three common misconceptions about ratings that lead developers down unproductive paths.
Misconception 1: The difference between 4.7 and 4.8 matters. In practice, the ranking difference between a 4.7 and a 4.8 average is negligible. Both are well above the threshold where users trust an app. Both signal quality. Apple's algorithm does not meaningfully differentiate between them. Where rating average matters is at the extremes: a 3.2-star app will convert and rank significantly worse than a 4.5-star app. But within the 4.3-4.9 band, obsessing over tenths of a point is wasted energy.
Misconception 2: Written review text affects ranking. Users can leave a star rating without writing any text. Many do. The algorithm weighs the numerical rating, not the paragraph the user wrote. Review text matters for conversion (prospective users read reviews), and it matters for product feedback (you should read every one), but it does not drive the ranking signal. A 5-star rating with no text and a 5-star rating with a 300-word essay carry the same algorithmic weight.
Misconception 3: Downloads drive rankings. This one surprises people. The conventional wisdom is that more downloads equal higher ranking. But ASO practitioners have observed consistently that download volume alone does not sustain ranking positions. Apps that receive a burst of downloads from a press mention often see their ranking spike and then fall back within days. What sustains the ranking is continued engagement signals -- and fresh ratings are the most concrete engagement signal Apple can measure. A user who rates your app has opened it, used it enough to form an opinion, and taken an explicit action. That is a stronger signal than a download that may or may not have been opened once.
Rating Velocity as a Popularity Signal
To Apple's algorithm, a steady stream of new ratings means one thing: people are currently using this app and finding it worth their time to rate. This is a present-tense signal, and Apple weights present-tense signals far more heavily than historical ones.
Consider two apps competing for the same keyword:
- App A has 50,000 total ratings with a 4.6 average. It receives about 3 new ratings per week. It was popular two years ago but growth has stalled.
- App B has 800 total ratings with a 4.4 average. It receives 25 new ratings per week. It launched six months ago and is growing.
App B will outrank App A for competitive keywords, despite having 1/60th the total ratings and a lower average. The algorithm interprets App B's velocity as evidence of current relevance and growing popularity. App A's 50,000 ratings are historical -- they tell the algorithm what happened in the past, not what is happening now.
This effect is amplified by Apple's version-specific rating display. When you release a new version, the App Store can show the rating for the current version separately from the all-time rating. If your current version has few ratings, users see a small sample size even if your all-time count is large. Maintaining velocity across version updates prevents this display problem.
The practical implication is clear: 20 ratings per week, every week, is more valuable than 1,000 ratings in a single week followed by silence. Consistency beats volume.
Case Study: How Liftoff Engineered 30,000+ Ratings at 4.9 Stars
Liftoff is a gym workout planner that has accumulated over 30,000 ratings with a 4.9 average on the App Store. Those numbers are not accidental. They are the result of deliberate UX psychology applied to the review prompt timing.
Here is what happens when a new user opens Liftoff for the first time:
Step 1: Basic onboarding. The user fills in standard information -- height, weight, fitness goals. This is normal. Every fitness app does this.
Step 2: The medal. Based on the inputs provided, the app awards the user a rank. Something like "Bronze II." This is not a real assessment -- the user has not done a single workout yet. But it creates two psychological effects: a sense of being categorized (people want to know where they stand) and an implicit challenge (Bronze II suggests there is a Gold to reach).
Step 3: The prediction. The app then shows a prediction: "We estimate you can reach GOLD by [date]." Followed by a line like "You have amazing potential!" This is a manufactured emotional peak. The user has done nothing yet, but they feel seen, motivated, and optimistic about their future progress.
Step 4: The social proof screen. Before asking for anything, the app shows a loading-style screen that prominently displays its 4.9-star rating. This serves two purposes: it normalizes 5-star ratings (the user sees that everyone else rated highly) and it creates a brief pause that maintains the emotional momentum.
Step 5: The ask. Now -- and only now -- the app triggers the review prompt. "Would you rate us?"
The result: most users tap 5 stars. They are in a positive emotional state, they have just been told they have amazing potential, and they see that 30,000 other people rated the app 4.9 stars. The friction involved in leaving a high rating is essentially zero.
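Liftoff's code is not public, so treat the following as a hypothetical sketch of the flow above, not their implementation. The screen names are invented; the only real API in it is SKStoreReviewController.requestReview(in:), which asks iOS to show the standard rating dialog (the system decides whether it actually appears and limits how often it can be shown).

```swift
import StoreKit
import UIKit

// Hypothetical sketch of an onboarding flow that ends in a review prompt
// at the emotional peak. The steps and names are invented for illustration.
enum OnboardingStep {
    case basicInfo          // Step 1: height, weight, goals
    case rankAward          // Step 2: "Bronze II"
    case potentialForecast  // Step 3: "We estimate you can reach GOLD by [date]"
    case socialProof        // Step 4: loading-style screen showing the 4.9-star rating
    case reviewPrompt       // Step 5: the ask
}

final class OnboardingCoordinator {
    private(set) var step: OnboardingStep = .basicInfo

    func advance(in scene: UIWindowScene) {
        switch step {
        case .basicInfo:
            step = .rankAward
        case .rankAward:
            step = .potentialForecast
        case .potentialForecast:
            step = .socialProof
        case .socialProof:
            step = .reviewPrompt
            // Ask only at the peak: the user has just seen their rank,
            // their forecast, and the social-proof screen.
            SKStoreReviewController.requestReview(in: scene)
        case .reviewPrompt:
            break // never ask twice during onboarding
        }
    }
}
```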
What Makes This Work
The standard advice for review timing is "ask after the user completes a core action." Open the app 5 times, complete a workout, finish a lesson -- then ask. This is decent advice, but Liftoff does something subtly different and more effective.
Liftoff does not ask after value delivery. It asks after an emotional peak. The user has not received any real value yet -- they have not done a workout, lost weight, or gained muscle. But they feel good. They feel potential. They feel acknowledged.
This is the critical insight: the review prompt should arrive at peak emotion, not peak utility. A user who just completed a hard workout might feel tired and want to close the app. A user who just unlocked an achievement or hit a streak milestone feels a burst of satisfaction. The rating they leave in that moment reflects the emotion, not a rational assessment of the product.
Engineering Emotional Peaks in Your App
You do not need a fitness app with medal systems to apply this principle. Every app has moments where users experience small emotional wins. The key is identifying those moments and placing your review prompt immediately after.
Productivity apps: The moment a user checks off the last item on a list, completes a project, or hits a personal best for tasks completed in a day. Not when they create a task -- when they finish one.
Finance apps: When the user sees their savings hit a round number milestone, when their portfolio is up, or when they successfully stick to a budget for a full month. Not during budget setup -- when the result is positive.
Learning apps: After a streak of correct answers, when they level up, or when they complete a lesson with a perfect score. The "perfect score" screen is the ideal moment.
Utility apps: After the app successfully solves the user's problem. A scanner app should ask after a successful scan. A converter app should ask after a conversion. The moment the user got what they came for.
The pattern is consistent: identify the moment of small victory, and place the prompt there. Not before (the user has not experienced the win yet) and not much after (the emotion fades quickly).
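One way to make this concrete: give each of those victory moments a name and funnel them all through a single function, so the prompt (and the gating described in the next section) lives in one place. A minimal sketch, with hypothetical event names matching the app types above:

```swift
import StoreKit
import UIKit

// Sketch of the pattern: name the small-victory moments in your app and
// route all of them through one entry point. Event names are hypothetical.
enum VictoryMoment {
    case lastTaskCompleted   // productivity: final item on the list checked off
    case savingsMilestone    // finance: balance crosses a round number
    case perfectLessonScore  // learning: the "perfect score" screen
    case successfulScan      // utility: the user got what they came for
}

func handleVictory(_ moment: VictoryMoment, in scene: UIWindowScene) {
    // Show your own celebration UI first, then ask while the emotion is
    // still fresh. iOS decides whether the dialog actually appears.
    SKStoreReviewController.requestReview(in: scene)
}
```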
The Implementation Formula
A practical approach that works across app types:
- Session count gate: Do not ask on the first session. Users who have opened your app 3+ times have demonstrated baseline engagement.
- Time gate: Do not ask in the first 48 hours. Let the user form a relationship with the app before asking for something.
- Emotional trigger: Wait for a positive action -- completion, achievement, milestone, or success state.
- Single prompt: When all three conditions are met, trigger the system review prompt once. Do not nag.
This formula produces steady velocity because different users hit the trigger at different times, creating a natural distribution of ratings over days and weeks rather than a single spike.
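Here is a minimal Swift sketch of those four gates, assuming session count and install date are tracked in UserDefaults. The key names and thresholds are placeholders to tune per app, not a drop-in library.

```swift
import StoreKit
import UIKit

// Minimal sketch of the formula above: session gate, time gate, emotional
// trigger, single prompt. Keys and thresholds are illustrative assumptions.
struct ReviewPromptGate {
    private let defaults = UserDefaults.standard

    // Call once per app launch (e.g. from the scene delegate).
    func recordSession() {
        defaults.set(defaults.integer(forKey: "sessionCount") + 1, forKey: "sessionCount")
        if defaults.object(forKey: "firstLaunchDate") == nil {
            defaults.set(Date(), forKey: "firstLaunchDate")
        }
    }

    // Call from a positive moment: completion, achievement, milestone, success state.
    func requestReviewIfEligible(in scene: UIWindowScene) {
        let sessions = defaults.integer(forKey: "sessionCount")
        let firstLaunch = defaults.object(forKey: "firstLaunchDate") as? Date ?? Date()
        let hasPrompted = defaults.bool(forKey: "hasPromptedForReview")

        let enoughSessions = sessions >= 3                                   // session count gate
        let enoughTime = Date().timeIntervalSince(firstLaunch) >= 48 * 3600  // time gate
        let notAskedYet = !hasPrompted                                       // single prompt

        guard enoughSessions, enoughTime, notAskedYet else { return }

        defaults.set(true, forKey: "hasPromptedForReview")
        // This is a request, not a guarantee: the system decides whether the
        // dialog appears and caps how often it can be shown.
        SKStoreReviewController.requestReview(in: scene)
    }
}
```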
Sustainable Velocity vs. Spikes
A common pattern for indie apps: you launch on Product Hunt, get featured in a newsletter, or run a promotion. Downloads spike. Ratings spike. Your keyword rankings improve. Then everything settles back to where it was within two weeks.
This happens because the algorithm detects the spike as temporary. A sudden burst of 200 ratings followed by 2 per week tells Apple the app had a moment, not momentum. The algorithm adjusts rankings back down because the present-tense signal (2 ratings/week) does not support the position the spike temporarily earned.
Sustainable velocity looks different:
- Week 1: 15 ratings
- Week 2: 18 ratings
- Week 3: 14 ratings
- Week 4: 20 ratings
This consistency tells the algorithm the app has genuine, ongoing engagement. It is not gaming the system with a launch spike. It has real users who keep coming back and keep rating.
Building this kind of consistency requires the timing formula described above, combined with the retention mechanics that keep users coming back. The review prompt system generates ratings from the users you already have. Retention ensures you keep having users to generate ratings from.
For the tactical implementation details -- exactly how to use SKStoreReviewController on iOS, the In-App Review API on Android, handling negative reviews, and the specific rules each platform enforces -- see the companion guide on how to get more App Store reviews.
Measuring Your Velocity
You cannot improve what you do not measure. Here is how to track rating velocity:
Weekly new ratings: Log into App Store Connect, check the Ratings and Reviews section, and note the count each Monday. Calculate the delta from the previous week. This is your velocity number.
Velocity relative to competitors: Your absolute velocity matters less than your velocity relative to apps competing for the same keywords. An app getting 50 ratings/week in a niche category is dominant. The same 50 ratings/week in a category with competitors getting 500/week is not enough.
Velocity trend: Is your weekly count stable, growing, or declining? A declining trend, even from a high base, is an early warning sign. It means your app is losing active users, your review prompt timing is off, or both.
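If you log the cumulative count each Monday as described, the velocity math is only a few lines. A small sketch, using the sample weekly numbers from the sustainable-velocity example above:

```swift
import Foundation

// Sketch of the velocity math, assuming you record the cumulative rating
// count from App Store Connect once per week. Sample numbers are illustrative.
struct VelocityLog {
    /// Cumulative rating counts, oldest first, one entry per week.
    let weeklyTotals: [Int]

    /// New ratings gained each week -- the velocity numbers.
    var weeklyVelocity: [Int] {
        return zip(weeklyTotals.dropFirst(), weeklyTotals).map { $0 - $1 }
    }

    /// Recent average minus older average: negative is an early warning sign.
    var trend: Double {
        let v = weeklyVelocity.map(Double.init)
        guard v.count >= 4 else { return 0 }
        let half = v.count / 2
        let older = v.prefix(half).reduce(0, +) / Double(half)
        let recent = v.suffix(half).reduce(0, +) / Double(half)
        return recent - older
    }
}

let log = VelocityLog(weeklyTotals: [1_200, 1_215, 1_233, 1_247, 1_267])
print(log.weeklyVelocity) // [15, 18, 14, 20]
print(log.trend)          // 0.5 -- roughly stable week over week
```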
StoreLit's ASO Audit analyzes your rating velocity relative to your direct competitors -- not just your average star rating. A 4.5-star app with stagnant velocity will lose ranking to a 4.3-star app with 10x the weekly rating rate. Understanding where you stand versus the competition is the first step to closing the gap.
The Compounding Effect
Rating velocity feeds into a virtuous cycle. More fresh ratings improve your ranking. Better ranking increases your visibility in search results. More visibility drives more downloads. More downloads produce more users who can rate your app. More ratings improve your ranking further.
The cycle works in reverse too. If your velocity drops, rankings slip. Lower rankings mean fewer impressions. Fewer impressions mean fewer downloads. Fewer downloads mean even fewer ratings. The decline accelerates.
This is why treating rating velocity as a one-time optimization is a mistake. It needs to be a permanent system built into your app -- the right prompt, at the right emotional moment, shown consistently to engaged users over time. Not a hack. Not a campaign. A system.
Build the system once, maintain it through updates, and let it compound. That is how apps with 800 ratings outrank apps with 50,000.
