When Velocity Metrics Lie: What Your Sprint Numbers Are Really Telling You

Velocity is one of the most widely tracked metrics in agile teams, and one of the most consistently misunderstood. I have worked with teams whose velocity charts looked healthy — stable, predictable, meeting their estimates sprint after sprint — whilst the underlying system was quietly degrading. I have also worked with teams whose velocity fluctuated wildly, and who treated that fluctuation as a crisis, when in reality it was the most honest signal their process was capable of producing.

The problem is not the metric itself. Velocity, when properly understood, is extraordinarily sensitive to dysfunction.

When Stable Velocity Hides Dysfunction

A stable velocity chart is often celebrated as evidence that a team has ‘matured.’ Sprint after sprint, the numbers come in close to the estimate. The burndown charts track neatly along the ideal line. Leadership sees predictability. The team sees consistency.

What they may not see is that the stability itself has become the goal. And when that happens, the metric stops being a reflection of productivity and starts being a target that teams optimise for — often at the expense of the work itself.

I find that teams optimising for predictability make subtle adjustments to protect the number. They sandbag estimates. They defer risky work. They focus on hitting the planned velocity rather than working to capacity at the level of quality the product actually needs. The result is a velocity metric that looks stable whilst the team’s actual capability to deliver value stagnates or declines.

This is not a hypothetical concern. I have observed it repeatedly. A team that treats velocity as the measure of success rather than as one input amongst many will eventually stop improving. The number becomes an anchor, not an indicator.

When Fluctuating Velocity Reveals Truth

The inverse pattern is equally common, and often more instructive. A team whose velocity varies significantly from sprint to sprint is frequently judged as immature, inconsistent, or poorly managed. But variation in velocity is not inherently problematic. In fact, it is often the most honest reflection of what is actually happening.

Velocity is based on story points, which are themselves a measure of complexity, not effort. When complexity varies — as it inevitably does in real software development — velocity will vary as well. When team composition changes, when external dependencies shift, when the nature of the work itself changes from sprint to sprint, the velocity will reflect those changes. That is not dysfunction. That is information.
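To make the mechanics concrete, here is a minimal sketch of what a velocity number actually is — the sum of points on stories completed within a sprint. All names and figures are invented for illustration, not drawn from any real tracker:

```python
# Minimal sketch: velocity is the sum of story points on stories
# that actually finished inside the sprint. Illustrative data only.

def sprint_velocity(stories):
    """Sum points for stories completed within the sprint."""
    return sum(s["points"] for s in stories if s["done"])

sprints = [
    [{"points": 5, "done": True}, {"points": 8, "done": True}],
    [{"points": 8, "done": True}, {"points": 5, "done": False}],
]
velocities = [sprint_velocity(s) for s in sprints]
print(velocities)  # [13, 8] — same planned points, different outcomes
```

Two sprints planned to the same total produce different velocities the moment one story slips — which is exactly the variation the text describes: information about the work, not a defect in the team.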

The mistake is treating variation as noise to be eliminated rather than as signal to be interpreted. A wildly fluctuating burndown chart that shows work completing in irregular bursts may indicate that stories are not sufficiently independent, or that they have not been broken down to similar sizes. But it may also indicate that the team is tackling genuinely difficult work, that they are learning as they go, and that the estimates — rough as they are — are doing exactly what they are meant to do: reflecting uncertainty.

I worked with a game development company where daily velocity charts showed a distinctive stair-step pattern: long periods with no progress, followed by sudden drops as multiple stories completed at once. The immediate assumption was that the team was blocked or poorly coordinated. The actual cause was that the stories themselves were tightly coupled. Modifications to the game and its reward systems had prerequisites threaded through the story breakdown. Stories in sprint N had to be completed before work in sprint N+1 could begin. The solution was not to demand smoother velocity. The solution was to make the dependencies explicit, separate prerequisites across sprints (allowed), and eliminate within-sprint dependencies (not allowed). Once that structural problem was addressed, the velocity chart became genuinely informative again. For more on the technical distinctions between daily, sprint, and average velocity, I have written a detailed three-part series that covers the measurement theory and common pitfalls in depth.
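The rule that emerged — prerequisites may sit in an earlier sprint, but two stories in the same sprint must not depend on each other — is simple enough to check mechanically. Here is a hypothetical sketch of such a check; the story names and the `plan`/`deps` structures are my own illustration, not the team's actual tooling:

```python
# Sketch of the dependency rule above: a prerequisite in an EARLIER
# sprint is fine; a prerequisite in the SAME sprint is a violation.
# Story names and data structures are hypothetical.

def find_violations(plan, deps):
    """plan maps story -> sprint number; deps maps story -> its
    prerequisites. Returns (story, prerequisite) pairs that are
    scheduled in the same sprint."""
    violations = []
    for story, prereqs in deps.items():
        for p in prereqs:
            if plan[story] == plan[p]:
                violations.append((story, p))
    return violations

plan = {"reward-ui": 2, "reward-engine": 2, "level-data": 1}
deps = {"reward-ui": ["reward-engine"], "reward-engine": ["level-data"]}

# "reward-engine" depending on "level-data" crosses sprints: allowed.
# "reward-ui" depending on "reward-engine" is within sprint 2: flagged.
print(find_violations(plan, deps))  # [('reward-ui', 'reward-engine')]
```

Running a check like this at sprint planning makes the coupling visible before it shows up as a stair-step on the chart.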

When Teams Optimise for the Wrong Thing

The most insidious misuse of velocity is when organisations use it to compare teams, or worse, when they set velocity targets. Both practices are fundamentally misguided, because story points are not a standard unit of measure. A story point in one team does not equal a story point in another. The meaning is contextual, calibrated within a team, and it shifts over time as the team’s understanding of their own work evolves.

When velocity becomes a target, you trigger Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. Teams will game the system. They will inflate estimates. They will avoid difficult work. They will focus on hitting the number rather than delivering value. And in doing so, they destroy the one thing that made velocity useful in the first place — its sensitivity to variation.

Average velocity, in particular, is dangerous when misapplied. It is a planning tool, nothing more. It normalises variation across sprints to give teams a rough sense of capacity. But averages are also deceptive. If you treat the average as a target, a single sprint below it drags the average itself down, lowering the bar for every sprint that follows. And if your sprint lengths vary, or your team composition changes, or the nature of the work shifts, then the simple average breaks down entirely.
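The sprint-length problem is easy to see with arithmetic. The sketch below contrasts a naive mean of sprint velocities with a figure normalised per working day; the numbers are made up purely to illustrate how the naive average flatters a schedule that includes a short sprint:

```python
# Sketch with invented numbers: when sprint lengths differ, the naive
# average of velocities and points-per-day tell different stories.

def naive_average(velocities):
    """Mean of per-sprint velocities, ignoring sprint length."""
    return sum(velocities) / len(velocities)

def per_day_capacity(velocities, lengths_in_days):
    """Total points delivered divided by total days worked."""
    return sum(velocities) / sum(lengths_in_days)

velocities = [20, 30, 10]   # points completed per sprint
lengths = [10, 10, 5]       # the last sprint was half-length

print(naive_average(velocities))              # 20.0 points per sprint
print(per_day_capacity(velocities, lengths))  # 2.4 points per day
```

Planning the next ten-day sprint from the naive average suggests 20 points; the per-day figure suggests 24. Neither is "true", but only the second accounts for how long each sprint actually ran.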

The information in velocity is not in the average. It is in the variation. When you see a sprint where velocity drops, the question is not ‘how do we get back to the average?’ The question is ‘what changed, and what does that tell us?’

What Velocity Is Actually Measuring

Velocity does not measure value. It does not measure quality. It does not measure whether you are building the right thing. What velocity measures is throughput — the rate at which a team converts estimated complexity into completed work within a constrained timeframe.

That is useful information, but only if you remember what it does not tell you. A team can have high, stable velocity and still be building the wrong features, accruing technical debt, or drifting away from the outcomes their stakeholders actually need. Conversely, a team with low or erratic velocity may be doing essential foundational work, managing dependencies carefully, or simply operating in a domain where complexity is genuinely high and estimates are correspondingly uncertain.

The mistake is treating velocity as a proxy for success. It is not. It is a diagnostic tool. And like any diagnostic tool, it is only useful if you know what you are looking for.

What to Watch For Instead

When I work with teams on velocity, I ask them to watch for two things in particular. The first is pace during the sprint. If a team rushes through their standups without real discussion, if the same voices dominate every retrospective, if the level of engagement feels performative rather than genuine, that is a signal. The velocity chart may look fine, but the underlying system is not functioning.

The second is what happens when velocity does vary. Does the team treat it as a crisis, or as an opportunity to learn? Do they ask ‘why did this sprint differ from the last?’ or do they simply adjust the estimate for the next sprint and move on? The teams that improve over time are the ones that treat variation as information, not as failure.

The Long View

Velocity is not a destination. It is a conversation starter. It is a way of making visible what would otherwise remain implicit — the team’s capacity, the complexity of their work, the stability or instability of their environment. Used well, it helps teams plan more effectively, identify problems earlier, and have better conversations about what is actually happening.

Used poorly, it becomes another target to hit, another number to defend, another metric that obscures more than it reveals. The difference is not in the metric itself. It is in how the team — and the organisation around them — chooses to interpret it.

Understanding what your velocity metrics are truly revealing requires more than tracking the numbers — it demands a shift in how you interpret variation and value. That’s where evidence-based coaching accelerates insight. Let’s explore how reading velocity as diagnostic rather than target can transform your team’s productivity. Reach out today, and let’s turn your metrics into meaningful improvement.

