Retrospectives (or reflection activities in projects) should be the engine of improvement: the spaces where teams identify real problems, commit to specific changes, and follow through with measurable results. In my experience, however, most of them are not (1).
The pattern I see most frequently is this: the team gathers, discusses what went well and what could improve, generates a list of action items, and then returns to work. Two weeks later, they gather again. The action items from the previous retrospective are either ignored, forgotten, or addressed in passing before the conversation moves on to new concerns.
The retrospective has become theatre. It looks like reflection. It feels like engagement. But nothing actually changes.
The Problem Is Not the Format
When teams tell me their retrospectives are not working, the first question I ask is what format they are using. Start-Stop-Continue? Mad-Sad-Glad? Sailboat? The specifics vary, but the answer is almost never the issue.
The problem is rarely the structure of the conversation. The problem is what happens — or does not happen — after the conversation ends.
A retrospective that produces no follow-through is not a retrospective. It is a venting session. And whilst there is value in giving people space to voice frustration, venting alone does not improve performance. What improves performance is action. Specific, timeboxed, resourced action that someone is responsible for completing.
From Observation to Outcome
I have worked with teams where the retrospective notes were comprehensive, the facilitation was skilled, and the observations raised were accurate. The retrospectives were still failing. Why? Because the conversation stopped at the observation.
‘Communication needs to improve.’ ‘We need better testing practices.’ ‘Technical debt is slowing us down.’
These are all true statements. They are also not actionable. The question that separates a functional retrospective from an ineffective one is simple:
What specific, measurable action will we take, and what outcome do we expect to see?
If the answer to that question is vague, the action will not happen. If the outcome is not defined, you will have no way of knowing whether the action made any difference.
The Anatomy of a Functional Action Plan
When I help teams rebuild their retrospective practice, I ask them to structure their action plans with five elements. Not because structure is inherently valuable, but because each element addresses a specific failure mode I see repeatedly.
1. Action — What will we do?
2. Outcome — What measurable result do we expect?
3. Responsible — Who owns this?
4. Timebox — By when will this be done, and how much effort will it consume?
5. Resources — What does the responsible person need to succeed?
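The five elements above amount to a simple completeness check: an action item is not ready until every field is filled in. As a hypothetical illustration (not a prescribed tool, and the field names are my own), the checklist can be sketched as a small record that flags any element left blank before the retrospective ends:

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: the five elements as a simple record, so a team
# can verify an action item is fully specified before the retro ends.
@dataclass
class ActionItem:
    action: str       # What will we do?
    outcome: str      # What measurable result do we expect?
    responsible: str  # Who owns this?
    timebox: str      # By when, and how much effort?
    resources: str    # What does the owner need to succeed?

    def missing_elements(self) -> list[str]:
        """Return the names of any elements left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

item = ActionItem(
    action="Shadowing assignments for <technology component> tasks",
    outcome="Consistent velocity and quality within 3 sprints",
    responsible="Scrum Master schedules; senior developer leads",
    timebox="3-hour session per new team member per sprint",
    resources="",  # left blank: the most commonly skipped element
)
print(item.missing_elements())  # -> ['resources']
```

The point of the sketch is the validation step, not the tooling: whether you use a spreadsheet, a ticket template, or a whiteboard, the discipline is refusing to close the retrospective while any element is empty.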
Let me illustrate with a real example. A team I worked with identified that their defect rates had increased and stories were taking longer. Their velocity had become unpredictable. Through discussion, they concluded that new team members were struggling with a specific technology component. This was slowing everyone down.
Here is how they structured their response:
| Element | Detail |
| --- | --- |
| Problem Statement | Defect rates up, stories taking longer, velocity unpredictable due to new members struggling with <technology component> |
| Root Cause | New team members need more structured support to understand the component |
| Expected Impact | Shortening onboarding will liberate capacity and improve productivity |
| Action | Establish shadowing assignments: only for tasks involving <technology component>, sessions capped at 4 hours, no consecutive sessions |
| Outcome | Consistent velocity and quality within 3 sprints |
| Responsible | Scrum Master incorporates assignments during planning; senior developer takes stories |
| Timebox | 3-hour shadowing session per new team member per sprint |
| Resources | Senior developer time allocated during sprint planning |
Notice what this action plan contains. It is not a vague commitment to ‘improve onboarding.’ It is a concrete intervention with a measurable outcome, a named owner, a defined timebox, and allocated resources. Three sprints later, the team could evaluate whether consistent velocity had returned. If it had, the action succeeded. If it had not, they had clear evidence to try something different.
The Resource Question Most Teams Skip
The element most frequently missing from retrospective action items is resources. Teams identify what needs to change, assign someone to drive it, and then leave that person to figure out how to make it happen alongside their existing workload.
This is why so many action items fail. Not because the responsible person is uncommitted, but because they were set up to fail from the start.
If an action item is worth committing to, it is worth resourcing. That might mean allocating time in the sprint — I often use Scrum Spikes for this purpose. It might mean temporarily reducing the team’s planned capacity to create space for improvement work. It might mean bringing in external support or temporarily redistributing other responsibilities.
The form the resource takes will vary. What matters is that the question is asked and answered before the retrospective ends. Otherwise, you are asking someone to succeed without giving them the means to do so.
The Long Game
Retrospectives that drive real improvement do not produce dramatic transformations overnight. They produce small, sustained adjustments that compound over time. A team that consistently identifies a meaningful problem, commits to a specific action, follows through with discipline, and evaluates the result is a team that gets better. Sprint by sprint. Quarter by quarter.
That discipline is harder to maintain than it looks. It requires facilitation that knows when to press for clarity and when to move on. It requires leadership that protects the time and resources needed for improvement work. And it requires a team that believes their retrospectives actually matter.
That belief is not installed through exhortation. It is earned through results. The first time a team sees a retrospective action item produce a measurable improvement, something shifts. They start to take the process seriously. And once they do, the retrospective becomes what it was always meant to be: the engine of their improvement.
While the structure outlined here is straightforward, transforming retrospectives into genuine improvement engines often requires an external perspective to see what the team cannot. That is where evidence-based coaching accelerates progress. If you want to make your retrospectives matter, reach out and let’s turn reflection into results, or download my free guide on making retrospectives your team’s improvement engine.
(1) Availability bias. In a recent engagement, I was quoting this claim when I noticed its own availability-bias problem: teams that consistently run effective retrospectives are less likely to call me. Because I rarely see such teams, they may be more common than my experience suggests. I was amused to find myself caught in this self-confirming cycle.
