Featured
No, you haven’t made your team more “predictable” 🙄
Empirically showing why improving a team’s say-do (forecasted vs. completed) ratio per sprint does not make them more “predictable”.
A recap on why say-do is the devil
As my colleague Paul Brown explained previously, Say-Do is a dead end.
In complex systems, like software/product development, it is not fit for purpose. It assumes predictability where there is none. Optimising for a high say-do ratio discourages adaptability, making teams hesitant to take on uncertain but valuable work. It also incentivises gaming the system, setting artificially low commitments rather than responding to emerging insights. Instead of fostering agility, say-do creates rigidity, stifling learning and innovation.
But even worse…
Due to the mass layoffs that agile roles have experienced over the last 18 months, many practitioners are now finally realising they need to show the value they are adding to their organisation. Why it took mass layoffs for people to come to this realisation is itself a worry; however, we are where we are.
What we see now are practitioners looking to differentiate themselves by showing their ‘measurable impact’. This is good and should be welcomed, but it requires contextual thinking. Blindly taking recognised business concepts (such as Return On Investment — ROI) and bastardising them to mean something different won’t help practitioners be taken seriously and get a seat at the table:
An increasingly popular approach I am seeing and hearing about is Scrum Masters/Agile Coaches/Delivery Managers claiming to have improved predictability with their team(s), where predictability is defined as improving the forecasted vs. completed items (say-do ratio) of a team in a sprint.
So what’s wrong with this?
Well, aside from the reasons at the beginning of the blog, we can also look at some very basic ways this information lies to us, presenting an image of predictability when the reality could in fact be quite the opposite…
And here’s why this is 💩
Let’s take that very same team our coach was celebrating in the visual above. As a reminder, here is their forecasted vs. completed rate over the first four sprints:
Here we can see that this team has increased their “predictability” from 43% (14 items forecasted vs. 6 completed) to 92% (13 items forecasted vs. 12 completed). Well done to our coach!
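The say-do calculation itself is trivial, which is part of its appeal. A minimal sketch in Python, where only sprints 1 and 4 (14 forecasted/6 completed and 13/12) come from the figures above; the middle two sprints are made-up illustrative values:

```python
# Hypothetical sprint data: (forecasted, completed) per sprint.
# Sprints 1 and 4 match the numbers in the text; 2 and 3 are invented.
sprints = [(14, 6), (12, 9), (13, 11), (13, 12)]

def say_do_ratio(forecasted, completed):
    """Completed vs. forecasted items, as a rounded percentage."""
    return round(100 * completed / forecasted)

for i, (f, c) in enumerate(sprints, start=1):
    print(f"Sprint {i}: {say_do_ratio(f, c)}%")  # Sprint 1 -> 43%, Sprint 4 -> 92%
```

Exactly because it is this simple, it tells you nothing about *which* items were completed, or when they were started, which is where the trouble begins.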
Except, that’s not really true.
Let’s look at which items were completed in the sprint they were forecasted for vs completed late (in a sprint later than the sprint originally forecasted in):
Here we can see that since the first sprint, this team has been playing catch-up with carried-over items. So whilst they are getting closer to their ‘forecast’, the improvement is misleading given the carry-over of work, which is also reflected in their cycle time.
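Separating items completed in their forecast sprint from those completed late is a quick way to expose this. A sketch with hypothetical items, where each item records the sprint it was forecasted in and the sprint it actually finished in:

```python
# Hypothetical work items: (id, sprint_forecasted_in, sprint_completed_in).
items = [
    ("A", 1, 1),  # finished in its forecast sprint: genuinely "on time"
    ("B", 1, 2),  # carried over one sprint
    ("C", 2, 4),  # carried over two sprints
    ("D", 3, 3),
]

def split_on_time(items):
    """Separate items completed in their forecast sprint from late ones."""
    on_time = [name for name, forecast, done in items if done == forecast]
    late = [name for name, forecast, done in items if done > forecast]
    return on_time, late

on_time, late = split_on_time(items)
print(on_time, late)  # ['A', 'D'] ['B', 'C']
```

A team can post a high say-do ratio in sprint 4 while most of what they ‘completed’ was sprint 2’s carry-over.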
Here our percentiles show that the majority of work takes longer than a sprint (14 days) to complete. In addition, the ‘triangle’ shape in our scatter plot shows that items are taking LONGER as time progresses. This means that this team’s cycle time is now more varied and therefore less predictable.
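Cycle time percentiles are straightforward to compute yourself. A sketch using a nearest-rank percentile and hypothetical cycle times (the data here is illustrative, not the team’s actual numbers):

```python
import math

# Hypothetical cycle times (calendar days) for a team's completed items.
cycle_times = [3, 5, 8, 12, 15, 16, 19, 22, 25, 30]

def percentile(data, pct):
    """Nearest-rank percentile: the smallest value such that at least
    pct% of the data falls at or below it."""
    ordered = sorted(data)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

SPRINT_LENGTH = 14  # two-week sprint, in calendar days
p50, p85 = percentile(cycle_times, 50), percentile(cycle_times, 85)
print(p50, p85)  # 15 25 -> even the median item exceeds one sprint
```

With a 50th percentile above 14 days, more than half of this (hypothetical) team’s items take longer than a sprint, regardless of what the say-do ratio says.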
So, the solution to predictability is simple. Stop sprint carryover right?
Again, not necessarily.
Here we have another team being lambasted by their Agile Coach for being over-ambitious in their sprint forecast and playing catch-up.
But we need to look deeper if we’re trying to understand predictability. Let’s look at the cycle time for this team:
Here we can see that not only does the team have a ‘good’ cycle time, in that their 85th percentile is 15 days or less (they work in two-week sprints — 14 days); they are in fact improving their predictability by reducing the variation in their cycle times:
So they’ve nailed it!
Well, not really…
The third lens we always need to look through when validating claims about improved predictability is Work Item Age. Here we look at all our ‘in progress’ items, calculating the elapsed time (in calendar days) between when a work item started and the current time, plotted against the column on the board and the historical cycle time percentiles:
Here we can see work items consistently being neglected (or blocked and never unblocked) in preference for other work that is likely easier to complete. This hides the fact that we have a bunch of work still in flight that will ruin our predictability: when it eventually moves to done, it will negatively impact our cycle times.
Call to action
There are two main calls to action.
The first is that, any time you hear an Agile Coach/Scrum Master/Delivery Manager bragging about how they have improved the predictability of a team, do the following test:
- Check how many items were carried over from previous sprints. If carry-over is skewing the metric, this is not improving predictability.
- Check the Cycle Time of completed items. If the variation has shown little/no change, or has even increased, this is not improving predictability.
- Check the Work Item Age of items still in progress. If many aging items are being sacrificed in favour of finishing fast, easier work, this is not improving predictability.
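The three checks above can be sketched as one function. The inputs and thresholds here are illustrative assumptions, not prescribed values; the point is that a predictability claim should survive all three:

```python
def predictability_red_flags(carryover_ratio, spread_before, spread_after, aging_items):
    """Return red flags for a claimed predictability improvement.
    carryover_ratio: fraction of 'completed' items that were carry-over.
    spread_before/after: p85 - p50 cycle time spread (days), earlier vs. now.
    aging_items: in-progress items older than the historical 85th percentile.
    All thresholds here are illustrative assumptions."""
    flags = []
    if carryover_ratio > 0.2:  # assumed tolerance: >20% carry-over
        flags.append("carry-over is inflating the say-do ratio")
    if spread_after >= spread_before:
        flags.append("cycle time variation has not reduced")
    if aging_items:
        flags.append(f"{len(aging_items)} in-progress items exceed the 85th percentile age")
    return flags

flags = predictability_red_flags(0.4, 11, 13, ["PBI-102", "PBI-103"])
print(flags)  # all three checks fail for this hypothetical team
```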
The second, which of course should be implied by all of this, is to stop caring about planned/forecasted vs. completed (say-do ratio).
For those interested, the underlying data for this blog is available at this link.
Start focusing on flow, monitor the Work Item Age of items and proactively balance that with their (potential) value and your historical cycle times. Understanding variation (and more importantly what is real variation) in your cycle time data and bringing your percentiles closer together is what matters when talking about “predictability”.
About me
I’m Nick, Principal Flow Consultant at Thrivve Partners. I’m a huge advocate of a data-based approach when it comes to validating the impact and outcomes around adopting new ways of working. My mission is to help the organisations I work with build the best digital products in the most effective way possible.