“Past Performance Predicts Future Performance Better than You Do”
Sometimes we have to guess when something will be done. We may wish we didn’t have to, but that’s for another post. When we do have to make that guess, here’s what I’ve found works well.
The future is ultimately unknowable. Humans are subject to biases and fallacies that make our predictions consistently optimistic. The past doesn’t lie, and simply looking at how things unfolded before dramatically improves our predictions of the future. If my prediction seems overly pessimistic, and it hurts a bit to share it, I know I’m probably on the right track.
You can consult the past in many different ways. If you have access to the throughput and cycle times of work at the right level (projects, stories, etc.) and understand how to use them, Monte Carlo simulations provide a way of quantifying the uncertainty. Visualising the distribution of possible future outcomes can be a very powerful way to communicate the probability of something being done at a certain time.
It can also be overkill, and simply looking at past completion rates, lead times and throughput may often be enough to calibrate my intuition a bit. When I find myself justifying future work with things like “well, this is just cleanup work, and this I’m sure will be faster than last time”, I ask myself if I thought so the last time too. Odds are that I did, and I was wrong. Looking at some examples from the past helps keep me honest.
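For the curious, the Monte Carlo approach mentioned above can be sketched in a few lines. This is a minimal illustration, not a full tool: the weekly throughput numbers are made up for the example, and a real forecast would use your own delivery history.

```python
import random

# Hypothetical history: items completed per week, taken from past delivery data.
weekly_throughput = [3, 5, 2, 4, 0, 6, 3, 4, 2, 5]

def weeks_to_finish(backlog_size, history, trials=10_000, seed=42):
    """Monte Carlo forecast: repeatedly resample past weekly throughput
    until the backlog is done, collecting how many weeks each trial took."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(history)  # replay a random past week
            weeks += 1
        outcomes.append(weeks)
    return sorted(outcomes)

def percentile(sorted_outcomes, p):
    """Smallest duration that at least fraction p of the trials finished within."""
    idx = min(len(sorted_outcomes) - 1, int(p * len(sorted_outcomes)))
    return sorted_outcomes[idx]

outcomes = weeks_to_finish(30, weekly_throughput)
print("50% confidence:", percentile(outcomes, 0.50), "weeks")
print("80% confidence:", percentile(outcomes, 0.80), "weeks")
```

The resulting list of outcomes is exactly the distribution the next section talks about: plot it as a histogram and the right skew is usually plain to see.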
3 Key Bullets:
- The future is a distribution of possible outcomes. In delivery, this distribution is not normal; it’s right-skewed. This means that you can be a lot more late than early.
- Predictions are best served as a range, preferably with probabilities attached. This makes it clear to the person asking that there are different answers, and they can choose based on their risk profile: mid June at 50%, or mid July at 80%?
- Novelty is more unpredictable than “business as usual”. When you press for more predictability, novelty suffers. In other words: predictability hurts innovation.
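The “range with probabilities” idea from the bullets above is easy to mechanise once you have a set of simulated outcomes. A small sketch, with an invented right-skewed sample of completion times and an assumed start date:

```python
from datetime import date, timedelta

# Hypothetical simulated outcomes (weeks to completion) from a Monte Carlo
# run. Note the long right tail: a few trials take far longer than typical.
simulated_weeks = sorted([6, 7, 7, 8, 8, 8, 9, 9, 10, 10, 11, 12, 14, 17, 21])

def forecast_date(start, sorted_weeks, confidence):
    """Date by which the given fraction of simulated trials had finished."""
    idx = min(len(sorted_weeks) - 1, int(confidence * len(sorted_weeks)))
    return start + timedelta(weeks=sorted_weeks[idx])

start = date(2024, 4, 1)
print("50% likely by:", forecast_date(start, simulated_weeks, 0.50))  # 2024-06-03
print("80% likely by:", forecast_date(start, simulated_weeks, 0.80))  # 2024-07-08
```

Presenting both dates, rather than a single one, lets the person asking pick the level of risk they can live with.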
The Cost of Being Wrong
Whether or not to care about predictability is very much about the cost of being wrong.
- If you over-commit, what is the cost of that and who will have to pay it?
- If you under-commit, what is the cost of that and who will have to pay it?
In highly interdependent systems, both can cause significant problems up- or downstream from you. Depending on your vantage point and the size of the organisation, it may be very difficult to see who bears the cost and how high it is. Perhaps it doesn’t matter much to you and your team, but it might matter quite a bit to others elsewhere.
A Word of Warning
Lastly, optimising for predictability itself is a slippery slope. Most often I’ve found that optimising for flow ends up producing better predictability: understanding the flow of work makes it easier to quantify a distribution of possible outcomes. Some people equate “better predictability” with “higher precision”. I personally consider “better predictability” to mean an improved ability to predict a distribution of possible outcomes, based on the system(s) involved. This includes how often you are able to do so (continuously is ideal). To understand and improve the flow of work you’ll have to measure it, and as you do, you’ll improve your ability to predict future delivery.
I hope some of this has been helpful, but for more on these things you should check in with the experts:
- Nobel laureate Daniel Kahneman coined the term “Planning Fallacy”, and has written extensively about what is called “Reference Class Forecasting”.
- Daniel S. Vacanti wrote two excellent books about flow metrics and how to use them for Monte Carlo forecasting — “Actionable Agile Metrics for Predictability” and “When Will It Be Done?”
- Troy Magennis offers plenty of tools and material for free on his site focusedobjective.com