Getting predictable in an unpredictable environment
Predictability — Product Owners love it, and rightfully so: they want to know what features they will get and when they will get them. In the past, I have found predictability to be attainable through story estimation and user story mapping. User story mapping is an activity that provides a graphical representation of a user's journey through a product. If you want to know more about story mapping, click here to read a past blog I wrote on leveraging user story mapping. By running a story mapping exercise and estimating your user stories, you can determine how many sprints a feature/epic will take based on the team's Velocity and Capacity at the time. However, is it this simple in every environment? With every product? Well, yes… and no.
The ‘Yes’ part
The key to getting predictable is finding the right estimation exercise for your team. Once your estimation is more objective, getting predictable and determining how long a feature will take becomes as simple as estimating the stories and knowing your velocity.
The ‘No’ part
Here is where it gets complicated: finding the right estimation exercise for a team isn't always as easy as working through the plethora of estimation techniques available to you. Don't get me wrong, it's good practice to try new estimation techniques, but sometimes the problem isn't the chosen technique; it's the product or the environment you work in.
New teams and new features
Sometimes development teams are working in code that is new to them. If the team is new or the product is new, it is hard to measure the complexity or level of effort required to achieve a story. Even with relative sizing exercises, it is still quite subjective: 8 points for one developer can be 3 points for another, because development teams generally comprise a range of experience levels. All teams and all software are different and diverse, and this complicates estimation.
Not all products are one simple piece of software; some live within a very complicated environment, with many downstream systems managed by different teams, some of which are overseas. Product Owners can rely heavily on these downstream systems to achieve their desired outcomes, yet the subject matter experts (SMEs) for those systems don't always sit within the development team. This means we hit a lot of dependencies on other teams and spend time liaising with them about how we send data downstream. This significantly affects estimation, as we cannot always foresee the work required by these groups or the level of complexity involved on their side.
Estimation also does not factor in downtime, such as environment issues and other events outside the team's control. SIT environments go down, overnight builds fail and dependencies are, well… not always dependable. Pointing a story at X points cannot realistically account for another team's work effort without that team being present, nor for unforeseen events.
So what do we do about it?
Have you heard of Throughput?
A technique that is working well within one of my current teams is Throughput. Throughput is the number of work items completed in a unit of time; in our case, the number of stories completed in a sprint. The good thing about Throughput is that it factors in your external blockers, complex environments and unknowns, because it is based on the number of stories you have delivered in past sprints, not on how many points were estimated before a story was delivered. For example, say over the last 4 sprints you committed 20, 23, 20 and 23 stories respectively but delivered only 13, 14, 9 and 8. Your average throughput is then 11 stories.
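The averaging above can be sketched in a few lines. This is a minimal illustration, not a tool the post prescribes; the variable names are mine, and the numbers are the delivered counts from the example.

```python
# Delivered (not committed) stories from the last 4 sprints, per the example.
delivered_per_sprint = [13, 14, 9, 8]

# Average throughput: stories actually completed per sprint.
average_throughput = sum(delivered_per_sprint) / len(delivered_per_sprint)
print(average_throughput)  # 11.0
```

Note that the committed counts (20, 23, 20, 23) never enter the calculation; throughput only looks at what was actually finished.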
The most common question that arises when using Throughput is: how can we determine what stories to commit if we haven't estimated how big each story is? What if we choose 11 stories that take longer than a sprint to complete because we aren't estimating their size? If you're asking this question, you're not refining your stories properly. Refinement isn't only about discussing what needs to be done in a story but also whether the story can be split down further. I like to think about white flour vs whole-wheat flour: white flour is wheat so refined it becomes a fine powder. When baking bread with white flour as opposed to whole-wheat flour, you get a loaf that bakes faster and comes out lighter and fluffier. Similarly, in story refinement you want to refine your stories down to the smallest tasks possible; this gets your stories moving faster and keeps them lighter in terms of complexity and effort. I wrote a section on story refinement in a past blog post about effective user stories. You can view it here if you would like to know more about refinement. Refinement sessions are imperative for Throughput to be effective, because stories need to be consistently small.
What counts as small is subjective, and what a team considers small doesn't really matter that much, as long as it is consistent. That means if a development team agrees that ‘small’ is a story which takes 3 days, well then guess what? That's small. Once a development team agrees on what a small story is, we can work towards refining stories to be small.
How does this differ from estimation?
I know making stories small sounds a lot like estimating stories; that is because it is a type of estimation. The difference is in how we go about the estimation and how we use it. When refining stories, give power to the development team. Rather than presenting a story, elaborating the acceptance criteria and then asking, "How long will this story take?", ask the question like this: "How much of this story can you complete in 3 days?" This gives the Developers and Testers the power to commit to what they can do rather than throw a subjective number of days at you. It may be agreed that of the 3 acceptance criteria, only criteria 1 and 3 can be achieved in a 3-day period, while criterion 2 can be done in another 3-day period. Split the story accordingly. You now have two consistent work items in the backlog.
Once you have consistently small stories, you will find that over time your average throughput becomes more predictable and your delivery stabilises.
To summarise: if estimating stories with points or time has not brought predictability to your sprints, or if external blockers, complex environments or unknown factors mean your sprints are never complete, then give Throughput a try. Throughput is the number of work items completed within a unit of time.
Before you try Throughput, make sure your stories are refined and small. Make sure your team has discussed and agreed on a consistent definition of small, and that every story meets that definition before it is committed for work.
Then use past sprints to calculate your average throughput, and over time your throughput will become far more predictable.
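Putting the pieces together, average throughput also gives you the forecast Product Owners are after: how many sprints a feature will take. A minimal sketch, assuming a backlog of consistently small, refined stories; the remaining-story count here is purely illustrative.

```python
import math

# Stories delivered in recent sprints (the example figures from earlier).
past_deliveries = [13, 14, 9, 8]
average_throughput = sum(past_deliveries) / len(past_deliveries)  # 11.0

# Hypothetical feature: 30 refined, consistently small stories remaining.
remaining_stories = 30

# Round up: a partial sprint of work still occupies a whole sprint.
sprints_needed = math.ceil(remaining_stories / average_throughput)
print(sprints_needed)  # 3
```

Because throughput is derived from what the team actually delivered, the forecast already absorbs the blockers, dependencies and downtime that point-based estimates miss.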