Chapter 8 — Estimating, “What Else” (scope growth)

troy.magennis
Forecasting using data
13 min read · Sep 15, 2017

This chapter looks at estimating how much additional work needs to be considered. Projects and features grow in size as we learn more about a problem space, and often (believe it or not) we don't get the customer or technical solution perfect on the first attempt.

Goals of this chapter

  • Learn the different types of scope growth: time based, rate based, scale based, and event based
  • Learn how to estimate growth rates
  • Learn what to assume until we have actual data to help refine initial estimates

Estimating Additional Work

Having a measure of the originally planned work is a great starting point. Other factors quickly conspire to slow you down and increase the work you need to do. The most common reasons for growth are additional ideas that can't wait, rework (defects, alterations to the original plan after seeing it), and risks. Risks in this context are things that, in hindsight, cause additional work to be necessary in order to deliver; for example, the solution is too slow for users and response performance needs to be improved.

When working at a feature level, additional ideas don't come up so much. They used to be the biggest cause of project blowout in Waterfall delivery processes. New ideas were added in great numbers the moment the project started, referred to as scope-creep. When teams would go dark, heads down developing for a year, of course changes and new ideas are needed; if something didn't get into this release, it had to wait another year. Agile helped change that attitude. By delivering more frequently, there is more likelihood that ideas get put on the team's backlog of other good ideas and decided on later. Pressure to creep scope dissipates.

There are a few types of growth we need to consider –

  • Time based. The longer we go, the more alterations to the original scope get added. For example, "Oh, and I forgot to mention…"
  • Rate based. The more work we complete, the more we learn about what we need to do to deliver. For example, defect rework.
  • Scale based. The size of completed work is different than the work in our backlog. Often work items are split as the team understands the feature story in more detail.
  • Event based. Feedback or things that go wrong in the approval-to-release process. For example, performance testing fails and more work than planned is needed to make the feature acceptable.

Each of these scope growth types occurs in many forecasting problems, not just software. Take car and air travel, for instance. The longer we spend on the road, the more opportunity there is for a traffic delay to occur. If we travel one or two blocks there is little chance of exposure to traffic accidents; travel twelve blocks across town, and there is a higher risk of delay. The more connecting flights in a journey, the more likely at least one is delayed. These are examples of time based growth.

Rate based delays in travel would be stopping for gas, or stopping overnight to avoid falling asleep behind the wheel and crashing (an example of event based risk). Scale based risk is assuming the same rate of travel on highways as in congested city streets, or that the clear-weather travel time will be the same as the snowy pass we travel during winter.

Any time we use a distance and pace based model for forecasting, we need to consider what might increase the distance or modify the pace. Not doing so means poor forecasts. The same goes for our software projects: we need to know the original size and how it might increase once we begin delivery.

Time Based Growth

The longer the gap between committing to a delivery and actually delivering, the higher the risk that feature requirements will change. These changes can be additional ideas to current plans, or reactions to a competitive market change. This type of growth isn't bad, it's just inconvenient from a forecasting perspective. Given that the companies we work for need to be agile in a business sense, adapting to changes like this is an important part of doing software development.

One benefit of Agile development and continuous delivery is the reduced exposure to people changing their minds whilst waiting for us to deliver. If a delivery happens only once every six months, it's only natural that new additional ideas will be couched as must-haves; the customer can't wait another six months for the extra features. But if delivery is continuous or every few weeks, the argument can be made for shipping what you have now and adding the new ideas later.

I look at how often delivery to a customer actually happens and make an adjustment to growth rates from there. I adjust the computed story count range, increasing it by a multiplier value from 1 (no growth) to 2 (doubled).
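As a concrete illustration, here is a minimal sketch in Python of applying such a multiplier range to a planned story count range. The function name, the uniform sampling, and the example numbers are my own assumptions for illustration, not a prescribed method:

```python
import random

def adjust_for_time_based_growth(count_low, count_high,
                                 multiplier_low=1.0, multiplier_high=2.0,
                                 trials=10_000):
    """Sample an adjusted story count range for time based growth.

    Hypothetical helper: a multiplier of 1.0 means no growth, 2.0 means doubled.
    """
    samples = []
    for _ in range(trials):
        count = random.uniform(count_low, count_high)                  # planned stories
        multiplier = random.uniform(multiplier_low, multiplier_high)   # growth this trial
        samples.append(count * multiplier)
    samples.sort()
    # Report the 5th to 95th percentile of the sampled adjusted counts.
    return samples[int(trials * 0.05)], samples[int(trials * 0.95)]

# Example: 40-60 stories planned; a quarterly release cadence might suggest 1.2x-1.5x.
low, high = adjust_for_time_based_growth(40, 60, 1.2, 1.5)
print(f"Adjusted story count range: {low:.0f} to {high:.0f}")
```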

Table 8–1 lists suggested adjustments to story count numbers, but you should carefully consider what your organizational context has previously seen, and update these estimates with actual data on new requirements early in a feature development or project process.

Table 8–1: Starting adjustments for changing requirements based on delivery frequency to customers.

To estimate exposure, ask lots of questions around how the original feature epic and story counts were created. If you assess that the backlog of epics might be incomplete or poorly defined, then pick one multiplier higher than the delivery cadence would normally suggest in Table 8–1.

I often model this as "Added Scope" and call out the assumption of a multiplier rate. This opens up the discussion about how certain product stakeholders are about the current scope. Working collaboratively with the product stakeholders on how to manage this growth early helps come up with a shared strategy for what changes are likely and acceptable. The goal is to avoid the misconception that the forecast given can tolerate substantial scope additions, and to put a time and dollar value on that scope change. It's obvious that zero growth might mean that new ideas aren't being considered valuable, a symptom of following a plan versus adapting to change. Eliminating this type of growth often causes more harm than simply accounting for it in an assumption and discussing it like grown-ups.

Suggested Actions -

1. Releasing more frequently limits exposure to time based growth. Release more frequently!

2. Don't try and eliminate ALL time based growth. Responding to change is an important skill; it incorporates recently learnt lessons into a product.

3. Bent Flyvbjerg is a good source of material on mega-projects, mainly major infrastructure initiatives. He uses a 4x multiplier on major transport projects and recommends avoiding mega-projects altogether!

Rate Based Growth

This type of growth comes from actually completing work. Defects discovered whilst completing work fall into the rate based growth category. Any growth in story count that has a relationship to the completed work count falls into this category.

To estimate rate based growth, examine the completed work from prior projects and count the number of non-feature backlog items. Find the ratio of “y defects to z feature stories.” Obviously some features will have more defects reported than others, so this estimate is again best done as a range.

If there is no historical data, start with between 1 and 3 defects per story. If you think the work is highly technical and novel (using new technology for example), double that range to between 2 and 6 defects per story. Monitor actual rates as work is completed and adjust estimates with the actual range early.
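One way to fold such a range into a forecast is to treat the defects-per-story rate as uncertain and sample it. This is a minimal sketch assuming a uniform rate between the low and high estimates; the function and numbers are illustrative only:

```python
import random

def add_defect_rework(planned_stories, rate_low=1.0, rate_high=3.0, trials=10_000):
    """Estimate total work items (feature stories plus defect rework) as a range.

    Treats the defects-per-story rate as uncertain within [rate_low, rate_high];
    uniform sampling is an illustrative simplification.
    """
    totals = []
    for _ in range(trials):
        rate = random.uniform(rate_low, rate_high)      # defects per story this trial
        totals.append(planned_stories * (1 + rate))     # stories plus their rework
    totals.sort()
    return totals[int(trials * 0.05)], totals[int(trials * 0.95)]

# 50 planned stories with 1 to 3 defects per story; use (2, 6) for novel technology.
low, high = add_defect_rework(50)
print(f"Total work items including rework: {low:.0f} to {high:.0f}")
```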

Defects are one type of rate based growth. Some others I’ve seen are –

  • Localization and globalization tasks
  • Accessibility testing and remediation
  • Security auditing and remediation
  • Performance testing and remediation
  • Memory utilization testing and remediation
  • Deployment automation
  • Test data research and creation

Often a feature story has a number of additional non-functional work items that need to be performed. Although these are often performed as part of delivering the feature stories themselves, sometimes they are performed by other teams. It is these items that need to be considered as part of scope growth. Ask surrounding and existing teams in the organization building similar things what additional work they encountered, to narrow in on a fairly thorough set of growth vectors. Then estimate how many stories might encounter each of these vectors.

Table 8–2: Example of capturing rate based growth factor assumptions for forecasting and discussion.

Table 8–2 is an example growth assumption document. This helps identify the occurrence rate and the additional work required. Often a great source of information about this growth is peripheral teams, so cast a wider net than the immediate teams to understand their needs. Sometimes peripheral teams work in parallel, and the impact on delivery progress isn't sequential. If this is the case, document that the growth is occurring, but set the occurrence rate for your feature delivery teams to zero. Someone needs to know this work is needed; make it visible.
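As a sketch of how such an assumption table could be turned into an additional-work range, the growth vectors, occurrence rates, and impacts below are invented placeholders in the spirit of Table 8–2:

```python
# Hypothetical growth-vector assumptions. Each entry holds an occurrence rate range
# (fraction of stories affected) and the extra work items per affected story.
# Values are placeholders; replace them with your own team's assumptions.
growth_vectors = {
    "localization":        ((0.10, 0.30), 1),
    "accessibility fixes": ((0.05, 0.20), 1),
    "performance rework":  ((0.05, 0.15), 2),   # set rate to 0 if a parallel team absorbs it
}

def extra_work_range(planned_stories, vectors):
    """Return a (low, high) range of additional work items implied by the table."""
    low = high = 0.0
    for (rate_low, rate_high), impact in vectors.values():
        low += planned_stories * rate_low * impact
        high += planned_stories * rate_high * impact
    return low, high

low, high = extra_work_range(50, growth_vectors)
print(f"Additional non-feature work items: {low:.0f} to {high:.0f}")
```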

Data captured in the form of Table 8–2 is valuable for other teams in the future. Keeping a consolidated list of what other teams have considered can really speed up identifying the gaps in an embryonic feature or project backlog. Avoid it getting overwhelming by keeping a top 10 list, and by refining the estimates of occurrence rate and impact over time using actual historical records of what occurred.

Suggested Actions -

1. Create a rate based growth table for your feature and projects

2. Consolidate the rate based growth knowledge across other teams, limit to top 10

3. Use actual data to refine the occurrence rate estimate and the impact for growth items

Scale Based Growth (unit correction)

This is the most overlooked type of growth. It is more properly a correction rather than growth. It is caused when the pace of delivered items is assumed to be the pace at which remaining backlog items will be completed. Why isn't this true? Commonly, work is split into multiple items when the team analyzes the details just prior to adding them to their sprint or pulling that work into the team.

Splitting work like this is a good thing and should be encouraged. Allowing the team to deliver work in distinctly useful parts helps get feedback and increases the chances of delivering some user value earlier. We just have to account for the apparent difference between the delivered rate and the actual rate at which the backlog will be completed. It's very much like the backlog is in miles per hour, and the completed items are in kilometers per hour. Both are right, as long as the driver, road signage and local law enforcement agree which one is in use and the right conversion factor is applied.

To estimate the correction factor, I sample a few stories and look at how they may be split before delivery. Look at prior completed stories and ask team members what the originating story looked like to gauge whether it was split or went through as one unit. If there is no such data, my first assumption is one to three times: some stories won't split at all, some stories split into two parts, and some into three parts.

What is a safe starting scale estimate?

A good starting scale correction estimate is 1 to 3 times.
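One way to apply this correction is to sample a split factor per backlog item. The sketch below assumes each item is equally likely to stay whole, split into two, or split into three (the 1/3, 1/3, 1/3 pattern suggested in the Suggested Actions later); the proportions are a starting assumption, not data:

```python
import random

def corrected_backlog_count(backlog_items, trials=10_000):
    """Convert a backlog item count into completed-work units by sampling splits.

    Starting assumption: each backlog item stays whole, splits into two, or splits
    into three delivered items with equal probability.
    """
    results = []
    for _ in range(trials):
        results.append(sum(random.choice((1, 2, 3)) for _ in range(backlog_items)))
    results.sort()
    return results[int(trials * 0.05)], results[int(trials * 0.95)]

low, high = corrected_backlog_count(30)   # 30 backlog items remaining
print(f"Delivered items after splitting: {low} to {high}")
```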

The most common place this growth error causes bad decisions is the ubiquitous burndown or burnup chart. Historical velocity or story count completion is used in a linear forecast of the remaining backlog count. This is obviously flawed if the completed items were split and the backlog items aren't yet. The predicted completion line will cross the "remaining scope" line too early. The backlog scope line should have an increasing slope to account for story splitting.

The error is compounded. Being a linear regression of averages, the forecast had at best a 50% chance of being met (half of the outcomes were above average and beyond the date it returned). Add in an error of one to three times from splitting, and you have almost no chance, matching what we see as forecast performance in the software industry generally.

The alternative is to add more analysis upfront and pre-split every story through heavy team involvement. This is an expensive and time-consuming effort, and is only as good as the team's imagination for how stories may split, including stories that eventually never get built. A range estimate will likely give just as good a result with far less effort. Real data will firm up this assumption pretty quickly (within the first seven or eight stories).

I rarely see vendor tools account for this rate difference. Often the commercial tools offer some burn-up/down regression line starting from the remaining backlog count or points and extrapolate a delivery date based on the apparent completion rate. Don't fall into this trap; it will make projects appear to be on-track when they are desperately behind. Even vendors that perform probabilistic forecasts miss this point. They use historical completion rates and appear to be rigorous in forecasting uncertainty, but if the split rate of work is from one to three times, the forecast they give will be out by up to three times.
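To see the size of the trap, compare a naive linear extrapolation against the same numbers with a split-rate correction applied. The throughput and backlog figures below are invented for illustration:

```python
# Hypothetical numbers: 5 completed (already-split) items per week, 60 backlog items
# remaining, and backlog items that split into 1 to 3 delivered items each.
throughput_per_week = 5
backlog_items = 60

naive_weeks = backlog_items / throughput_per_week                  # assumes no splitting
corrected_low = backlog_items * 1 / throughput_per_week            # nothing splits
corrected_high = backlog_items * 3 / throughput_per_week           # everything splits in three

# Note the naive forecast equals the most optimistic corner of the corrected range.
print(f"Naive burn-up forecast: {naive_weeks:.0f} weeks")
print(f"Split-corrected range: {corrected_low:.0f} to {corrected_high:.0f} weeks")
```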

Suggested Actions -

1. Be alert any time the backlog count is combined with historical data; the result will often (almost always) be wrong, giving an overly optimistic forecast.

2. Start with the estimate range of 1 to 3 times: 1/3 of items don't split, 1/3 of items split into two items, 1/3 of items split into three items.

3. Measure what the actual rate is using historical data or by observation during planning meetings.

Event Based Growth (risk management)

Sometimes we have a hint that something might go wrong, requiring something else to be done, but we can't be sure. For example, sometimes you build a feature but have little insight into how it will perform with thousands of users. You can guess you have the right architectural approach, but until it's built and tested under stress, you can't know for certain that it isn't going to need improvement. This is an event based scope increase, and it's my opinion these are the make or break of good software forecasts.

Construction and infrastructure projects have many of these. Weather related delays are a prime example. Pouring concrete requires certain weather parameters for proper strength; if it rains on that day, a delay. Normally some contingency factor is added to the known work to counter these delays, but it's frankly a guess one way or the other.

You could go insane inventing all the events that might cause a software project delay: marriage of the lead developer, the elopement overseas of your best tester, a large contract vendor not getting the production hardware installed on time (OK, this one is easier to assume). But it takes just a few of these events coming true to radically void all the assumptions you have made about pace and size.

My advice is to find the top five. With any more than five major risks, the project may well be unviable, or at least not viable to predict a completion date for using any forecasting technique not involving dark magic. Pick five, then seriously get a range estimate for probability, and for impact if each comes true. These range estimates will be used to understand in detail how these risks impact the outcome (we cover this later in this book as Risk Management).

Table 8–3: Example of capturing event based risk assumptions for forecasting and discussion.

Table 8–3 shows a simple way to capture event based risks. The risks are described in a way that can be measured and tracked. Spend some time getting the wording as exact as possible in explaining the risk faced.

The probability estimate should be a range. I like to start projects with a pessimistic estimate (more likely to occur) of events coming true. This allows the team to focus on why the probability is reducing; increasing probability later in a project is to be avoided at all costs. It's always good to capture the assumptions that go into the range estimates. For example, the second risk about browser compatibility explains why the probability is 20–40% and not higher: some mitigation is occurring early by beta testing and having access to all of the browsers in virtual machines for testing.

The impact estimate, should the event come to fruition, is also in the form of a range (as is our way). It is the work needed should the event happen. Remember, the estimated scope applies only IF the event comes true; if we dodge the bullet and the event doesn't happen, the work is zero. Set the scene for the team: OK, we have a performance issue, what do we do? How much work is there in doing that? Do a size estimate in the same way you did for the other work in the project, as seen in the previous chapter.
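A sketch of folding event based risks into a scope forecast is to sample, per trial, whether each event occurs and how much work it adds if it does. The risk entries and ranges below are hypothetical, echoing the structure of Table 8–3:

```python
import random

# Hypothetical risks: (name, probability range of occurring, extra stories if it occurs).
risks = [
    ("performance rework needed",   (0.30, 0.60), (5, 15)),
    ("browser compatibility fixes", (0.20, 0.40), (3, 8)),
]

def risk_scope_samples(risks, trials=10_000):
    """Sample the additional stories implied by event based risks."""
    samples = []
    for _ in range(trials):
        added = 0.0
        for _name, (p_low, p_high), (i_low, i_high) in risks:
            probability = random.uniform(p_low, p_high)
            if random.random() < probability:            # the event comes true this trial
                added += random.uniform(i_low, i_high)   # impact counts only if it occurs
        samples.append(added)
    samples.sort()
    return samples[trials // 2], samples[int(trials * 0.85)]

median, p85 = risk_scope_samples(risks)
print(f"Added scope from risks: ~{median:.0f} stories (median), {p85:.0f} at the 85th percentile")
```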

Suggested Actions -

1. Pick the top five. If you have more than this, the project or feature in its own right might be too risky; forecasting it is pointless, you are guessing.

2. There will ALWAYS be a few that are unavoidable, so do what you can to pin down the probability of these occurring early in the project.

3. Once you have a few risks with high probability, look for ways to reduce the impact if they come true. Don't spend all your time on avoidance; accept they will occur and react accordingly.

Summary

The known size of a project or feature is incomplete. It takes some work to understand what else will be needed in order to deliver, and this chapter has outlined four different types of growth –

  • Time based. The longer we go, the more alterations to the original scope get added. For example, "Oh, and I forgot to mention…"
  • Rate based. The more work we complete, the more we learn about what we need to do to deliver. For example, defect rework.
  • Scale based. The size of completed work is different than the work in our backlog. Often work items are split as the team understands the feature story in more detail.
  • Event based. Feedback or things that go wrong in the approval-to-release process. For example, performance testing fails and more work than planned is needed to make the feature acceptable.

Scope will grow; it's unavoidable. Trying to eliminate it was the approach taken by Waterfall style development processes: "if we just analyze a little more there will be nothing missed." Guess what, it never worked. There is always good reason to change course after learning more about the problem and solutions and listening to customer feedback. Embrace the positives of adapting to change, but make sure you include it when forecasting.

This chapter has given you techniques to capture and estimate growth of various types. The next chapter looks at how to estimate how fast work is being completed to deliverable quality, so that we can understand how much of the work has been done and how much is left to do.

There is a lot of detail in this chapter. To make it easier to remember, it has been captured on a single-page canvas that helps explain and capture the assumptions about the project or feature you are looking at. It can be downloaded from the companion website as Pace Assumption Canvas.pdf, and is shown here as Figure 8–1. Use a team approach to capture growth types and estimate their impact.

Figure 8–1: Growth assumption canvas. Use it to capture the growth assumption for a feature or project being forecast. Have the team brainstorm the types of growth and estimate the impact.
