Velocity is a House Built of Straw

Philip Rogers
A Path Less Taken
Nov 8, 2021

This post delves into one of the most divisive topics in Agile metrics. The metaphor I’m using is the fable of the Three Little Pigs. I’m choosing velocity as the area of focus because: 1. As it is often applied and interpreted, velocity does at least as much harm as good; and 2. Other metrics provide a fuller, more nuanced picture of the work a team is doing, by focusing on flow. (It is important to point out that many teams choose to use flow-based metrics, sometimes in conjunction with velocity and sometimes without.)


To summarize, three little pigs are sent out into the world by their mother to seek their fortunes. Each of them decides to settle down and build a house, but from different materials:

  • One builds a house of straw
  • One builds a house of sticks
  • One builds a house of bricks

As fate would have it, there is a hungry wolf in the area, who has a particular love for the taste of pork. So, as the tale goes, the hungry wolf makes its presence known to each of the three pigs, and the verbal exchange goes like this:

Little pig, little pig, let me come in.
No, not by the hairs on my chinny chin chin.
Then I’ll huff, and I’ll puff, and I’ll blow your house in.

It doesn’t end particularly well for the pig who builds its house from straw. I’m guessing you know how the rest of the story goes …

Differing Interpretations of Velocity as a Metric

There are numerous challenges with velocity as a metric, as it is often employed in the wild. It could be said that velocity is a victim of its own “success”: among Agile teams, it’s one of the most common metrics used to gauge how well a team is performing. Velocity even surfaces in the contracts that organizations write to retain consultancies and similar firms that purport to deliver what I’ll refer to with the shorthand Agile-as-a-Service (AaaS).

How velocity is seen across the Agile community varies considerably, which has something to do with the splintering of the “House of Scrum” itself, where we have scrum.org, scruminc, and Scrum Alliance, not to mention a plethora of other websites that offer training, certifications, and other content associated with one or more of those Scrum organizations. Even across just those three websites, the interpretations of velocity differ considerably.

A Historical Note about Velocity

The usage of velocity is particularly common among teams that practice (or at least believe that they’re practicing) Scrum. Based on this alone, one might expect to find mention of velocity in the Scrum Guide; however, it’s not mentioned there. For anyone who might be interested in the history of the term velocity, in an Agile context, take a look at eXtreme Programming, or XP — specifically, the notion of project velocity.

A Personal Note about Velocity

When used as intended, velocity can support inferences about how much work a team can potentially take on during an iteration (or Sprint, in Scrum). I’m sorry to say, though, that in many organizational contexts, velocity is used in ways it was never intended to be. I offer my thoughts here based on my own reflections on velocity anti-patterns I’ve seen surface in organizations, and on what many others have said and written about their own experiences.

One of the many things that I value about being a part of the larger Agile community is the diversity of opinions that we bring to the table, based on our own experiences, our interactions with others, and what we’ve learned from other sources. That being said, it’s easy to see why there is the potential for confusion about what many terms mean in actual practice, velocity being one of many examples.

What Velocity Is

Let’s take a closer look at velocity, starting from a more conceptual point of view, and then moving on to what it often looks like in actual practice. For this initial section, I’ll reference Doc Norton’s book on the topic: Escape Velocity: Better Metrics for Agile Teams. Specifically, let’s consider these three aspects:

  • Velocity as a vector
  • Velocity as a lagging indicator
  • Velocity as a measure of a complex system

Velocity as a Vector

Here are a couple of observations that Doc makes about velocity:

- A vector is an entity that has both magnitude (size) and direction. Velocity indicates the speed of something in a particular direction…
- Velocity is the rate at which a team delivers value to the customer. A team that completes lots of tasks, but delivers no value to the customer should have a zero velocity.

Note: Doc’s second observation has particular significance, because it differentiates between two distinct things: outputs and outcomes. Velocity is one of many metrics that fall under the category of outputs. Where things can get hazy is tying business value to the work that a team has completed, and unfortunately, many organizations spend comparatively little time measuring outcomes, and considerably more time measuring outputs, which is sub-optimal, to say the least. For more on this difference, see my blog post Outcomes as Enablers of Business Impact, which references the work of John Cutler, Troy Magennis, and others.

Velocity as a Lagging Indicator

Another important attribute of velocity, as Doc observes, is that it’s a lagging indicator:

Velocity … is a measure taken at the end of a series of steps. We plan, we prioritize, we work, we test, and then we measure.

Lagging indicators tend to be aggregate or abstract. They don’t provide detailed insight into the operations, rather they provide an indication of results. Net profit is a lagging indicator for a company. While it tells us about how the company is doing, it gives us no indication of why the organization was or was not successful.

In other words, velocity provides little insight into root causes, which he elaborates on further:

…one of the problems with velocity as a lagging indicator is that while it can tell how much work we delivered over a given time period, it cannot tell how well the team is doing at ensuring consistent delivery or at improving their process overall. For some teams, velocity may be an indicator of team health or at least capability, but it provides no insight into root causes. Moreover, attempts to increase velocity directly tend to either create artificial increases that are not sustainable or result in other more detrimental issues. Whereas, efforts to address underlying causes more often than not result in improvements in velocity. Managing story composition and limiting work in process results in smoother flow, which leads to a more stable velocity and the ability for a team to mature into more rapid delivery.
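As a small, concrete illustration of that “measure taken at the end” framing, here is a minimal sketch, with hypothetical work items and point values (nothing from Doc’s book), showing that velocity is only tallied after a Sprint closes, from whatever happened to finish:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    points: int
    done: bool

def sprint_velocity(items: list[WorkItem]) -> int:
    # Tallied only after the Sprint ends, from items that actually finished.
    return sum(item.points for item in items if item.done)

def trailing_average(velocities: list[int], window: int = 3) -> float:
    # A trailing average smooths Sprint-to-Sprint noise, but it is still
    # backward-looking: it says nothing about why the numbers moved.
    recent = velocities[-window:]
    return sum(recent) / len(recent)

# Hypothetical history plus a just-finished Sprint 5.
history = [21, 18, 25, 20]
sprint_5 = [
    WorkItem("checkout flow", 8, done=True),
    WorkItem("audit logging", 5, done=True),
    WorkItem("search facets", 13, done=False),  # unfinished work counts for nothing
]

history.append(sprint_velocity(sprint_5))
print(history[-1], trailing_average(history))
```

Note that the unfinished 13-point story contributes nothing, and the trailing average, while smoother, still only describes the past.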

Velocity as a Measure of a Complex System

A third attribute of velocity has to do with how it is a single measure for measuring what are, in reality, complex systems. As Doc points out:

Think about the number of factors that go into a velocity measurement. There is the organizational mission, the broader business objectives, and the objective of the product itself. There are product owners, designers, architects, developers, testers, subject matter experts, security specialists, database specialists, governance, and production specialists involved. There are stand-ups, planning meetings, and retrospectives. There are epics, stories, and tasks all tracked on a board or in a system with multiple lanes representing different key states in the delivery of each single piece of work. After all these individuals interact with one another, responding to change, and collaborating with the customer in pursuit of working software, we take a single measurement. That single measurement represents the interactions of the individuals and all of their adaptions to change in the delivery of working software.

… While a simple measure of a complex system may sound ideal, in this case, it is generally insufficient. Velocity doesn’t tell us enough to be particularly useful. From velocity alone one cannot ferret out root causes. One cannot determine conclusively that the team is doing better (or worse) from a rising (or falling) velocity. Velocity is but one dimension to consider.

… Knowing a team’s velocity cannot reliably tell you if the team is ‘healthy.’ Adding to the challenge is the fact that velocity has no baseline standard. There is no commonly accepted range within which velocity falls, so our assessment of health gets even more difficult. It is common to see a team with a consistent velocity of 100 or more points per iteration being outperformed by a team with a consistent velocity of 30 points. Points don’t translate from team to team, save basic trending. In general, more points is more software delivered and fewer points is less software delivered. This is assuming, of course, that you don’t point tasks and other non-creation work. If you do that, you’ll likely see little to no correlation between velocity and delivery of software.

Velocity Anti-Patterns

When used in the way in which it is intended, as a capacity planning tool for a single team, velocity can provide some insight. However, as I have observed, along with many others, the unfortunate reality in many organizations is that velocity is employed in ways that can do considerable harm. Let’s take a look at some common examples of anti-patterns (not a comprehensive list, but a representative sample). I list these anti-patterns roughly in order of how harmful they are, with the most harmful first; those originate with behaviors at the leadership level.

Leadership Anti-Patterns

  • Using velocity to compare teams. Not only is this use of velocity arguably one of its most harmful applications, it’s also sadly all too common. Any leader who does this not only fails to understand velocity in a fundamental way, but also, in so doing, is likely damaging the morale of team members in the process.
  • Using velocity to shame teams. Unfortunately, in some organizations, leaders use what they might refer to as a “low velocity” as a means of exerting leverage on teams to “go faster.” Few things a leader can do are more toxic to a team than this.
  • Failing to meet the “Sprint Commitment.” A common variation on velocity shaming involves a notion that has persisted in the Agile community for quite some time: the “Sprint Commitment.” What this means is that the sum total of the estimates for the work items included in the Sprint (which we’ll call “planning velocity”) represents a “commitment” on the part of the team to finish at least that much work. In its worst form, the Sprint Commitment means that when a team falls short of that number, leaders and other stakeholders use it as an excuse to question how that team is working, with the implication that heads will roll if they don’t hit the expected velocity next Sprint. For more about this, see my blog post The Difference Between a Forecast and a Commitment.
  • Measuring individual velocity. There are a number of problems with trying to measure velocity on an individual rather than a team basis. To name one, it works contrary to the notion of a team being a team, not a collection of individuals. It also signals to team members that leaders are at least as interested in utilization at the individual level as they are in value delivery at the team level.
  • Comparing estimates to actuals. This particular anti-pattern sometimes manifests among people who have a history with traditional project management. Regardless of the reason, spending time calculating actuals, not to mention comparing them with estimates, is a form of waste, and this too signals to team members that leaders feel a need to monitor their utilization rates.

Team Anti-Patterns

When it comes to team behaviors that constitute anti-patterns, they tend to happen either as a reaction to leadership behaviors such as those described above, or because velocity is not fully understood, or both:

  • Inflating estimates. When faced with pressure to increase velocity, some teams feel they have no choice but to change how they estimate. There is nothing wrong with being conservative with estimates. However, this particular anti-pattern, when it manifests as consistently increasing estimates across the board, has a number of negative consequences. For instance, it’s natural to assume that team velocity will increase, but even if it does, there is probably little change in throughput, meaning how much work actually gets done (see the sketch after this list).
  • Taking partial credit. What this can look like: a team finishes some but not all of the work for a story, and at the end of the Sprint, splits the story into two parts, “taking credit” for one portion of the work and deferring the rest to the next Sprint, even though nothing happened during the Sprint to change the nature of the work itself. If taking partial credit is a rare occurrence, it does little harm, but if it becomes a common anti-pattern, it’s important for team members to recognize it and take steps to stop it.
  • Splitting. There is nothing wrong with story splitting per se; in fact, it’s desirable to split work into reasonably small pieces, to help with flow. What we’re talking about here is splitting a story into multiple pieces, where the sizes of the resulting pieces, when added together, are MUCH greater than the size of the original parent story, even though the parameters of the work remain unchanged. If this is due to a lack of clarity when the story was first sized, and new information has emerged that changes the nature of the work considerably, it can be a natural consequence. However, when teams are pressured to increase velocity, they may also feel pressure to use story splitting as an opportunity to get more “credit” for more or less the same amount of work.

Velocity and Probabilistic Forecasting

Having touched on ways that people employ velocity, often to negative effect, let’s examine a thought process within which velocity can actually support decision-making.

What is the Difference Between a Deterministic and a Probabilistic Forecast?

Let’s start with a conventional example of a forecast, where we might be looking out about a calendar quarter into the future and saying something like this, if we’re working on the next release of the “Consumer Confidence Assessor” app …

“Based on the work in the Product Backlog that’s in-scope for Release 2.0 of the Consumer Confidence Assessor (and some additional assumptions, like how many teams will be working on it), we think we can finish Release 2.0 in 12 weeks” (or 6 Sprints, if we’re using Scrum and are on a 2-week Sprint cadence).

The example above, which is focused on a single possible outcome (finishing in 12 weeks, in this case), is a “deterministic forecast.”
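Under the hood, a deterministic forecast like this usually boils down to dividing remaining scope by an average velocity. Here is a minimal sketch, with invented numbers chosen to line up with the 12-week example above; notice that the output is a single number, with no sense of how wrong it might be.

```python
# Hypothetical numbers chosen to match the 12-week example above.
remaining_points = 240                   # backlog in scope for Release 2.0
recent_velocities = [42, 38, 45, 35]     # last four Sprints
average_velocity = sum(recent_velocities) / len(recent_velocities)   # 40.0

sprints_needed = remaining_points / average_velocity    # 6.0
weeks_needed = sprints_needed * 2                       # 2-week Sprint cadence
print(f"Forecast: about {sprints_needed:.0f} Sprints ({weeks_needed:.0f} weeks)")
```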

A probabilistic forecast has two components, not one:

  • A range
  • A probability

Getting back to the example above, a probabilistic forecast would sound more like this for the “Consumer Confidence Assessor” app …

There is an 85 percent chance we can finish Release 2.0 in 12 weeks (6 Sprints) or less.

It’s important to point out that the outcome in the example above is one of a range of outcomes. As part of that same analysis, we might also have concluded that there is a “60 percent chance we can finish Release 2.0 in 10 weeks (5 Sprints) or less.” (Conversely, if we set a date further out on the calendar, the probability would increase accordingly.)
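For anyone wondering where a number like “85 percent” comes from, one common approach is a Monte Carlo simulation over the team’s own velocity (or throughput) history. The sketch below uses invented numbers and is only meant to show the shape of the technique, not any particular forecasting tool.

```python
import random

random.seed(7)   # reproducible for the example

remaining_points = 240
velocity_history = [42, 30, 51, 38, 26, 45, 35, 48]   # hypothetical past Sprints

def sprints_to_finish(history: list[int], scope: int) -> int:
    # One simulated future: resample past Sprint velocities (with replacement)
    # until the remaining scope is burned down.
    done, sprints = 0, 0
    while done < scope:
        done += random.choice(history)
        sprints += 1
    return sprints

trials = 10_000
results = [sprints_to_finish(velocity_history, remaining_points) for _ in range(trials)]

for horizon in (5, 6, 7, 8):
    odds = sum(r <= horizon for r in results) / trials
    print(f"Chance of finishing within {horizon} Sprints: {odds:.0%}")
```

Sampling past Sprints with replacement treats the team’s own history as the best available model of its future variability, and a single run yields the whole range of horizons and probabilities rather than one date.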

What is the Purpose of a Forecast?

It’s important to acknowledge a basic reality when it comes to forecasts, and about where estimation potentially comes into play in this context (if we do perform estimation):

  • The goal is not to make estimates “more correct”
  • The goal is to make estimates (and therefore forecasts) “more useful”

Note: How teams estimate, not to mention whether they estimate, is an important topic when it comes to velocity. However, in the interest of brevity, I’m choosing not to delve into estimation here.

To summarize, when people use velocity as one data point in a probabilistic forecast, not only are they far less likely to do harm, they also put velocity to work in a way that is consistent with what it can and cannot tell us.

Conclusion

There is plenty more that can be said about the topic of velocity. Because there is so much information about velocity available elsewhere, in books and other sources, I’m going to pause here for now.
