Calculating team velocity using iterations

The team's velocity is calculated as a three-week rolling average of the points associated with user stories accepted during iterations. In other words, at the end of every week you total up the number of points accepted by product management. The team's velocity for a given week is the average number of points accepted during the last three weeks, multiplied by the team strength percentage, which should normally be 100%.
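As a concrete sketch, the weekly calculation might look like this (the function and variable names are mine, not from any particular tool, and the numbers are illustrative):

```python
def velocity(accepted_points, team_strength=1.0, window=3):
    """Rolling-average velocity for the most recent week.

    accepted_points: points accepted per week, oldest week first.
    team_strength: fraction of full capacity for the current week.
    window: how many trailing weeks to average (three, per the text).
    """
    recent = accepted_points[-window:]
    return (sum(recent) / len(recent)) * team_strength

# Last three weeks the team had 10, 12, and 11 points accepted:
print(velocity([10, 12, 11]))                      # 11.0 at full strength
print(velocity([10, 12, 11], team_strength=0.75))  # 8.25 with one of four out
```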


One of the ways to make sure that team velocity is a reliable metric is to track team strength. If a team has 4 engineers, then 100% team strength describes any week in which all 4 worked full-time. If one of those engineers is on vacation and nobody will be filling in for them, then team strength drops to 75% while they are gone. If the team takes Friday off for a holiday, then team strength drops to 80% for that week.
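Counting strength in engineer-days, the two examples above work out like this (a hypothetical helper of my own, assuming a 5-day work week):

```python
def team_strength(engineers, engineer_days_out=0, holiday_days=0):
    """Fraction of full capacity for one week, assuming a 5-day week.

    engineer_days_out: individual days missed (vacation, sick leave).
    holiday_days: whole-team days off that week.
    """
    full = engineers * 5
    worked = full - engineer_days_out - engineers * holiday_days
    return worked / full

# One of 4 engineers out for the entire week:
print(team_strength(4, engineer_days_out=5))  # 0.75
# The whole team takes Friday off for a holiday:
print(team_strength(4, holiday_days=1))       # 0.8
```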

One of the reasons that modern agile teams like Pivotal Tracker instead of JIRA is that Tracker automatically and naturally does velocity calculations, including accounting for team strength.

Some of my teams at 2U have seasonal strength issues, because we are basically a large online university. There are times of the year when those teams are besieged with more users and unforeseen issues than others. We use team strength to moderate expectations for velocity during those times.


Team velocity would be much more complicated to calculate if iterations were variable length. In fact, it would be harder to trust team velocity to be reliable if iterations differed from each other in any way. That's why modern agile iterations are like hours on a clock (except that they last one calendar week).


Iterations are nothing like Scrum sprints, so we don’t use those terms interchangeably.

First of all, real sprints (like in running) are unsustainable by definition. Given that our teams are trying to achieve sustainable pace, then it doesn’t make sense to talk about sprinting. An engineering project is more like a marathon than a sprint.

Scrum calls for sprints to be started and stopped manually by the team. This means that they are not exactly the same length. Iterations are always the same length. Iterations do not need to be started or stopped — they are simply the weeks on the calendar.

Scrum sprints are named by their teams during sprint planning. Iterations are time boxes that materialize without ceremony. They just happen automatically and perpetually until someone says the project is finished or on hold.

If we need to refer to a particular iteration in conversation, we do so using the iteration number or start date.

“What happened in Iteration 4? We had only one story accepted that week!”


First, their work product is released continually and automatically into an acceptance testing environment, sometimes referred to as staging. Second, with iterations being simply arbitrary one-week time boxes used to calculate team velocity, time devoted to ceremony is drastically curtailed. There are no long and boring planning meetings chewing up productivity.


One of the most important benefits of our 1, 2, 4 point system is that it is easy to implement in a repeatable and consistent fashion. But no matter what, remember that consistency is key. The expectation is that the way the team assigns points to stories should fall into repeatable patterns. The kinds of chores they do, the way they address technical debt, the number of bugs: all of these should stay fairly consistent from week to week.

It’s okay if your estimates are wrong, as long as they’re wrong in similar ways from week to week.

Remember that velocity has one primary reason for existence: for product stakeholders to be able to predict when in the near future they’ll hit certain milestones and be able to release new features.

The reason that consistency is so important is that without it, the ability to predict the future with any confidence is destroyed.


Once you get past the first month or two of a project, the velocity should settle into a tight, consistent range. Volatility is the measure of consistency (or lack thereof) in velocity over time. It can be measured by using the standard deviation or variance between velocity scores from week to week. The higher the volatility, the riskier the predictions made based on that team’s velocity.
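Using standard deviation as the volatility measure, a quick comparison might look like this (illustrative numbers, not real team data):

```python
from statistics import stdev

def volatility(weekly_velocities):
    """Standard deviation of weekly velocity; lower means more consistent."""
    return stdev(weekly_velocities)

stable = [10, 11, 10, 12, 11]  # a team in a tight, consistent range
choppy = [4, 18, 7, 20, 6]     # a team whose velocity swings wildly

print(volatility(stable))  # small: predictions are relatively safe
print(volatility(choppy))  # large: predictions based on this are risky
```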

High volatility signals a problem, whether in the way the team is managing the project or in something about its environment. It’s important that someone notice the volatility and investigate its cause.


I’m hoping that over time I’ll be able to establish one of our most stable and mature teams at 2U as a velocity benchmark, in order to derive beta values for the other teams. Beta values are traditional measures of volatility in markets.

For example, a team with a beta value of 1.5 will have historically moved 150% for every 100% move in the benchmark, based on their relative velocity. Conversely, a team with a beta of .9 has moved 90% for every 100% move of the benchmark team. The reason I think this makes sense is that I believe there are macro forces at play that affect all team velocities, just to varying degrees. I want data to start figuring out what those forces might be!
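Borrowing the market formula, beta is the covariance of the team's week-over-week percentage moves with the benchmark's, divided by the variance of the benchmark's moves. A hypothetical sketch (my own function names; the series are made up so that the team moves exactly 1.5x the benchmark):

```python
def pct_moves(velocities):
    """Week-over-week percentage changes in a velocity series."""
    return [(b - a) / a for a, b in zip(velocities, velocities[1:])]

def beta(team_velocities, benchmark_velocities):
    """cov(team moves, benchmark moves) / var(benchmark moves)."""
    t = pct_moves(team_velocities)
    m = pct_moves(benchmark_velocities)
    mean_t = sum(t) / len(t)
    mean_m = sum(m) / len(m)
    cov = sum((a - mean_t) * (b - mean_m) for a, b in zip(t, m)) / len(t)
    var = sum((b - mean_m) ** 2 for b in m) / len(m)
    return cov / var

bench = [10.0, 11.0, 10.0, 12.0]
# Construct a team whose weekly moves are exactly 1.5x the benchmark's:
team = [10.0]
for move in pct_moves(bench):
    team.append(team[-1] * (1 + 1.5 * move))

print(round(beta(team, bench), 2))  # 1.5
```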

Now I must admit that I have no idea if this beta value idea will be feasible and/or useful, but if it does work, it could really give my product team an additional measure of risk mitigation when using velocity to predict milestones.