The “What” and “Why” Behind Performance Metrics: Part I

Francois Biron
Published in SSENSE-TECH · 5 min read · Jun 26, 2020
Image by Philip Uglow from Pixabay

What do we measure at SSENSE, and why? In this first installment of a three-part series, we’ll cover four of the key metrics we currently track for each Scrum team, which we also aggregate at the department level for higher-level insights. But more importantly, we’ll delve into why we chose them. Let’s dive in.

Why the “Why” is So Important

Our Technology department has experienced rapid growth over the last few years. We now have over 20 development teams, all working with the Scrum framework. We’ve also undergone multiple reorganizations, both to ensure an adequate foundation for these new teams, and to reflect the evolution of our department’s vision. As we’ve expanded, with various teams specialized in different technologies and business domains, we launched an internal mobility program, enabling software engineers to move from one team to another.

Given this context, it became clear that we needed to reduce the friction that comes with changing teams, for both individual contributors and managers. We also wanted quick and high-level insights on how the teams were performing. As our management layer expanded, we decided to standardize our team metrics to ensure that everyone across our department tracked and reported on the same metrics, in the same format.

As we started to centralize team metrics in our aggregated Sprint Tracker and gave the teams more visibility, I realized that sharing “what” we track without a clear “why” could lead to misinterpretation. The “why” puts everything into perspective. For a team, knowing that we track their sprint velocity might lead them to believe they are being evaluated or compared to other teams based on this relative performance indicator. Here’s a scoop: they aren’t. They might also grow concerned that they are being individually evaluated on their ability to close more story points. Wrong again! We need to avoid turning morning stand-ups into daily shame sessions. Velocity, especially at an individual level, is not a reliable measure for performance or overall contribution. Individual performance management is not the focus of this article, but if your strategy relies heavily on velocity, chances are you’re doing it wrong.

So not only is it crucial to know why you’re tracking metrics, it’s equally important to share this information with employees so everyone is on the same page.

Completion Rate

This is probably the most common and easiest metric to track for Scrum teams. Simply divide the number of story points completed by the number committed to at the beginning of the sprint. But why do we care? Many teams tend to overcommit. Perhaps they feel pressured to take on larger commitments to meet deadlines. Or maybe they have trouble sizing what a realistic commitment would be. They may even be convinced that committing to more story points will make them more productive.
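The calculation is as simple as it sounds. As a minimal sketch (the function name is illustrative, not from the article):

```python
def completion_rate(points_completed: float, points_committed: float) -> float:
    """Sprint completion rate: points completed / points committed at sprint start."""
    if points_committed <= 0:
        raise ValueError("committed points must be positive")
    return points_completed / points_committed

# A team that committed to 40 story points and completed 34:
print(f"{completion_rate(34, 40):.0%}")  # → 85%
```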

We believe that in the long run, the opposite is true: teams constantly failing to meet their sprint commitments can become numb and lose their motivation. It also blurs the predictability of their output. A healthy completion rate, where teams regularly meet or slightly exceed their commitments, may result in a sense of accomplishment, reduce stress, and provide more time and incentives to participate in other activities that will contribute to their career growth and overall department performance.

That’s why we keep an eye on this metric: to detect patterns, raise awareness with the team members, and coach them into adjusting their commitments in the following sprints.

Average Daily Velocity

We average the daily velocity over five sprints. To calculate the daily velocity for a sprint, we divide the number of points done in the sprint by the total number of days worked by all team members. For example, if five developers in a team all worked every day for two weeks, they have 50 working days in the sprint. If someone took a week off, they have 45 working days. Completing 36 story points in a sprint of 45 working days would give us a daily velocity of 0.8.
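In code form, the calculation above might look like this (a sketch, with illustrative names; each sprint is a pair of points done and total working days):

```python
def daily_velocity(points_done: float, working_days: int) -> float:
    """Points completed in a sprint divided by total working days of all members."""
    return points_done / working_days

def average_daily_velocity(sprints: list[tuple[float, int]]) -> float:
    """Average the per-sprint daily velocities over the given sprints (e.g. last five)."""
    return sum(daily_velocity(points, days) for points, days in sprints) / len(sprints)

# The article's example: 36 points over 45 working days
# (five developers, two weeks, one developer off for a week).
print(daily_velocity(36, 45))  # → 0.8
```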

I mentioned earlier that we coach our development teams into making more feasible commitments. I also mentioned that teams don’t always have an accurate feel for what their commitment should be at the beginning of the sprint. This is where average daily velocity comes into play. Perhaps, there is a public holiday coming up, and some of the team members will be taking additional time off. What would be an acceptable sprint commitment under the circumstances? By understanding the average daily velocity and by considering the number of available working days, you can make a more educated guess. Of course, it always comes down to one thing: the team making the commitment needs to be comfortable with it. The metric simply provides guidance.
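That educated guess can be sketched as a one-liner: multiply the average daily velocity by the working days actually available in the upcoming sprint. The helper below is hypothetical, not something the article prescribes:

```python
def suggested_commitment(avg_daily_velocity: float, available_working_days: int) -> int:
    """Rough sprint commitment: avg daily velocity × available working days, rounded down."""
    return int(avg_daily_velocity * available_working_days)

# Five developers, two-week sprint, minus a public holiday (5 team-days lost)
# and one developer taking 3 extra days off:
available_days = 5 * 10 - 5 - 3  # 42 working days
print(suggested_commitment(0.8, available_days))  # → 33
```

The rounding down is deliberate: the number is a starting point for discussion, and the team making the commitment still needs to be comfortable with it.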

Goal Achievement

While our teams may work on multiple priorities at any given time, every sprint should have one overarching goal, which represents the most important deliverable for the current iteration. If we are falling behind or face unplanned challenges in the sprint, everything else should be dropped to ensure proper completion of the goal. At the end of each sprint, we track whether the goal was achieved.

We track this mainly to identify bad habits and prioritization issues that may disrupt a team’s focus. It’s very different from the completion rate, because a goal may have been achieved without completing the entire sprint. On the flip side, perhaps we failed to achieve the goal and still ended up completing more story points than we had initially committed to. Perhaps, we received a lot of unplanned urgent requests, had poor communication within the team, or simply did not work on the right things. It’s important to understand what derailed us in order to regain our focus.

Groomed Backlog

Beyond the current sprint, how many story points do we have in the backlog ready to be worked on? Ideally, we want to keep a couple of sprints ready to go. This provides the team with insights on the roadmap and reduces the risk of not being able to groom stories on time for the next sprint. Having too many stories groomed is no better, as we risk requirements being outdated by the time we get to them. We strive to keep our backlogs clean and relevant.
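One way to sanity-check this balance is to express the groomed backlog as sprints of runway. The thresholds and names below are illustrative assumptions, not SSENSE's actual rules:

```python
def sprints_of_runway(groomed_points: float, avg_points_per_sprint: float) -> float:
    """How many sprints' worth of groomed story points are ready in the backlog."""
    return groomed_points / avg_points_per_sprint

runway = sprints_of_runway(groomed_points=70, avg_points_per_sprint=36)
if runway < 1:
    print("Backlog too thin: groom more stories before the next sprint.")
elif runway > 3:
    print("Backlog too deep: requirements risk going stale.")
else:
    print(f"Healthy runway: {runway:.1f} sprints groomed.")
```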

Most of the data we track is readily available after each sprint and can be gathered with the team at the beginning of the Retrospective. The time investment needed to maintain our reports is minimal and can even be partially automated. The area you should spend time on is understanding what metrics are relevant for your team to track. What works for us won’t necessarily work for everyone else. As mentioned earlier, metrics can also be misinterpreted, so select them wisely and be transparent on how they are used.

In Part Two, we’ll discuss technical performance and reliability metrics. Stay tuned!

Editorial reviews by Deanna Chow, Liela Touré, & Prateek Sanyal.

Want to work with us? Click here to see all open positions at SSENSE!
