Learning from the Accelerate “Four Key Metrics”

Gareth Bragg
Jul 25, 2019 · 5 min read
Photo by Heather Mount on Unsplash

There’s been a lot of excitement about the book Accelerate, which summarises research from the past several years of the State of DevOps report (which Redgate sponsors).

Perhaps the most popular topic is the “four measures of software delivery performance”, sometimes called “Accelerate metrics” or simply “four key metrics”.

We’ve started tracking these at Redgate, and we’re learning to use them to drive improvements in how we deliver software.

How to track them?

Rather than tracking these numbers by hand, we added tooling to our delivery pipeline to track and present this data for us.

Inspired by some open source work by the clever folks at Praqma, we developed some lightweight PowerShell that can interrogate Redgate’s various git repositories to generate the four key metrics using our source code history as a source of truth. We’re looking into open sourcing this code and sharing it with the community, so please let us know if you’d be interested.

[edit: This code is now available from the RedGate.Metrics repo on GitHub.]
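To illustrate the idea (this is a minimal Python sketch with hard-coded, hypothetical release records, not the actual RedGate.Metrics PowerShell), the two tempo metrics fall out of knowing when each release was tagged and when its commits were made:

```python
from datetime import datetime
from statistics import mean

# Hypothetical release records: (release tag time, times of the commits that
# shipped in that release). The real tooling derives these from git history;
# here they are hard-coded purely to illustrate the calculation.
releases = [
    (datetime(2019, 7, 1), [datetime(2019, 6, 26), datetime(2019, 6, 28)]),
    (datetime(2019, 7, 8), [datetime(2019, 7, 3), datetime(2019, 7, 5)]),
]

def deployment_frequency(releases, window_days):
    """Average number of releases per week over the observation window."""
    return len(releases) / (window_days / 7)

def lead_time(releases):
    """Mean days from a commit to the release that delivered it."""
    gaps = [(released - commit).days
            for released, commits in releases
            for commit in commits]
    return mean(gaps)

print(f"Deployment frequency: {deployment_frequency(releases, 14):.1f}/week")
print(f"Mean delivery lead time: {lead_time(releases):.1f} days")
```

Using source-control history as the source of truth keeps the metrics honest: they reflect what actually shipped, not what a spreadsheet says shipped.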

How to use them?

We also want our teams to be able to optimise themselves. Redgate prides itself on encouraging autonomous, cross-functional teams to do their best work. We work in a high-trust environment, and teams take full responsibility for the delivery and quality of their work. That led us to expose these metrics at a per-team level as well, giving teams the tools to understand their own performance, on the strict understanding that we will not be comparing this across teams.

Our tooling makes it easy to report on the four key metrics for any subset of Redgate, so we do exactly that.

What have we found so far?

As a long-term agile house, we’ve always had a visible culture around the cadence of our work. That comes across in the tempo-based metrics: deployment frequency and delivery lead time.

Exposing this data has helped us identify some interesting working patterns that we may not have seen otherwise.

Understanding tempo

In practice, that means teams often have stable deployment frequencies of around once a week, and delivery lead times around the five-day mark:

Tempo metrics for one of our products

According to Accelerate, that would put a typical team at the positive end of the “medium performers” bracket.

But what if we look at Redgate as a whole?

Tempo metrics for Redgate

Firstly, we see a lot more variation. More on that in a moment.

The big eye opener for us here is Redgate’s global deployment frequency. We’re releasing product improvements multiple times a day. While that’s great, it means our users are being asked to install updates far more often than we realised before. This is putting pressure on how we get those updates into their hands, and what our upgrade process looks like.

Why the variation?

Variation in delivery lead time took a bit more investigation, but it highlighted teams that were struggling to deliver their work incrementally. This led to long-lived feature branches, and to those teams not getting valuable feedback on their work. Identifying this let us work with one such team to determine how to slice a tricky piece of work into smaller deliverable pieces and get that value out to users sooner.

Understanding Stability

Because failure was seen as a special case, we didn’t keep many records, but we’re starting to gather that data now. This has been made easier by another lightweight change to our deployment pipeline.

Current stability metrics for Redgate

We’re already seeing that our change failure rate is higher than we expected; a valuable insight for a company that prides itself on quality. Worse is that recovery can take us multiple days. We mitigate this by identifying failed releases and preventing more users from adopting them, but exposing the truth about our stability practices is showing significant areas for Redgate to improve.
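The two stability metrics can be computed from the same kind of release records. As before, this is an illustrative sketch over made-up data (the field names and values are hypothetical), not our pipeline’s code:

```python
from datetime import datetime
from statistics import mean

# Hypothetical release log: when each release shipped, and when service was
# restored if it failed (None means the release succeeded). Real data would
# come from the deployment pipeline.
releases = [
    {"released": datetime(2019, 7, 1, 9), "restored": None},
    {"released": datetime(2019, 7, 3, 9), "restored": datetime(2019, 7, 5, 9)},
    {"released": datetime(2019, 7, 8, 9), "restored": None},
    {"released": datetime(2019, 7, 9, 9), "restored": None},
]

def change_failure_rate(releases):
    """Fraction of releases that failed and needed remediation."""
    failed = [r for r in releases if r["restored"] is not None]
    return len(failed) / len(releases)

def mean_time_to_restore(releases):
    """Mean hours from a failed release to service being restored."""
    gaps = [(r["restored"] - r["released"]).total_seconds() / 3600
            for r in releases if r["restored"] is not None]
    return mean(gaps)

print(f"Change failure rate: {change_failure_rate(releases):.0%}")
print(f"Mean time to restore: {mean_time_to_restore(releases):.0f} hours")
```

With data like this in one place, a multi-day recovery (as in the failed release above) is immediately visible rather than buried in incident folklore.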

Talking the talk

By framing conversations around these four key metrics, we’re in a far better position to talk objectively about how we’re delivering software and identify local or global areas to improve.

What comes next?

More importantly, we’ll use those insights to inform how we work, and to better judge the impact of introducing new ways of working.

We’ll be sharing our successes (and failures) at ingeniouslysimple.com.
