Forget dumb productivity measures and focus on software delivery performance with Accelerate’s Four Key Metrics
The book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Nicole Forsgren PhD, Jez Humble and Gene Kim has been really important for Redgate over the past 12 months. It's helped us find software delivery terminology that teams and the wider company understand and buy into, and allowed us to compare how we're performing against the best in the business.
Over the years my fellow Redgaters and I have become resistant to "productivity" measures like velocity, lines of code, code coverage and time spent on tasks, and related techniques like story points and time estimates. We've used practices like that in the past but have found their usefulness is outweighed by their drawbacks. Those metrics have given teams (and the company) a false sense of certainty, or have been used, sometimes unconsciously, to measure the performance of teams and individuals. So before Accelerate, some Redgate development teams settled on using cycle or flow time to gauge how well they were delivering. And some teams didn't use a measure for this at all, focussing squarely on outcome metrics instead, like feature usage. We never felt either approach provided the full picture of software delivery performance, though. For a start, both ignored quality and stability.
Then about a year ago Jeff Foster, Redgate’s Head of Product Engineering, discovered Accelerate and shoved a copy into my hands. We both immediately saw the importance of the Four Key Software Delivery Metrics it champions and how they could help move the organisation in a positive direction.
To kick off that initiative we really pushed to engage the dev organisation with the content of the book, in the hope that people would see it positively rather than as a suspicious measurement exercise by management. We ran a book club, talked about the underlying State of DevOps report and called out the positive methodologies and cultural characteristics the results supported. People seemed to 'get it'. In fact, they were impatient to get started.
As a result of the lessons in the book, we've implemented the Four Key Metrics across all our product development teams. Thanks to the tenacity and skills of Gareth Bragg, one of Redgate's Technical Coaches, each team has clear graphs available to track the measures. From there our Development Managers provided a little encouragement to get all the teams to put the graphs up on each team's large dashboard screen and reference them in day-to-day work. You can see an example below.
I thought we'd have to nudge much harder to see improvement in the measures but, it turns out, as our teams already have a principle of continuous improvement and are really keen to do great work, they have naturally taken action to push the Accelerate metrics in the right direction. Sure, sometimes Development Managers have had to do a little coaching and introduce some agile/lean techniques to inexperienced teams, and you can see the results of one of those engagements in the above image (down and to the right on the graphs is great). But in most cases the teams have driven improvements themselves. I guess it's the old Peter Drucker adage: what gets measured gets done!
In addition to the benefits for teams, I’ve found that the metrics have finally given the development organisation a language to describe the performance of our software delivery engine to the wider business in terms we are all comfortable with…
In development we don’t have the impactful commercial measures of revenue, invoices raised or leads that our friends in Sales and Marketing can report on. Other product-focussed measures like active users, retention and churn are affected by forces outside of the influence of the development teams. Previously we might have had to rely on reporting the number of releases we had shipped or deadlines we’d met, both fairly numbskulled measures of productivity that ignore (at least) three maxims of agile product development:
- Focus on outcomes not output,
- Maximise the amount of work not done,
- Learn fast, learn often.
All of these are underpinned by one key idea: that you don't really know, for certain, what is going to be a success in your product. You might have a really good idea that your new feature will be well received by users, you might have a super smart Product Manager who bets her reputation on it working in the market or you might have an insightful Sales organisation telling you exactly what customers are asking for. You might have all that, but you don't have absolute certainty. You don't know that what you build is going to work for people. You might have misread the market, mis-implemented the feature or mistaken a user want for a user need. You might have done nothing wrong at all, and still nobody buys the thing you made.
To channel Cynefin, your users, product, market and the wider economic environment are a complex system, not a complicated one. It is unpredictable. Unknowable. If you accept that then you'll realise your strategy should take an experimental approach (to probe, sense and respond, as Cynefin would put it) and that productivity measures are folly. Measuring the output of your plans, releases and deadlines, no matter how carefully created, would be to pretend the problem you have is a predictable one and your predictions were 100% "right" from the start.
We now measure and report to the business on the Four Key Metrics:

- Lead Time: how long it takes for us to get something done and released so we can learn from it,
- Deployment Frequency: how often we deliver something to our users,
- Change Failure Rate: how often we break something so that users lose the ability to use our product,
- Mean Time To Restore Service: how quickly we restore users' ability to use the product.

Aside from the obvious benefits of improving the performance of the organisation in these areas, thanks to the State of DevOps Report we can also compare ourselves to other software organisations, sharing where we are on the Elite / High / Mid / Low performer spectrum. Equipped with that understanding we can drive further improvements.
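To make the definitions concrete, here's a minimal sketch of how the four metrics could be computed from deployment records. This is purely illustrative and not how Redgate's tracking works; the record shape (work-start time, deploy time, a failure flag and a restore time) is an assumption, and real numbers would come from your CI/CD and incident-tracking systems.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records for a two-week reporting window.
deployments = [
    {"started": datetime(2019, 6, 3), "deployed": datetime(2019, 6, 5),
     "failed": False, "restored": None},
    {"started": datetime(2019, 6, 4), "deployed": datetime(2019, 6, 10),
     "failed": True, "restored": datetime(2019, 6, 10, 4, 0)},
    {"started": datetime(2019, 6, 9), "deployed": datetime(2019, 6, 12),
     "failed": False, "restored": None},
]
window_days = 14

# Lead Time: time from starting work to release (median is commonly used).
lead_times = sorted(d["deployed"] - d["started"] for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Deployment Frequency: deployments per day over the window.
deploy_frequency = len(deployments) / window_days

# Change Failure Rate: fraction of deployments that broke things for users.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Mean Time To Restore Service: average time from failure to restored service.
restore_times = [d["restored"] - d["deployed"] for d in failures]
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(f"Median lead time:     {median_lead_time}")
print(f"Deploys per day:      {deploy_frequency:.2f}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"Mean time to restore: {mttr}")
```

Even a toy calculation like this makes the trade-offs visible: shipping smaller changes more often tends to push lead time and restore time down without hurting the failure rate.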
As I mentioned before, Gareth Bragg played a big part in getting these metrics measured and embedded in our teams. If you'd like to know about our first lessons from the metrics and how we actually track them in our teams, he's written a couple of great blog posts below:
- Learning from the Accelerate “Four Key Metrics”
- Sharing our approach to tracking the Accelerate “Four Key Metrics”
You should obviously read the book, if you have not already, and check out the 2019 State of DevOps Report itself.
If you have any more questions about Accelerate, the Four Key Measures or how we use them at Redgate, give me a shout at firstname.lastname@example.org.