4 Often Overlooked KPIs for Measuring Software Development Productivity

Most software development performance metrics suffer from 2 major deficiencies. Here are the 4 KPIs that will help you avoid those pitfalls.

Javier Hertfelder
Crowdbotics
5 min read · Apr 8, 2019

After reading "Technology Radar" from Thoughtworks, I decided to buy a book they recommend in one of its chapters: Accelerate, by Nicole Forsgren. The book presents a scientific approach to productivity in modern software development. After reviewing 2,000 organizations, Forsgren outlines key takeaways on how to improve your company culture and performance.

In this article I am going to focus on how to effectively measure software development performance, and give examples of how we implemented these metrics at FXStreet.

Over the years there have been many attempts to measure the performance of software teams. The problem is that most models suffer from 2 major deficiencies:

  1. They focus on outputs instead of outcomes.
  2. They focus on individual performance rather than team or company performance.

In the not-so-glorious past, some companies used the number of lines of code to measure performance and productivity. The flaws of this approach are obvious: fewer lines are usually more maintainable, but, on the other hand, a very compact piece of code can be difficult to understand. So this KPI is not ideal in most cases.

With the introduction of Agile methodologies, a popular metric became velocity. Despite being far better than lines of code, it is still tricky and only works for isolated teams, or for companies where team members do not change much. In large organizations, this measurement can be used to compare productivity between teams, which leads them to inflate their estimates.

In Accelerate, Forsgren, Humble and Kim identify 4 metrics that avoid the 2 common pitfalls explained above.

  • Lead Time
  • Deployment Frequency
  • Change Fail Percentage
  • Mean Time to Restore (MTTR)

Lead Time

Lead time tells you how long it takes from a customer making a request to the request being satisfied. We have changed the definition a little for the sake of simplicity: for us, it is how long a task takes from the moment we start it until it delivers value to our customers (i.e., it is in production).

We use Jira as our task management tool and, luckily, Jira has the Control Chart report. It shows you how much time a task (story, bug, etc.) spends in each state, so after each sprint we use this chart to derive lead time.
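If you want to automate this instead of reading it off the chart, the same number can also be pulled from the Jira REST API. Below is a minimal sketch, not our exact setup: it assumes a Jira Cloud instance with an API token, uses a placeholder project key and JQL, and takes creation and resolution dates as a rough proxy for "started until in production".

```powershell
# Minimal sketch: average lead time (in days) for issues resolved in the last two weeks.
# The base URL, credentials, project key and JQL below are placeholders.
$jiraBase = "https://yourcompany.atlassian.net"
$token    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("user@example.com:YOUR_API_TOKEN"))
$headers  = @{ Authorization = "Basic $token" }

$jql      = [uri]::EscapeDataString("project = FXS AND statusCategory = Done AND resolved >= -14d")
$response = Invoke-RestMethod -Headers $headers `
    -Uri "$jiraBase/rest/api/2/search?jql=$jql&fields=created,resolutiondate&maxResults=200"

# Jira returns dates like 2019-04-08T10:00:00.000+0200; add a colon to the offset so .NET can parse them.
function ConvertTo-Offset([string]$s) { [datetimeoffset]::Parse(($s -replace '([+-]\d{2})(\d{2})$', '$1:$2')) }

$leadTimes = $response.issues | ForEach-Object {
    ((ConvertTo-Offset $_.fields.resolutiondate) - (ConvertTo-Offset $_.fields.created)).TotalDays
}
"Average lead time: {0:N1} days" -f ($leadTimes | Measure-Object -Average).Average
```

If you track lead time from an earlier status than "created" (for example "In Progress"), the issue changelog gives you the exact transition timestamps instead.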

Number of Deploys

Number of deploys tells you how many times software has been deployed to production in a given period of time, usually a sprint.

Since we use Azure DevOps to manage our CI and CD pipelines, we have created a PowerShell step in the release process that stores what is deployed and when.

Typical deployment pipeline at FXStreet

And this is stored in an Azure Table like this:

Table where we store the name of the service and the environment deployed
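As an illustration, a release step like that could look roughly like the sketch below. This is only a sketch, not our exact script: it assumes the Az.Storage and AzTable PowerShell modules, and the storage account, key, table name and columns are placeholders.

```powershell
# Minimal sketch of a release step that logs each deployment into an Azure Table.
# Requires the Az.Storage and AzTable modules; account, key and table names are placeholders.
$ctx   = New-AzStorageContext -StorageAccountName "fxsmetrics" -StorageAccountKey $env:STORAGE_KEY
$table = (Get-AzStorageTable -Name "Deployments" -Context $ctx).CloudTable

# Azure DevOps exposes release metadata as predefined environment variables.
Add-AzTableRow -Table $table `
    -PartitionKey $env:RELEASE_DEFINITIONNAME `
    -RowKey ([guid]::NewGuid().ToString()) `
    -Property @{
        Environment = $env:RELEASE_ENVIRONMENTNAME
        DeployedAt  = (Get-Date).ToUniversalTime().ToString("o")
    }
```

With every deployment recorded this way, the number of deploys per sprint is just a count of rows over the sprint's date range.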

Change Fail Percentage

Change fail percentage tells you the percentage of changes to production that fail and require remediation, such as a hotfix, rollback, or fix-forward.

This is the trickiest one to measure; there are various strategies you can use to record this KPI:

  • Record in a table every time a deployment to production causes a unit/integration/UI/system test to fail.
  • If you have pre-deployment approvals, record automatically whenever someone rejects a deployment.
  • If someone spots a bug after a deployment, create a bug linked to that deploy and count them after every sprint.

In our case the approach is manual: we review all the deployments of a sprint (picture 2), looking for the same service being deployed twice within a short period of time, and we take notes whenever someone spots a bug after a feature is released.
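Whichever strategy you pick, the arithmetic is simple: failed changes divided by total changes in the period. As a hedged example, if failed deployments were flagged in the table above (a hypothetical Failed column), the sprint's percentage could be computed like this:

```powershell
# Minimal sketch: change fail percentage for a sprint, read from the deployment log above.
# Assumes the same $table object and a hypothetical boolean "Failed" column; the dates are placeholders.
$rows   = @(Get-AzTableRow -Table $table -CustomFilter "(DeployedAt ge '2019-03-25') and (DeployedAt lt '2019-04-08')")
$failed = @($rows | Where-Object { $_.Failed -eq $true })
"Change fail percentage: {0:P0}" -f ($failed.Count / [math]::Max($rows.Count, 1))
```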

Mean Time to Restore

Mean time to restore tells you how long it takes to recover from a critical situation such as a service outage.

Unless you have this type of failure on a daily basis, this is an easy measure to track manually: just record the time from the beginning of the outage until it is solved. If you monitor all of your services with an APM like New Relic, Datadog or Application Insights, you can easily get this number. At FXStreet we also create a postmortem document every time something like this happens, which helps us track this KPI.
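For completeness, the calculation itself is trivial once you have the outage start and end times from the postmortems. A tiny sketch with made-up timestamps:

```powershell
# Minimal sketch: mean time to restore from postmortem timestamps (illustrative values only).
$outages = @(
    @{ Start = [datetime]"2019-03-12 10:05"; End = [datetime]"2019-03-12 10:41" },
    @{ Start = [datetime]"2019-04-02 16:20"; End = [datetime]"2019-04-02 17:03" }
)
$minutes = $outages | ForEach-Object { (New-TimeSpan -Start $_.Start -End $_.End).TotalMinutes }
"MTTR: {0:N0} minutes" -f ($minutes | Measure-Object -Average).Average
```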

Conclusion

After reading Accelerate, I decided to put these 4 metrics into practice and see if they could improve our software delivery process as a team. We have been using this approach for the last 3 months, and I can see 2 advantages:

  • Finally, I can present something meaningful and digestible to the non-technical directors in the management committee, and to all employees in our general meetings, simply because these 4 metrics are easy to understand for everybody.
  • Even better, since our team is very competitive, we want to reduce the Lead Time to Production as much as we can. This led us to a new and better way of splitting our Scrum stories: every sub-task coming from the backlog must be able to be deployed to production independently.

