Top 4 metrics to measure your Software Delivery Performance

Victor Coisne · Published in sourcedtech · Oct 17, 2019

Initially published on blog.sourced.tech

Over the past two decades, static code analysis tools such as SonarQube and Coverity have helped engineering teams ship more secure, higher-quality software faster than ever before.

However, in recent years, the shift to DevOps practices and the proliferation of developer tools have introduced a major challenge for the engineering leaders in charge of software delivery performance: a lack of end-to-end visibility into their DevOps pipeline, with critical data spread across silos.

To reduce their time to market, companies first need to measure it and establish a baseline. That makes tracking software delivery performance across all the tools involved in a given delivery pipeline of paramount importance for most businesses. Extracting, transforming and loading the data from all of these tools informs decisions made by and for engineering teams, which is a hard requirement in organizations that need to scale.

Accelerate, one of the most widely accepted books on the science of lean software and DevOps, suggests the following four metrics to measure software delivery performance:

  1. Change Lead Time
  2. Deployment Frequency
  3. Change Failure Rate
  4. Mean Time to Restore (MTTR)

In contrast with more limited static code analysis tools, source{d} lets you measure your software delivery performance by running flexible analyses over all of your software development life cycle data sources through advanced SQL queries. This higher level of abstraction introduces a new set of metrics for assessing software engineering effectiveness and quality. As agility and speed become more and more important, enterprises will inevitably turn to data to balance their ability to innovate quickly against software reliability and customer satisfaction.

In this article, we take a closer look at these four software delivery performance metrics and how engineering leaders can track them with source{d} Enterprise Edition (EE).

Change Lead Time

In the Accelerate book, DevOps experts Nicole Forsgren, Jez Humble and Gene Kim describe lead time for change as “the time it takes to go from code committed to code successfully running in production”. According to the 2019 State of DevOps Report, “Elite” performers have a lead time for changes of less than 1 day and “Low” performers have a lead time for changes that is between 1 month and 6 months.
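
To make this concrete, here is a minimal sketch of what such a query could look like in MySQL-style SQL. The commits and deployments tables and their columns are hypothetical placeholders for whatever your version control and CI/CD tools export, not the actual source{d} schema.

    -- Hypothetical tables: commits(hash, committed_at) and
    -- deployments(commit_hash, environment, status, deployed_at).
    -- Average lead time, in hours, from commit to the first successful production deploy.
    SELECT
      AVG(TIMESTAMPDIFF(HOUR, c.committed_at, fd.deployed_at)) AS avg_lead_time_hours
    FROM commits c
    JOIN (
      SELECT commit_hash, MIN(deployed_at) AS deployed_at
      FROM deployments
      WHERE environment = 'production' AND status = 'success'
      GROUP BY commit_hash
    ) fd ON fd.commit_hash = c.hash;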

To get better visibility into lead time bottlenecks and opportunities for improvement, source{d} offers a set of more granular metrics for tracking pull request and code review activity.

1.1. Pull Request Activity

Most software changes come in the form of pull requests (PRs) — or at least we can argue they should. Extracting PR metadata from your version control management systems such as GitHub, GitLab or Bitbucket can give you a lot of insight into the time it takes for your developers to collaborate on software changes without impacting quality.

Key metrics include (a sketch query follows this list):

  • Percentage of developers submitting PRs
  • Percentage of merged vs rejected PRs
  • Average & Median time before the PR is merged
  • Average time to merge based on # of lines added, deleted, modified in a PR
  • Activity & age of each PR
  • Average time for the CI to pass (in number of commits or time)
  • Average cycle time of PRs by ticket type (bug, feature,…), obtained from the linked issue
  • Average runtime per PR
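
As an illustration, a couple of the metrics above could be sketched roughly as follows, assuming a hypothetical pull_requests table; the real table and column names depend on how the PR metadata is extracted.

    -- Hypothetical table: pull_requests(id, author, created_at, closed_at, merged),
    -- where merged is a 0/1 flag.
    -- Share of merged vs. rejected PRs, plus average time to merge in hours.
    SELECT
      100.0 * SUM(merged) / COUNT(*)     AS pct_merged,
      100.0 * SUM(1 - merged) / COUNT(*) AS pct_rejected,
      AVG(CASE WHEN merged = 1
               THEN TIMESTAMPDIFF(HOUR, created_at, closed_at) END) AS avg_hours_to_merge
    FROM pull_requests
    WHERE closed_at IS NOT NULL;  -- only PRs that have been closed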

1.2. Code Review Activity

Similar to PRs, code reviews are important for both effective and high-quality software development. Version control management systems also provide a lot of signals about code review quality and potential red flags.

Key metrics include (a sketch query follows this list):

  • The average number of developers doing code review
  • The average number of Reviewers per Code Review
  • The average number of developers who approve a code review
  • Percentage of code reviews with 0 comments
  • The number of build failures for PRs with no review comments
  • The percentage of CI passes depending on the number of reviewers
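
For instance, the “zero comment” red flag could be sketched as follows, with hypothetical reviews and review_comments tables standing in for whatever your code hosting platform exposes.

    -- Hypothetical tables: reviews(pr_id, reviewer, state) and
    -- review_comments(pr_id, author, body).
    -- Percentage of reviewed PRs that received no review comments at all.
    SELECT
      100.0 * SUM(CASE WHEN c.n_comments IS NULL THEN 1 ELSE 0 END) / COUNT(*)
        AS pct_reviews_without_comments
    FROM (SELECT DISTINCT pr_id FROM reviews) r
    LEFT JOIN (
      SELECT pr_id, COUNT(*) AS n_comments
      FROM review_comments
      GROUP BY pr_id
    ) c ON c.pr_id = r.pr_id;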

Deployment Frequency

This metric measures how often a given organization deploys code to production. According to the 2019 State of DevOps Report, “Elite” performers deploy on demand (multiple deploys per day), while “Low” performers usually deploy to production between once per month and once every 6 months.

The raw data needed to measure deployment frequency comes from Continuous Integration (CI) and Continuous Delivery (CD) projects and platforms such as Kubernetes, Spinnaker, Atlassian Bamboo, Jenkins, GitLab, CircleCI and others.

In many cases, deployment is automated to happen after a branch is merged into the base branch (usually master), which triggers a build. For this reason, in addition to the deployment metrics, source{d} provides the following build metrics to help you optimize and improve the quality of your deployment builds (a sketch query follows the list):

  • The average number of builds
  • Number of builds by commit authors
  • Breakdown of build status
  • The number of jobs/steps per repository
  • The average number of repositories running jobs in parallel
  • Runtime distribution for each job and environment
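
To make the headline metric itself concrete, a deployment frequency query could look roughly like this, again over a hypothetical deployments table fed by your CI/CD platform.

    -- Hypothetical table: deployments(id, environment, status, deployed_at).
    -- Successful production deployments per week.
    SELECT
      YEARWEEK(deployed_at) AS week,
      COUNT(*)              AS deploy_count
    FROM deployments
    WHERE environment = 'production'
      AND status = 'success'
    GROUP BY YEARWEEK(deployed_at)
    ORDER BY week;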

Change Failure Rate

The change failure rate is usually defined as the percentage of deployments that lead to degraded service or require remediation. Based on the latest State of DevOps Report, “Elite” performers have a change failure rate of 0–15% while “Low” performers have a rate of 46–60%.

But how can we detect failed deployments? One way is to analyze the tests that validate our production environment, which are usually configured as special builds that run periodically. To provide visibility into this metric, source{d} therefore processes data from version control management systems as well as Continuous Integration and Continuous Delivery platforms (a sketch query follows the list):

  • Number of CI build pass/fail per PR merged or not
  • Number of CI build pass/fail based on PR size
  • Breakdown of failures by steps and reasons
  • The ratio of CI success/fail depending on the number of Lines of Code
  • The ratio of CI success/fail per branch
  • The average number of CI failures by language
  • The average number of build failures for each job, environment
  • PR builds by authors
  • PR CI builds by author association
  • Sum and list of failing jobs
  • Failed jobs in last revisions of merged PRs
  • PR CI rounds with failures (Average in minutes)
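
Once each deployment can be linked to the incidents or remediations it caused (for example, through incident tickets), the headline metric reduces to a simple ratio. The caused_incident flag below is a hypothetical stand-in for that link.

    -- Hypothetical table: deployments(id, environment, status, deployed_at, caused_incident),
    -- where caused_incident is a 0/1 flag set when the deploy led to an incident or remediation.
    -- Change failure rate: share of production deployments that degraded the service.
    SELECT
      100.0 * SUM(caused_incident) / COUNT(*) AS change_failure_rate_pct
    FROM deployments
    WHERE environment = 'production';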

Mean Time to Restore (MTTR)

In the 2019 State of DevOps Report mentioned above, time to restore is defined as “the time it generally takes to restore service when a service incident or a defect that impacts users occurs”. “Elite” performers have an MTTR of less than 1 hour, while “Low” performers have an MTTR of between 1 week and 1 month.

This metric is usually directly available in your monitoring and incident management tools, such as DataDog, Prometheus, Splunk, Dynatrace or PagerDuty. However, it is a high-level metric and benefits from additional context, such as the lead time of the different steps involved in the remediation.

Key metrics include (a sketch query follows this list):

  • Median time from an incident to the next successful deployment, which usually corresponds to the previous commit
  • Median time from an incident to the closure of the corresponding tickets, which are usually created in a postmortem session
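
A rough sketch of the first of these is shown below, assuming incidents and deployments can be joined on timestamps; the incidents table and its columns are hypothetical.

    -- Hypothetical tables: incidents(id, started_at) and
    -- deployments(environment, status, deployed_at).
    -- Minutes from each incident to the first successful production deploy after it;
    -- take the median of minutes_to_restore with PERCENTILE_CONT where the dialect supports it.
    SELECT
      i.id,
      TIMESTAMPDIFF(MINUTE, i.started_at, MIN(d.deployed_at)) AS minutes_to_restore
    FROM incidents i
    JOIN deployments d
      ON d.deployed_at > i.started_at
     AND d.environment = 'production'
     AND d.status = 'success'
    GROUP BY i.id, i.started_at;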

Performance through visibility

In summary, these four high-level metrics from the State of DevOps Report can serve as north stars for DevOps teams (a.k.a. engineering productivity teams) looking to reap the benefits of DevOps. However, companies need far more granularity and visibility into their entire Software Development Life Cycle to tactically implement best practices throughout an increasingly complex DevOps toolchain.

We invite you and your team to request a source{d} Enterprise Edition demo to learn how source{d} can help you improve your software delivery performance.

