The Art of Continuously Delivering Quality

Around the middle of 2015, as I was leaving one position and looking for another, I decided that the next project I worked on should be able to use continuous delivery as the means of getting the code into production. In this article I want to cover some of the reasons why I made that choice, some of the benefits of going continuous, some of the problems that I encountered along the way and whether I would do it again.

Apologies in advance for the over-simplification of some things, and also for my somewhat opinionated views on others. This is how I see Continuous Delivery working, but it may not be exactly how you see it working.

Why Continuous Delivery?

My software development career started properly around 2000 and in the intervening years I have found that the single, most painful part of the process of getting new code to production has been release management.

The process ought to be simple: get your working and tested code into production without too many problems. Sure, systems come in all shapes and sizes, and if you are dealing with a large monolith with multiple dependencies and/or dependants you will need to pay close attention to interactions with other systems to ensure you don’t break anything, but fundamentally it should not be painful to get an application deployed.

The more that I read about continuous delivery, the more it made sense to me. Automating as much of the delivery pipeline as possible meant that code could be live shortly after being completed. Not only that but if you did it right, you didn’t have to sacrifice quality in order to achieve it. This sounded like the way I wanted to work in the future.

The benefits of Continuous Delivery

The road to continuous delivery can be challenging due to the nature of the software delivery process. I am sure many people will have been burned by putting code into production that was either not thoroughly tested or not tested at all. Let’s face it, sometimes even well tested software can go wrong when put into the production environment.

Many things can go wrong on that journey, and it’s because of those well-understood and commonly experienced problems that many people want the safety net of manual integration or regression testing before going into production, and the ability to roll back to a previous good release in case of emergency once the code does finally go live.

Most of those manual safety nets are simply dry runs through the systems and therefore could potentially be automated and run as part of a delivery pipeline. Cutting down the number of manually run checks when deploying code to production is only one of the benefits of moving to a Continuous Delivery pipeline; here are some more.

  • Automated running of unit/functional/integration/regression tests
  • Fast feedback on code commits that introduce bugs
  • Increased speed of delivery to production
  • A visual representation of the steps taken to build and deploy an application
  • Less fear when performing a release to production
  • Increased levels of quality delivered to production
  • Faster time to market for new features or products

It would be remiss of me to only focus on the positives without at least mentioning some of the negatives of using a continuous delivery pipeline.

  • Changing the way code is developed
  • Orchestration of environments and dependencies
  • Reliance on external third parties to behave correctly

I want to take a quick look at what an example continuous delivery pipeline might look like before discussing a few of the points listed above, as they deserve a little more explanation.

What does a Continuous Delivery pipeline look like?

Each pipeline to production will look different due to the different types of software being written. For example, a pipeline for code that is intended for a microservice may look very different from a pipeline that delivers an Android application.

That being said, they will generally start with a fresh code build and end with a deployment to a live system somewhere.

An example continuous delivery pipeline

The diagram above shows a typical continuous delivery pipeline that starts with a build of the code base along with running unit tests on that code base. This is the first point at which the application can be proven to be wrong if the tests fail. Arguably those unit tests should be run as part of the commit to source control to avoid pushing broken code to your repository, but it may be a good idea to run them again from fresh when starting a new build. Subtle problems can surface during this phase that can be attributed to a local development machine being set up in such a way that tests pass when committing code but fail when run on a new test machine.

Each subsequent stage in the pipeline deploys the application to a different environment suitable for running other types of tests against it. At every stage the tests that are run are proving the quality of the build and adding much needed confidence in the software along the way.

At this point it is worth mentioning that you should only need to build your application once, at the start of the process, and deliver that build to each stage along the pipeline. Anything else would mean that you have much less confidence in your build as you could be testing a different version each time you build the application.

Each stage needs to be useful and provide a meaningful result. In the example above, the first deployment to an integration environment may be the first time your code has run against a live version of the rest of your system: a valuable stage in any pipeline. The second stage may be a deployment to an environment suitable for further testing, for example one that more closely mirrors your production environment and can be used for performance testing, penetration testing and so on.
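The staged flow described above can be sketched in a few lines of code. This is purely illustrative: the stage names, deploy functions and gates are hypothetical stand-ins, not the API of any particular CD tool, but the shape is the important part — one artifact, built once, flowing through a series of quality gates.

```python
# Illustrative sketch of a staged delivery pipeline (hypothetical names,
# not any real CD tool's API). Each stage deploys the SAME build artifact
# and runs its own quality gate; a failing gate halts the pipeline.

def run_pipeline(artifact, stages):
    """Deploy one artifact through each stage in order; stop on a failed gate."""
    for stage in stages:
        stage["deploy"](artifact)
        if not stage["gate"](artifact):
            return f"failed at {stage['name']}"
    return "ready for production"

# Toy stand-ins for real deploy steps.
def deploy(env):
    return lambda artifact: print(f"deploying {artifact} to {env}")

stages = [
    {"name": "build",       "deploy": deploy("ci"),          "gate": lambda a: True},  # unit tests
    {"name": "integration", "deploy": deploy("integration"), "gate": lambda a: True},  # integration tests
    {"name": "staging",     "deploy": deploy("staging"),     "gate": lambda a: True},  # perf/pen tests
]

print(run_pipeline("app-1.4.2", stages))
```

Note that the artifact itself never changes between stages; only the environment it is deployed to and the gate it must pass differ.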

After all of the quality gates at each stage in the pipeline have been passed you should be ready to deploy into an offline version of production. This is the blue/green deployment strategy and is a valuable last stage before actually going live. Having an offline and online production environment is useful when continuously delivering as it enables you to run the next release of code in the offline production environment and perform last minute smoke tests before switching it over to online. For more information on the Blue/Green Deployment strategy, please read Martin Fowler’s short article on the subject.
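The blue/green switch can be sketched with a hypothetical router object — real setups do this with load balancers or DNS, but the logic is the same: deploy to the offline colour, smoke test it, and only then make it live.

```python
# Minimal blue/green sketch (hypothetical router, not a real
# load-balancer API). The offline colour receives the new release
# and is smoke-tested before traffic is switched over to it.

class BlueGreenRouter:
    def __init__(self):
        self.live = "blue"                         # colour currently serving traffic
        self.versions = {"blue": "1.0", "green": "1.0"}

    @property
    def offline(self):
        return "green" if self.live == "blue" else "blue"

    def deploy_offline(self, version):
        self.versions[self.offline] = version      # production stays untouched

    def switch(self, smoke_test):
        """Go live with the offline colour only if its smoke test passes."""
        if smoke_test(self.versions[self.offline]):
            self.live = self.offline
        return self.versions[self.live]

router = BlueGreenRouter()
router.deploy_offline("1.1")
print(router.switch(lambda version: True))  # smoke test passes, "green" goes live
```

If the smoke test fails, nothing switches and the previous release keeps serving traffic — which is exactly the rollback safety net people want.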

Once the offline deployment has been proven to be good, it is time to go into production for real. The journey to production has been a tricky one for the application: its quality has been questioned many times along the way, but by the end of the process you should have a well tested and working application in your production environment.

That’s the very straightforward overview of what a continuous delivery pipeline can look like. I would now like to talk about some of the benefits of using continuous delivery mentioned earlier.

Automated running of unit/functional/integration/regression tests

There is very little on the road to production that cannot be automated, if not fully then at least partly. Unit testing, functional testing, integration testing and regression testing can certainly be automated and run as part of the pipeline at every stage of delivery.

In fact this is a great way of more closely integrating parts of a team that would normally focus on writing tests into the development process. When developers and testers work together to deliver features, the natural outcome tends to be better quality features with greater code coverage.

Test automation and high test coverage are key to providing the level of quality assurance you will need if you want to move to a continuous delivery pipeline. A pipeline will traditionally start with building the code base from source and then progress through a number of stages of deployment and testing before the code is finally put live. Automated testing is vitally important at each stage to ensure that the quality gates are being met. No code should go live if tests fail along the way.

Taking hints from the testing pyramid versus the testing ice cream cone, you should focus your efforts on providing a large number of fast unit tests that can be run during development, as part of committing code, and at the first stage of your pipeline. You should then decrease the number of slower tests such as feature, integration, end-to-end and regression tests until you end up with only a small number of smoke tests before production. Have more of the quicker tests and fewer of the slower ones unless you have a compelling reason to do otherwise.

Favour a pyramid of tests (left) rather than an ice cream cone of tests (right)

For more information on the testing pyramid versus the ice cream cone, please read Martin Fowler’s excellent article about it.
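To make the pyramid shape concrete, here is a small sketch of what a healthy suite might look like. The counts and timings are made-up numbers for illustration only — the point is simply that each layer going up should be smaller (and is usually slower) than the one below it.

```python
# Illustrative test-suite shape following the pyramid: many fast unit
# tests at the bottom, fewer of each slower kind above. All numbers
# are invented for illustration.

suite = [
    # (layer, count, rough seconds per test)
    ("unit",        500, 0.01),
    ("integration",  50, 1.0),
    ("end-to-end",   10, 30.0),
    ("smoke",         3, 60.0),
]

def is_pyramid(layers):
    """Each layer going up should contain fewer tests than the one below."""
    counts = [count for _, count, _ in layers]
    return all(lower > upper for lower, upper in zip(counts, counts[1:]))

total_runtime = sum(count * secs for _, count, secs in suite)
print(is_pyramid(suite), f"total: {total_runtime:.0f}s")
```

An ice cream cone is the same list inverted: hundreds of slow end-to-end tests on top of a handful of unit tests, which is why such suites take hours to give feedback.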

Fast feedback on code commits that introduce bugs

The earlier you find bugs, the cheaper they are to fix. We all know this and with a continuous delivery pipeline you can put this into practice in a way that helps ensure nothing bad can get into production.

Unit tests are fine and can be run during development to ensure the code behaves as expected, but it’s often hard to run real-world tests against integration endpoints in development. Because of that, you can deploy your application to different environments as part of the pipeline and run integration tests there to get early feedback on the behaviour of the application when used together with its dependencies or dependants.

Anything that can be done to smoke out problems prior to going into production is a win and can only help you build better software.

Increased speed of delivery to production

Because the pipeline can automate nearly everything you need to do to get your code into production, the time taken to add a new feature, fix a bug or add new A/B testing to the code base and get it live is much shorter than the traditional route of raising a change request, having it approved, arranging a slot for deployment, merging in branches and actually doing the deployment.

And because the pipeline performs all the checks you would need to do as part of the quality gates at each stage, you can quickly and confidently release your code to production within minutes.

Releases into production become common and quick when continuously delivering.

A visual representation of building and deploying an application

Most continuous delivery tools are able to visualise your pipeline in such a way that you can monitor your build as it progresses through each stage. This is an excellent way of promoting your pipeline to production as it lets those who would not normally be interested or understand the release process actively participate in the delivery. As each deployment to each stage in the pipeline happens and the quality gates are passed, you can see how the build is being tested in each stop along the road to production.

There is something quite satisfying about seeing your build pipeline lit up in green and knowing that pushing code changes into production can be done at any time with little concern as the quality is baked into the continuous delivery pipeline.

Less fear when performing a release to production

When releasing well tested software to production becomes commonplace and can be done at any time by any member of the team, the process becomes less scary and removes the need to get stakeholders involved or seek permission from release management.

Releasing code to production should be easy and should not be a cause for concern as long as the code has been thoroughly tested before being put live. Continuous Delivery helps make that possible.

Increased levels of quality delivered to production

Because the application being delivered has been deployed to many different environments before production, and each environment has its own set of quality gates that must be passed in order to continue, the quality of the code is much better than it would be with the traditional approach of bundling up a release and giving it to the test team to break before deploying it.

When continuously delivering through a pipeline with several distinct stages of deployment and testing the final release to production can be proven to be of a certain quality by virtue of it being tested along the way. No code should be put into production if any of the tests are failing and if code does break when put into production, the problem can be fixed and that fix fed back into the pipeline so that it doesn’t happen again.

Faster time to market for new features or products

Because you are constantly delivering to production, and because you have a high degree of trust in your build pipeline to always deliver quality, new features can be pushed to production very quickly.

In fact, if you are also using feature flags, new features may already be in production when you decide to turn them on. Feature flags, or feature toggles as Martin Fowler calls them, are a great way of enabling development of new features without using long-lived branches. Continuous delivery relies on work being put into the master branch and eschews the use of long-lived feature branches, so there needs to be a way of ensuring that code that isn’t yet ready for use in production is ‘turned off’. Feature flags enable that to happen and provide a great way of switching parts of the application on and off.
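At its simplest, a feature flag is just a guarded branch in the code. The sketch below uses an in-memory dictionary as the flag store and made-up flag and function names for illustration; in practice the flags would live in configuration or a dedicated toggle service.

```python
# Minimal feature-flag sketch (hypothetical names, not any particular
# toggle library). Unfinished code merged to master stays switched off
# until the flag is flipped, so master remains deployable at all times.

FLAGS = {"new_checkout": False}   # in real systems: config file or flag service

def feature_enabled(name):
    """Unknown flags default to off, so forgetting one can't enable dead code."""
    return FLAGS.get(name, False)

def checkout(basket):
    if feature_enabled("new_checkout"):
        return f"new checkout for {len(basket)} items"
    return f"old checkout for {len(basket)} items"

print(checkout(["book", "pen"]))   # old path while the flag is off
FLAGS["new_checkout"] = True
print(checkout(["book", "pen"]))   # new path after flipping the flag
```

The "default to off" choice is deliberate: a typo in a flag name should hide a half-finished feature, never expose it.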

The increased speed of getting new features into production is one of the most important benefits of a continuous delivery pipeline for me. The ability to get code into production quickly and confidently means that turnaround for new features, as well as other things such as new products, bug fixes, A/B testing use cases, all become quick and almost trivial.

So with all the positives covered, I would like to take a little time to discuss some of the problems faced when moving to Continuous Delivery.

Changing the way code is developed

As discussed earlier, when continuously delivering code to production a development team may have to change the way that it works in order to ensure that the master branch in source control is always ready to be deployed.

Many companies will be using a different software configuration management pattern, for example Git Flow, and would find the move to working on master using very short lived branches for features/fixes and constant peer review to be tricky. The benefits of doing this have already been mentioned but for me the single, most important benefit is simplicity. Long lived feature branches can become very hard to understand when they last for months (or years) and the resulting merge when preparing a release can be time consuming and fraught with danger. It can get so complex that I have known companies to hire people for this task alone.

The move to building and deploying from master continuously is an important part of Continuous Delivery for me and should not be avoided. If a development team does use a convoluted software configuration management pattern but wants to move to Continuous Delivery then it will need to think long and hard about how to transition across safely.

Orchestration of environments and dependencies

Unless your application lives in isolation from anything else, it will need to interact with other applications in order to be useful. Orchestrating these dependencies in such a way that you can use them throughout a Continuous Delivery pipeline can be problematic.

Databases may need to be replicated across different environments, third party web services may need to be able to offer different levels of access so that you can call them with no side effects outside of production, and some collaborators may need to be stubbed or mocked out in order to allow tests to be run in every stage of the pipeline.

This is not straightforward as usually these dependencies are outside of your control. Hopefully most of them will be good citizens and provide testing environments that can be used outside of production but you may find others which simply don’t do that which is when you will need to start being creative.
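One common form of that creativity is substituting the dependency in non-production stages. The sketch below uses a hypothetical payment service as the third party; the names are invented, but the pattern — code against an interface, swap in a predictable stub outside production — is the general technique.

```python
# Sketch of stubbing out an external dependency (a hypothetical payment
# provider) so earlier pipeline stages can run tests with no side effects.

class RealPaymentService:
    """The real thing: only safe to call in production."""
    def charge(self, amount):
        raise RuntimeError("would hit the real provider")

class StubPaymentService:
    """Stand-in for test stages: predictable and side-effect free."""
    def __init__(self):
        self.charges = []              # record calls so tests can inspect them

    def charge(self, amount):
        self.charges.append(amount)
        return {"status": "ok", "amount": amount}

def place_order(payments, amount):
    # Application code depends only on the charge() interface,
    # so either implementation can be injected per environment.
    return payments.charge(amount)["status"]

stub = StubPaymentService()
print(place_order(stub, 9.99))   # charge is recorded locally, never sent
```

Which implementation gets injected is then just another piece of per-environment configuration in the pipeline.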

Reliance on external third parties to behave correctly

If you find yourself in a situation where external third parties work in a different way to you and cannot always guarantee that their systems will be in a fit state for use in your continuous delivery pipeline, you will have trouble using them.

If you cannot rely on third party services to behave the same way every time, then your build pipeline could fail through no fault of your own. For that reason it is important that external third parties that are part of your pipeline are reliable; if they are not, you should consider not using them.

Would I do it again?

Having implemented and used a continuous delivery pipeline as part of a large scale project I can confidently say that it would be hard to work on anything else in the future that wasn’t delivered continuously.

The benefit of running tests automatically in different environments against different back ends is massive for me. I love being able to demonstrate the quality of each build as it goes into production.

With code going live continuously, your source code is never so different from your production code that fixing a live bug is tough. Getting that fix into production is often a simple process and the turnaround should be in the order of minutes, not hours, days, weeks or months.

Feature toggles make putting new code live so simple. The implementation of them can be challenging and maintaining them requires diligence but once they are there, life becomes so much easier.

It’s also worth repeating that getting to the point of continuous delivery can be tough and requires buy-in at all levels, but the pay-off when you do get there is immense.