Continuous Delivery

Janice Laksana
Published in gdplabs · Jan 16, 2019

Recently I finished reading “Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation” by Jez Humble and David Farley. I took some notes on what I think are the main points of the book.


Chapter 1. The Problem of Delivering Software

Most modern applications are complex to deploy because they involve many moving parts. Many organizations require a lot of steps to deploy an application, each performed manually by an individual or team, which leads to buggy deployments. By adopting automated build, test, and deployment, we can deliver high-quality, useful, and valuable working software to users in an efficient, fast, and reliable manner.

These are the principles of software delivery that make our delivery process effective:

  1. Create a Repeatable, Reliable Process for Releasing Software. Releasing software should be easy because every single part of the release process has been tested hundreds of times before. Repeatability and reliability derive from two principles: automate everything, and keep everything we need to build, deploy, test, and release in version control.
  2. Automate Almost Everything. Our build process should be automated up to the point where it needs specific human direction.
  3. Keep Everything in Version Control.
  4. If It Hurts, Do It More Frequently, and Bring the Pain Forward. If releasing software is painful, release it every time somebody checks in a change that passes all the automated tests. If creating documentation is painful, do it as we develop new features.
  5. Build Quality In. Techniques such as continuous integration, automated testing, and automated deployment are designed to catch defects as early in the delivery process as possible, when they are cheaper to fix. Delivery teams must fix defects as soon as they are found.
  6. Done Means Released. It is important for everybody — testers, build and operations personnel, support teams, developers — to work together from the beginning.
  7. Everybody Is Responsible for the Delivery Process. The team succeeds or fails as a team, not as individuals.
  8. Continuous Improvement. The whole team should regularly gather to reflect on what has gone well and what has gone badly, and discuss ideas on how to improve things.

Chapter 2. Configuration Management

Configuration management refers to the process by which all artifacts relevant to our project, and the relationships between them, are stored, retrieved, uniquely identified, and modified. Our configuration management strategy will determine how we manage all the changes that happen within our project.

There is a recommended strategy for storing baselines of, and controlling changes to:

  1. Source code, build scripts, tests, documentation, requirements, database scripts, libraries, and configuration files
  2. Development, testing, and operations toolchains
  3. All environments used in development, testing, and production
  4. The entire application stack associated with the applications
  5. The configuration associated with every application, in every environment it runs in, across the entire application lifecycle (building, deployment, testing, operation) — see the small sketch after this list
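
As an illustration of the last point, here is a minimal sketch, not taken from the book, of resolving per-environment configuration that is kept under version control next to the code; the environment names and keys are hypothetical:

```python
# A minimal sketch of per-environment configuration kept under version
# control alongside the code. Environment names and keys are hypothetical.

BASE_CONFIG = {"db_pool_size": 10, "log_level": "INFO"}

OVERRIDES = {
    "development": {"log_level": "DEBUG"},
    "testing": {"db_pool_size": 2},
    "production": {"db_pool_size": 50, "log_level": "WARNING"},
}

def config_for(environment: str) -> dict:
    """Return the base configuration merged with environment overrides."""
    merged = dict(BASE_CONFIG)
    merged.update(OVERRIDES.get(environment, {}))
    return merged

if __name__ == "__main__":
    # The same artifact is configured differently for each environment.
    for env in ("development", "testing", "production"):
        print(env, config_for(env))
```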

Chapter 3. Continuous Integration

Continuous Integration requires that every time somebody commits any change, the entire application is built and a comprehensive set of automated tests is run against it. Continuous Integration creates a tight feedback loop which allows us to find problems as soon as they are introduced, when they are cheap to fix. The goal of Continuous Integration is that the software is in a working state all the time.

Continuous Integration forces us to follow two important practices: good configuration management, and the creation and maintenance of an automated build and test process. Continuous Integration also requires good team discipline: the build stays green.
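
A minimal sketch of what a CI job does on every check-in might look like the following; the build and test commands are placeholders for the project’s real build tool and test runner:

```python
# A minimal sketch of a commit-triggered CI build: run every step, stop on
# the first failure, and report the build as red or green. The commands are
# placeholders to keep the example self-contained.
import subprocess
import sys

PIPELINE = [
    ("compile", [sys.executable, "-c", "print('compiling the application...')"]),
    ("unit tests", [sys.executable, "-c", "print('running the automated tests...')"]),
]

def run_commit_build() -> bool:
    """Run every step; stop and report a red build on the first failure."""
    for name, command in PIPELINE:
        print(f"== {name} ==")
        if subprocess.run(command).returncode != 0:
            print(f"Build is RED: step '{name}' failed. Fix it before doing anything else.")
            return False
    print("Build is GREEN.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_commit_build() else 1)
```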

Chapter 4. Implementing a Testing Strategy

Testing should be done continuously from the beginning of the project and involve the whole team. To improve build quality, automated tests at multiple levels (unit, component, and acceptance) must run as part of the deployment pipeline. The automated tests must be triggered every time a change is made to the application, its configuration, or the environment and software stack it runs on. Manual testing is also essential to improve build quality: showcases, usability testing, and exploratory testing need to be done continuously throughout the project.

Testers collaborate with developers and users to write automated tests from the beginning of the project. These tests form an executable specification of the behavior of the system, ensuring that the functionality required by the customer has been implemented completely and correctly.

These tests cover not only the functional aspects of the system but also the nonfunctional ones, and they enable developers to refactor and rearchitect on the basis of empirical evidence. Testing establishes confidence that the software is working as it should, which means fewer bugs, reduced support costs, and an improved reputation.
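
As a small illustration of tests doubling as an executable specification, here is a sketch of a unit-level test for a hypothetical discount function; acceptance and component tests follow the same idea but exercise the system through its external interfaces:

```python
# A unit-level test sketch for a hypothetical domain function. It runs in
# the commit stage on every change to the application.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical domain function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_applies_percentage(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```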

Chapter 5. Anatomy of the Deployment Pipeline

The purpose of a deployment pipeline is to give everyone involved in delivering software visibility into the progress of builds from check-in to release. It should allow everyone involved:

  1. To see which changes have broken the application and which have resulted in release candidates suitable for manual testing or release
  2. To perform push-button deployments into manual testing environments and see which candidates are in those environments

An implemented deployment pipeline can then be used to drive out inefficiencies in building and releasing software. A deployment pipeline depends on having good configuration management, automated scripts for building and deploying the application, and automated tests to prove that the application will deliver value to its users. It also requires discipline, such as ensuring that only changes that have passed through the automated build, test, and deployment system get released.
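
To make the idea of visibility concrete, here is a small, purely illustrative sketch in which each check-in becomes a release candidate that moves through named stages, and a simple dashboard shows where every candidate is and what failed; the stage names and in-memory model are assumptions, not the book’s design:

```python
# An illustrative model of pipeline visibility: release candidates move
# through stages, and anyone can see their status at a glance.
from dataclasses import dataclass, field

STAGES = ["commit", "acceptance", "uat", "production"]

@dataclass
class ReleaseCandidate:
    revision: str
    history: dict = field(default_factory=dict)  # stage -> "passed" / "failed"

def promote(candidate: ReleaseCandidate, stage: str, passed: bool) -> None:
    candidate.history[stage] = "passed" if passed else "failed"

def dashboard(candidates: list[ReleaseCandidate]) -> None:
    for rc in candidates:
        status = ", ".join(f"{s}: {rc.history.get(s, '-')}" for s in STAGES)
        print(f"{rc.revision}: {status}")

if __name__ == "__main__":
    rc1 = ReleaseCandidate("abc123")
    promote(rc1, "commit", True)
    promote(rc1, "acceptance", False)   # this change broke the acceptance tests
    rc2 = ReleaseCandidate("def456")
    promote(rc2, "commit", True)
    promote(rc2, "acceptance", True)    # suitable for push-button deployment to UAT
    dashboard([rc1, rc2])
```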

Chapter 6. Build and Deployment Scripting

Automation helps us build, test, deploy, and release our software. Grow automated build and deployment capabilities step by step, working through the deployment pipeline by iteratively identifying and then automating the most painful steps.

A wide variety of technologies exist for scripting the build, test, and deployment process. The scripts should be version-controlled, maintained, tested, and refactored, and they should be the only mechanism used to deploy the software.
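
The sketch below shows the spirit of a single, version-controlled deployment script that performs the same steps for every environment; it copies a stand-in artifact into a temporary directory so the example is self-contained, whereas a real script would push to real servers:

```python
# An illustrative deployment script: the same script deploys the same
# artifact to any environment and records which version is in place.
import shutil
import tempfile
from pathlib import Path

def deploy(artifact: Path, target_dir: Path, version: str) -> None:
    """Copy the artifact into place and record which version is deployed."""
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(artifact, target_dir / artifact.name)
    (target_dir / "DEPLOYED_VERSION").write_text(version)
    print(f"Deployed {artifact.name} version {version} to {target_dir}")

if __name__ == "__main__":
    workspace = Path(tempfile.mkdtemp())
    fake_artifact = workspace / "app.jar"
    fake_artifact.write_text("binary contents")             # stand-in artifact
    deploy(fake_artifact, workspace / "staging", "1.4.2")    # same script, any environment
```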

Chapter 7. The Commit Stage

The goal of the commit stage is to detect, as fast as possible, the most common failures that a change may introduce, so the team can fix the problem quickly. The commit stage must be run every time someone introduces a change into the application’s code or configuration. The practice of continuous integration at the commit stage is to launch an automated process on every change that builds the binaries, runs the automated tests, and generates metrics.
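
As a rough sketch of those three outputs, the following script produces the binaries once, leaves a placeholder for the fast test suite (see the CI sketch above), and publishes a trivial metric for later stages; the artifact directory and the line-count metric are invented for illustration:

```python
# An illustrative commit stage: build the binaries once, run fast tests,
# and publish simple metrics for later pipeline stages to reuse.
from pathlib import Path

def count_source_lines(src_dir: Path) -> int:
    """A trivial code metric; real pipelines report coverage, duplication, etc."""
    return sum(len(p.read_text(errors="ignore").splitlines()) for p in src_dir.rglob("*.py"))

def commit_stage(src_dir: Path, artifact_dir: Path) -> None:
    artifact_dir.mkdir(parents=True, exist_ok=True)
    # 1. Build the binaries exactly once and keep them for later stages.
    (artifact_dir / "app.tar.gz").write_bytes(b"packaged application")
    # 2. Run the fast test suite here (omitted; see the CI sketch above).
    # 3. Generate and publish metrics alongside the binaries.
    (artifact_dir / "metrics.txt").write_text(f"source_lines={count_source_lines(src_dir)}\n")
    print("Commit stage passed; binaries and metrics published.")

if __name__ == "__main__":
    commit_stage(Path("."), Path("build/artifacts"))
```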

Chapter 8. Automated Acceptance Testing

Acceptance testing focuses on the behavior that users need from the system (a minimal test sketch follows the list below). Adopting acceptance testing represents a further step forward by:

  1. Increasing confidence that the software is fit for purpose
  2. Providing protection against large-scale changes to the system
  3. Significantly improving quality through comprehensive automated regression testing
  4. Providing fast and reliable feedback whenever a defect occurs so it can be fixed immediately
  5. Freeing up testers to devise testing strategies, develop executable specifications, and perform exploratory and usability testing
  6. Reducing cycle time and enabling continuous deployment
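
Here is the promised sketch of an automated acceptance test written in a given/when/then style against a hypothetical in-memory order API; the domain and method names are invented, and a real acceptance test would drive the deployed application through its public interface:

```python
# An acceptance-test sketch against a hypothetical order service, expressed
# in terms of the behavior the user cares about, not implementation details.
import unittest

class OrderService:
    """Stand-in for the application's public API used by acceptance tests."""
    def __init__(self):
        self._orders = {}

    def place_order(self, order_id: str, items: list[str]) -> None:
        self._orders[order_id] = {"items": items, "status": "placed"}

    def cancel_order(self, order_id: str) -> None:
        self._orders[order_id]["status"] = "cancelled"

    def status_of(self, order_id: str) -> str:
        return self._orders[order_id]["status"]

class CancellingAnOrderTest(unittest.TestCase):
    def test_cancelled_order_is_reported_as_cancelled(self):
        # Given a placed order
        service = OrderService()
        service.place_order("A-1", ["book"])
        # When the customer cancels it
        service.cancel_order("A-1")
        # Then its status reflects the cancellation
        self.assertEqual(service.status_of("A-1"), "cancelled")

if __name__ == "__main__":
    unittest.main()
```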

Chapter 9. Testing Nonfunctional Requirements

Nonfunctional requirements are a difficult area because they force technical people to provide more input into the analysis, which may distract them from the business value they are asked to deliver.

Technical people must work closely with customers and users to determine the sensitivity points of the application and define detailed nonfunctional requirements based upon real business value. Once this work has been done, the delivery team can decide upon the correct architecture for the application and capture the nonfunctional requirements as requirements and acceptance criteria, in the same way that functional requirements are captured. After that, the delivery team needs to create and maintain automated tests to ensure these requirements are met. These tests should be run as part of the deployment pipeline every time a change to the application, infrastructure, or configuration passes the commit and acceptance test stages.
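
A capacity-style test of that kind might look like the following sketch, which repeatedly exercises a hypothetical operation and asserts that it stays within an agreed response-time budget; both the operation and the budget are invented examples:

```python
# An illustrative capacity test: measure the average latency of an operation
# and fail the build if it exceeds the budget agreed with the customer.
import time
import unittest

def search_catalog(query: str) -> list[str]:
    """Stand-in for the operation whose performance matters to the business."""
    return [item for item in ("apple", "apricot", "banana") if query in item]

class SearchCapacityTest(unittest.TestCase):
    def test_search_stays_within_latency_budget(self):
        start = time.perf_counter()
        for _ in range(1000):
            search_catalog("ap")
        average_ms = (time.perf_counter() - start) * 1000 / 1000
        # 50 ms is a hypothetical nonfunctional acceptance criterion.
        self.assertLess(average_ms, 50.0)

if __name__ == "__main__":
    unittest.main()
```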

Chapter 10. Deploying and Releasing Applications

The more frequently we release the application into a variety of test environments, the lower the risk of each release: the release process becomes more reliable and less likely to hit a problem in a production release. The automated deployment system should be able to commission a new environment from scratch, as well as update an existing environment. The most crucial part of release planning is assembling representatives from every part of the organization involved in delivery: build, infrastructure, and operations teams, development teams, testers, DBAs, and support personnel.

Chapter 11. Managing Infrastructure and Environments

An environment is all of the resources that the application needs in order to work, together with their configuration. Infrastructure should be autonomic, and it is also essential that it be simple to re-create. The combination of automated provisioning and autonomic maintenance ensures that infrastructure can be rebuilt in a predictable amount of time in the event of failure.

Testing environments should be production-like, so that we catch environmental problems early and can rehearse critical activities like deployment and configuration before we get to production.
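
One way to keep a test environment production-like is to describe both environments declaratively and check for drift; the following sketch is only an illustration, and the environment descriptions in it are hypothetical:

```python
# A small drift check between a declarative description of production and a
# test environment. The settings below are hypothetical examples.
PRODUCTION = {"os": "ubuntu-22.04", "java": "17", "nginx": "1.24", "replicas": 4}
STAGING    = {"os": "ubuntu-22.04", "java": "17", "nginx": "1.22", "replicas": 2}

def report_drift(reference: dict, candidate: dict) -> list[str]:
    """Return the settings where the candidate differs from the reference."""
    return [
        f"{key}: expected {reference[key]}, found {candidate.get(key)}"
        for key in reference
        if candidate.get(key) != reference[key]
    ]

if __name__ == "__main__":
    for difference in report_drift(PRODUCTION, STAGING):
        print("drift:", difference)  # e.g. nginx version and replica count differ
```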

Chapter 12. Managing Data

The fundamental principles that govern data management are the same as for the rest of the delivery process. The key is to ensure there is a fully automated process for creating and migrating databases. This process is part of the deployment process, which makes it repeatable and reliable.

It is also important to manage the data that is used for testing. Instead of using a dump of the production database, create the state that our tests need and ensure each test is independent of the others. Here are some important principles and practices:

  1. Version the database and use a tool to manage migrations automatically (a minimal sketch follows this list)
  2. Strive to retain forward and backward compatibility with schema changes
  3. Make sure tests create the data they rely on as part of the setup process
  4. Reserve the sharing of setup between tests only for data required to have the application start, and perhaps some very general reference data
  5. Try to use the application’s public API to set up the correct state for tests whenever possible
  6. Don’t use dumps of the production dataset for testing. Create custom datasets by selecting a smaller subset of production data, or from acceptance or capacity test runs.
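
The following is a minimal sketch of versioned, automated database migrations using SQLite from the standard library; the migrations and the schema_version table are illustrative, and in practice the chapter’s advice is to rely on a dedicated migration tool:

```python
# A minimal sketch of automated, versioned database migrations.
import sqlite3

MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def current_version(conn: sqlite3.Connection) -> int:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def migrate(conn: sqlite3.Connection) -> None:
    """Apply every migration newer than the database's recorded version."""
    version = current_version(conn)
    for number, statement in MIGRATIONS:
        if number > version:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (number,))
    conn.commit()

if __name__ == "__main__":
    connection = sqlite3.connect(":memory:")
    migrate(connection)   # safe to run repeatedly; already-applied steps are skipped
    print("schema version:", current_version(connection))
```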

Chapter 13. Managing Components and Dependencies

Continuous delivery provides the ability to release new, working versions of the software several times a day. We have to ensure that teams develop as efficiently as possible while keeping the application releasable at all times. The principle is to ensure that teams get fast feedback on the effect of their changes on the production-readiness of the application. One strategy for meeting this goal is to ensure every change is broken down into small, incremental steps that are checked into mainline. Another strategy is to break the application down into components.

The use of components, dependency build pipelines, and effective artifact management is also key to efficient delivery and fast feedback. Make sure to use your technology’s toolchain effectively so that, once the codebase gets large enough, it can be built as a set of independent components.

Chapter 14. Advanced Version Control

Version control systems are designed to allow organizations to maintain a complete history of every change made to their applications, including source code, documentation, database definitions, build scripts, tests, and so forth.

There are three good reasons to branch source code:

  1. A branch can be created for releasing a new version of the application. This allows developers to continue working on new features without affecting the stable public release.
  2. A branch can be created when someone needs to spike out a new feature or a refactoring; the spike branch gets thrown away and is never merged.
  3. It is acceptable to create a short-lived branch when someone needs to make a large change to the application.

The main purpose of branches is to facilitate parallel development: the ability to work on two or more streams of work at the same time without one affecting the other. In real life, there comes a point where someone needs to take the changes in one branch and apply them to another, which gives rise to conflicts between the two branches. The longer you leave things before merging, and the more people you have working on them, the more unpleasant the merge is going to be. There are ways of minimizing this pain:

  1. Create more branches to reduce the number of changes.
  2. Merge at regular intervals — every day, for example.

A manageable branching strategy is to create long-lived branches only on release. Merging is only performed when a fix has to be made to the release branch, from which it is then merged into mainline. This model works well because the code is always ready to be released, and releases are therefore easier.

It is important to recognize that every time you branch, there is a cost associated with it. That cost comes in the form of increased risk, and the only way to minimize that risk is to be diligent in merging any active branch back to mainline daily or more frequently. Without this, the process can no longer be considered continuous integration.

Chapter 15. Managing Continuous Delivery

Implementing continuous delivery involves not just buying some tools and doing some automation work, but also effective collaboration among everyone involved in delivery, support from executive sponsors, and the willingness of people on the ground to make changes.

Good management creates processes enabling efficient delivery of software, while ensuring that risks are managed appropriately and regulatory regimes are complied with.

Iterative, incremental delivery is the key to effective risk management. Iterative delivery, combined with an automated process for building, deploying, testing, and releasing software embodied in the deployment pipeline, is not only compatible with the goals of conformance and performance, but is the most effective way of achieving them. This process enables greater collaboration between those involved in delivering software, provides fast feedback so that bugs and unnecessary or poorly implemented features can be discovered quickly, and paves the way to reducing cycle time.

This book helps us make the delivery of software more efficient, with less time and risk. It also teaches us how to collaborate better among the people responsible for delivering the software. My favorite quote from the book is:

If It Hurts, Do It More Frequently, and Bring The Pain Forward

It teaches us not to be scared of software releases. If releasing software is painful, do it frequently, so it becomes easier. With frequent releases, the associated risk is reduced significantly because the delta between releases is small. With continuous delivery, the release process becomes repeatable, reliable, and predictable. If you are interested in something like this, we are hiring 1000 great software engineers in five major cities in Indonesia: Bali, Bandung, Jakarta, Surabaya, and Yogyakarta.
