Changing the Way of Continuous Delivery with Docker (Part 1)
This post is the first part of the series “Changing the Way of Continuous Delivery with Docker.” It discusses the background and challenges of traditional continuous delivery and how Docker transforms the delivery process. In the second part of the series, we will explore how to use Docker and walk through its delivery processes.
In the internet industry, market demands and products change constantly, forcing organizations to adapt by making frequent deliveries and updates to their production environments. This approach to development is known as Continuous Integration (CI) and Continuous Delivery (CD), which combine development, testing, and delivery into a single cyclical process.
However, this approach introduces several problems in the long run. One such problem is the difficulty of handing over obscure, poorly documented production environments to successors who have little experience with them.
Furthermore, debugging in the production environment shortly before the final launch is an obstacle in itself. Such an environment is not only hard to maintain but also requires constant updates, because demands and products change persistently.
Traditional CD Processes: Overview and Challenges
Traditional delivery relies on two core practices: Continuous Integration (CI) and Continuous Delivery (CD). Together they combine development, testing, and delivery into a cyclical process built on the following ideas:
- Integration (combining two things into one): every time the system combines two things, integration occurs. When code is submitted, code integrates with code; when it is compiled, code integrates with logic; when it is tested, code integrates with features; and from build through deployment and release, code integrates with systems, and systems with other systems.
- Continuous testing: like a regular physical examination for your system, testing enables the rapid discovery and elimination of faults before the final launch, preventing untested features, missing features, or defective code segments from reaching production. This process should run continuously.
- Feedback loops: each step in the continuous integration pipeline produces a feedback loop. Rapid feedback simplifies problem identification and rectification.
For example, after submitting code, a developer should first make sure that it does not conflict with other code, then verify that it compiles successfully and passes the unit tests. Only when all these requirements are met can the developer move on to the next item. If the developer submits non-conflicting code but receives feedback that it broke the build of the whole system, the developer needs to fix the code.
Feedback must be promptly traced back to the point of development so that developers know what to do next. Additionally, unit tests should be kept separate from system function or integration tests, because these tests run at different speeds and generate different types of feedback.
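The separation of fast and slow feedback loops can be sketched as a two-stage script. This is a minimal sketch: the stage names and the echoed messages are placeholders, not commands from any specific CI tool.

```shell
#!/bin/sh
set -e  # stop at the first failing stage, so feedback points at the exact step

run_unit_tests() {
    # Fast, isolated checks -- in practice this would be e.g. `mvn test` or `npm test`.
    echo "unit tests passed"
}

run_integration_tests() {
    # Slower, system-level checks -- run only after the unit stage succeeds.
    echo "integration tests passed"
}

run_unit_tests          # fail fast: feedback reaches the developer in minutes
run_integration_tests   # separate stage, separate (slower) feedback loop
```

Keeping the stages as separate steps means a unit-test failure is reported before any slow integration run begins.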
Building a CD Process: Environment Requirements
Generally, a company or a project requires multiple environments. Typically, the production environment of an enterprise is placed in a public cloud, while development happens in an offline environment. The public cloud environment may be inconsistent with the offline development environment, causing problems during the final product launch.
CD Processes: Problems Encountered
Problems may occur even if a complete continuous integration environment is built systematically. Developers may rely on different language runtimes or package versions, causing conflicts in the compilation environment and making it difficult to maintain.
Origin of Problems
A majority of problems arise because developers deliver only the code and its direct dependencies, while operations, in reality, also needs the operating environment: an environment description, dependencies, databases, and caches.
Docker: Transforming the Way of Software Delivery
With Docker, all information required to generate the environment is included in the delivered code. The code contains a description file that specifies the environment as well as its dependencies, caches, configurations, variables, containers, and JAR packages.
This approach is analogous to packing the code and all other required components into a container and delivering that container to the operations team. The core feature of this container is its portability: since it includes all environmental dependencies, it produces the same result regardless of where it runs.
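Such a description file is, in Docker's terms, a Dockerfile. The sketch below assumes a Java service packaged as a JAR; the base image, file names, environment variable, and port are all illustrative, not prescribed by the article.

```dockerfile
# Base operating environment -- an illustrative choice, not a requirement
FROM openjdk:8-jre

# Configuration and variables travel with the image
ENV APP_ENV=production

# The application and its dependencies (a hypothetical JAR package)
COPY app.jar /opt/app/app.jar

# The port the service listens on, and the command to run it
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Because the file describes everything from the operating system up to the start command, operations no longer needs a separate hand-written description of the environment.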
Competency of Docker
Before Docker came into being, building containers involved many specification constraints. Docker is not only a piece of software but also a new approach to implementing containers.
- Environment description: a Dockerfile can describe the whole environment required by the software.
- Hierarchical file system: Docker images provide a solution to package management in which each operation is recorded as a layer, giving images built-in version management.
- OS separation: Docker shields applications from operating-system differences at runtime.
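The hierarchical file system can be inspected directly: each Dockerfile instruction (`FROM`, `COPY`, `RUN`, and so on) adds one read-only layer, and the layers of a built image can be listed with `docker history`. The image name below is hypothetical.

```shell
# List the layers of a built image, one row per Dockerfile instruction.
# Unchanged layers are reused from cache on the next build, so rebuilding
# after a code change only recreates the layers from that instruction onward.
docker history registry.example.com/team/app:1.0
```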
Docker is a Container Technology
In virtualization technology, hardware and software are virtualized into virtual machines. Each virtual machine is a complete operating system that is well isolated but may take several minutes to start.
The major difference between container technology and traditional virtualization is that containers lack a complete operating-system layer and instead share the OS kernel of the host machine.
The prime advantage of container technology is that containers start in seconds, as they do not boot a full virtual machine. Additionally, since containers have lower overhead than virtual machines, users can deploy more of them on a single server.
Three Steps to Software Delivery with Docker
- Build: a description file (Dockerfile) specifies the operating-system foundation, the environment, the port to expose, and the scripts to run. Building from this file produces a Docker image in local storage.
- Ship: pushes the image to a remote Docker Registry.
- Run: pulls the image from the registry and runs it. The container is both an environment description and a self-contained whole, so it renders the same result in whatever environment it runs.
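The three steps above map onto three Docker CLI commands. This is a sketch: the image name, tag, registry address, and port are hypothetical.

```shell
# Build: turn the Dockerfile in the current directory into a local image
docker build -t registry.example.com/team/app:1.0 .

# Ship: push the image to a remote registry (hypothetical address)
docker push registry.example.com/team/app:1.0

# Run: pull and start the image on any Docker host; because the image
# carries its whole environment, the result is the same everywhere
docker run -d -p 8080:8080 registry.example.com/team/app:1.0
```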
Case Study: BBC News
BBC News is a global news organization with over 500 developers distributed around the world. It maintains more than ten CI environments, as it uses different languages in different regions. BBC News had to figure out how to unify its coding processes and manage the CI environments uniformly. The existing jobs took up to 60 minutes to schedule and run, and they ran sequentially. With Docker, the jobs now run in parallel, significantly speeding up the process. Furthermore, thanks to containers, developers no longer have to worry about the CI environments. Visit Docker to learn more about BBC’s success story.
This post introduced Docker and its role in Continuous Integration and Continuous Delivery. Continuous Delivery with Docker mainly focuses on reducing application risks while delivering value faster through reliable software production in shorter iterations.
In the next part of this post, we will examine how to use Docker and describe its build and unit-test (UT) environments.