Containers: Enhancing CICD and Operations

Graham Wright
6 min read · Jul 13, 2018

[Article 5 of 6 in series Software Delivery, DevOps and CICD for the uninitiated]

Credit: Top 5 benefits of containerisation

In this article, we will cover:

  • Containerisation and Container Orchestration Technology
  • CICD of Containerised applications
  • The key benefits of developing Containerised applications

Introduction

In the previous 2 articles, we described a ‘Continuous Integration’ process which results in a new build version of the software application, and a ‘Continuous Delivery’ process which deploys and runs it.

Despite the efforts of the application developer, the application's behaviour after deployment depends on the environment into which it is deployed.

The configuration management scripts controlling the target environment are often written and maintained by a different person or team (the system administrators) from the application developers, which can lead to gaps.

Whether for this reason or others, the local machines on which developers write the application code are often inconsistent with the deployment environment.

“That’s odd, cos’ it worked on my machine…” — said every developer ever.

Is there something we can do to avoid this? What if the developer also controlled the environment?

(From Virtualisation to) Containerisation

The adoption of Virtual Machines (VMs) 10–15 years ago brought organisations new levels of agility and scalability. It allowed multiple independent (virtual) servers to be cheaply created and destroyed as needed on a single bare-metal (‘real’) server, each running its own operating system and providing an isolated environment in which to run applications.

A big problem with VMs though is bloat. Each VM:

  • consists of not just an application but a whole operating system, so it can be dozens of gigabytes in size
  • consumes a significant and fixed portion of its host’s resources (e.g. memory, processing power), requiring powerful bare-metal hardware even if those resources aren’t fully utilised.

However, there is an alternative technology in the ascendancy right now which provides the same benefits as VMs without these drawbacks. Introducing our newest team member:

I. Containerisation Technology

What does it do? ‘Containerisation’ technology allows us to designate multiple slices of a single ‘container-host’ operating system as independent ‘containers’, each functioning like an isolated server in its own right. Because there is just one operating system (rather than one per VM), we are significantly less wasteful in terms of storage, memory and processing resources.

The difference between Virtualisation (left) and Containerisation (right) technology. Credit: http://www.serverspace.co.uk/

Examples: Docker

Docker (a particular containerisation technology) provides a format (the ‘Dockerfile’) for defining what a container should do and how. If a Dockerfile is written to run a particular software application, it will also specify all the environment variables, file permissions, libraries, port-forwarding rules etc. that the application needs. However, the container’s responsibility is blinkered: it is concerned only with itself and not wider system administration needs, so it can be written by the developers of the software application themselves. The resulting Dockerfile is ‘built’ to create a ‘container image’ which is ready to run on any container host (i.e. a machine running Docker). The same container image is run on the developers’ machines and in all test and production environments.
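As an illustrative sketch (the base image, application name and port here are hypothetical, not from any particular project), a Dockerfile for a simple Python web application might look like this:

```dockerfile
# Base image: start from an official Python runtime (hypothetical choice)
FROM python:3.9-slim

# An environment variable the application expects (illustrative)
ENV APP_ENV=production

# Copy the application code in and install its library dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Declare the port the application listens on
EXPOSE 8080

# The command run when the container starts
CMD ["python", "app.py"]
```

Everything the application needs — runtime, libraries, configuration — is declared here by the application developers themselves, rather than in separate configuration management scripts maintained by system administrators.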

We can convert all our developer stations, test and production environments to be container hosts by installing Docker, and we can ask our developers to extend the scope of their work to develop not only the application, but a container in which it will run. The Continuous Integration process now culminates in a built container image in our binary repository, and the Continuous Delivery process will deploy and run it.
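In a hedged sketch (the image name, version tag and registry address are hypothetical), the tail end of the Continuous Integration process and the start of the Continuous Delivery process might then look like:

```shell
# CI: build the image from the Dockerfile, tagged with the build version
docker build -t registry.example.com/myapp:1.4.2 .

# CI: push the image to the binary (image) repository
docker push registry.example.com/myapp:1.4.2

# CD: on any container host, pull and run that exact same image
docker pull registry.example.com/myapp:1.4.2
docker run -d -p 8080:8080 registry.example.com/myapp:1.4.2
```

The same image, byte for byte, runs on a developer laptop, a test server or in production.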

Why develop Containerised Applications?

  1. Behavioural consistency

By including most if not all of the environmental elements needed, and by pushing responsibility for them into the same team that writes the application, the container is far more likely to behave identically on a developer machine and in the production environment. (To resume the trout analogy from Software Delivery 101 (Environments and Applications): if the developer packages both the trout and the river together, then we always have a happy swimming fish!) Ultimately this saves time that might otherwise be lost investigating and resolving environmental issues.

2. Simplified, consistent deployment process

Instead of each application having its own deployment process, the deployment process, no matter what the application, becomes little more complex than ‘run this container’.

Equally, it is no longer necessary to waste time setting up new Virtual Machines when you want to add a new, independent application, or decommissioning them when you want to stop.

3. Resource utilisation

The same amount of hardware can safely support the running of more applications. Whereas multiple bulky Virtual Machines may previously have been required to run applications independently, one operating system can now run multiple containers with complete isolation between them.

Containerisation is an excellent enabler for the software architecture best practice of breaking your overall application down into individual components, each with its own specified function. This might mean having the application in one container and its database in another, or a more rigorous micro-service architecture in which the application comprises multiple loosely coupled, independent services communicating through defined APIs. Such architectures make it easy to update, replace or scale components independently. As such, an organisation fully utilising the benefits of containerisation is likely to have dozens or even thousands of containers running simultaneously. Each of these requires deployment, connection to the outside world, load balancing etc. To avoid doing all of this by hand, we have a tool to help us.

J. Container Orchestration Tool

What does it do? Principally, container orchestration tools automate the deployment and management of containerised applications across a group (cluster) of container host servers, enhancing:

  • Deployment:
    zero-downtime deployments, automatic rollbacks, and support for canary deployments, in which a new release runs in production in parallel with the previous version for testing before being scaled up while the previous version is scaled down.
  • Scaling:
    automatically add or remove containers based on CPU utilisation or other metrics, to cope with peaks and troughs in demand.
  • Availability:
    health checks of containers, automatic replacement of containers that crash, and load balancing across healthy containers to cope with outages or peaks in traffic.

Container orchestration tools will also assist in the management of the containerised application, including:

  • networking/DNS
  • storage system orchestration
  • resource monitoring, logging and alerting
  • configuration management and secure storage of sensitive values like passwords or ssh keys.

Example: Kubernetes
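To make this concrete, here is a hedged sketch of a Kubernetes ‘Deployment’ manifest (the application name, image and port are hypothetical, carried over from the earlier Docker sketch) which asks the cluster to keep three replicas of a container running, each with a health check:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # run three copies for availability and load balancing
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.4.2   # hypothetical image from our CI process
        ports:
        - containerPort: 8080
        livenessProbe:         # health check: restart the container if this endpoint fails
          httpGet:
            path: /health
            port: 8080
```

Kubernetes continuously reconciles the cluster towards this declared state: if a container crashes or a host fails, a replacement is scheduled automatically elsewhere in the cluster.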

Summary

In this article we:

  • described containerisation and container orchestration technology and supporting tools, giving open-source examples (Docker and Kubernetes);
  • described how we can release software applications as ready-to-run container images; and
  • discussed the operational benefits this can bring, including consistent behaviour, simplified processes, enhanced resource utilisation, scalability and availability.

Next Article in the Series

The Cloud, Anything-as-a-Service (XaaS) and Infrastructure as Code (IaC)
