Docker in TBC

George Kopadze
Published in TBC Engineering
6 min read · Apr 14, 2021

What is Docker? Why do we need it, and what problems does it solve? I will try to answer these questions and share my experience of how we use Docker containers in TBC’s development lifecycle.

What is Docker?

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Docker itself is an open-source software development platform; its main benefit is that it packages applications into containers, making them portable to any system running a Linux or Windows operating system (OS).

What is a container?

A container is a way to package an application together with all the necessary dependencies and configuration. Containers are isolated from one another and communicate through well-defined channels. A container is a portable artefact that can easily be shared and moved around within the development team, or from development to the operations team. Having everything packaged in one isolated environment gives many advantages that make the development and deployment process more efficient.

Containerized Applications vs Virtualized Environment

Containerized Applications Diagram

As the diagram shows, with containers, instead of virtualizing the underlying computer like a virtual machine (VM), only the operating system is virtualized.

Containers sit on top of a physical server and its host OS — typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light — they are only megabytes in size and take just seconds to start. VMs, by contrast, take minutes to start and are an order of magnitude larger than an equivalent container.

In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means in practice is you can put two to three times as many applications on a single server with containers as you can with a VM. In addition to this, with containers, you can create a portable, consistent operating environment for development, testing, and deployment.

Why do we need it?

Instances of containerized apps use far less memory than virtual machines, they start up and stop more quickly, and they can be packed far more densely on their host hardware.

Docker enables faster software delivery cycles. Docker containers make it easy to put new versions of software, with new business features, into production quickly — and to quickly roll back to a previous version if you need to. They also make it easier to implement strategies like blue/green deployments.

Docker containers also make it easier to implement the microservices architecture, where applications are divided into loosely coupled components. By decomposing traditional, “monolithic” applications into separate services, microservices allow the different parts of a line-of-business application to be scaled, modified, and serviced separately — by separate teams and on separate timelines if that suits the needs of the business.

In TBC’s development team, we are using the microservices approach in several projects. Generally, containers aren’t required to implement microservices, but they are perfectly suited to the microservices approach and agile development processes.

What problems does it solve?

Let’s see how containers improved the development process by first looking at how we developed applications before containers. Usually, when you have a team of developers working on the same application, you have to install most of the supporting services directly on your operating system. For example, when you are working on a .NET application that needs SQL Server, Redis, or RabbitMQ, you have to install the binaries for each of these services and configure them.

Development Environment Before Containers

The same steps have to be repeated for each developer’s local development environment and for every server in your organization’s environments. This can easily become messy and hard to operate, depending on how complex your application is, because all of the configuration and the versions of dependent services have to be the same across all environments.

With containers, you don’t have to install any of these services (RabbitMQ, Redis, etc.) directly on your operating system, because each container has its own isolated operating-system layer built from a Linux-based image. You have everything packaged in one isolated environment, with all required service dependencies and application binaries.
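For example, here is a minimal sketch of starting Redis and RabbitMQ as local containers instead of installing them on the host (the image tags, container names, and port mappings are illustrative defaults, not our exact setup):

# Start Redis in the background, exposing its default port
docker run -d --name dev-redis -p 6379:6379 redis:6

# Start RabbitMQ with the management UI enabled
docker run -d --name dev-rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management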

Containers also give us the ability to automate the deployment process. If an organization uses source control and automated build processes, as we do, the build pipeline can be modified to generate a Docker image and publish it to a private registry.
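Conceptually, the pipeline automates the same commands a developer could run by hand. A rough sketch, assuming an image named aspnetapp and a placeholder registry address (both illustrative):

# Tag the locally built image for the private registry
docker tag aspnetapp:1.0 registry.example.com/team/aspnetapp:1.0

# Push it so that other environments can pull the exact same image
docker push registry.example.com/team/aspnetapp:1.0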

Once you have a Docker image, you can easily deploy it to a container orchestration platform such as OpenShift and gain all of the benefits OpenShift provides. We will discuss OpenShift in a separate blog post.

How is it done?

Let’s walk through a simple example of a continuous integration and continuous delivery (CI/CD) pipeline. We use several deployment models, and CI/CD pipelines may vary from project to project, but one of the simplest models is shown below.

Sample CI/CD pipeline example

When developers finish working on a feature or bugfix, they merge their feature/bugfix branch into the master branch with a pull request, which triggers the build pipeline and executes predefined steps. The build pipeline may build the artefacts, run unit tests, publish code coverage results, and build a Docker image that is uploaded to the container registry.

OpenShift has several methods for pulling images from the container registry. I won’t go into detail here and will discuss them in another blog post. The main point of this diagram is to briefly describe where we use Docker and how it is included in the CI/CD pipeline.

The diagram shows the high-level architecture of the CI/CD pipeline. It is also worth seeing how it is implemented in the YAML configuration files and the Dockerfile.

First, let’s see what is inside the Dockerfile for an ASP.NET Core project.

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
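
Before handing this over to the pipeline, you can build and test the image locally. A quick sketch, assuming the image is named aspnetapp (the name and host port are illustrative; the ASP.NET Core base image listens on port 80 inside the container by default):

# Build the image from the Dockerfile in the current directory
docker build -t aspnetapp .

# Run it and map the container's port 80 to port 8080 on the host
docker run --rm -p 8080:80 aspnetapp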

The build pipeline finds the Dockerfile in our repository and executes the commands from this file. All of the steps are described in the comments above.

If you want to implement the Docker image build step in the Azure build pipeline, you have to specify the Docker commands in the YAML configuration file.

Build pipeline YAML file with the docker build command
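A minimal sketch of such a build step, using the Docker@2 task (the service connection name, repository, and tag below are illustrative and will differ in a real pipeline):

steps:
- task: Docker@2
  displayName: Build and push the image to the container registry
  inputs:
    command: buildAndPush
    containerRegistry: 'my-registry-connection'   # service connection name (illustrative)
    repository: 'team/aspnetapp'                  # image repository (illustrative)
    dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
    tags: |
      $(Build.BuildId)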

There is plenty of documentation on how to configure build pipelines with YAML files, and you can find more details on the Docker documentation page.

Keep an eye on our blog so you don’t miss any interesting posts.


George Kopadze
TBC Engineering

I am a senior software engineer and engineering manager with over 11 years of experience in application design and development.