Introduction to Container-Based Application Development

Damsak Bandara · Published in Geek Culture · 6 min read · Jun 4, 2021
Fig 1: Containers (Source: Google)

Evolution of Application Architecture

Generation 01: The Dark Ages

One Physical Server — One Application

Fig 2: 1st Generation

The first generation of systems consisted of one physical server running one application. The physical server was dedicated to a single user and could not be shared with others. A single operating system sat on top of the physical hardware, and the application ran on top of that operating system. This implementation has several disadvantages:

Disadvantages

  • Wastage of resources.
  • Massive maintenance costs.
  • Slow deployments.
  • Hard to scale.
  • Difficult to migrate.

Generation 02: Hypervisor Based Virtualization

One Physical Server — Multiple Virtual Machines

One Virtual Machine — One Application

Fig 3: 2nd Generation

Important terminology

  • Virtualization: creating a virtual version of a resource (such as a server, storage, or a network) rather than using the physical resource directly.
  • Virtual Machine (VM): a virtualized computer system.
  • Hypervisor: software that creates and runs virtual machines.

Hypervisor-based virtualization made it possible to run multiple virtual machines on a single physical server. Each VM has its own operating system and an allocated amount of memory. Let’s discuss the advantages and disadvantages of this implementation.

Advantages

  • Better utilization of resources.
  • Scaling is comparatively easy.

Disadvantages

  • Each VM carries a full guest OS, so running multiple VMs consumes a lot of resources.
  • Less portable.
  • Every VM has to be configured and managed separately.

Generation 03: Containers

Containers emerged as a solution to the problems mentioned above. A container packages software so that it runs reliably when moved between different computing environments. Both containers and VMs offer similar resource isolation and allocation benefits, but containers are more portable and efficient.

Fig 4: VMs vs Containers

Containers virtualize the Operating System instead of hardware.

Docker

Fig 5: Docker

A tool that can be used to containerize applications.

Docker is basically a container engine that creates containers on top of an existing operating system using Linux kernel features such as namespaces and cgroups. It is an open-source project that gives developers an easy way to separate applications from the underlying infrastructure, making software delivery fast and reliable. Developers can create containers with or without Docker, but Docker makes them much easier to build and brings several advantages:

  • Easily Scalable.
  • Highly available.
  • Easy to deploy.
  • Easy to manage applications.

Other key aspects:

  • Docker is lightweight, since a container includes only the required OS processes and dependencies; a VM, in contrast, has to carry the payload of an entire OS instance.
  • Docker significantly improves developer productivity by providing easy mechanisms to deploy, provision, and restart.
  • Docker provides great resource efficiency, as developers can run many copies of an application on the same hardware.
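To make this concrete, here is a minimal sketch of running a containerized web server with the Docker CLI (the nginx image and the names and ports used here are only illustrative):

```
# Pull the nginx image (if needed) and start a container from it,
# publishing container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx

docker ps          # list running containers
docker logs web    # view the container's output
docker stop web    # stop the container
docker rm web      # remove it
```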

Let’s look in detail at some of the concepts related to Docker.

Docker Image

This is one of the key components of Docker. An image is basically a template with a set of instructions that is used to create a Docker container. A Docker image may contain components such as:

  • application code
  • libraries
  • tools
  • dependencies, etc.

A single Docker image can be used to spin up one or many container instances. There are two main ways to create a Docker image:

  • Interactive: run a container from an existing Docker image, modify it, and save the result as a new image (see the sketch below).
  • Dockerfile: write a Dockerfile that describes the build steps (covered next).
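A minimal sketch of the interactive approach, where the image names are placeholders:

```
# Start a container from an existing image and work inside it interactively
docker run -it --name base ubuntu:22.04 bash
# ...inside the container: install packages, edit files, then exit...

# Save the container's current state as a new image
docker commit base my-custom-image:1.0
docker images    # the new image now appears in the local image list
```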

Dockerfile

Developers can download ready-made Docker images from Docker Hub and use them. However, there are cases where a custom Docker image is needed. A Dockerfile is a text file that spells out the instructions for building such an image on your own.
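A minimal sketch of building a custom image from a Dockerfile, assuming a hypothetical Python application (app.py, requirements.txt, and the my-app tag are placeholders):

```
# Write the Dockerfile
cat > Dockerfile <<'EOF'
# Start from an existing base image
FROM python:3.11-slim
WORKDIR /app
# Install the application's dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code
COPY . .
# Default command when a container is started from this image
CMD ["python", "app.py"]
EOF

# Build the image from the Dockerfile, then run a container from it
docker build -t my-app:1.0 .
docker run -d --name my-app -p 8000:8000 my-app:1.0
```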

Docker Engine

Refers to the core product of Docker. Docker Engine is the main component responsible for building and running containers. It consists of the Docker daemon (dockerd), a REST API, and the Docker CLI client (docker) that talks to the daemon through that API. This is, in essence, the containerization technology used to build and run containerized applications.
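A couple of commands that make the client/daemon split visible:

```
docker version   # prints separate Client (CLI) and Server (Engine/daemon) sections
docker info      # daemon-level details: storage driver, containers, images, etc.
```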

Docker Registry

Refers to a server-side application that stores and distributes Docker images. The registry application is open source and highly scalable.

When should you run your own Docker registry?

  • When there is a need to control the storage places of the Docker images.
  • To have full ownership of the image distribution pipeline.
  • To integrate the image storage into the in-house development workflow.

Docker Hub, Docker’s hosted registry service, can be used as an alternative to running your own registry: it is a ready-to-go solution with zero maintenance.
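A minimal sketch of running a private registry locally and pushing an image to it (the my-app image and the port are illustrative):

```
# Run the open-source registry as a container
docker run -d -p 5000:5000 --name registry registry:2

# Tag a local image with the registry's address, then push it
docker tag my-app:1.0 localhost:5000/my-app:1.0
docker push localhost:5000/my-app:1.0

# Any host that can reach the registry can now pull the image
docker pull localhost:5000/my-app:1.0
```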

Docker Orchestration

This refers to the process of coordinating all of the container instances so that they work together toward a common goal. An orchestration tool basically lets you:

  • Package and run applications as containers.
  • Find existing container images.
  • Deploy containers.

As an example, an application may have several components, such as an HTTP engine and an HTTP authorization service, each running in its own container. Docker orchestration decides where each container should run, which dependencies it requires, and so on.

There are various frameworks available to perform this Orchestration Process.

Kubernetes ( k8s )

Kubernetes is a popular open-source container orchestration platform. It automates the manual processes (deploying, managing, and scaling) involved in container orchestration by grouping application containers into logical units for easy management and discovery.

Features of Kubernetes

  • Storage Orchestration.
  • Automated rollouts and rollbacks.
  • Service Discovery and Load Balancing.
  • Horizontal scaling.
  • Batch execution.
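A minimal sketch of what some of these features look like from the kubectl command line, assuming a running cluster (the web deployment and nginx image are placeholders):

```
# Deploy an application as a set of containers (Pods)
kubectl create deployment web --image=nginx

# Horizontal scaling: run three replicas of the same container
kubectl scale deployment web --replicas=3

# Service discovery and load balancing: put the replicas behind one service
kubectl expose deployment web --port=80

# Automated rollouts and rollbacks
kubectl set image deployment/web nginx=nginx:1.25
kubectl rollout undo deployment/web
```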

Docker is a persistent technology: files written inside a container’s writable layer remain even after the container stops executing (they are discarded only when the container itself is removed). Docker can be used for both existing applications and new applications. However, an application should fit the right architecture in order to fully use Docker’s capabilities. That means even legacy applications can use Docker, but they may not be able to yield its full benefits.
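A small sketch illustrating this persistence (the image and file names are arbitrary):

```
# The container writes a file and then exits
docker run --name demo ubuntu:22.04 bash -c 'echo hello > /data.txt'

# The stopped container's writable layer still holds the file
docker cp demo:/data.txt ./data.txt

# Only removing the container discards its writable layer
docker rm demo
```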

Open Container Initiative( OCI )

Docker, Inc. introduced the Docker technology in 2013. Another company, CoreOS, started using the concept but found that it did not fit their architectural and business requirements, so they implemented a similar container runtime called Rocket (rkt). Over time, the competing formats created various administrative and technical problems, so the Docker and Rocket camps came to an agreement and formed the OCI.

The OCI is an open governance structure created for the express purpose of establishing open industry standards around container formats and runtimes. It contains two main specifications:

  • Runtime Specification
  • Image Specification

I have used the following video by Mr. Krishntha Dinesh to gather the required information.
