Introduction to Docker and Container-Based Development

Arun Prashanth · Published in Geek Culture · 9 min read · Jun 3, 2021

In the early years, applications were deployed on physical servers. For example, an application server, a web server, and a database server would each run on its own machine: in those days, three physical hardware boxes. This approach brings many problems. We have to maintain the hardware, provide physical space, run separate networks, keep each operating system patched, and pay for all of it. More than anything, it is wasteful, because the app server, web server, and database server will rarely use 100% of the processing power or 100% of the memory of their boxes.

So the next (second) generation was the hypervisor.

What is a Hypervisor?

A hypervisor is hardware, software, or firmware capable of creating virtual machines and then managing and allocating resources to them. Virtual machines are machines set up to use the resources of the host machine. You can divide these resources as many times as you like to accommodate the necessary virtual machine “guests.” (If you’ve heard the term “virtual machine monitor,” you may be curious about the difference between a virtual machine monitor and a hypervisor. They’re the same thing.)

You could have, for example, a PC with 8GB of RAM installed, and a Windows operating system. If you want to run programs requiring Linux instead, you could create a virtual machine running Linux, and then use a hypervisor to manage its resources — for example, allocating it 2GB of RAM. Some of the resources of the host machine would be running the Windows OS, and some would be allocated to the virtual machine running Linux.
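
As a concrete illustration, here is roughly how that allocation looks with VirtualBox's command-line tool (a sketch; the VM name "linux-vm" and the Ubuntu OS type are example values, not from the original article):

    # Create and register a new virtual machine
    VBoxManage createvm --name "linux-vm" --ostype Ubuntu_64 --register
    # Allocate 2 GB of RAM and 2 CPUs to the guest
    VBoxManage modifyvm "linux-vm" --memory 2048 --cpus 2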

Is this a better solution?
No, because it still has some issues.

Problem No 1 — Each virtual machine runs its own operating system, so there is a lot of cost (licensing), plus patching, maintenance, and updates: a lot of management work to do.

Problem No 2 — Let's say we need another web server. We need to create another VM, install another operating system, and then install the web server on it. This process takes a long time to complete.

For all these problems, there is one solution called containerization (Docker). This is the 3rd generation.

Now before we jump right into getting started with Docker/Container, you must first know the difference between Docker and virtual machines. So, let’s begin.

Docker vs Virtual Machines

[Image: Docker container vs. virtual machine architecture, showing the differences between the two]

In the image, you’ll notice some major differences, including:

  • The virtual environment has a hypervisor layer, whereas Docker has a Docker/Container engine layer.
  • With a virtual machine, the memory usage is very high, whereas, in a Docker environment, memory usage is very low.
  • In terms of performance, when you start building out virtual machines, particularly when you have more than one virtual machine on a server, performance degrades. With Docker, performance stays high because all containers share a single Docker engine instead of each running a full operating system.
  • In terms of portability, virtual machines just are not ideal. They’re still dependent on the host operating system, and a lot of problems can happen when you use virtual machines for portability. In contrast, Docker was designed for portability. You can actually build solutions in a Docker container, and the solution is guaranteed to work as you have built it no matter where it’s hosted.
  • The boot-up time for a virtual machine is fairly slow, typically a few minutes, whereas a Docker container boots almost instantaneously, often in milliseconds (see the timing example just after this list).
  • One of the other challenges of using a virtual machine is that if you have unused memory within the environment, you cannot reallocate it. If you set up an environment that has 9 gigabytes of memory, and 6 of those gigabytes are free, you cannot do anything with that unused memory. With Docker, if you have free memory, you can reallocate and reuse it across other containers used within the Docker environment.
  • Another challenge of virtual machines is that running several of them in a single environment can lead to instability and performance issues. Docker, on the other hand, is designed to run many containers in the same environment on a single Docker engine, and copes well as the number of containers grows.
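
The easiest of these claims to reproduce yourself is start-up time. A minimal sketch (assuming Docker is installed and using the small alpine image as an example):

    # Time the full create-run-destroy cycle of a tiny container.
    # On most machines this completes in well under a second.
    time docker run --rm alpine true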

Now that you know the differences between virtual machines and Docker, let's begin this getting-started guide by understanding what Docker actually is.

What is Docker?

Docker is an OS-level virtualization software platform that allows IT organizations to easily create, deploy, and run applications in Docker containers, which have all their dependencies within them. The container itself is really just a very lightweight package that has all the instructions and dependencies, such as frameworks, libraries, and binaries, within it.
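
The quickest way to see this in action is Docker's own smoke-test image (just a sanity check, assuming Docker is installed; hello-world is a tiny official image that prints a message and exits):

    # Pulls the hello-world image if it is not cached, then runs it once
    docker run hello-world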

Advantages of Docker

  1. Return on Investment and Cost Savings
  2. Standardization and Productivity
  3. Compatibility and Maintainability
  4. Simplicity and Faster Configurations
  5. Rapid Deployment
  6. Continuous Deployment and Testing
  7. Multi-Cloud Platforms
  8. Isolation
  9. Security

How Does Docker Work?

Docker works via a Docker engine that is composed of two key elements: a server and a client; the communication between the two is via a REST API. The client sends instructions to the server (the Docker daemon), which carries them out. On older Windows and Mac systems, you can take advantage of the Docker Toolbox, which allows you to control the Docker engine using Compose and Kitematic.
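
You can see this client/server split directly from the command line (assuming a local Docker installation):

    # Prints two sections: 'Client' (the CLI you invoked) and
    # 'Server' (the Docker daemon it talked to over the REST API)
    docker version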

Components of Docker

  1. Docker client and server
  2. Docker image
  3. Docker registry
  4. Docker container

Let's discuss each component in turn.

Docker Client and Server

This is a command-line-driven solution: you use the terminal on your Mac or Linux system to issue commands from the Docker client to the Docker daemon. The communication between the Docker client and the Docker host happens via a REST API. For example, issuing a docker pull command sends an instruction to the daemon, which performs the operation by interacting with the other components (image, container, registry).

The Docker daemon itself is actually a server that interacts with the operating system to perform these services. As you'd imagine, the daemon constantly listens across the REST API for requests it needs to handle. To start the whole process, you run the dockerd command, which launches the Docker daemon. Finally, there is the Docker host, which runs the Docker daemon and registry.
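
Because the daemon exposes a REST API, you can talk to it without the docker CLI at all. A minimal sketch (assuming a Linux host where the daemon listens on the default Unix socket):

    # Query the daemon's version over the raw REST API; this is the
    # same channel the docker CLI itself uses under the hood
    curl --unix-socket /var/run/docker.sock http://localhost/version

    # A high-level command such as 'docker pull' is translated by the
    # client into requests against that same API
    docker pull alpine:latest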

Docker Image

A Docker image is a read-only template that contains the instructions for creating a Docker container. Those instructions are written in a plain-text file called a Dockerfile.

The Docker image is built from the Dockerfile and can then be hosted in a Docker registry. The image has several key layers, and each layer depends on the layer below it. Image layers are created by executing each instruction in the Dockerfile, and they are read-only. You start with your base layer, which will typically contain your base image and base operating system, and then layers of dependencies sit above that.
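
You can inspect those layers for any image you have locally; for example (python:3.9 is just a convenient example image):

    # Lists every layer in the image, newest first, along with the
    # Dockerfile instruction that created it and the layer's size
    docker history python:3.9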

Here we have four layers of instructions: FROM, COPY, RUN, and CMD. What does that actually look like? The FROM instruction creates a layer from a base image, Ubuntu in this example, and the remaining instructions build on top of that base layer.

  • COPY: adds files from your local build context into the image
  • RUN: executes commands while the image is being built, for example to install dependencies
  • CMD: specifies the command to run when a container starts from the image

In this instance, the command is to run Python. As you set up multiple containers, each new container adds its own writable layer on top of the image's read-only layers. Each container is completely separate from the other containers within the Docker environment, so each gets its own read-write layer. What's interesting is that because each layer depends on the one below it, removing a layer also invalidates every layer above it.
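
Putting those four instructions together, a minimal sketch of such a Dockerfile might look like this (the /app directory and main.py script are hypothetical example names, not from the original article):

    # Base layer: start from an official Ubuntu image
    FROM ubuntu:22.04
    # Add files from the local build context into the image
    COPY . /app
    # Build-time command: install the Python runtime
    RUN apt-get update && apt-get install -y python3
    # Command to execute when a container starts from this image
    CMD ["python3", "/app/main.py"]

You would then build and run it with docker build -t my-python-app . followed by docker run my-python-app, where my-python-app is just an example tag.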

What happens if something changes in the base image after you've pulled it? Interestingly, an image cannot be modified in place. Once you've pulled a copy of the image, you can layer local changes on top of it, but you can never modify the actual base image.
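
You can watch this copy-on-write behavior directly. A small demonstration (again using the alpine image as an example):

    # Write a file inside a container; the change lands in the
    # container's writable layer, not in the image
    docker run --name demo alpine touch /tmp/hello

    # 'A /tmp/hello' in the output shows the file added in the container layer
    docker diff demo

    # A fresh container from the same image shows no such file:
    # the underlying image was never modified
    docker run --rm alpine ls /tmp

    # Clean up
    docker rm demo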

Docker Registry

The Docker registry is where you host various types of images and from where you distribute them. A repository is just a collection of Docker images, built from the instructions in a Dockerfile, which makes them very easy to store and share. You can give Docker images name tags so that it's easy for people to find and share them within the Docker registry. One way to get started is to use the publicly accessible Docker Hub registry, which is available to anybody. You can also create your own registry for internal use.
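
Running your own registry is itself just a matter of running a container. A minimal sketch using the official registry image (port 5000 is the conventional default):

    # Start a private registry on localhost:5000
    docker run -d -p 5000:5000 --name registry registry:2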

The registry you create internally can hold both public and private images that you create. The commands you use to interact with a registry are push and pull: a push command uploads an image you've built on your local machine to the Docker registry, and a pull command retrieves an image from the registry, whether that's Docker Hub or your own private registry.
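
In practice, that workflow looks like this (a sketch that assumes the local registry started above and the hypothetical my-python-app image built earlier):

    # Tag the local image with the registry's address
    docker tag my-python-app localhost:5000/my-python-app:1.0

    # Push it to the registry
    docker push localhost:5000/my-python-app:1.0

    # Any host that can reach the registry can now retrieve it
    docker pull localhost:5000/my-python-app:1.0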

Docker Container

The Docker container is an executable package that bundles an application together with its dependencies; it carries all the instructions for the solution you're looking to run. It's really lightweight because, unlike a virtual machine, it doesn't include a full guest operating system. The container is also inherently portable. Another benefit is that it runs completely in isolation: a running container is guaranteed not to be impacted by the host OS's particular security settings or unique setup, unlike with a virtual machine or a non-containerized environment. The memory of a Docker host can be shared across multiple containers, which is really useful compared to a virtual machine, where each environment is assigned a fixed amount of memory.
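
Resource limits make that sharing explicit. A brief sketch (nginx is just a convenient example image):

    # Cap this container at 256 MB of RAM; other containers share the rest
    docker run -d --name web --memory 256m nginx

    # Show the container's live memory usage against its limit
    docker stats --no-stream web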

Linux containers have facilitated a massive shift in high-availability computing, and there are many toolsets out there to help you run services (or even your entire operating system) in containers. Docker is one option among many. The Open Container Initiative (OCI) is an industry standards organization that defines common container formats, meant to encourage innovation while avoiding the danger of vendor lock-in. You have a choice of container toolchains, including Docker, OKD, Podman, rkt, OpenShift, and others.

If you decide to run services in containers, you will probably need software designed to host and manage those containers; this is broadly known as container orchestration. Kubernetes provides container orchestration for a variety of container runtimes.
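
As a taste of what orchestration adds, here is a minimal sketch using the kubectl CLI against an existing Kubernetes cluster (the deployment name "web" and the nginx image are example values):

    # Declare a deployment running the nginx image
    kubectl create deployment web --image=nginx

    # Ask Kubernetes to keep three replicas running; it schedules the
    # containers and replaces any that fail
    kubectl scale deployment web --replicas=3

    # Inspect the running pods
    kubectl get pods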
