Introduction To Docker

Have you ever been through scenarios where an installation throws errors because the dependency versions aren’t compatible with what your software needs?

What if you want to spin up another clean server without managing additional physical hardware? Well, for that we have Virtual Machines, right? But managing VMs is often a tedious and slow process.

To tackle these problems, we have containers, which are more flexible and create isolated environments. They package your application together with its dependencies and the OS user space it needs. Guess what…they resolve the dependency mess and are light and easily portable. And there we have Docker…the hero of the container era.

Prerequisites:

  • Virtual Machines: https://whimsical.com/virtualmachines-FTkxQv6iTSMZPoF1Dai3Xc
  • CI/CD pipeline: https://whimsical.com/ci-cd-pipelineasadockerprerequisite-XN7V4KP1Gg3kFr2wpWK7x@2Ux7TurymMiaGJrm7caA
  • Express/Node: https://expressjs.com/en/starter/hello-world.html

Brief Introduction to Docker

Docker is a platform for developing, testing, and shipping applications using containerization, i.e. providing a loosely isolated environment, which is called a container.

Multiple containers can be built from a single “template” that acts like a custom server for our application, i.e. we can manage the software and dependencies on it.

Docker is significant in Agile development since it provides this custom server, often called a staging server, where Continuous Deployment takes place after the CI pipeline. We simulate the Production server using a Docker container, i.e. one with the same system architecture, apps, and dependency configs, and run the automated CI/CD pipeline against it. This helps catch dependency bugs early, rather than testing on a development server and hitting them directly in Production. And since containers reduce the complexity we face with VMs, Docker is leading DevOps.

Before going into applications, let’s dig into how containers work.

VMs vs Containers

VMs vs Containers (recreated with reference to https://www.weave.works/blog/a-practical-guide-to-choosing-between-docker-containers-and-vms)

Virtual Machines (VMs) come under hardware virtualization, where we have a ‘Hypervisor’, a monitor that allocates hardware resources to multiple guest kernels. VMs are cheap and manageable, but there are certain drawbacks to spinning them up:

  • Each VM is an entire instance of an OS. Every instance needs its own copy of that OS, so initializing one is a time-consuming process
  • Much of the allocated memory gets wasted in the case of small services
  • Running several VMs is challenging on machines with low memory and few processor cores

So there comes a newer concept: OS virtualization.

In OS virtualization, there’s a single kernel, that of the host machine, and through virtualization software the host kernel allocates instances of user space, i.e. the system memory required to run applications. These instances thus simulate isolated environments and are called ‘Containers’. Since containers share the host kernel, and often some libraries, they don’t require a complete clone of the OS and are thus initialized quickly. If a container crashes, it can be restarted in no time. Docker is a container development environment and runtime, plus a vast public registry for managing container images.

Containers are fast to start and easily portable

Docker works on a well-known image architecture. An image is an immutable template that contains application files, dependencies, system tools and libraries, configuration, etc., used to build a container. A container is an actual running instance of the environment created from that image. Note: the image is not a single file.

If you want containers for a different OS, you need a VM running that OS’s kernel, since containers share the host kernel. However, this isn’t much of a drawback, since containers simulate an OS environment, not the underlying hardware. So shipping images and using them for testing plus deployment is always easier than shipping VMs.

Docker Architecture

Docker uses a client-server architecture in which the two sides interact through a REST API. The Docker daemon (dockerd), a service running on your host as part of the Docker Engine, is responsible for managing containers by listening for API requests. Docker builds on Linux namespaces, which you can imagine as scope-preserving divisions of hardware and processes in the kernel. If a Docker container crashes, it can be instantly replaced by booting another one from the image, enabling a rapid system restore.

Let’s get onto the dock, and sail the ship!

For the tutorial, I’ll work on Windows, but the process will be the same for Linux and Mac users. If you face any performance issues on Mac, then you can always have a Linux VM underneath.

From https://www.reddit.com/r/ProgrammerHumor/comments/d0ck7i/docker_on_mac/

Installation

To create and manage containers, we need Docker Desktop, which bundles the Docker Engine, the CLI client for interacting with the daemon, and more.

Head over to https://docs.docker.com/get-docker/ and install it.

WSL 2 provides its own Linux kernel for Windows, so you can run both Windows and Linux containers. Follow Step 4 from https://docs.microsoft.com/en-us/windows/wsl/install-manual#step-4---download-the-linux-kernel-update-package.

Running ‘docker --version’ in the CLI will verify a correct installation.

Now let’s sail the ship!

Suppose we have an application, DockerProject, which just has an Express server sending text as a response on the homepage. It’s alright if you aren’t familiar with Express/Node.

Express Server listening to port 8000
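
In case you want to follow along, here’s a minimal sketch of what that server could look like. Only the filename server.js and port 8000 come from this tutorial; the response text is made up:

 // server.js: a minimal Express app sending text on the homepage
 const express = require('express');
 const app = express();

 app.get('/', (req, res) => {
   res.send('Hello from Docker!'); // illustrative response text
 });

 // the tutorial maps port 8000, so we listen there
 app.listen(8000, () => console.log('Listening on port 8000'));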

To dockerize this project, we need an image from which we can create containers. To build this image we add a ‘Dockerfile’ to our app: create a file named exactly ‘Dockerfile’, with no extension.

Now we’ll write the instructions in the Dockerfile required to build the image. A Docker image consists of layers, each layer built from one instruction in the Dockerfile; layers are cached and reused, which improves build performance.

To create the image of our application, we (generally) need a base image of an OS, over which our instructions operate to produce a new image that has everything: OS, dependencies, application, etc.

DockerHub is a huge pool of Docker images, just like GitHub for code; we can pull and push images there.

Running ‘docker images’ in the terminal lists the details of all the images present locally, including any pulled earlier.

For our Node application, we may use the ‘node’ image, which is built over an Alpine Linux image. Alpine is one of the lightest distributions of Linux, and it’ll keep our image lightweight too.

To set that image as the base image,

FROM node:14-alpine

where 14-alpine is a tag. A Dockerfile should always start by setting a base image for subsequent instructions using FROM, although there are exceptions (an ARG may precede FROM, for instance).

This instruction initializes a new build stage, i.e., in a CI/CD pipeline, an environment for steps like installing dependencies and tools, and compiling, for automation. We can have multi-stage builds by adding multiple FROM instructions in a Dockerfile, but that’s an advanced topic.

The LABEL instruction adds metadata to our image.
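
For example, a hypothetical label (the key-value pairs are made up for illustration):

 LABEL version="1.0" description="Express demo app"

We’ll see these labels again when inspecting the container later.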

Now we want to put our application into the filesystem of the container that we’ll build using this image.

COPY instruction

 COPY <source> <destination>

e.g. ‘COPY . /app’ copies the contents of our main project directory to the /app directory of the container’s filesystem.

Now we just have to provide the instruction with which we can run our application. We use the CMD command in exec form

 CMD ["executable", "argument1", "argument2"]

as CMD ["node", "/app/server.js"].

The exec form, unlike the shell form, doesn’t invoke a shell (/bin/sh -c) to run the command, which skips shell processing and improves performance; note that the exec form is parsed as JSON, so it needs double quotes. The shell form has the syntax CMD <command>, e.g. CMD echo ‘shell form’.
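
Side by side, for our server (a sketch; in a real Dockerfile only the last CMD takes effect, as noted below):

 # exec form: node runs directly as the container's main process
 CMD ["node", "/app/server.js"]
 # shell form: the same command, but run via /bin/sh -c
 CMD node /app/server.js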

Note: only the last CMD instruction in a Dockerfile takes effect, since it supplies the default command (and parameters) for running the container, and it can be overridden from the command line.

The ‘WORKDIR’ instruction behaves like cd (creating the directory first if needed, like mkdir). Relative paths in subsequent instructions are resolved against that working directory. You can additionally do ‘RUN npm i’ before the CMD instruction to install dependencies, as shown in the sketch below.
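
Putting the instructions together, a minimal Dockerfile for this project might look like this (a sketch assuming server.js sits at the project root):

 # base image: Node.js 14 on Alpine Linux
 FROM node:14-alpine
 # optional metadata
 LABEL version="1.0"
 # subsequent relative paths resolve against /app
 WORKDIR /app
 # copy the project into the container's filesystem
 COPY . /app
 # install the Node dependencies inside the image
 RUN npm i
 # default command: start the Express server
 CMD ["node", "/app/server.js"]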

Building the image

We use the docker build command to build an image. Additionally, we specify a name for the image, as we’ll use it when running the container.

docker build -t  <image_name>:<tag-optional>  <path>

Run ‘docker build -t mydockerproject .’, where -t says that ‘mydockerproject’ is the complete tag of the built image and ‘.’ is the build context, the path where Docker looks for the Dockerfile. A complete tag has the format <name>:<tag>, and by default the <tag> is ‘latest’.

Building and running the container

We use the ‘docker run’ command to start our container using the image that we just built.

docker run  <flags_and_parameters_if_any>  <imagename>:<tag>

The -d flag runs our container in detached mode rather than in interactive mode. This means our cmd/terminal isn’t attached as the container’s shell and remains at our use; the container’s shell runs in the background.

The -p flag maps ports from our local machine to the container’s. The parameter 8000:8000 maps our host’s port 8000 to the container’s port 8000.
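
Combined, the command for our image might look like this (using the name we built above):

 docker run -d -p 8000:8000 mydockerproject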

The ‘3000:8000’ mapping I’m using is just for the demo and isn’t something you need to copy. Head over to http://localhost:8000/ (localhost:3000 in my case).

Resource loaded on localhost:3000

On the Docker dashboard:

You can stop the daemon from giving a random name to the container by using the --name flag, as shown below, where ‘firstondock’ is the container name. Here I’ve combined the -d and -p flags.
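
That is, reusing the port mapping from before:

 docker run -d -p 8000:8000 --name firstondock mydockerproject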

Stopping the container

To stop the container, use the Docker dashboard or simply run

docker stop <container_name>

Displaying Containers through CLI

We should also learn the CLI way of listing containers, as it’s necessary when building CI/CD pipeline jobs.

‘docker ps’ displays the details of the containers currently running, while adding the ‘-a’ flag:

docker ps -a
#or
docker container ls -a

lists all existing containers, including stopped ones.

Seeing Labels

 docker inspect <container_id>

to see all the metadata of the container as JSON, including the labels, even those inherited from the base images we pulled.
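
If you only want the labels, docker inspect also accepts a Go template via the --format flag; a sketch (substitute your container’s ID, and note quoting may differ on Windows shells):

 docker inspect --format '{{json .Config.Labels}}' <container_id>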

Displaying labels

ENTRYPOINT Vs CMD

ENTRYPOINT is another instruction for the CLI executable command, with the syntax ENTRYPOINT ["executable", "parameter1", "parameter2"] in exec form. The difference from CMD is that ENTRYPOINT isn’t overridden by the arguments passed to docker run; those arguments are appended to the ENTRYPOINT command as parameters instead.

Note that, just like CMD, only the last ENTRYPOINT instruction in a Dockerfile takes effect. The key benefit is combining the two: ENTRYPOINT fixes the executable, while CMD supplies default arguments that can be overridden at run time.
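
For our server, the combined pattern might look like this (a sketch; other.js is a hypothetical script):

 # the executable is fixed...
 ENTRYPOINT ["node"]
 # ...while the default argument can be overridden at docker run
 CMD ["/app/server.js"]

Running the container with no extra arguments executes node /app/server.js, while ‘docker run <image> other.js’ swaps only the argument.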

Pushing the Image to DockerHub

DockerProject with modifications
Dockerfile

Let’s make minor changes: use moment (a Node package) in the app, and add the command below to the Dockerfile to install the Node package dependencies in the image. RUN executes commands in a new image layer.

 RUN npm install

Log in to DockerHub and create a repository. One repository may hold multiple images, distinguished by tags. After building, to push an image, its tag must include both your username and the repo name.

docker build -t <username>/<reponame>:<tag> <path>
Building image with username, repo-name, and tag included

Or you can simply retag an existing image with the ‘docker tag’ command.
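
For instance, a sketch reusing the earlier image and the repo naming convention:

 docker tag mydockerproject <username>/<reponame>:docker-tut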

We added the extra tag ‘docker-tut’ so that we can keep multiple images in that repo in the future. Then we run

docker push <complete-tag-of-image>

Since that tag follows the <username>/<reponame>:<tag> convention, the image will be pushed to the given repo under your namespace, with the tag mentioned.

Pushing image to DockerHub with additional tag
Don’t push your photos on DockerHub like SpongeBob XD.

Pulling Image from DockerHub

We can pull base images, or even whole project images, from DockerHub and work with them easily. Here we’re pulling another lightweight Linux distro: Debian.

Pulling Debian image of tag ‘latest’

Running

docker pull <image_name>:<tag>

brings a copy of the image down from DockerHub.
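
For the Debian example above, that would be:

 docker pull debian:latest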

Conclusion

Docker containers are efficient in terms of portability and provide isolation, which also helps security. Docker integrates well with CI/CD tools like Jenkins for Continuous Deployment, with the added benefit of running tests in parallel. Overall, it enhances the Agile software development lifecycle.

Now that you’ve got enough grip on Docker containers, you’re free to automate your project with it.

Also, now you’re capable of understanding Docker memes on the internet XD.
