
Containerization and using Docker

Creating our own container with Docker

Utkarsh Tripathi
Google Developer Student Clubs TIET
8 min read · Jul 10, 2022


Containerization has become a major trend in software development. It involves packaging the software's code, its dependencies, and its runtime environment into a single box known as a container. The resulting container is platform-agnostic, which means it can run on any OS distribution and infrastructure without changing the software's configuration.

But wait, what is the need for containerization?

The typical answer you may have heard is the classic dilemma of "It works on my machine." When developing software, developers use tools and dependencies suited to their own systems, and there is no problem with that as long as the software runs perfectly fine, right? Yes and no. If someone else runs that software on a similar machine, it will work fine, but on a different setup the program will not necessarily behave the same way. For example, software developed on Windows may not run properly on Linux or macOS, since they have different underlying architectures. The problem is more pronounced in bigger codebases with hundreds of dependencies. Containerization eliminates it by encapsulating the code and its configuration into a single software package: a portable, lightweight, standalone unit that can run on any host that has a container engine (like Docker).

Virtual Machines vs. Containers

First, let us talk about virtual machines (VMs) and running our application in them. A VM runs on top of emulation software called a hypervisor, which manages the sharing of resources between the VMs on the system. Each VM is given virtualized hardware resources such as CPU, RAM, disk, and networking, and runs our software on top of a full guest OS. As a result, VMs are slow to start and not very portable.

Figure: VMs vs. containers (credits: Atlassian)

Containers, on the other hand, virtualize only the operating system. All containers run on top of a container engine (like Docker) and share the host's single OS kernel. They require far less memory and computing power and are much more portable and lightweight.

It takes much less time to spin up a container than start a virtual machine.

Container Engine — Docker

A container engine is the piece of software that packages an application and its dependencies into a box called a container, and runs it.


Docker, launched in 2013, jump-started the container revolution. It is the most popular engine for creating images and containers, and the company behind it also offers platform-as-a-service (PaaS) products. Beyond building images, it provides many more features, like sharing and managing images and pushing them to a public registry to distribute your application.

Other container engines include Podman, LXD, and containerd.

Installing Docker on your Machine

To install Docker on Windows, you first have to enable WSL 2 (Windows Subsystem for Linux 2) on your system.

For Windows 11 users run (Open the terminal with admin access):
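On Windows 11 (and recent builds of Windows 10), a single command installs WSL 2 along with a default Ubuntu distribution:

```shell
# Installs WSL 2 and a default Linux distribution (Ubuntu) in one step
wsl --install
```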

For users with an older system or who want to manually enable it, run (with admin access):
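On older builds, the WSL and Virtual Machine Platform features can be enabled individually; a reboot is typically required before setting WSL 2 as the default version:

```shell
# Enable the WSL feature
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
# Enable the Virtual Machine Platform feature (required by WSL 2)
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# After rebooting, make WSL 2 the default version
wsl --set-default-version 2
```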

Now, head to Docker's website and install Docker Desktop for your system (Windows, Linux, macOS). Both the GUI and the CLI will be installed. Run docker version to check that it is properly installed.

Hello World with Docker

Let's start with tradition and run hello world on Docker.
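Assuming Docker is installed and running, the traditional first command is:

```shell
# Pulls the hello-world image from Docker Hub (if not cached) and runs it;
# it prints a greeting such as "Hello from Docker!" and exits
docker run hello-world
```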

Hello World image in docker

Behind the scenes, this command pulls the hello-world image (the package containing all the files and configuration) from Docker Hub and runs it. The docker run command pulls an image if it doesn't exist on our system, then runs it. To only pull an image from the registry without running it, we can use docker pull hello-world.

Basic Docker Commands

To see all the images we have in our system, we can go to the Docker Desktop app or simply use the CLI command docker images.

Seeing all existing images in docker

To see all running containers, you can go to Docker Desktop or simply run docker ps; to also list stopped containers, run docker ps -a.

Let us try another example and run Ubuntu in a container rather than installing the full operating system. It is the easiest way to use a Linux terminal and build applications on top of Ubuntu. To run the container:
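The command pulls the official ubuntu image and drops you into its shell:

```shell
# -i keeps STDIN open, -t allocates a terminal: together they give an interactive shell
docker run -it ubuntu
```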

This will pull the image and start the container. The -it flag runs the container in interactive mode and attaches a Linux terminal to it.

Running a basic command in the Linux terminal

You can see this stopped container with docker ps -a, and start a fresh one with docker run -it ubuntu (note that docker run always creates a new container; use docker start -ai <container-id> to resume a stopped one).

To remove a container, copy the container id and run:
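Pass the ID to docker rm (a shortened ID prefix also works):

```shell
# Removes a stopped container; add -f to force-remove a running one
docker rm <container-id>
```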

Similarly, you can remove images using docker rmi <image-id>. You must first remove any containers (running or stopped) created from an image before removing it.

To see all docker commands, visit this website.

Let's create an image of an application

Lastly, let's create a simple node API and containerize it.

We will create a simple Node and Express server that returns some data, and then create an image of the application. Create a new directory and initialize it with npm init -y (you should have Node.js installed on your system). Then install express and cors in this application.
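The setup steps above, as shell commands (the directory name node-api-demo is just an example):

```shell
mkdir node-api-demo && cd node-api-demo   # hypothetical project name
npm init -y                               # creates package.json with defaults
npm install express cors                  # the two dependencies used below
```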

Create an index.js file and write a basic server, or copy the code below.
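A minimal sketch of such a server; the /hello route and port 4000 follow the behaviour described in this article:

```javascript
// index.js — a minimal Express server for the demo
const express = require("express");
const cors = require("cors");

const app = express();
app.use(cors()); // allow cross-origin requests

// GET /hello returns a small JSON payload
app.get("/hello", (req, res) => {
  res.json({ message: "Hello World" });
});

app.listen(4000, () => {
  console.log("Server listening on http://localhost:4000");
});
```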

Run the app by executing node index.js. Visiting localhost:4000/hello will return the JSON response {message:"Hello World"}.

Now, to create the image, create a Dockerfile in the root directory of the project. The file is named Dockerfile, with no extension.
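A Dockerfile matching the step-by-step breakdown in this article might look like:

```dockerfile
# Base image: Node 18 on Alpine Linux
FROM node:18-alpine

# All further instructions are relative to /app
WORKDIR /app

# Copy the manifests first so dependency installation is cached
COPY package.json package-lock.json ./

# Install dependencies at build time
RUN npm install

# Copy the rest of the source (node_modules excluded via .dockerignore)
COPY . .

# The server listens on port 4000
EXPOSE 4000

# Command executed when a container starts
CMD ["node", "index.js"]
```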

This is the Dockerfile to create an image of our application. Let us break it down and understand what is happening; the file is simply the list of instructions used to build the image.

  • In the first line, FROM node:18-alpine, we specify the parent (base) image, i.e. the build environment we want to use. Since it's a Node application, we use the official node image, version 18, on the Alpine Linux distribution; we could also request the latest version with node:latest.
  • Next, we set the working directory for the code. We could use the root directory, but that can sometimes cause conflicts, so we use /app, which is created for us; all further instructions are relative to this directory.
  • Next, we copy package.json and package-lock.json into the working directory. Note the . at the end: since all paths are now relative to /app, the . means "copy these files here".
  • We run npm install with the RUN instruction to install all packages into the image while it is being built.
  • Now we copy all the files from the project's root directory into the image's /app directory. At this point, create a .dockerignore file in the project root and add node_modules to it, to prevent that folder from being copied into the image. This file works like .gitignore.
  • Next, we expose port 4000 to make the API accessible; remember, our Node application listens on port 4000. We expose this port because containers run in isolation and we cannot directly access their ports otherwise.
  • Finally, we provide the start command using CMD, written as an array of the command and its arguments. Notice we use CMD instead of RUN: RUN executes commands while the image is being built, while CMD is executed when a container starts.
Directory Structure

Now to build the image, we run the following command
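A sketch of the build command (node-api is a hypothetical tag name):

```shell
# -t tags the image; the trailing dot sets the build context to the current directory
docker build -t node-api .
```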

This command takes the image tag (via -t), which is later used to reference the image, and a . at the end that sets the build context to the current directory, where our Dockerfile lives.

To run the image, execute the following command in your terminal:
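Assuming the image was tagged node-api:

```shell
# -p host:container maps the container's port 4000 to port 4000 on the host
docker run -p 4000:4000 node-api
```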

This command spins up a container from the image; the -p flag maps the port exposed by the container to a TCP port on our system. We can again visit http://localhost:4000/hello and see the result, but this time our app is running inside an isolated container.

In this way, we can create images and start containers. We can easily share these images with friends and colleagues, who can run them and get exactly the same result as we do.

It removes the hassle of cloning a GitHub repository and setting up the environment just to see an application running; by containerizing our applications, we speed up the sharing process.

Parting Words

With this, we come to the end of this article, where we talked about containerization, using Docker, basic docker CLI commands, and how to containerize our own applications using Dockerfiles. Containerization is an important part of software development and of scaling our solutions, and a vital aspect of DevOps.

Share this article with your friends and stay tuned for more exciting articles. In the next article, I will talk about creating a docker-compose file, spinning up multiple container services at once, and creating microservices with Docker. Until then, follow me on LinkedIn and GitHub, try creating some images with what you have learned, keep exploring, and share your images on Docker Hub and in the comments. Thank you 😊 👋!!
