Docker Containers 101: A Beginner’s Guide

Jogesh Gupta
6 min read · Apr 26, 2023


Why do we need Containerization?

Ever worked with a peer and heard, “It runs on my system, so the code is most probably error-free”? Ever wondered what is behind such an argument? Maybe you were on a different OS, or maybe your peer had some local configuration that was never passed on to you. Developers pick tools and dependencies according to their own systems while building software. They often forget to update the dependency file (pom.xml / package.json / requirements.txt) because the dependency already exists on their machine, but not on yours. The software may also be OS-dependent and behave differently on different operating systems.

Why not replicate the current work environment and pass it on, as is, to fellow developers? Doing so reduces hardware and system barriers, standardizes software deployments across machines and platforms, and makes our software platform-agnostic.

There are two ways to do so.

1) Containers (Docker)

2) Virtual Machines (VMs)

What do I mean by Containerization?

Containerization: packaging the software code, its dependencies, and the runtime environment into a single box known as a container. This container can be shipped to any system and run without any extra configuration.

Containerization vs. Virtualization

Virtualization enables you to run multiple operating systems (VMs) on the hardware of a single physical server. A virtual machine runs on top of a piece of software known as a hypervisor. Each VM bundles a full guest OS on top of your host OS, and your application is deployed inside that guest OS, consuming your hardware resources accordingly.

Containers, on the other hand, perform virtualization at the operating-system level. They run on top of a container engine (such as Docker or LXC) and share the host kernel instead of shipping a full guest OS.

As a result, they use fewer hardware resources and are lightweight, portable, standalone, easier to manage, and faster to start.

https://www.atlassian.com/microservices/cloud-computing/containers-vs-vms

Suppose an application uses 10 services. Without containers, every developer has to set up each of these services manually, at the same version. With Docker, every service comes already set up and installed in an isolated environment, i.e. a container.

Useful Terminologies

Images: Images are blueprints of a container. Just like an architect designs a blueprint of a building before constructing it, we form an image of the application before running it in a container.

Registries: Storage and distribution systems for Docker images. Official images are available for services such as Postgres, MongoDB, nginx, etc.
Examples: Docker Hub, AWS ECR

Installation:

Head to Docker’s Website and install Docker according to your OS. Then run docker version to check whether it is properly installed.

PS: In case you get a cannot connect to the Docker daemon error, try running sudo systemctl start docker in your shell (on Linux systems using systemd).

Hello World

Let’s begin with the basics.

docker run hello-world

You might be wondering where this hello-world application came from. The hello-world image lives on Docker Hub. Executing docker run runs the image; if the image is not found locally, Docker first pulls it from Docker Hub. To only pull an image without running it, use docker pull image_name.

Docker commands

docker ps : To list all running containers.

docker ps -a : To list all containers on the system, including stopped ones.

docker images : List all images.

docker rm <container-id> : Permanently deletes a stopped container.

docker rmi <flag> <image-id> : Permanently deletes an image; use the -f flag in case a container is still using the image.

docker system prune -a : Clears all stopped containers, unused images, unused networks, and build cache on the system.

docker inspect <container-id> : Returns low-level information such as IP addresses, ports, etc.

docker run -it <image-name> : Runs the image in a new container and attaches an interactive terminal. E.g., docker run -it mongo.
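As a quick illustration, here is how those commands chain together in a typical session. This is a sketch, assuming Docker is installed and the daemon is running; the container name my-mongo is just an illustrative choice.

```
# Pull (if needed) and start an official MongoDB container in the background
$ docker run -d --name my-mongo mongo

# List running containers -- my-mongo should appear here
$ docker ps

# Stop the container, remove it, then remove the image
$ docker stop my-mongo
$ docker rm my-mongo
$ docker rmi mongo
```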

Containerize a NodeJS application

Let’s start with creating a basic API and running it inside a container.

In a new folder, run npm init -y to initialize a new project. Then install express and cors as dependencies.

npm i express cors

Create an app.js file and add the following lines of code inside it

const express = require('express')
const cors = require('cors')
const app = express()
app.use(cors())
app.use(express.json())
app.get("/check",(req,res)=>{
res.status(200).json({message:"Server Running"})
})
app.listen(3001,()=>console.log("Listening on 3001"))

From the project root, run node app.js. Execute a curl command in your terminal or visit http://localhost:3001/check, which will display the message Server Running.

Create a file named “Dockerfile” in your project folder and write the following lines inside it.

FROM node:alpine

WORKDIR /app

COPY package.json .

RUN npm i

COPY . .

EXPOSE 3001

CMD ["node","app.js"]

The Dockerfile is used to create an image of our application. It contains all the commands a user could call on the command line to assemble the image.

FROM node:alpine decides the base image upon which our application will be built. In this example we are using Node.js running on an Alpine Linux distribution. We use Alpine because it is a lightweight Linux distribution. You can also pin the version of Node: FROM node:18-alpine.

Next, we decide on a directory to set up our application in. WORKDIR /app sets the working directory inside the image to /app; all subsequent instructions run relative to it.

The most important aspect of a piece of software is its dependencies. We listed all our dependencies in our package.json file when we ran npm i express cors earlier. COPY package.json . copies package.json into our /app folder, and RUN npm i installs all dependencies inside the image during the build.

The next step is to copy all our code into the /app folder, which is done by COPY . . , meaning: copy from the current directory on the host into the working directory of the image.

Now our application runs inside the container, but if you visit localhost:3001 you won’t see the server-running response, because the container’s port has not been published to the host. EXPOSE 3001 documents which port the container listens on for incoming requests; to actually reach it from the host, you still map the port with -p when running the container.

CMD ["node", "app.js"] is used to start our application when the container launches.

Query: Why not use RUN ["node", "app.js"]?

Answer: A RUN instruction is executed when an image is built. But do you start constructing a building while drawing its blueprint? No. Similarly, CMD specifies the instruction to execute when a container is started from the image.
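A minimal sketch of the distinction, using the same Dockerfile as above: the RUN line executes once at build time and its result is baked into the image, while the CMD line only records what to execute each time a container starts.

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package.json .
# RUN executes during `docker build`; the installed modules are baked into the image
RUN npm i
COPY . .
EXPOSE 3001
# CMD does not execute at build time; it runs on every `docker run` of this image
CMD ["node", "app.js"]
```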

Let’s proceed to creating an image from the file. Execute docker build -t node-app . where node-app is the image name and . signifies the relative location of the Dockerfile.

To run the image execute docker run -p 3001:3001 -it node-app .

This spins up our container. -p maps the port exposed by the Docker container to a TCP port on our system. Now curl or visit http://localhost:3001/check. This time your API is boxed inside a container.
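The two numbers in -p need not match. A sketch, assuming the node-app image built above: map the container’s port 3001 to a different host port, say 8080, and the API answers on the new host port (the name node-app-remap is just illustrative).

```
# host-port:container-port -- the app still listens on 3001 inside the container
$ docker run -d --name node-app-remap -p 8080:3001 node-app

# reach it through the remapped host port
$ curl http://localhost:8080/check
```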

Now share this image with your friends, who can start containers from it and use your application with zero configuration effort.

Docker Ignore

When you install dependencies, a node_modules folder is generated, which can run to hundreds of megabytes. We would like to avoid copying such a large folder into the image, since it is regenerated anyway by the npm i step already specified in our Dockerfile. So let’s exclude node_modules from the files copied into the Docker image.

Create a .dockerignore file in the root directory and write:

node_modules

The COPY . . step now skips the node_modules folder, which decreases our build time.
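That step can be done in one line from the shell. A sketch: the extra entries (npm-debug.log, .git) are common additions worth ignoring too, not requirements from this article.

```shell
# Write a .dockerignore excluding node_modules and other local-only files
printf 'node_modules\nnpm-debug.log\n.git\n' > .dockerignore

# Show what will be excluded from the build context
cat .dockerignore
```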

Conclusion:

In this article, we talked about the need for containers, why we prefer containers over VMs, basic Docker commands, and containerizing a custom application and running it in a container.

Share this article with your peers. The next article will cover docker-compose, volumes, caching, and more. Reach out to me on GitHub or LinkedIn.

Drop me a line in the comments if I’ve made any mistakes or can be helpful in any way.

Also check out Guide to Docker Volumes — How to Use Volumes with Examples

Thank You,
