Node.js Application Containerization: A Step-by-Step Docker Guide

Tameem Rafay
5 min read · Mar 10, 2024


In this article, we will walk through how to containerize a Node.js application. First, let's look at the benefits containers provide and why it is worth converting a Node.js application into a container.

Benefits of Containers

1. Portability

Containers encapsulate an application together with its dependencies and configuration, so to run the application you no longer need to work out which services, dependencies, or Node.js version must be installed first.

This makes the application far more portable. For example, if your application needs Redis for caching and Nginx as a reverse proxy, those dependencies travel with the container. You can also run it on different servers, Windows-based or Linux, without extra setup, because the container already carries everything the application requires.

2. Scalability

Containers are lightweight, so if your application receives heavy traffic you can scale it easily by running more containers of your application in the cluster.

If you are working with a microservices architecture, containers make it easier to adopt: you can deploy each service separately, which also keeps your services loosely coupled.

Imagine having to deploy 100 services, each with its own dependencies, without containers. Containers ease this deployment process enormously, because each one encapsulates its dependencies and we just have to deploy it.

Difference between a container and an image

Before starting the implementation, let's clarify the difference between a container and an image. They are closely related, with only a slight difference between them.

We won't go in depth here; just remember that an image is a lightweight package that has everything needed to run the application and is created from a Dockerfile, while a container is a runtime instance of an image.
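In Docker CLI terms, docker build produces an image and each docker run starts a fresh container from that image. A minimal illustration (the image name here matches the one we use later in this article):

# Build an image (the blueprint) from the Dockerfile in the current directory
docker build -t my-node-app-image .

# Start a container (a running instance of that image)
docker run my-node-app-image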

Containerizing the application

Install Docker Desktop

With Docker installed on the machine, we need a Node.js application to containerize. Ours is not a complex application: it exposes a single GET endpoint that we can hit to check the application still works after containerization.
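For reference, a minimal index.js along these lines might look like the sketch below. It uses Node's built-in http module; the actual code in the GitHub repository may differ (for example, it may use Express).

// index.js — a single GET endpoint to check the app is alive
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Listen on the port the Dockerfile will expose
server.listen(8000, () => console.log('Server listening on port 8000'));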

The Dockerfile below builds the Docker image for our Node.js application; it is the same file that lives in the GitHub repository.

# Base image
FROM node:21.6.1

# Set the working directory inside the container
WORKDIR /app

# Add the dependency manifests first (or use COPY . . to copy everything)
ADD package.json package.json
ADD package-lock.json package-lock.json

# Install the packages (this must come after the manifests are copied)
RUN npm install

# Add the application code
ADD index.js index.js

# Expose the port so that we can reach the application on this port
EXPOSE 8000

# Run the application
CMD ["node", "index.js"]

Let’s break down each part of the file:

Base Image

Choose the base image depending on the type of your project. Since we are packaging a Node application, we can use node as the base image.

That does not mean a Node application must use node as its base image. You could choose ubuntu instead, but then you would need to install Node explicitly before running your application.

This is what the Dockerfile would look like if I chose Ubuntu as the base image.

FROM ubuntu:20.04

# Update package lists and install Node.js and npm
# (the Ubuntu 20.04 distro packages ship a fairly old Node version)
RUN apt-get update && apt-get install -y \
    nodejs \
    npm \
    && rm -rf /var/lib/apt/lists/*

# Set the working directory inside the container
WORKDIR /app

# Install dependencies first so this layer can be cached
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

EXPOSE 8000
CMD ["node", "index.js"]

So, to keep things simple, I will go with node as the base image.

.dockerignore file

The .dockerignore file is useful when you use the COPY . . command in the Dockerfile. In the Dockerfile above, I copied three files individually: package.json, package-lock.json, and index.js. If I wanted to copy all the project files instead of listing them one by one, I would use COPY . . to include everything in the Docker build context.

Any files that I don't want copied into the container go in the .dockerignore file.

# Ignore these files

node_modules
Dockerfile
.gitignore
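With the .dockerignore in place, the individual ADD lines can be replaced by a single COPY . . while still keeping node_modules out of the image. A sketch of that variant:

# Base image
FROM node:21.6.1
WORKDIR /app

# Copy the manifests first so the npm install layer stays cacheable
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the project; .dockerignore excludes node_modules,
# Dockerfile, and .gitignore from the build context
COPY . .

EXPOSE 8000
CMD ["node", "index.js"]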

Layer Caching in Dockerfile

Layer caching refers to the caching mechanism Docker uses during the build process to avoid rebuilding parts of an image unnecessarily. To understand layers, remember that each instruction in a Dockerfile creates a layer.

By default, Docker caches the result of each layer so that when we re-run the docker build command, it reuses the cached result for every step whose inputs have not changed. This speeds up the build considerably.

Optimizing layer caching

Ordering the Dockerfile commands well improves build times during iterative development. In a typical loop we change the code, create a build, and deploy it, so a well-written Dockerfile pays off on every rebuild.

Example of Unoptimized Dockerfile

The file below is not optimized: any change to index.js invalidates the cache of that layer and of every layer that comes after it.

In particular, a change to index.js invalidates the cached output of RUN npm install, so Docker reinstalls all the packages, new and old alike. To avoid this, we can move RUN npm install before ADD index.js, as in the Dockerfile at the start of this article.

# Base image
FROM node:21.6.1

# Add the files to the container (a change to index.js invalidates all layers below)
ADD package.json package.json
ADD package-lock.json package-lock.json
ADD index.js index.js

# Install packages (re-runs whenever index.js changes, because of the ordering above)
RUN npm install

# Expose the port so that we can run the application on this port
EXPOSE 8000

# Run the application
CMD ["node", "index.js"]

Create the image

The command below creates the image of our Node.js application, which can then be deployed anywhere with ease.

# Syntax: docker build -t <image_name>:<tag> <path_to_build_context>
docker build -t my-node-app-image:v1.0 .

# Run the image as a container, mapping host port 8000 to container port 8000
docker run -p 8000:8000 my-node-app-image:v1.0
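Once the container is up, we can hit the GET endpoint to confirm everything survived containerization (assuming the app responds on the root path):

# List the running containers
docker ps

# Call the endpoint exposed by the container
curl http://localhost:8000/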

docker-compose.yml file

We have seen that the Dockerfile creates the image of the application. Now imagine your project has multiple services to run, e.g. Redis for caching the data; you can define all of these services in a docker-compose.yml file.

Our current Node.js application requires two containers: one for the application itself and one for the Redis cache it uses. Inside the Compose network, the app can reach Redis using the service name redis as the hostname, on port 6379. Here is the docker-compose file.

version: '3.8'

services:
  my_node_app:
    build: .
    ports:
      - "8000:8000"

  redis:
    image: redis
    ports:
      - "6379:6379"

Command to run the docker-compose file

The docker-compose up -d --build command builds the Docker images and starts the services defined in the Compose file.
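Run it from the directory that contains docker-compose.yml:

# Build the images (--build) and start all services in detached mode (-d)
docker-compose up -d --build

# Check that both services are running
docker-compose ps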

In our next article, we will look at how to deploy this image on AWS ECS. If you enjoyed this article, please feel free to like, share, and subscribe! Your support is greatly appreciated.
