How to build a Node.js and MongoDB Application with Docker Containers

Engineering@ZenOfAI · Published in ZenOf.AI · May 24, 2019

Introduction to Docker

Docker is the world’s leading software container platform. It is a tool designed to make it easier to create, deploy, and run applications using containers, and it makes deployment simpler and more efficient by resolving many of the common issues around it.

Why Containers?

Containers solve the problem of code that works in one operating system or computing environment but not in another. Typical reasons for this are missing dependencies, or software and dependency versions that are out of date on some of those systems.

An application wrapped inside a Docker container can run on any system that has Docker installed. Docker serves as the common platform: each container bundles the binaries and libraries required by the application running inside it. This way a Docker container has nothing to do with the underlying hardware or operating system and can run in any computing environment, provided Docker is installed on it.


Benefits of Docker:

  • Portability:
    Runs on any platform, such as a local system, Amazon EC2, Google Cloud Platform, or VirtualBox.
  • Version control
  • Isolation:
    A container does not interfere with other applications running on the same system.
  • Security

For more information please visit: Docker and Containerisation vs Virtualisation

Understanding Docker Terminology

Step-by-step Docker workflow for developing a Docker container

Dockerfile:

  • A Dockerfile is a text document that contains all the commands (a set of instructions) that are executed to build an image. For example:
FROM alpine:latest
RUN apk add --no-cache nodejs npm
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 8080
CMD ["node", "app.js"]
  • FROM sets the base image that the following layers are built on, for example an OS layer.
  • RUN executes commands while building the image, such as installing the application and the packages it requires.
  • COPY adds files from the Docker client’s current directory into the image.
  • EXPOSE informs Docker that the container listens on the specified network port at runtime.
  • CMD specifies the command to run when the container starts.

Note: the difference between RUN & CMD (illustrated below) is

  • RUN commands execute while the image is being built.
  • CMD is the command that runs inside a container instantiated from that image.
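As a minimal illustration (the image name sample-image is just a placeholder):

#RUN instructions execute once, while the image is built
docker build -t sample-image .
#CMD executes each time a container is started from that image
docker run sample-image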

Docker Image:

An image is a combination of a file system and parameters. Docker images are templates used to create Docker containers: the docker run command creates an instance of an image, and that running instance is called a container.

Docker Container:

A Container allows a developer to package up an application with all of its libraries and other dependencies into a single standardized unit, so the application can run quickly and reliably from one computing environment to another.

Each container (an instance of a Docker image) includes the following components:

  • An operating system selection, for example, a Linux distribution, Windows Nano Server, or Windows Server Core.
  • Files added during development, for example, source code and application binaries.
  • Configuration information, such as environment settings and dependencies.

Docker Hub:

Docker Hub is a cloud-based repository in which users and partners can create, test, store, and distribute container images. Through Docker Hub, a user can access public, official image repositories, as well as create private repositories, automated builds, webhooks, and workgroups.
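For example, pulling an official image and pushing one of your own might look like this (here <username> is a placeholder for your Docker Hub account, and pushing assumes you have signed in with docker login):

#pull the official mongo image from Docker Hub
docker pull mongo
#push your own image to your Docker Hub repository
docker push <username>/login-app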

Case Study: Integrate a Node.js + MongoDB application with Docker

Let’s look at how to integrate a Node.js application with Docker containers. For this tutorial, I have built a sample login application using MongoDB. Source files: https://github.com/Rammohan-bitzop/login-app

Steps involved:

  1. Set up your Node.js application.
  2. Create a Dockerfile for each service.
  3. Define the services using a Compose file.
  4. Run docker-compose to build and run the application.

Step 1: Set up a Node.js application

I have created a sample login application using Node.js and MongoDB. First I will run it locally and check that it works properly. The application consists of two services, so let us start both of them.

Start node

Start mongo

My application is running at localhost on port 7500.
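For reference, starting the two services on the host might look roughly like this (assuming MongoDB is installed locally and app.js is the application’s entry point, as in the Dockerfile below):

#start the MongoDB server
mongod
#in another terminal, install dependencies and start the Node.js app
npm install
node app.js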

Now let us dockerize this application.

Step 2: Create a Dockerfile for each service

  • A Dockerfile can be created in the project directory itself or outside it (in which case the path to the source files has to be provided).
  • I will create the Dockerfile in the project directory itself.
  • Creating a Dockerfile is as easy as creating a new file. You can name the file whatever you wish, but standard practice is to name it Dockerfile. Add the instructions to that file with your preferred text editor.

Building from a Dockerfile produces an image. An image is made up of several layers, and each instruction in the Dockerfile adds a layer to the image.

  • The layers of an image consist of the application files and their dependencies (you can inspect them as shown below).
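Once an image is built, you can list its layers with:

sudo docker history <image_name>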

Our application needs two services to run, so we need two images: one for the login-app and one for MongoDB.

Dockerfile login-app image:

#Each instruction in this file creates a new layer
#Here we use the official node image as the base image
FROM node:latest
#Creating a new directory for app files and setting path in the container
RUN mkdir -p /usr/src/app
#setting working directory in the container
WORKDIR /usr/src/app
#copying the package.json file(contains dependencies) from project source dir to container dir
COPY package.json /usr/src/app
# installing the dependencies into the container
RUN npm install
#copying the source code of Application into the container dir
COPY . /usr/src/app
#container exposed network port number
EXPOSE 7500
#command to run within the container
CMD ["node", "app.js"]

Building and testing the above Dockerfile:

  • To build a Docker image from a Dockerfile, use the command:
docker build -t <name_for_image> .
  • -t specifies the tag (name) for the image
  • . refers to the current directory (the build context)

Run the above command in the project directory where the Dockerfile is stored.

For this demonstration, I named my image latest123/login-app. Name your image to suit your requirements; this name is used in all later operations.
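With that name, the build command run from the project directory would look like this:

sudo docker build -t latest123/login-app .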

  • To list images created, use the command:
sudo docker images

If you have noticed, two images were created: one is our login application image and the other is the official node image pulled from Docker Hub. Our login application image is built on top of the official node image.

Images are just like classes and containers are like objects. A Container is a running instance of an Image. Our services run inside these containers.

Run the image to get a container:

To run the image use:

sudo docker run -d -p <browser_expose_port>:<application_port> <image_id/name>
  • -d (detached) runs the container in the background
  • -p maps the host (browser) port to the container’s application port (see the example below)
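For this login app, using the same 7500:7500 mapping that the Compose file uses later, the command would look like:

sudo docker run -d -p 7500:7500 latest123/login-app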

List the running containers

Now, to monitor the container, use these commands:

#To list running containers
sudo docker ps
#To list all the available containers
sudo docker ps -a
#To start a stopped container
sudo docker start <container_name/ID>
#To stop a running container
sudo docker stop <container_name/ID>
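Two related commands are also handy here, for viewing a container’s output and cleaning up stopped containers:

#To view the logs of a container
sudo docker logs <container_name/ID>
#To remove a stopped container
sudo docker rm <container_name/ID>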

I didn’t create a mongo image because I will be using the official mongo image from Docker Hub in the docker-compose file.

Step 3: Define services using the Compose file

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services/containers from your configuration.

Creating a docker-compose.yml

Now let’s create a docker-compose.yml file in the same directory. We will define our services/containers inside this file. By default, Compose looks for a file named docker-compose.yml (a .yaml extension also works); a different file name can be passed with the -f flag.

docker-compose.yml

version: "3"
services:
login-app:
container_name: login-app
image: latest123/login-app
restart: always
build: .
ports:
- "7500:7500"
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- '27018:27017'

Breaking down the above code:

  • This compose file defines two services: login-app and mongo
  • The container_name field value is used to name the container that gets created.
    For the login-app service I named it login-app. Naming a container explicitly makes it easier to work with and avoids randomly generated container names (this is merely a personal preference; the service and container names do not have to be the same).
  • The build field is where we specify the path to the Dockerfile used to create the image.
    I am building the login-app image from the Dockerfile in the project directory and mapping the host/browser port to the container/application port.
    You can either build the image yourself beforehand and reference that image name in the compose file, or provide the path to the Dockerfile via the build field and let Compose build it. When you specify both, Compose builds from the Dockerfile and tags the resulting image with the given name.
  • Our second service is MongoDB, but this time, instead of building our own mongo image, we simply pull the standard mongo image from the Docker Hub registry. As we learned earlier, if an image isn’t available locally, the Docker daemon will try to pull it from Docker Hub.
  • Since the information in a database must survive restarts, we need persistent storage. So we mount the host directory ./data (this is where I added some initial data to my database when I ran the application locally) to the container directory /data/db, where MongoDB stores its data.
  • Containers are stateless, which means that when a container is removed, all of the data inside it is gone. Mounting a volume gives us persistent storage: when the container is restarted or recreated, Docker Compose mounts the same host directory again, so none of the previous container’s data is lost.
  • Finally, we use the links option to link the two services.
    This way the MongoDB service is reachable from the login-app service; inside the login-app container, MongoDB can be addressed by the hostname mongo.

We run this docker-compose.yml file with the command docker-compose up, which spins up two containers with our services running inside them and exposes the services on the given port numbers.

Step 4: Run docker-compose to build the application

  • From the project directory, start your application by running
docker-compose up

You should then see output confirming that your services have been created.

Our application should be running at http://localhost:7500/
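A quick way to confirm this from a terminal is to check that the app answers on port 7500:

curl http://localhost:7500/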

  • At this stage, switch to another terminal window and use this command to list all local images:
sudo docker image ls
  • The running containers after docker-compose up will look something like this:
  • We can inspect images and containers using:
#inspect an image
docker inspect <tag or id>
#inspect a running container
docker inspect <container-id/name>

Stop the application containers:

Do this either by running docker-compose down from a second terminal in the project directory, or by hitting CTRL+C in the original terminal where you started the app.

It will look something like this when we use CTRL+C:

  • If you want to run the application again, run the command
docker-compose up

Thanks for reading. In my next article I will discuss how to synchronize code updates on a dockerized application running on multiple servers.

I hope this article was helpful.

This story is authored by Rammohan Guduru. Ram is a DevOps Engineer specializing in Docker-based solutions.
