How to build a scalable, maintainable application with NestJS + MongoDB, apply design patterns, and run it in Docker (Final part)
The previous sections:
We have built a typical monolithic application following a 3-tier architecture. The application clearly separates its concerns, with each layer taking on a single responsibility.
The layers are not only loosely coupled but also clearly delineated in the code, which makes the codebase cleaner and more readable.
As a result, we expose an API that lets us create a user and store it in the database.
So far we have run our application in the development environment. Now we are heading towards running it in production. Nest provides a couple of commands that can be combined to start the application in production mode, as shown below:
npm run build && npm run start:prod
With these commands, you can run your application on a production server.
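For reference, in a freshly generated Nest project these two scripts typically map to the following entries in package.json (a sketch of the Nest CLI defaults; check your own scripts section, as they may differ):

```json
{
  "scripts": {
    "build": "nest build",
    "start:prod": "node dist/main"
  }
}
```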
In theory, the application runs well until an unnoticed failure occurs, and the process will not restart automatically.
This makes the application unreliable: it will stay down until someone restarts it manually.
Whether we want the application to keep running sustainably over the long term or to recover when an issue suddenly occurs, we need a fault-tolerance mechanism.
For our application, that means a mechanism to manage the process state.
Fortunately, there is a tool called PM2 that provides exactly this.
“PM2 or Process Manager 2, is an Open Source, production Node.js process manager helping Developers and DevOps manage Node.js applications in the production environment. In comparison with other process managers like Supervisord, Forever, Systemd, some key features of PM2 are automatic application load balancing, declarative application configuration, deployment system, and monitoring.”
Let’s install PM2 and configure our application to run with it, using the following commands and configuration:
- Install PM2 globally:
npm i pm2 -g
- Install PM2 as a dependency in the application:
npm i pm2 --save
Note: If you are not familiar with PM2 or YAML, you can refer to the documentation below:
- PM2: https://pm2.keymetrics.io/docs/usage/pm2-doc-single-page/
- YAML: https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
This tutorial won’t dive deep into PM2 or YAML, as that is out of scope. We will concentrate on configuring them in our application.
Next, create a pm2.yaml file in the root of the application and define the configuration as in the snippet below.
apps:
  - script: ./dist/main.js
    name: nest-demo-app
    watch: true
    instances: max
    exec_mode: cluster
    env:
      PORT: ${SERVER_PORT}
      NODE_ENV: development
    env_production:
      NODE_PORT: ${SERVER_PORT}
      NODE_ENV: production
- script: points to the bootstrap file that starts our application.
- name: the name of the process.
- watch: enables watching the application’s files and state.
- exec_mode: cluster mode allows networked Node.js applications (HTTP(S)/TCP/UDP servers) to be scaled across all available CPUs, without any code modifications.
- env and env_production: define the environment variables for the development and production environments, respectively.
- ${SERVER_PORT}: this expression loads the value from our .env file.
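Since the configuration reads ${SERVER_PORT} from the environment, our .env file needs to define it. Below is a minimal sketch of the assumed .env; the variable names match those used by the docker-compose file later in this tutorial, while the values are placeholders you should replace with your own:

```shell
SERVER_PORT=3003
DATABASE_HOST=db
DATABASE_PORT=27017
DATABASE_NAME=nestdemo
DATABASE_USERNAME=demo
DATABASE_PASSWORD=secret
VIRTUAL_HOST=nestdemolocal.local
VIRTUAL_PORT=3003
```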
Next, modify the package.json file to add commands that let the application run under PM2:
"build:dev": "nest build && pm2 start pm2.yaml",
"build:prod": "nest build && NODE_PORT=3003 pm2 start pm2.yaml -n nest-demo --env production",
Now we can run npm run build:dev to start the application in the development environment, or npm run build:prod to start it in production. PM2 runs the application in the background and manages its state; if an error occurs, PM2 restarts the application automatically.
Run the command npm run build:dev
Run the command npm run build:prod
PM2 has started 4 instances, making maximum use of our CPU cores. The number of instances depends on the cluster-mode configuration in the PM2 file.
That’s it: we now have an application that runs well and tolerates failures, with PM2 managing the process.
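Once the processes are up, a few standard PM2 CLI commands help inspect and manage them (the process name below matches the one declared in our pm2.yaml):

```shell
pm2 list                  # show all managed processes and their status
pm2 logs nest-demo-app    # tail the logs of our application
pm2 monit                 # live CPU/memory dashboard
pm2 restart nest-demo-app # restart the process manually
pm2 delete nest-demo-app  # stop and remove the process
```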
But what happens when a team of ten people works on this application, each member on a different operating system? From time to time they hit unexpected errors while developing, and this happens often enough to affect deadlines.
All team members need to work in a stable, consistent environment. In other words, we need an isolated environment that is configured within the application and can be hosted on any operating system. Docker has become the most popular tool for this, and it lets us run our application in such an isolated environment.
Let’s start with Docker. I assume you have Docker and docker-compose installed on your local machine.
If you are not familiar with Docker and Docker Compose, you can refer to the docs below:
- Docker: https://www.docker.com/
- Docker Compose: https://docs.docker.com/compose/reference/overview/
As a bonus, we can combine Docker, PM2, and Nginx (as a reverse proxy) to run our application. So, let’s configure these pieces in our application.
Create the following files in the root directory of the application and define their contents as shown below.
Dockerfile
Note: the Dockerfile should be added to the git ignore file.
The Dockerfile contains the build instructions for the image; Docker uses this file to build the image itself, similar to a bash script. The comments on each line explain what the Dockerfile does.
FROM node:10-alpine
# Install PM2
RUN npm install -g pm2
# Set working directory
RUN mkdir -p /var/www/nest-demo
WORKDIR /var/www/nest-demo
# add `/var/www/nest-demo/node_modules/.bin` to $PATH
ENV PATH /var/www/nest-demo/node_modules/.bin:$PATH
# create user with no password
RUN adduser --disabled-password demo
# Copy existing application directory contents
COPY . /var/www/nest-demo
# install and cache app dependencies
COPY package.json /var/www/nest-demo/package.json
COPY package-lock.json /var/www/nest-demo/package-lock.json
# grant a permission to the application
RUN chown -R demo:demo /var/www/nest-demo
USER demo
# clear application caching
RUN npm cache clean --force
# install all dependencies
RUN npm install
EXPOSE 3003
# start run in production environment
#CMD [ "npm", "run", "pm2:delete" ]
#CMD [ "npm", "run", "build-docker:dev" ]
# start run in development environment
CMD [ "npm", "run", "start:dev" ]
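Before wiring everything together with docker-compose, you can sanity-check the Dockerfile on its own by building and running the image manually (the image name and port follow the ones used in this tutorial):

```shell
# build the image from the Dockerfile in the current directory
docker build -t nest-demo-docker .
# run it, mapping the exposed port to the host
docker run --rm -p 3003:3003 nest-demo-docker
```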
.dockerignore
.git
.gitignore
node_modules/
docker-compose.yml
Docker Compose is a tool for defining and running multi-container Docker applications. The YAML file configures our application’s services; then, with a single command, you create and start all the services from your configuration. The comments on each line explain what the docker-compose file does.
# docker compose version
version: '3.7'
# all the containers have to be declared inside services
services:
  # App service
  demoapp:
    # the application relies on the database running
    depends_on:
      - db
    # this build context takes the instructions from the Dockerfile
    build:
      context: .
      dockerfile: Dockerfile
    # image name
    image: nest-demo-docker
    # container name
    container_name: demoapp
    # always restart the container if it stops
    restart: always
    # equivalent to docker run -t
    tty: true
    # application port, the value is taken from the .env file
    ports:
      - "${SERVER_PORT}:${SERVER_PORT}"
    # working directory
    working_dir: /var/www/nest-demo
    # application environment
    environment:
      SERVICE_NAME: demoapp
      SERVICE_TAGS: dev
      SERVICE_DB_HOST: ${DATABASE_HOST}:27017
      SERVICE_DB_USER: ${DATABASE_USERNAME}
      SERVICE_DB_PASSWORD: ${DATABASE_PASSWORD}
      VIRTUAL_HOST: ${VIRTUAL_HOST}
      VIRTUAL_PORT: ${VIRTUAL_PORT}
    # persist data and share it between containers
    volumes:
      - ./:/var/www/nest-demo
      - /var/www/nest-demo/node_modules
    # application network; each service's container joins this network
    networks:
      - nest-demo-network
  # Web server service
  nginx-proxy:
    # pull the image from Docker Hub
    image: jwilder/nginx-proxy:alpine
    # container name
    container_name: nginx-proxy
    # always restart, except when the container is explicitly stopped
    restart: unless-stopped
    # equivalent to docker run -t
    tty: true
    # web server on port 81, SSL on port 443
    ports:
      - "81:81"
      - "443:443"
    # persist data and share it between containers
    volumes:
      - ./:/var/www/nest-demo
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    # the web server relies on the application running
    depends_on:
      - demoapp
    # application network; each service's container joins this network
    networks:
      - nest-demo-network
  # Database service
  db:
    # pull the image from Docker Hub
    image: mongo
    # container name
    container_name: nestmongo
    # always restart the container if it stops
    restart: always
    # database credentials, values are taken from the .env file
    environment:
      MONGO_INITDB_ROOT_DATABASE: ${DATABASE_NAME}
      MONGO_INITDB_ROOT_USERNAME: ${DATABASE_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${DATABASE_PASSWORD}
    # persist data and share it between containers
    volumes:
      - db_data:/data/db
    # database port
    ports:
      - "${DATABASE_PORT}:${DATABASE_PORT}"
    # application network; each service's container joins this network
    networks:
      - nest-demo-network
# Docker networks
networks:
  nest-demo-network:
    # nginx external network, created separately (see below)
    external:
      name: nginx-proxy
# persist data
volumes:
  db_data: {}
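Before starting anything, you can check the compose file for syntax errors with the standard validation command, which prints the resolved configuration (with .env values substituted) or reports what is wrong:

```shell
docker-compose config
```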
nginx/conf.d/app-demo.conf
Nginx is open-source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability.
server {
    expires off;
    listen 81;
    listen [::]:81;
    #listen 443 ssl;

    # application directory
    root /var/www/nest-demo;

    # TODO: change the domain to match the hosted server.
    #server_name example.com www.example.com;
    server_name nestdemolocal.local;

    # nginx reverse proxy
    location / {
        proxy_pass http://localhost:3003;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/nest-demo;
    }
}
package.json
Add the following commands to the package.json file:
"pm2:delete": "pm2 delete nest-demo-app",
"build-docker:prod": "nest build && NODE_PORT=3009 pm2 start pm2.yaml -n nest-demo --no-daemon --env production",
"build-docker:dev": "nest build && NODE_PORT=3009 pm2 start pm2.yaml -n nest-demo --no-daemon --env development"
Because we are using an Nginx reverse proxy, we need to create an Nginx network that shares the Docker socket and runs on port 80. Let’s take a look at the following commands.
Create a shared network:
docker network create nginx-proxy
Run an Nginx proxy on that network:
docker run -d -p 80:80 --net nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
Now we can create a virtual host that points to our application; this is configured in the app-demo.conf file. We assume our virtual host will be nestdemolocal.local, so we need to point this domain to localhost. Let’s edit the /etc/hosts file on our local machine (I’m using macOS):
127.0.0.1 nestdemolocal.local
With all the steps complete, we can start our application in Docker via the following commands:
- Start application in docker-compose:
docker-compose up
- Rebuild application:
docker-compose up --build
- Start the application in detached mode:
docker-compose up -d
Let’s start our application via docker-compose up and test the API that creates a user.
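To confirm the whole chain works (hosts entry → nginx-proxy on port 80 → the Nest app), you can send a quick request through the virtual host once the containers are up; the root path here is just an example, so adjust it to your actual API route:

```shell
curl -i http://nestdemolocal.local/
```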
Conclusion:
That’s it: we have built an application that runs efficiently and in an isolated environment.
Thanks for reading!
Github source: https://github.com/phatvo21/nest-demo