Deploying a React app and a Node.js server on a single machine with PM2 and Nginx

Leonardo Cunha · Geek Culture · Jun 14, 2021

The objective of this article is to document my journey through deploying both the Frontend and the Backend on the same Linux machine, with a load balancer in front of the Backend.

Installing prerequisites

Note that these instructions are valid for Ubuntu 16.04 or later.

Node.js

  • Install command:
$ sudo apt install nodejs
  • Verify the installation:
$ node --version
v12.16.0

Nginx (Web serving, reverse proxying, and load balancing)

  • Update the Ubuntu repository information:
$ sudo apt-get update
  • Install the package:
$ sudo apt-get install nginx
  • Verify the installation:
$ sudo nginx -v
nginx version: nginx/1.14.0 (Ubuntu)
  • Start NGINX:
$ sudo nginx
  • Verify that NGINX is up and running:
$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)

NPM (Node.js package manager)

  • Install NPM:
$ sudo apt install npm
  • Verify the installation:
$ npm -v
6.14.13

PM2 (Production process manager for Node.js applications)

  • Install PM2:
$ npm install pm2@latest -g
  • Verify the installation:
$ pm2 -v
4.5.6

Certbot (A client that fetches a free certificate from Let’s Encrypt)

  • Ensure that your version of snapd is up to date:
$ sudo snap install core; sudo snap refresh core
  • Install certbot:
$ sudo snap install --classic certbot
  • Prepare the Certbot command:
$ sudo ln -s /snap/bin/certbot /usr/bin/certbot

How will this structure work?

[Image: flowchart of the Frontend and Backend deployment on a single machine]

Nginx will both serve our React build files and load balance our Backend across the PM2 Node.js instances. The number of Node.js instances should match the number of cores your CPU has. To check how many cores your Linux machine has, run the following command:

$ nproc
4

In my case I have four cores, which translates to four Node.js instances running under PM2. Nginx will distribute incoming HTTPS request traffic across those instances according to a specific load balancing method. That helps us use our machine’s full potential and also reduces downtime.

Among other things, PM2 provides automatic restarts when an app crashes, log management, monitoring, and a built-in cluster mode.

In this example we will not use the cluster mode that PM2 provides, since we will use Nginx to load balance our Backend. That decision is explained over the course of this article.

Deploying the Frontend

This will be your usual Frontend deployment: clone your project repository, install your dependencies, and run your build script.

Clone your repository:

$ mkdir project && cd project
$ git clone git@gitlab.com:your/project/frontend.git

Install node_modules dependencies:

$ cd frontend
$ npm install

Run your build script:

$ npm run build

Get your build path:

$ cd build && pwd
/root/frontend/build

Save this path for later; we will use it in our Nginx configuration.

Setting up the Backend with PM2

First you will need to clone your Backend project:

$ cd project
$ git clone git@gitlab.com:your/project/backend.git

Install node_modules dependencies:

$ cd backend
$ npm install

We can use PM2’s behavior configuration feature and create an ecosystem file in YAML format containing environment variables and PM2 parameters.

For a CPU with four cores, create a process.yml file with four workers, as sketched below.
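A minimal sketch (the script path ./server.js and the worker names are assumptions; adapt them to your project). Each worker gets its own port, matching the upstream ports we will configure in Nginx later:

# process.yml: one fork-mode worker per CPU core,
# each bound to its own port via the env block
apps:
  - script: ./server.js
    exec_mode: fork
    name: backend-0
    env:
      PORT: 3500
  - script: ./server.js
    exec_mode: fork
    name: backend-1
    env:
      PORT: 3501
  - script: ./server.js
    exec_mode: fork
    name: backend-2
    env:
      PORT: 3502
  - script: ./server.js
    exec_mode: fork
    name: backend-3
    env:
      PORT: 3503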

The parameters:

  • script

Script path relative to pm2 start

  • exec_mode

Mode to start your app, can be “cluster” or “fork”

  • name

The name of your PM2 node

  • env

Environment variables which will appear in your app

Notice that your Node.js application should be prepared to read its port from the dynamic PORT environment variable, as sketched below.
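A minimal sketch, assuming an Express app in server.js (the framework and the /status route are illustrative assumptions, not from the original setup):

// server.js: reads the port that PM2 injects via the env block in process.yml
const express = require('express');
const app = express();

// Each PM2 worker receives its own PORT (3500-3503),
// so all four can listen on the same machine without clashing.
const port = process.env.PORT || 3000;

app.get('/status', (req, res) => res.json({ ok: true, port }));

app.listen(port, () => console.log(`Backend listening on port ${port}`));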

With everything set up, you can start your workers with the start command:

$ pm2 start process.yml

After starting your workers, you will be able to check the list of the current PM2 nodes running:

$ pm2 list
[Image: pm2 list output showing the four running PM2 nodes]

Configuring DNS

You need to add a DNS record within your domain that points to your machine’s IP address.

To find your machine’s IP address, run the following command and look at the src value in the output:

$ ip r
default via 192.08.62.1 dev eth0 onlink
192.08.62.2/24 dev eth0 proto kernel scope link src 192.08.62.1

Now, in your domain’s DNS settings, add a DNS record that points to 192.08.62.1:

[Image: DNS record pointing to the Linux machine]
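For illustration, such an A record might look like this (the hostname and TTL shown are assumptions):

Type  Name         Value        TTL
A     example.com  192.08.62.1  3600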

DNS propagation can take up to 72 hours, but it usually finishes within one or two hours. You can check whether your domain has the updated DNS record with DNS Checker.

Configuring Nginx

You will need to edit the Nginx configuration file and add blocks of code so that Nginx understands your architecture and redirects requests accordingly.

Access and start editing the configuration file:

$ sudo nano /etc/nginx/sites-available/default

First we need to add the upstream block, which tells Nginx that requests reaching this upstream should be proxied to one of a group of servers according to a specific load balancing method.

The available load balancing methods are:

  • Round-robin (Default)

Nginx runs through the list of upstream servers in sequence, assigning the next connection request to each one in turn.

  • hash

Nginx calculates a hash based on a combination of text and Nginx variables and assigns it to one of the servers. Every incoming request matching that hash goes to the same server.

  • IP hash

The hash is calculated from the client’s IP address. This method ensures that multiple requests from the same client go to the same server.

  • Least connections

Nginx sends the incoming request to the server with the fewest active connections, thus spreading the load across servers.

  • Least time

Nginx scores each server using its current number of active connections and its weighted average response time for past requests, and sends the incoming request to the server with the lowest value. (This method is only available in NGINX Plus.)

For the purpose of this example, we will use the Least Connections method.

Add the upstream block to your default configuration file:

upstream loadbalancer {
    least_conn;
    server localhost:3500;
    server localhost:3501;
    server localhost:3502;
    server localhost:3503;
}

When a request is made to https://example.com, Nginx should respond with our frontend files; when a request is made to https://example.com/api, it should proxy_pass the request to our load balancer.

At the end of this configuration, your default config file should look like this:
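A sketch reconstructed from the description below; the server_name and root path are placeholders, so use your own domain and the build path saved earlier. Certbot will add the SSL directives to this file later:

upstream loadbalancer {
    least_conn;
    server localhost:3500;
    server localhost:3501;
    server localhost:3502;
    server localhost:3503;
}

server {
    listen 80;
    server_name example.com;

    # Serve the React build files
    location / {
        root /root/frontend/build;
        index index.html;
        try_files $uri /index.html;
    }

    # Proxy API requests to the PM2 workers
    location /api/ {
        proxy_pass http://loadbalancer/;
    }
}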

  • The location / block:

Has root set to your frontend build folder path, which works not only for React builds but for any JavaScript framework that generates static build files with an index.html entry point.

  • The location /api/ block:

Will proxy every request to our load balancer. Nginx recognizes that an upstream named loadbalancer has been declared, so we can proxy_pass directly to http://loadbalancer/.

After saving the configuration file, check if the saved syntax is valid:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Reload Nginx to reflect the recent changes:

$ sudo systemctl reload nginx

Setting up a free SSL certificate with Certbot

To set up SSL within your Nginx configuration:

$ sudo certbot --nginx

Follow the installation steps. Certbot will fetch a certificate from Let’s Encrypt and automatically update your Nginx configuration files.

You can also test Certbot’s automatic renewal:

$ sudo certbot renew --dry-run

Everything is done and you can visit your app at https://example.com! 😄

Conclusion

We can now see why load balancing with Nginx is preferable to relying only on PM2’s cluster mode. This initial architecture is suitable while your application has a low number of concurrent users. Once traffic grows and this configuration is no longer sufficient, you can keep this machine solely for load balancing and point your upstream at servers outside localhost, achieving smooth horizontal scaling.
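For instance, the upstream block could then point at external backend hosts instead of local ports (the IP addresses below are placeholders):

upstream loadbalancer {
    least_conn;
    server 10.0.0.11:3500;
    server 10.0.0.12:3500;
    server 10.0.0.13:3500;
}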

Here is a visual representation of horizontal scaling with our current setup:

[Image: horizontal scaling diagram]
