Using Node.js in Production (Part II)
How to deploy Node.js applications into production environments and have a robust pipeline to go from development to deployment
This is Part 2 of the series on how to deploy Node.js applications into production environments, to have a robust pipeline from development to deployment. In this part, we’re going to set up NGINX as a reverse proxy and do some basic load balancing.
If you haven’t read Part 1, I highly encourage you to do so, to ensure that you have a relevant context for this article. You can read it here:
Using Node.js in Production — I
First, let’s clear up the jargon I threw into the description of this post. What is a reverse proxy? And before that, what is a proxy?
A proxy server is a go‑between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.¹
Okay, in simpler words, it’s something that sits between the client and your main server to forward requests from the client to the server and return responses back from server to client with some additional perks.
What is Load Balancing?
Load Balancing refers to the process of distributing a set of tasks over a set of resources, with the aim of making their overall processing more efficient.²
Okay, so why do we need these things?
Well, there are two main reasons:
- Whenever we run a server-side application, port 80 — the default HTTP port — requires root privileges to bind to, and when deploying applications you shouldn’t be running as root. Because of this, you have to run your application on another available port: 3000, 5000, 8080, etc. So whenever your application has to be accessed, users have to use http://yourwebsite.com:PORT, which isn’t particularly nice. To avoid this, we use a web server that runs on the default port 80 and stands in front of our Node.js application. The web server simply proxies the incoming requests on port 80, internally, to our Node.js application running on whatever PORT we defined, and returns the response back the same way. This allows us to send and receive data over the default port 80 just by accessing http://yourwebsite.com.
- Node.js applications are single-threaded, so if your server has more than one core (which in most cases it does), you are not utilizing the server’s resources efficiently. To overcome this, we run multiple instances of Node.js using a process manager like PM2 (read Part 1 for more information), one process per available thread/core. The web server then takes the incoming requests and distributes them across the Node.js processes that are currently free, helping serve more requests than a single instance could.
So to accomplish these two things, we’ll be using NGINX, a tried and tested web server that can function as a reverse proxy and load balancer, among many other things.
But first, we need to install NGINX on our server. I’m using CentOS 8 for this example, but the steps are similar for any Linux-based OS with the appropriate change in package manager.
On CentOS/RHEL:
sudo yum install -y epel-release
sudo yum install -y nginx
On Debian/Ubuntu:
sudo apt update
sudo apt install nginx
The next logical step is to start NGINX and register it to load on system startup. To do this, we’ll use:
sudo systemctl start nginx
sudo systemctl enable nginx
You can check the status of the running NGINX service using:
sudo systemctl status nginx
With this, NGINX is set up and running on your server using the default configuration it comes bundled with. You can verify this by hitting the IP address of your server in a browser — you’ll be greeted with the default NGINX landing page.
Now it’s time to configure NGINX according to our needs, but before that, we need to know where the configuration files are stored. The default NGINX configuration resides in
/etc/nginx/nginx.conf and has a wildcard include directive that pulls in any sub-configurations from /etc/nginx/conf.d/.
So we are going to create a new file in that directory —
touch /etc/nginx/conf.d/nodejs.conf and copy the following config into this newly created file.
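A minimal sketch of such a config, assuming two application instances on ports 5000 and 5001 — the upstream name nodejs_app, the domain, and the root path are placeholders, not values from the original:

```nginx
# Hypothetical example config; adjust names, ports, and paths to your setup.
upstream nodejs_app {
    # The two Node.js instances NGINX will balance between.
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
}

server {
    # Listen on the default HTTP port for both IPv4 and IPv6.
    listen 80;
    listen [::]:80;

    server_name yourwebsite.com;
    root /var/www/nodejs-app;

    location / {
        # Forward the real client address, since the app otherwise
        # sees every request as coming from localhost.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;

        # Proxy to the upstream group; NGINX load-balances automatically.
        proxy_pass http://nodejs_app;
    }
}
```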
Okay, again, a lot of configuration which I’ll go through and explain in detail.
upstream name — this block defines the different applications that we want to proxy to. In this case, we have two applications running, one on port 5000 and the other on port 5001.
server — this block defines the settings that are applied to each of the applications we’re proxying to.
- listen — this defines the port that NGINX listens on. Both IPv4 and IPv6 types are defined. Hence the two listen directives.
- server_name — this is the domain name/IP address of your server
- root — this is the root folder where your application is stored
The next three directives are optional but highly recommended. Because NGINX internally routes the requests to our Node.js application, our Node.js application sees all requests as coming from localhost. To pass along the actual IP address of the client requesting the resource, we add the proxy headers to the requests.
- location route — this defines all the routes that have to be proxied. / means that all requests will be proxied. The proxy_pass directive defines where the proxied requests should go, and notice that we use the upstream name here. By doing this, NGINX automatically does the load balancing for us, depending on which upstream server is available.
Now, we’ll first check our NGINX configuration for any errors, because in production deployments an error in this file directly translates to application downtime, which translates to losses. So we use:
sudo nginx -t
If all is good, then we restart/reload NGINX to apply these new changes using systemctl:
sudo systemctl reload nginx
That’s it, you’ve done it! Your Node.js application is now properly load-balanced and served over a reverse proxy.
Now all’s good, and everything will work just fine until you hit a use case where you have to make use of WebSockets — and suddenly, with this configuration, WebSockets don’t seem to work. Strange, isn’t it? Well, let’s see if we can figure out why:
WebSockets rely on hop-by-hop HTTP headers (notably Connection and Upgrade), and these headers can’t be forwarded through proxies.³ They also rely on a handshake mechanism that has to stay consistent with a single server instance.
Read this amazing article for a more in-depth explanation:
How to proxy web sockets with Nginx?
So to counter this, we have to modify our NGINX config slightly and introduce a few additional directives:
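A minimal sketch of the modified config, assuming a Socket.IO application and the same two instances on ports 5000 and 5001 — the upstream name and paths remain placeholders:

```nginx
# Hypothetical example config with WebSocket support added.
upstream nodejs_app {
    # Pin each client to one instance, keyed by client IP,
    # so the WebSocket handshake stays with the same server.
    ip_hash;
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
}

server {
    listen 80;
    listen [::]:80;

    server_name yourwebsite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://nodejs_app;
    }

    location /socket.io/ {
        # WebSockets need HTTP/1.1 plus the Upgrade/Connection
        # headers re-added, since hop-by-hop headers aren't forwarded.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_pass http://nodejs_app;
    }
}
```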
Okay, let’s look at what changed:
- ip_hash — this ensures that all requests coming from one client are sent to the same server instance, based on the IP address of the client
- location /socket.io/ — this block defines the extra headers, Connection and Upgrade, that are required for WebSockets to work correctly, and also sets the HTTP version to 1.1, which supports the WebSocket upgrade over HTTP.
Great! So now let’s test and reload NGINX and boom, everything works!
Congratulations! With this, you now have your Node.js application load-balanced and served through an NGINX reverse proxy with WebSockets handled correctly. In the next post, we’ll extend this NGINX configuration to include HTTPS and gzip compression.
To find the code exactly in this state, check out the nginx-1 branch of this GitHub repository:
Thank you so much if you made it this far and I hope this was useful. Please share this and any feedback is appreciated. See you in Part 3.
1. NGINX Glossary — https://www.nginx.com/resources/glossary/reverse-proxy-server/
2. Load Balancing — Wikipedia — https://en.wikipedia.org/wiki/Load_balancing_(computing)
3. Proxy Web Sockets with NGINX — https://blog.usejournal.com/how-to-proxy-websockets-with-nginx-e333a5f0c0bb