Node.js, PM2, Docker & Docker-compose DevOps

Adrien Desbiaux
3 min read · Jan 3, 2016


Happy new year 2016 everyone!

To start this year under the best auspices as a DevOps, I will describe how to deploy a job-queue Node application with Docker, Docker-compose and PM2. I hope it will give you some leads for deploying your own app.

I will keep it as concise as possible to limit the length of this article. For the details, I'll let you read the documentation of each of the tools involved.

Let’s say your Node application runs behind an Nginx load balancer (reverse proxy). Your Node application is a job-queue application, so you might have roughly a server.js and a worker.js to run.

The overall application can then be deconstructed into the following parts:

  • Nginx -> Dockerfile
  • Node -> Dockerfile

Nginx -> Dockerfile

This Dockerfile declares an ENTRYPOINT in order to dynamically apply the right domain name in the nginx.conf file, for example if you wish to support different environments.

The Dockerfile also exposes port 443 for SSL and mounts the SSL folder/volume present on the host on which you will deploy your app.

Hence, in your project folder, under an nginx folder, you will have the Dockerfile, the entrypoint file and of course the nginx.conf file.

Nginx folder
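As a rough sketch of what that Nginx Dockerfile can look like (the original screenshots are not available, so file names, the base image tag and the `__DOMAIN__` placeholder are my assumptions):

```dockerfile
# Assumed layout: nginx/Dockerfile, nginx/entrypoint.sh, nginx/nginx.conf
FROM nginx:1.9

COPY nginx.conf /etc/nginx/nginx.conf
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# SSL: port 443, plus the SSL folder mounted from the host at run time
EXPOSE 443
VOLUME ["/etc/nginx/ssl"]

# The entrypoint substitutes the right domain name into nginx.conf,
# e.g. sed -i "s/__DOMAIN__/$DOMAIN/g" /etc/nginx/nginx.conf
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
```

With `DOMAIN` passed as an environment variable, the same image can serve staging and production.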

Node -> Dockerfile

To deploy your Node app efficiently, I highly advise you to use PM2.

Same here: we have an ENTRYPOINT doing a few things after the installation of Node and PM2. The Dockerfile then exposes the port used by our server.js.
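A minimal sketch of that Node Dockerfile (the base image, port and the use of a plain CMD instead of the article's ENTRYPOINT are my assumptions):

```dockerfile
# 2016-era Node base image (assumed)
FROM node:4

RUN npm install -g pm2

WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app

# Port used by server.js (assumed value)
EXPOSE 3000

# --no-daemon keeps PM2 in the foreground, so the container does not exit
CMD ["pm2", "start", "processes.json", "--no-daemon"]
```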

It runs the Node application through PM2 in no-daemon mode (otherwise your container would exit immediately). The processes.json file is a PM2 feature; please refer to the PM2 documentation for more info.

So basically, what we are doing here is deploying 1 server, 2 workers and a PM2 watcher in our container.
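A processes.json matching that description could look like this (the script names come from the article; the `watch` flag is my reading of the "PM2 watcher"):

```json
{
  "apps": [
    { "name": "server", "script": "./server.js", "instances": 1, "watch": true },
    { "name": "worker", "script": "./worker.js", "instances": 2 }
  ]
}
```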

Docker-compose

Thanks to this wonderful tool, we can now bring up the whole application by putting all the pieces together. To do so, it requires a YAML file.

In this file you will link all the containers together:

  • a container running a Redis DB used by your job-queue application
  • 2 containers running the Node application
  • 1 container running Nginx

All tied together via the different exposed ports. For example, docker-compose -f docker-compose-prod.yml up -d will build and run all the containers in detached (daemon) mode.
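Put together, a docker-compose-prod.yml in the (2016-era) v1 format could be sketched like this (app1/app2 appear later in the article; the build paths, volume path and port are my assumptions):

```yaml
redis:
  image: redis

app1:
  build: ./node
  links:
    - redis

app2:
  build: ./node
  links:
    - redis

nginx:
  build: ./nginx
  ports:
    - "443:443"
  volumes:
    - /etc/ssl:/etc/nginx/ssl
  links:
    - app1
    - app2
```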

You can also build each one of them independently. For example, docker-compose -f docker-compose-prod.yml build nginx && docker-compose -f docker-compose-prod.yml up -d nginx will rebuild and restart the nginx container, which is useful after an update of the nginx.conf file.

Zoom on the nginx.conf

Each container gets its own IP address. So, to apply those IPs to our nginx.conf, we trust docker-compose, i.e. the services declared in the docker-compose file (app1 / app2). The two IPs will be distinct since they belong to 2 distinct containers, which explains the choice of using the same port number in both. It also makes the deployment and configuration files easier to maintain.
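In nginx.conf this gives a symmetric upstream block, since docker-compose resolves app1 and app2 to their container IPs (the port and the `__DOMAIN__` placeholder are my assumptions):

```nginx
# app1 / app2 are resolved by docker-compose; both Node containers
# listen on the same port, keeping the config symmetric
upstream node_app {
    server app1:3000;
    server app2:3000;
}

server {
    listen 443 ssl;
    server_name __DOMAIN__;  # substituted by the ENTRYPOINT script

    location / {
        proxy_pass http://node_app;
    }
}
```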

Zoom on the Node app conf

For our Node application to connect to the Redis DB, the link to the Redis container makes the default connection settings available via two main environment variables:

  • process.env.REDIS_PORT_6379_TCP_PORT
  • process.env.REDIS_PORT_6379_TCP_ADDR
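In Node, a small helper can turn those variables into a connection config. This is only a sketch: the fallback values (for running outside Docker) and the kue usage in the comment are my assumptions.

```javascript
// Build the Redis connection settings from the environment variables
// that the docker-compose link to the "redis" container injects.
// The fallbacks are only useful when running outside Docker.
function redisConfig(env) {
  return {
    host: env.REDIS_PORT_6379_TCP_ADDR || '127.0.0.1',
    port: parseInt(env.REDIS_PORT_6379_TCP_PORT || '6379', 10),
  };
}

// With a job-queue library such as kue (assumed here), you could then do:
// const queue = require('kue').createQueue({ redis: redisConfig(process.env) });

module.exports = redisConfig;
```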

Conclusion

In this short overview I tried to bring you some key elements for deploying your own Node application involving an Nginx load balancer, a Redis DB and a job-queue architecture via PM2 and docker-compose.

Docker-compose makes the deployment of any complex application a real pleasure. I hope you will enjoy it!

Feel free to give your opinion and correct my typos :)
