How I created a CI/CD pipeline to deploy my NestJS application to a DigitalOcean Droplet.

Samin Karki
6 min read · Apr 6, 2024


I needed to deploy my NestJS backend to staging and production environments (because I have been writing a lot of articles on NestJS, so why not deploy it) and decided to use a DigitalOcean Droplet this time. The process of creating a continuous deployment pipeline is quite straightforward.

Prerequisites: a DigitalOcean account with a container registry and a Droplet enabled, and a NestJS project (obviously).

Tools used

  1. GitHub Actions — for automating the deployment
  2. Docker — for containerizing the backend
  3. DigitalOcean Container Registry — private storage for images
  4. DigitalOcean Droplet — the server where the backend is deployed

Here are the steps involved:

Step 1:

Create a Dockerfile for your NestJS application. It contains the steps to create and run the production build of the application, and from it a Docker image is built.

Your Dockerfile should look something like this:

Filename: Dockerfile
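Below is a minimal sketch of that multi-stage Dockerfile, reconstructed from the description that follows; the Node base image, stage names, and entry point are my assumptions, so adjust them to your project:

FROM node:20-alpine AS builder
# Set the current working directory inside the image
WORKDIR /app
# Copy the source code and install all dependencies (dev ones included) for the build
COPY . .
RUN npm ci
# Create the production build in the dist folder
RUN npm run build

FROM node:20-alpine AS staging
WORKDIR /app
# Reuse the build output and installed modules from the builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json /app/package-lock.json ./
COPY --from=builder /app/node_modules ./node_modules
# Drop the dev dependencies to keep the final image small
RUN npm prune --production
# The port my backend listens on
EXPOSE 3000
# Run the production build (dist/main.js is NestJS's default build entry point)
CMD ["node", "dist/main.js"]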

I was trying out multi-stage builds for the first time, so the code might not be perfect here, and I know you’ll write it better when you implement it yourself.

I’ll tell you the important things happening in the Dockerfile.

In the first stage, i.e. builder, I create a work directory (WORKDIR) to specify the current working directory. Then I copy everything from my code into that workdir, install everything from package.json using npm ci, and run the build command, which creates a production build of our code in the dist folder.

In the next stage, i.e. staging, I copy that dist folder from the previous stage, along with package.json, package-lock.json, and the entire node_modules folder, instead of re-running npm ci here. Then I run the prune command to remove the dev dependencies and expose port 3000 (the port my backend listens on). The final line runs the script that starts the code.

Step 2:

Using the Dockerfile we just created, build a Docker image and push it to the DigitalOcean Container Registry. Your GitHub Actions file should look something like this.

build-and-deploy.yml
run: |
  if [ -n "$(doctl registry repository list | grep "$IMAGE_NAME")" ]; then
    # Delete every manifest for this image so old builds don't pile up in the registry
    doctl registry repository delete-manifest "$IMAGE_NAME" \
      $(doctl registry repository list-tags "$IMAGE_NAME" | grep -o "sha.*") --force
  else
    echo "No repository"
  fi

Above is the registry-cleanup command from the workflow. It was too long to take a screenshot of, so here it is in full, reflowed into multiple lines for readability.
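For reference, here is a minimal sketch of the build-and-push part of that workflow; the action versions, the latest tag, and the step names are my assumptions, so adjust them to taste:

name: build-and-deploy

on:
  push:
    branches:
      - staging
      - deployment-setup

env:
  REGISTRY: ${{ secrets.REGISTRY }}
  IMAGE_NAME: ${{ secrets.IMAGE_NAME }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Build the image
        run: docker build -t $REGISTRY/$IMAGE_NAME:latest .

      - name: Log in to the DigitalOcean Container Registry
        run: doctl registry login --expiry-seconds 600

      # The registry-cleanup step goes here; it is the long command quoted above.

      - name: Push the image
        run: docker push $REGISTRY/$IMAGE_NAME:latest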

Want to know what is happening here? Let me summarize.

On a push to a branch named ‘staging’ or ‘deployment-setup’ (the branches list under on: push), the rest of the jobs are triggered.

You might be wondering about those variables we set up in the file above (the ones that look like ${{ secrets.* }}). We store these values in the GitHub repo for the NestJS project and read them from there.

Note that I was young and naive when I wrote this code and used two different styles for the variables, i.e. some read straight from secrets and some from secrets through env (for example, ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} in the doctl step versus $REGISTRY in the docker commands). Just because I was a noob, you don’t have to be.

Also, I may or may not have copied these lines and modified them, so I would like to credit whoever I copied the file from. I am so sorry, I don’t remember you, but what I do remember is your work. Also, I may have commented out the code for garbage collection; you may or may not need that, as the registry can get cluttered with unused images.

Okay, I feel like we’re going on a tangent. Let’s get back on track.

Where even is this place to keep your secrets, you might ask? It’s here.

GitHub settings page

Inside your repository, you’ll find this. Click on manage environment settings and create your variables. The variables can be accessed through the secrets object in GitHub Actions. We will create REGISTRY, DIGITALOCEAN_ACCESS_TOKEN, IMAGE_NAME, SSH_HOST, SSH_USERNAME, and SSH_KEY.
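If you prefer the terminal, the GitHub CLI can set the same secrets; all the values below are placeholders:

# Run these from inside the repo; gh secret set stores a repository secret
gh secret set REGISTRY --body "registry.digitalocean.com/your-registry"
gh secret set DIGITALOCEAN_ACCESS_TOKEN --body "dop_v1_your_token"
gh secret set IMAGE_NAME --body "nestjs-backend"
gh secret set SSH_HOST --body "203.0.113.10"
gh secret set SSH_USERNAME --body "root"
# Read the private key from a file instead of pasting it
gh secret set SSH_KEY < ~/.ssh/id_ed25519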

You can get most of these values from DigitalOcean here. (Look into how to add an SSH key to your GitHub account to add SSH_KEY.)

DIGITALOCEAN_ACCESS_TOKEN is generated from here (the API section of the DigitalOcean control panel).

Generate and use the token from here: DigitalOcean page

REGISTRY starts with registry.digitalocean.com/**** and you can find it in the container registry you created in DigitalOcean.

IMAGE_NAME is something you can set yourself. Try to make it meaningful so it’s easy to reference in the other parts.

SSH_HOST is the IP address of the Droplet that you’re going to use.

SSH_USERNAME is the user on the DigitalOcean Droplet; it is root by default.

Note: To communicate between the DigitalOcean Droplet and GitHub Actions, I am using SSH-based authentication. (How I added SSH is beyond the scope of this article. Maybe I’ll write an article on that later.)

Still having problems with SSH keys on DigitalOcean? Here is a link to help you get started.

https://docs.digitalocean.com/products/droplets/how-to/create/
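If you just need the gist, a typical key setup looks like this; the key path, comment, and Droplet IP are placeholders:

# Generate a key pair on your local machine
ssh-keygen -t ed25519 -f ~/.ssh/droplet_deploy -C "github-actions"
# Copy the public key into the Droplet's authorized_keys
ssh-copy-id -i ~/.ssh/droplet_deploy.pub root@203.0.113.10
# Store the PRIVATE key as the SSH_KEY secret in GitHub
gh secret set SSH_KEY < ~/.ssh/droplet_deploy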

Step 3:

Pull the image from the container registry and run the container on the DigitalOcean Droplet (your Droplet needs Docker installed beforehand).

build-and-deploy.yml
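Here is a minimal sketch of the deploy job; I’m assuming the popular appleboy/ssh-action here, and the container name, volume paths, and latest tag are placeholders:

  deploy:
    runs-on: ubuntu-latest
    needs: build-and-push
    steps:
      - name: Deploy to the Droplet over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            # The DO registry accepts the API token as both username and password
            docker login registry.digitalocean.com -u ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }} -p ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
            docker pull ${{ secrets.REGISTRY }}/${{ secrets.IMAGE_NAME }}:latest
            # Stop and remove the previous container if it exists
            docker stop nestjs-backend || true
            docker rm nestjs-backend || true
            docker run -d --name nestjs-backend \
              --env-file /root/.env \
              -v /root/keys:/app/keys \
              -p 3000:3000 \
              ${{ secrets.REGISTRY }}/${{ secrets.IMAGE_NAME }}:latest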

Another note: add your .env file to the Droplet in a secure place (I’ve added it at /root/.env, as seen in the --env-file flag above, but that’s for demonstration purposes) so it does not have to be added every time by the user. I am not storing this in GitHub secrets or anywhere else, as I want GitHub Actions to create one build that can be deployed to the staging and prod environments by just substituting the env variables.

The steps of the deploy job are written above, so I won’t repeat them here. The -v (volume) flag in the docker run command might not be needed for you. I created keys necessary for my project and, instead of pushing them to GitHub, stored them on the Droplet; the volume binds the keys folder on the Droplet to a folder in the container.

Step 4:

Access the API at the public IP address you’ll see on the DigitalOcean Droplet, followed by the port (like this: http://xxx.xxx.xxx.xxx:3000/), and you’re done.
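A quick smoke test from your machine (the root route is whatever your app serves; the NestJS starter answers with Hello World!):

curl http://xxx.xxx.xxx.xxx:3000/
# Hello World!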

Pretty easy, right? Just 4 steps. My personal favorite is the fourth step. :D

An additional thing you might need is a script to clear out old images on the DigitalOcean Droplet. The command is pretty simple. You can either run it manually on the Droplet or set up an automatic script somewhere. It removes all images without at least one container attached to them, which is pretty handy if you run the pipeline multiple times.

docker image prune --all
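To run it automatically, a simple cron entry on the Droplet is enough; the schedule is my choice, and --force skips the confirmation prompt:

# crontab -e on the Droplet: every Sunday at 03:00, prune unused images
0 3 * * 0 docker image prune --all --force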

In conclusion, we created a CI/CD pipeline that deploys anything pushed to the ‘staging’ branch. First, we use a Dockerfile to write the steps that create and run a production build. The second part is a pipeline that builds an image and pushes it to the container registry for storage. The last step is to log in to the server, then to the registry, and run the new container after stopping the previously running one if it exists.

Let me know if you think there are better ways to do this. More efficient commands or any missed steps, perhaps.

Until next story. Ba-Bye.
