Dockerize a React Node.js App and Deploy in AWS EC2 — The Startup Way Part-1

Vikranth Kanumuru (Kanlanc) · Jun 5, 2021 · 7 min read

Introduction

The "startup way" part of the heading might be misleading, but when I was looking for tutorials to help me deploy a React + Node + Postgres application for a startup I was freelancing for, all I found were either ECS tutorials or tutorials that dockerize only for development, which did absolutely nothing to harness the power of React's production build or of Docker, for that matter.

The startup only wanted to deploy the app for beta testers for a few days, so the hassle and cost of going through ECS were just too much.

Hence, I wanted to make a tutorial that dockerises three services (following a microservices architecture):

  1. UI service: Frontend that uses Nginx to serve the static assets created by React's "npm run build" command
  2. API service: Backend built with Express.js (a Node framework)
  3. Postgres service: A service that runs a PostgreSQL database

and makes these services cooperate through docker-compose.

If you are unfamiliar with Docker or docker-compose, I would recommend going through an introductory YouTube video on them first.

To avoid overloading you with info, I divided the process into two parts:

Part-1: Dockerising the application, and

Part-2: Deploying the dockerised application to EC2

Now, since that’s explained, let’s start with Part-1 of the article.

Project File Structure

The file structure for the entire project was as follows:
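Roughly, it looks like this (a sketch reconstructed from the folders and files referenced in this article):

.
├── .github/                  # GitHub Actions workflows for CI/CD
├── scripts/                  # deployment scripts
├── API/                      # Express backend (with a config/ folder for the database)
├── UI/                       # React frontend
├── appspec.yml               # AWS CodeDeploy configuration
├── docker-compose.yml
└── docker-compose.prod.yml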

You can ignore the .github folder (used for GitHub Actions CI/CD), the appspec.yml file (for Continuous Deployment with AWS), and the scripts folder. (If you would like a tutorial for those, let me know in the comments.)

1. UI service (Frontend)

For the frontend, which uses React, I made a Dockerfile that uses a multi-stage build to reduce the image size.

Let’s first start with the file structure of the UI folder
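Roughly, the UI folder looks like this (a sketch; public/ and src/ are assumed to be the standard Create React App folders):

UI/
├── Dockerfile
├── .dockerignore
├── package.json
├── nginx/
│   └── nginx.conf
├── public/
└── src/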

In this file structure, the four things to focus on are:

  • Dockerfile
  • .dockerignore
  • package.json and
  • the nginx.conf file in the nginx folder

Dockerfile

FROM node:15-alpine3.10 as build
ENV NODE_ENV production
LABEL version="1.0"
LABEL description="This is the base docker image for prod frontend react app."
LABEL maintainer="abc@gmail.com, anc@gmail.com"
WORKDIR /app
COPY ["package.json", "./"]
RUN npm install --production
COPY . ./
RUN npm run build

# production environment
FROM nginx:1.19.10-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
EXPOSE 443
CMD ["nginx", "-g", "daemon off;"]

Here, we first build the React app in the build stage; then, in the production stage, we discard everything except the build output. This keeps only the essentials (the minified CSS and JS bundles), which greatly reduces the Docker image size.

In the next stage, these static assets are served with Nginx. You could use other tools like Apache, but Nginx is widely considered the best choice for static assets.
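To see the payoff of the multi-stage build, you can compare the final image against a single-stage build (the image name here is just an example):

# build the UI image and check its size
docker build -t my-ui ./UI
docker images my-ui

# the nginx:alpine-based result is typically a small fraction of the size
# of a single-stage image built on node:15-alpine3.10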

.dockerignore

node_modules
npm-debug.log
build

This tells Docker to exclude these files and folders from the build context.

package.json

{
"name": "client",
"version": "0.1.0",
"private": true,
"dependencies": {
"@cleandersonlobo/react-mic": "^1.2.0",
"@react-rxjs/core": "^0.7.1",
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.3.2",
"@testing-library/user-event": "^7.1.2",
"@use-it/event-listener": "^0.1.6",
"axios": "^0.19.2",
"bootstrap": "^4.5.3",
"font-awesome": "^4.7.0",
"moment": "^2.29.1",
"query-string": "^6.13.7",
.......
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"proxy": "http://api:4000", <--------- ADD THIS LINE

}

Make sure to add the proxy line at the end. Note that CRA's proxy setting only applies to the development server ("npm start"); in the production image, Nginx handles this proxying through the nginx.conf below.

nginx.conf

server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        resolver 127.0.0.11;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://api:4000;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Add this file in an nginx folder inside the UI folder. This file is essential because it tells Nginx where to find the static files and where to forward the HTTP requests (GET, POST, and so on) that your React app makes.

It creates a server that listens on port 80, and the /api location is there so that Nginx can tell which requests are meant for the backend service and which are not. Please note that you are required to prefix your backend requests with "/api", so a request like:

axios.post("/users/login")

is changed to

axios.post("/api/users/login")

and you are done with the frontend part of your project.
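If you would rather not edit every call by hand, one alternative (a hypothetical helper, not part of the original project) is a shared axios instance with "/api" as the base URL:

// api.js: hypothetical helper, not in the original project
import axios from "axios";

// every request through this instance is automatically prefixed
// with /api, so Nginx routes it to the backend service
const api = axios.create({ baseURL: "/api" });

export default api;

// usage: api.post("/users/login", credentials)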

2. API service

File structure of the API service

**NOTE: You need to set up your backend routes to match the changed frontend routes:**

app.use("/api", indexRouter);
app.use("/api/users", usersRouter);
app.use("/api/auth", authRoutes);
app.use("/api/cases", caseRoutes);
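Equivalently, you could mount a single parent router so the "/api" prefix lives in one place (a sketch, assuming the same routers as above):

// a sketch, assuming the same routers as above
const express = require("express");

const apiRouter = express.Router();
apiRouter.use("/", indexRouter);
apiRouter.use("/users", usersRouter);
apiRouter.use("/auth", authRoutes);
apiRouter.use("/cases", caseRoutes);

// one mount point applies the /api prefix to everything
app.use("/api", apiRouter);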

In this file structure, there are three things to focus on. They are:

  • Dockerfile
  • .dockerignore
  • .env

Dockerfile

FROM node:15-alpine3.10
ENV NODE_ENV production
LABEL version="1.0"
LABEL description="This is the base docker image for the Humaine backend API."
LABEL maintainer="saivicky2015@gmail.com, akashsmaran@gmail.com"
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY --chown=node:node . .
USER node
EXPOSE 4000
CMD ["npm", "start"]

This is a standard Node.js Dockerfile.

.dockerignore

node_modules
npm-debug.log

.env file

DB_USER=postgres
DB_PASSWORD=abc12345
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE=vikranth
DB_HOST_DOCKER=postgres

This is an env file that I customised for my use; feel free to change the DB_USER, DB_PASSWORD, and DB_DATABASE fields.

That’s it!! You are done with the backend service too.

3. Postgres service

This is the database part of the project, which is relatively simple.

Make a config folder inside the API folder.

If you would like to prepopulate your database with schema or data, make a dump file using pg_dump, PostgreSQL's backup utility.

The command is:

pg_dump -U postgres vikranth > C:\Users\saivi\OneDrive\Desktop\vikranth_backup_latest.sql

Change "vikranth" to your database name, and store the resulting file in the API/config folder.
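If you want to sanity-check the dump before baking it into an image, you can restore it into a scratch database with the standard PostgreSQL tools (the database name here is just an example):

# create a throwaway database and load the dump into it
createdb -U postgres scratch_db
psql -U postgres -d scratch_db -f vikranth_backup_latest.sql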

Also, add the following files to the folder,

  • database.js
  • Dockerfile.db

database.js

const { Client } = require("pg");
require("dotenv").config();

const client = new Client({
  user: process.env.DB_USER,
  // For dev, use below
  // host: process.env.DB_HOST,
  host: process.env.DB_HOST_DOCKER,
  database: process.env.DB_DATABASE,
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,
});

client
  .connect()
  .then(() => {
    console.log("Database connection successful");
  })
  .catch((err) => console.log(err));

module.exports = {
  database: client,
};
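For illustration, here is how a route file might consume the exported client (a hypothetical route, not from the original project):

// routes/health.js: hypothetical example
const express = require("express");
const { database } = require("../config/database");

const router = express.Router();

// run a trivial query through the shared client
router.get("/health", async (req, res) => {
  const { rows } = await database.query("SELECT NOW() AS now");
  res.json(rows[0]);
});

module.exports = router;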

Dockerfile.db

# postgres:13.3-alpine was not working with initialization
FROM postgres:11.2-alpine
COPY ./vikranth_backup_latest.sql /docker-entrypoint-initdb.d/

The COPY line in this file handles prepopulating your container's database with the data you exported into the SQL file earlier: the postgres image automatically runs any .sql files placed in /docker-entrypoint-initdb.d/, but only on the first start with an empty data volume.
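Once the container is up, you can confirm the data was loaded (container and database names as used in this tutorial):

# list the tables inside the running postgres container
docker exec -it postgres psql -U postgres -d vikranth -c '\dt'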

That's it!! You have reached the final part of the entire thing.

Finally, getting to

4. docker-compose file

version: "3.7"services:
##############################
# Backend Container
##############################
postgres:
image: kanlanc/vikranth:production_03062021_postgres
hostname: postgres
container_name: postgres
restart: always
build:
context: ./API/config
dockerfile: Dockerfile.db
ports:
- "5432:5432"
environment:
POSTGRES_DB: vikranth
DB_USER: postgres
DB_PASSWORD: abc12345
volumes:
- vikranth:/var/lib/postgresql/data
api:
env_file: "./API/.env"
container_name: api
restart: always
build:
context: ./API
dockerfile: ./Dockerfile
image: "kanlanc/vikranth:production_03062021_api"
depends_on:
- postgres
ports:
- "4000:4000"
##############################
# UI Container
##############################
ui:
build:
context: ./UI
dockerfile: ./Dockerfile
image: "kanlanc/vikranth:production_03062021_ui"
restart: always
container_name: ui
ports:
- "80:80"
- "443:443"
depends_on:
- api
##############################
# Pgadmin Container
##############################
# pgadmin:
# container_name: pgadmin4_container
# image: dpage/pgadmin4
# restart: always
# environment:
# PGADMIN_DEFAULT_EMAIL: a@a.com
# PGADMIN_DEFAULT_PASSWORD: root
# ports:
# - "5050:80"
volumes:
vikranth:

In this file, I’m first building the postgres service, then the api service and finally the UI service.

Now, if you type the command

docker-compose up

You should see all three containers build and start, with logs from each service streaming in your terminal.
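If you want to run it in the background and verify things yourself, these standard docker-compose commands are handy:

# start detached, then check container status and tail the API logs
docker-compose up -d --build
docker-compose ps
docker-compose logs -f api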

Congratulations!! You are finally done with dockerising your application and have also succeeded in moving to a microservices architecture.

But to take this a step further, push all the images built from the previous docker-compose file to Docker Hub, using the command line or Docker Desktop. You need a Docker Hub account for this step. Create a repo with any name; since you will have your own username and repo, make sure to change "kanlanc" to your username and "vikranth" to your repo name in all the files.
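If you are not already authenticated, log in first:

docker login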

docker push <your username>/<reponame>:production_14062021_postgres
docker push <your username>/<reponame>:production_14062021_api
docker push <your username>/<reponame>:production_14062021_ui

example:

docker push kanlanc/vikranth:production_14062021_ui

or, to push all tags at once:

docker push <your username>/<reponame> --all-tags

Once you have finished pushing, all you need to deploy is the docker-compose file below, and that file alone (like, seriously, only one file).

docker-compose.prod.yml

version: "3.7"services:
##############################
# Backend Container
##############################
postgres:
image: kanlanc/vikranth:production_14062021_postgres
hostname: postgres
container_name: postgres
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_DB: vikranth
DB_USER: postgres
DB_PASSWORD: abc12345
POSTGRES_PASSWORD: abc12345
volumes:
- vikranth:/var/lib/postgresql/data
api:
container_name: api
restart: always
image: "kanlanc/vikranth:production_14062021_api"
depends_on:
- postgres
ports:
- "4000:4000"
##############################
# UI Container
##############################
ui:
image: "kanlanc/vikranth:production_14062021_ui"
restart: always
container_name: ui
ports:
- "80:80"
- "443:443"
depends_on:
- api
# volumes:
# - ./UI/nginx/certbot/conf:/etc/letsencrypt
# - ./UI/nginx/certbot/www:/var/www/certbot
# ##############################
# # Certbot Container
# ##############################
# certbot:
# image: certbot/certbot:latest
# volumes:
# - ./UI/nginx/certbot/conf:/etc/letsencrypt
# - ./UI/nginx/certbot/www:/var/www/certbot
##############################
# Pgadmin Container
##############################
# pgadmin:
# container_name: pgadmin4_container
# image: dpage/pgadmin4
# restart: always
# environment:
# PGADMIN_DEFAULT_EMAIL: a@a.com
# PGADMIN_DEFAULT_PASSWORD: root
# ports:
# - "5050:80"
volumes:
humaine:

Pat yourself on the back for completing your objective of dockerising, and if you want to deploy this to an AWS EC2 instance, read Part-2 of this article, where I show you the power of docker-compose.

If you really liked the article, consider dropping me a few claps, so I can be a bit more motivated to write more articles like these.
