Node.js with React on a multi-docker container: in development and in production

Rúben Bernardes · Published in The Startup · Sep 23, 2019 · 8 min read

It’s been quite the journey to feel at ease with Docker and set up a pipeline that works both in development and in production.

As a heads up, I’m not claiming supreme knowledge of the subject; my goal is simply to share the recipe that worked for me for a seemingly common app structure: Node.js with React as the frontend, running on Docker.

Caution: this won’t be a step-by-step article, and it assumes some knowledge of Docker, besides having it (and docker-compose) installed.

Basic App structure

It starts off with two folders:

  • Server (Node.js)
  • Client (React app)

The communication between the two will be delegated to an Nginx server, which decides where to route each request: the root (‘/’) goes to React and ‘/api’ goes to the server.

But first, let’s start by creating Dockerfiles to work in development.

In development

Inside each folder (Client and Server) create a file named Dockerfile.dev with the following code:

FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]

This will be the same for both Client and Server, so you can duplicate the file.
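One optional addition that isn’t part of the original recipe: a .dockerignore file next to each Dockerfile.dev, so that COPY . . doesn’t drag your local node_modules (or old build output) into the image:

# .dockerignore (optional; a suggested addition, not required by this setup)
node_modules
build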

However, if you’re using nodemon for your Node.js script, it’s easier to start the Server with nodemon instead. Assuming that you start nodemon with ‘npm run dev’, the last line would be:

CMD ["npm", "run", "dev"]
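For reference, the scripts section of the server’s package.json might look something like this (the script names and the index.js entry point are just assumptions):

"scripts": {
  "start": "node index.js",
  "dev": "nodemon index.js"
}

Here index.js is a placeholder for whatever your actual server entry file is.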

At this point, we could have a docker-compose.yml file starting up both containers and exposing their ports, thus allowing communication between them, but we’ll use an Nginx server to route traffic and host the app.

In the root of your project, create an nginx folder with two files.

default.conf:

upstream client {
  server client:3000;
}

upstream api {
  server api:5000;
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
  }

  location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
  }
}

Dockerfile.dev:

FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf

On the Dockerfile.dev, you’re passing a custom configuration (default.conf) to the official Nginx image. This config file says:

  • Create an upstream server called ‘client’ (the name client derives from the same name given in the docker-compose.yml file ahead);
  • Create an upstream server called ‘api’ (we’re using ‘api’ instead of ‘server’, because otherwise we’d be telling nginx to “server server” and that could create problems);
  • Listen to port 80;
  • Serve ‘client’ on the root route and set up the host name (this fixes the error ‘Invalid Host Header’ on your React Dev server);
  • Serve ‘api’ on the /api route and pass along only what comes after /api (so that you don’t need to specify ‘/api/’ in your Node.js routes). All calls from the frontend to the backend still need to be prefixed with /api/ (e.g. /api/do_something, as in the sketch below).
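As a minimal sketch of that last point (assuming an Express server and a made-up /values route), the backend route and the matching frontend call would look like this:

// server/index.js (sketch): nginx strips the /api prefix before proxying,
// so the Express route is defined without it
const express = require('express');
const app = express();

app.get('/values', (req, res) => {
  res.send(['some', 'values']);
});

// port 5000 matches the 'api' upstream in default.conf
app.listen(5000);

// somewhere in the React client (sketch): the request goes through nginx,
// so it keeps the /api prefix
fetch('/api/values')
  .then((res) => res.json())
  .then((values) => console.log(values));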

Once all this setup is done, we can create the main file that will start up our project in development. In the root folder, create a file named docker-compose.yml and enter the following code:

version: '3'
services:
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    env_file:
      - .env
    volumes:
      - /app/node_modules
      - ./server:/app
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '8080:80'

This is a sort of blueprint of our app that docker-compose will pick up. It says:

  • Create 3 services: ‘api’, ‘client’ and ‘nginx’;
  • Build them using the specified Dockerfile.dev and the location where it can be found (‘context’);
  • Include all the environment variables inside the .env file, in case you have one (see the sketch after this list);
  • In ‘volumes’, map everything inside our local directory onto the /app folder of the container, with the exception of node_modules. By doing so, every change in your code is mirrored inside the container without having to rebuild the image;
  • The ‘nginx’ container should always restart in case it crashes, and map host port 8080 (it can be whatever you want) to container port 80 (nginx’s default);
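As an example of the .env part (the variable name and value below are made up), a .env file in the project root might contain:

# .env (hypothetical example)
MONGO_URI=mongodb://some-host:27017/my-db

Since docker-compose injects it into the ‘api’ container via env_file, your Node.js code can read it with process.env.MONGO_URI, with no extra configuration.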

After all of this is done, navigate to the app folder on your terminal and type docker-compose up to build the project. Open http://localhost:8080/ to see your app running!

It may happen that some containers start before others they depend on. If this happens, just kill the process (Ctrl + C) and start it up again with “docker-compose up”.
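Alternatively, you can hint at the startup order with depends_on, which isn’t part of the original recipe but is plain docker-compose. A sketch for the nginx service, using the service names above (note that it only waits for the containers to be created, not for the apps inside them to be ready):

nginx:
  restart: always
  depends_on:
    - api
    - client
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  ports:
    - '8080:80'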

If you make any changes to the config files (e.g. by adding a dependency to package.json), you need to run docker-compose up --build to rebuild your project.

Congrats! This concludes the development setup, which now picks up every change you make!

In production

The goal here is to deploy changes every time you push code to your master branch on GitHub. As simple as that.

Step 1: Dockerfiles

Just like we had Dockerfiles for development inside each folder, we now need “real” Dockerfiles, which are nearly identical.

Add Dockerfile inside Server:

FROM node:alpine
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]

Add Dockerfile inside Nginx:

FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf

The client one needs a bit more work. The main point is to serve only the ‘build’ assets through an nginx server. Start by creating a folder named nginx inside client. Inside that folder, add a single file called default.conf with the following code:

server {
  listen 3000;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
  }
}

We’re going to pass this custom config to our nginx client server, telling it which port to listen on and which files to serve (basically the built assets). The try_files directive makes sure that client-side routes fall back to index.html.

So, our Dockerfile inside client will be like this:

FROM node:alpine as builder
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html

This script says:

  • First, run ‘npm run build’ in the React app;
  • Once that’s finished, use the nginx image and serve the files inside the /app/build folder created in the previous stage;

Step 2: Travis CI

Travis allows you to do… lots of things after your code is pushed to master, including running tests and deploying to AWS Elastic Beanstalk, as we’ll soon see. After creating an account, you can activate the repo you want from GitHub by switching it on in your Travis dashboard.

The next step is to create a script file named .travis.yml in the root of your app that talks to Travis:

sudo: required
services:
  - docker

before_install:
  - docker build -t YOUR_DOCKERHUB_USERNAME/react-test -f ./client/Dockerfile.dev ./client

script:
  - docker run -e CI=true YOUR_DOCKERHUB_USERNAME/react-test npm run test -- --coverage

after_success:
  - docker build -t YOUR_DOCKERHUB_USERNAME/multi-nginx ./nginx
  - docker build -t YOUR_DOCKERHUB_USERNAME/multi-server ./server
  - docker build -t YOUR_DOCKERHUB_USERNAME/multi-client ./client
  # Login to the docker CLI
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  # Take these images and push them to Docker Hub
  - docker push YOUR_DOCKERHUB_USERNAME/multi-nginx
  - docker push YOUR_DOCKERHUB_USERNAME/multi-server
  - docker push YOUR_DOCKERHUB_USERNAME/multi-client

The names that you give to your images (after your Docker Hub username) are totally up to you, but this is the gist of this file:

  • First, build a test image of the client from Dockerfile.dev and run any tests you have there (this is optional);
  • Then, build the images inside each folder using Dockerfile (the default name, so we don’t have to specify it in this case);
  • Finally, push these images to Docker Hub under your username;

You’ve probably noticed that, to push images to your Docker Hub repository, you need to log in. To do so, you must add the environment variables DOCKER_PASSWORD and DOCKER_ID to your Travis CI repository under Settings.

Step 3: The AWS bit

In your AWS console, search for the service Elastic Beanstalk (EB).

Create an application (top right corner), give it a name (“My-app”) and click ‘Next’ a couple of times until you reach the Base Configuration section. Under ‘Platform’, select ‘Multi-container Docker’.

That’s the only real specification you need to create the environment with a sample app. In case you have environment variables that your code needs to run, add them in Configuration > Software of your Elastic Beanstalk environment.

There are a couple of things that you need to note down from your EB application:

  • Name: My-app;
  • Environment: typically it’s ‘My-app-env’, which you can see on the dashboard of your EB app;
  • Bucket-name: this is related to the AWS region that you’re using. To find its value, search for ‘S3’ in services and copy the name of the bucket related to the region where you deployed your app (e.g. “elasticbeanstalk-eu-west-1–XXXXX”);
  • AWS credentials: these are related to your IAM user. You can either create a new IAM user or generate new credentials for an existing one. I won’t go into much detail here, but you’ll need the AWS_ACCESS_KEY and AWS_SECRET_KEY to deploy via Travis;

Once this is done, create a file in the root of your app called Dockerrun.aws.json and paste the following code:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "client",
      "image": "docker.io/YOUR_DOCKERHUB_USERNAME/multi-client",
      "hostname": "client",
      "essential": false,
      "memory": 128
    },
    {
      "name": "server",
      "image": "docker.io/YOUR_DOCKERHUB_USERNAME/multi-server",
      "hostname": "api",
      "essential": false,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "docker.io/YOUR_DOCKERHUB_USERNAME/multi-nginx",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": ["client", "server"],
      "memory": 128
    }
  ]
}

This tells AWS where to fetch the images for your containers, allocates 128 MB of memory to each (you can research what suits your containers best) and marks nginx as the essential container, linked to the other two so it can route traffic to them. Note that the ‘server’ container gets the hostname ‘api’, matching the upstream name in the nginx config.

Back in your .travis.yml file, add a deploy section at the bottom, so that the full file looks like this:

sudo: required
services:
  - docker

before_install:
  - docker build -t YOUR_DOCKERHUB_USERNAME/react-test -f ./client/Dockerfile.dev ./client

script:
  - docker run -e CI=true YOUR_DOCKERHUB_USERNAME/react-test npm run test -- --coverage

after_success:
  - docker build -t YOUR_DOCKERHUB_USERNAME/multi-nginx ./nginx
  - docker build -t YOUR_DOCKERHUB_USERNAME/multi-server ./server
  - docker build -t YOUR_DOCKERHUB_USERNAME/multi-client ./client
  # Login to the docker CLI
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  # Take these images and push them to Docker Hub
  - docker push YOUR_DOCKERHUB_USERNAME/multi-nginx
  - docker push YOUR_DOCKERHUB_USERNAME/multi-server
  - docker push YOUR_DOCKERHUB_USERNAME/multi-client

deploy:
  provider: elasticbeanstalk
  region: "YOUR_AWS_REGION"
  app: "My-app"
  env: "My-app-env"
  bucket_name: "YOUR_AWS_BUCKET_NAME"
  bucket_path: "My-app"
  on:
    branch: master
  access_key_id:
    secure: "$AWS_ACCESS_KEY"
  secret_access_key:
    secure: "$AWS_SECRET_KEY"

Make sure you add the AWS_ACCESS_KEY and AWS_SECRET_KEY vars to the settings of your Travis Repository before running the script.

After all that, when you push code to your GitHub master branch, Travis should pick it up, (re)build the images, push them to Docker Hub and deploy the new version to AWS Elastic Beanstalk… which I find pretty neat.

Thanks for reading!
