Docker and Isomorphic React App in Node 4

Djoe Pramono
6 min read · Oct 31, 2017


So recently we inherited an isomorphic React application that was “a bit” outdated. It was developed in Node 4, hadn’t been touched for a while, and had quite a few dependencies on modules that were no longer maintained by their authors. It was not straightforward to set up and it was not dockerised. Naturally, the first thing we did was dockerise it. It was a fulfilling journey and I learnt quite a bit, so here I am, sharing what I learnt in the form of this blog post.

Docker for Development

We knew from the start that we wanted to produce a Docker image that contains the necessary code to run the application. We call this Docker image the app image. However, for our local development environment, we had two options: nvm or a Docker image.

nvm is probably easier to set up and run. However, we ended up using a custom-built Docker image for development and testing. We call this the dev image. This approach has the following benefits:

  • The dev image and app image are built on top of the same base layers. This means there are no headaches even if we develop our application on a Mac while the production server runs on Ubuntu/CentOS.
  • Quite a few tools needed to be installed for this repository, e.g. aws-cli, a headless browser for testing, grunt/gulp, etc. Without Docker, we would need to install these ourselves, and there is a good chance that I would install a different version to the one my colleagues or the CI server use.
  • Contrary to some scepticism, personally I found it quite easy to set up and use.

Differences between dev image and app image

  • The dev image is used for development and testing, while the app image is used in production.
  • The dev image still mounts the local folders, while the app image has everything baked in. We utilise docker-compose to help run the Docker containers.
  • Since it mounts the local folders, the dev image contains a folder structure similar to the one in the repository, with all files still raw. Meanwhile, the app image contains a much simpler folder structure with only the server code and the client code already compiled via webpack.
  • We build the dev image locally through a script, while the app image is built by the CI server. Note that we also have our own internal Docker registry, so we push and pull images from there.
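To make the distinction concrete, a dev image Dockerfile might look roughly like this. This is a hedged sketch, not our actual file: the exact tool list and install commands are assumptions based on the tools mentioned above.

```dockerfile
FROM node:4

# Extra tooling the repository needs (illustrative list, not the real one)
RUN apt-get update && apt-get install -y \
        python-pip \
    && pip install awscli \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package.json package.json
RUN npm install
# Note: no COPY of the source code — the dev container mounts it from the host
```

The key difference from the app image is the last line that isn’t there: the dev image never bakes the source in, because docker-compose mounts it.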

Utilising docker build cache

Below is a stripped-down version of the Dockerfile for our app image:

FROM node:4
ENV NODE_ENV development
WORKDIR /app
COPY package.json package.json
RUN npm install

As you may notice, we run npm install inside the Dockerfile instead of running it through a script on the CI server. Why? Because this way we utilise the Docker build cache.

Notice that the COPY and RUN commands are set one after another. This means that on subsequent docker builds, the RUN command will use the cache as long as package.json stays the same, which means faster build times.

Still confused? I was.

  • The COPY command analyses the checksums of the file(s) it copies; as long as the checksums are the same, Docker uses the cache.
  • The RUN command only analyses the command string, not the checksum of its result. Thus Docker will use the cache here unless you change the command string.
  • A command is only allowed to use the cache if all the previous commands also used the cache. In our case both commands tick the requirement, so Docker uses the cache.
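Putting those rules together, here is a sketch of the full layering pattern; the final COPY is an assumption about how the rest of our build continues, but it is the standard way to finish this pattern:

```dockerfile
FROM node:4
WORKDIR /app

# Cached layer: only invalidated when package.json's checksum changes
COPY package.json package.json
# Cached along with the layer above: the command string "npm install" never changes
RUN npm install

# Copying the full source comes last, so editing application code
# does not bust the npm install cache above
COPY . .
```

The ordering is the whole trick: the expensive npm install sits above the frequently-changing source copy, so day-to-day code edits never re-trigger it.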

For more information you can visit the Docker documentation on leveraging the build cache.

You might wonder why we need the cache at all. Well, doing IO with Docker on a Mac can be quite slow, and this repository has quite a big node_modules folder. A single npm install takes some time, and doing it over and over again without caching wastes a lot of it.

Docker volume trick

There was one gotcha with our approach though. As mentioned, the dev image doesn’t have everything baked in. It mounts the local folder, and this includes the node_modules folder.

So originally our docker-compose volume configuration looked something like this:

dev:
  image: docker.registry.mycompany.com/myteam/isomorphic-dev:latest
  environment:
    - NODE_ENV
    - APP_NAME=isomorphic-app
  volumes:
    - .:/app

Which is run with a command like this

docker-compose run --rm dev

This means that when the docker-compose command runs, the preinstalled node_modules folder inside the dev image is replaced by the local node_modules folder. This is probably okay during development, but on the CI server there is no local node_modules folder, so the CI would run its steps without node_modules.

Luckily we saw this article just in time. Basically, the trick is that a volume declared later in the docker-compose file overrules an earlier one for the path it covers. So now the docker-compose.yml looks like this:

dev:
  image: docker.registry.mycompany.com/myteam/isomorphic-dev:latest
  environment:
    - NODE_ENV
    - APP_NAME=isomorphic-app
  volumes:
    - .:/app
    - /app/node_modules

Basically, after mounting the local root folder into /app, Docker creates an anonymous volume at /app/node_modules, initialised from the image's contents. That means that when CI runs the docker-compose command, node_modules will effectively be there.

This also means that if we run npm install outside Docker and then run the docker-compose command, the new node modules will not be there.
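Two ways around that gotcha are to install new modules inside the container, or to throw the anonymous volume away so it is re-initialised from the image on the next run. A sketch, assuming the dev service name from the compose file above:

```shell
# Install new dependencies inside the container, so they land in the
# anonymous /app/node_modules volume rather than only on the host
docker-compose run --rm dev npm install

# Or discard containers and their anonymous volumes entirely;
# the volume is recreated from the image on the next run
docker-compose down -v
```

The second option is the blunter instrument, but it is handy whenever the volume's contents have drifted too far from the image's.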

Package Lock

As you are probably aware, Node 4 comes with an earlier version of npm which has no package lock. This means that every time we do npm install, it will try to install the latest packages available. This is problematic, and we needed a way to lock the modules’ versions. There are a few options here.

Upgrade to higher version of Node

We thought maybe we could just bump the Node version up to version 8 and hope that every module could be upgraded to run on it. Node 8 comes with a newer npm, which in turn comes with package-lock.json, which is what we want. Unfortunately, some of our Node modules didn’t support Node 8 yet, so we dropped this solution.

Yarn

Some of us had developed React apps with yarn before and we were happy with it. It comes with yarn.lock, which was arguably the go-to solution before npm introduced package-lock.json.

However, this means we would need to introduce a new tool, which means introducing a new dependency. Plus, this tool is not supported by the same team that supports Node.js, so using it may not be in everyone’s best interest.

npm-shrinkwrap

Yes, it is an additional step, but at least it is officially mentioned on the Node.js website. The problem with it, though, is that it doesn’t create the lock file straight after npm install, which may confuse people who are not familiar with it. We need to manually run npm shrinkwrap to generate npm-shrinkwrap.json.
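The workflow is roughly this (a sketch of the standard npm shrinkwrap flow):

```shell
# Install dependencies first; shrinkwrap records what actually got installed
npm install

# Writes npm-shrinkwrap.json, pinning the whole dependency tree;
# commit it so subsequent npm installs reproduce the same versions
npm shrinkwrap
```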

Node 4 with npm 5

So we ended up with Node 4 and npm 5. Node 4 does not come with npm 5 by default, so we upgraded npm ourselves. This allows us to have package-lock.json, the same as if our application were running on Node 8.
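Since the Docker image is where everyone's toolchain comes from, the npm upgrade can live right in the Dockerfile. This is a sketch rather than our actual file; in particular, copying package-lock.json alongside package.json is an assumption about how the caching layers would be adapted:

```dockerfile
FROM node:4
# Replace the bundled npm 2.x with npm 5, which understands package-lock.json
RUN npm install -g npm@5

ENV NODE_ENV development
WORKDIR /app
# The lock file must be copied too, or npm install ignores the pinned versions
COPY package.json package-lock.json ./
RUN npm install
```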

And since we are using docker for our development, everyone will have the correct Node and npm installed straight off the bat.

Forever JS

A Node isomorphic application needs some kind of wrapper so that whenever an exception occurs, the application won’t simply die. We use forever to make sure that our main script runs well … forever 🤞.

We run forever through the Docker CMD instruction:

FROM node:4
ENV NODE_ENV development
WORKDIR /app
COPY package.json package.json
RUN npm install
...
CMD ["node_modules/.bin/forever", "/app/server.js"]

This enables the docker run command to automatically pick up the command in CMD and run it.

Note that we don’t use ENTRYPOINT in the Dockerfile, simply because sometimes we want to start a container from the app image and run an interactive shell on it to debug the application, i.e.

docker run -it app bash

If you haven’t noticed already, we run forever in the foreground, contrary to how most people might use forever:

forever start /app/server.js  # This will run in the background
forever /app/server.js # This will run in the foreground

This is because a Docker container will simply quit after executing the task set in CMD, even if that task spawns a background process. If forever is executed as a foreground task instead, the task will run forever. This prevents the Docker container from shutting itself down, which is ultimately what we want.

Epilogue

That’s it for now 😃 . Thanks for reading! Below are some links to articles that helped me a lot

Also kudos to @bpsommerville and the rest of the team whom I bounced ideas with, while dockerising the application.
