Deploy That NodeJs App to a Docker Container

Dean Slama Jr
8 min read · Jul 10, 2018


This walk-through demonstrates a basic Node.js Docker configuration that allows for easy, fast, and deterministic deployments. Web app hosting services that support Docker container deployments, e.g. Elastic Beanstalk (EB), grant developers much more control over the configuration of the deployment environment.

Edit: To fully reap the benefits of supporting a Node.js project with Docker, read the follow-up to this post, where we walk through a local development setup that also uses this Docker image.

TLDR

What is Docker?

Why use Docker for an App’s Deployment Environment

A Dockerized Elastic Beanstalk Environment

Helpful Docker on Elastic Beanstalk Tips

This post was peer reviewed by William Horstkamp

What is Docker?

Docker is a platform for building and running Docker containers. A Docker container is an isolated environment in which one can execute software, conceptually similar to a lightweight virtual machine. One can fully configure this environment with code (a Dockerfile) and reliably start/stop/restart it. If a code execution environment is unreliable in its configuration (e.g. which operating system is available, the layout of the file system, which command-line tools are installed, which processes are running, etc.), a Docker container can provide a more deterministic environment for an application.

The computer that the Docker container is running on is considered the host. One distinguishes the Docker container’s operating system/filesystem as separate from the host’s operating system/filesystem. A Docker container results from the running of a Docker image. Docker images are created from Dockerfiles. In other words, one builds a Docker image from a Dockerfile and one starts a Docker container by running a Docker image.
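To make the image/container distinction concrete, here is a minimal sketch of that workflow, assuming a Dockerfile exists in the current directory (the image name my-app and the port numbers are placeholders, not values from this project):

docker build -t my-app .

docker run -p 3000:3000 my-app

The first command builds an image from the Dockerfile and tags it my-app; the second starts a container from that image and maps port 3000 on the host to port 3000 in the container.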

Why use Docker for an App’s Deployment Environment?

Running an application via a Docker container will require additional system resources (e.g. RAM) and will generally increase a given application’s complexity. Even with these downsides, dockerizing an application can prove to be a net positive due to several benefits:

  • Greater similarity between development and deployment environments, which helps reduce production-only bugs (an upcoming blog post will walk through a development Docker configuration)
  • Cleaner deployment environment guarantees; Node.js, npm, and dependencies pulled from npm are downloaded and installed the same way each and every time with no baggage from previous installs
  • Faster deploys due to the Docker container caching mechanism
  • Simpler Node.js & npm version bumping, independent of vendor’s upgrade schedule (EB can be slow to support new versions of Node.js, which makes upgrading a project’s version difficult)

A Dockerized Elastic Beanstalk Environment

Modifying a Node.js application to run from a Docker container is relatively simple and should not affect the app’s functionality. To demonstrate this, I will start by forking an application that resembles the complexity one might expect of a production application under development. Specifically, I want to show an example of an app that includes significant backend and frontend logic and also provides advanced development tooling. An open-source project starter kit that meets these requirements is React Universally.

The file changes mentioned in this post can be viewed on GitHub.

Modify Project to Initialize and Start via a Single Command

A Docker container will execute a single command after being started. Within this constraint, we must perform all application initialization tasks and start the app. Our example project requires both the execution of a build step (to create front end assets) and the starting of the Node.js app. We can leverage package.json scripts to combine these two commands:
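The change lives in the scripts section of package.json. The snippet below is only a sketch: the build and start commands are placeholders standing in for React, Universally’s actual scripts, and the part that matters is the prestart/start pairing:

"scripts": {
  "prestart": "npm run build",
  "build": "webpack --mode production",
  "start": "node build/server.js"
}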

Here we are leveraging npm’s pre hook. We’ll have EB execute npm start, which runs the prestart script first (the build step), followed synchronously by the start script itself (which starts the backend web server).

Add Dockerfile

EB provides a couple of different ways to configure an app for deployment to its Docker platform. The simplest and least vendor-locked approach is to provide a Dockerfile in the project’s root. A Dockerfile is a list of instructions that are executed in order, one after the other, to create a Docker image.

Create a file named Dockerfile at the project’s root, with the following contents:
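Each instruction is explained one by one below. The file in the example repository may differ in minor details (comments, blank lines), but the essential contents are:

FROM node:8.11.1
WORKDIR /opt/app
COPY package.json package-lock.json* ./
RUN npm cache clean --force && npm install
COPY . /opt/app
ENV PORT 80
EXPOSE 80
CMD [ "npm", "run", "start" ]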

Let’s go through each instruction (check out the official documentation for descriptions of all valid instructions):

FROM node:8.11.1 — Dockerfiles begin with a FROM instruction, which names another Docker image (by default publicly available via Docker Hub) that the instructions in this Dockerfile will modify. This provides a simple inheritance mechanism for creating new Docker images as variations of well-established base images. node:8.11.1 references a base image that provides a fresh copy of Node.js v8.11.1. Similar base images are available for all versions of Node.js.

WORKDIR /opt/app — This sets the file path context that the following instructions will use when resolving relative paths.

COPY package.json package-lock.json* ./ — The COPY instruction copies one or more files from the host filesystem to the filesystem being configured. Here we copy the Node.js app’s npm dependency manifests (package.json and, if present, package-lock.json) to the Docker image, a prerequisite for the next instruction.

RUN npm cache clean --force && npm install — The RUN instruction simply executes a command. In this case, we want to use npm to download and install the app’s dependencies. Before installing, however, we delete all the data out of the npm cache folder so that we don’t experience any dependency weirdness from previous installs.

COPY . /opt/app — Using the COPY instruction again, we are now copying the rest of the app files to the Docker image.

The order of the instructions in the Dockerfile is important: during the Docker image build, the result of each instruction is cached as a layer. The Docker image build can save a lot of work by reusing previously cached layers and only redoing the work for an instruction when the files associated with it have changed. Assuming that application files will change more frequently than the list of npm dependencies (i.e. package.json), we’ve split the copying of the app’s files into the two separate COPY instructions above to maximize the effectiveness of the Docker cache. This allows deployments to skip the work associated with the RUN npm install … instruction whenever the dependency list hasn’t changed between deploys, speeding up most deployments considerably.

ENV PORT 80 — This instruction sets an environment variable named PORT (in the Docker container) to the value 80. Although EB will set Docker container environment variables to the values configured in the normal way, it is useful to “hard-code” PORT to 80 here in the Dockerfile because EB requires Docker containers to have an environment variable named PORT set to 80.

EXPOSE 80 — The Docker documentation notes that this instruction does not actually publish the port; it serves as a type of documentation between the person who builds the image and the person who runs the container. However, EB requires it to be included in the Dockerfile.

CMD [ "npm", "run", "start" ] — Lastly, the CMD instruction sets the command that should be executed in the Docker container after it is created and started. We’ll use the single app initialization & start command from earlier here.

Add .dockerignore

The COPY instructions in a Dockerfile will skip any files/directories listed in an associated .dockerignore file:
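A reasonable starting point is sketched below (the build entry is a placeholder for wherever your project’s build step writes its output, so adjust it to match your project):

.git
node_modules
build
Dockerfile
.dockerignore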

It is a good idea to include .git, node_modules, a project’s Docker configuration files, any results from the project’s build step, and any other files that are not needed for the app’s initialization steps or at run time. The COPY instruction takes an amount of time proportional to the total size of the contents to be copied. Consequently, if a Docker image build step is taking a relatively long time, build times can be improved by tweaking this file.

Create Elastic Beanstalk Docker Environment

Adding a Dockerfile to a project’s root directory is the minimum setup required to configure a Node.js app to be deployed to an EB Docker environment. Next, we will create that environment.

When creating a new deployment environment, EB requires a platform type to be specified (Node.js, PHP, Python, etc.). One should choose the Docker setting.

I’ve found it best to select the Sample application setting for the Application Code field during new environment setup. This guarantees the first deploy will succeed. After that first deploy, do an immediate second deploy with your app code. That way, if the second deploy fails, you’ll have more confidence that the bug is in your code rather than in the environment configuration.

Instance Type might need to be tweaked to provide a bit more RAM than your Node.js app would normally require on EB. I’ve found t2.small works perfectly fine for Dockerized Node.js applications, but you may find smaller, cheaper instances work for your application too. Basically, if it deploys, the instance is large enough (I’ve written more about this in another post).
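If you prefer the EB CLI to the web console, the same setup can be sketched roughly as follows; the application and environment names are placeholders, and the exact flags assume a reasonably recent version of the CLI:

eb init -p docker my-docker-app

eb create my-docker-env

eb deploy

Subsequent deploys of your own app code are then just a matter of running eb deploy again.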

Helpful Docker on Elastic Beanstalk Tips

There are some notable differences between EB’s Node.js platform and its Docker platform:

application logs — Whereas with the Node.js platform one would expect application logs to be found in /var/log/nodejs/, the Docker platform will place application logs in /var/log/eb-docker/containers/eb-current-app/
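For example, after SSHing into the instance (covered in the next tip), the app’s output can be followed with something like the command below; the exact file names inside that directory vary by deploy:

sudo tail -f /var/log/eb-docker/containers/eb-current-app/*.log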

SSHing into container — Sometimes it can be helpful to connect directly to an EB server instance and dig around. However, the EB Docker platform adds an extra complication to this workflow: we must not only connect to the server instance but also connect to the Docker container running on that instance.

After connecting to the instance via the EB CLI, ensure you are the root user (or prefix commands with sudo, as below) and then list the running Docker containers:

sudo docker ps

This will show a single row of information associated with the instance’s live Docker container. Copy the CONTAINER ID and then execute:

sudo docker exec -it <CONTAINER ID> bash

Read more about connecting to EB Docker containers via SSH

eb extensions — EB Extensions function normally on the Docker EB platform, just as they do on the non-Docker platforms. However, one must be cognizant that EB Extension commands execute not in the Docker container but on the associated host environment. This is normally not a problem, although I’ve seen EB extensions used to set up and manage daemon processes with which the web application interacts (e.g. a StatsD relay instance to which the web app sends metrics). This type of architecture should be avoided with a Docker EB deploy. It is considered an anti-pattern to have processes running in Docker containers communicate directly with processes running on the container’s host (the more proper pattern is to run every service in its own Docker container and have the containers communicate over a local network). EB provides a Multicontainer Docker environment type for more advanced app architectures.

To fully reap the benefits of supporting a Node.js project with Docker, read the follow-up to this post, where we walk through a local development setup that also uses this Docker image.

For more Dockerized Node.js tips:

https://blog.hasura.io/an-exhaustive-guide-to-writing-dockerfiles-for-node-js-web-apps-bbee6bd2f3c4

https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#cmd


Dean Slama Jr

A journaling of solutions to interesting problems encountered in the modern web stack @henryslama www.dslama.net