I’ve been using Docker for about a year now, and after some time getting used to it I have become a huge fan of how it can improve the whole making of an application, from the development phase to the production phase.
In this article I chose to talk about 3 aspects of building an app that Docker can bring to a new level:
- Optimizing the production artifact
- Normalizing environments
- Improving integration and delivery
These insights, adaptable to stacks other than Node.js, come mainly from experience on small to large scale application development, as I’m continuously trying to improve my workflow in both work and personal projects.
1. Optimize production artifact with Docker
One of Docker’s main feature is to package your app so that it can be deployed in any Docker-compatible environment. Your Docker image should include everything you need for your app to run.
But when you and your IT team release your app to production with Docker, there are certain optimizations you can make to improve your app’s performance, increase security and reduce the footprint of your package.
- Use an Alpine-based image
Alpine Linux is a lightweight Linux distribution based on musl libc and BusyBox. The main benefit of using Alpine is the size of the Docker image (node:alpine weighs 24MB, compared to 267MB for node:latest).
The small size of the Alpine distribution also means less attack surface for hackers.
Beware though that you might encounter some issues with software compiled specifically against glibc, as stated in the alpine-node repository (https://github.com/mhart/alpine-node#caveats).
But this should not impact your app if you’re using a single stack inside your container (like Node), which is highly recommended for cloud-native applications (see https://12factor.net/)
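Switching to Alpine is usually a one-line change to the base image; just note that Alpine ships apk instead of apt-get when you do need extra system packages. A minimal sketch (the git package below is purely illustrative):

```dockerfile
# Alpine-based variant of the official Node image
FROM node:9-alpine

# Alpine uses apk instead of apt-get for system packages
# (git here is only an example of a build-time dependency)
RUN apk add --no-cache git
```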
- Include only what the application needs to run
This means only including production dependencies, not development dependencies:
RUN npm install --only=production
Also use a .dockerignore file to exclude the files not needed for production, like the node_modules that will be fetched inside the Dockerfile, test files, the documentation, the docker files themselves, etc…
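A minimal .dockerignore for this kind of project could look like the following (the exact entries depend on your repository layout):

```
node_modules
npm-debug.log
test
docs
Dockerfile*
docker-compose*.yml
.git
```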
If you are using a transpiler like Babel to use ES6 or newer syntax in your Node app, then do the transpile part in your npm run build script inside your Dockerfile, and remove your source after the build successfully executes. These steps can be made more elegantly using Docker multistage build that you can see in the code below (docs here : https://docs.docker.com/develop/develop-images/multistage-build/).
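As an illustration of such a multi-stage build (the npm run build script and the dist output directory are assumptions about your project), a Dockerfile could look like this:

```dockerfile
# --- Build stage: install all deps and transpile the sources ---
FROM node:9-alpine AS builder
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
# 'npm run build' is assumed to run Babel and output to ./dist
RUN npm run build

# --- Production stage: keep only the build output and prod deps ---
FROM node:9-alpine
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install --only=production
COPY --from=builder /home/node/app/dist ./dist
CMD ["node", "dist/index.js"]
```

The final image never contains the source files or the dev dependencies; note that multi-stage builds require Docker 17.05 or later.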
- Run npm install before copying your source to the container image
This allows your Docker runtime to cache the image layer containing all your dependencies below the layer containing your sources. That means that if your source code is updated more frequently than your dependency configuration (which is likely), your Docker build time will be much faster on average.
The official Node.js documentation has a clean tutorial on how to build a Docker image for a Node.js application, where they mention this part:
Dockerizing a Node.js web app | Node.js
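Concretely, this caching trick is just a matter of ordering the COPY instructions (paths here are illustrative):

```dockerfile
# Dependency layer: only invalidated when package*.json changes
COPY package*.json ./
RUN npm install

# Source layer: invalidated on every code change, but the
# npm install layer above stays cached
COPY . .
```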
- Use a specific version of Node docker image
Even if you might not be aware of it, your application probably has some tight coupling to a specific version of your language runtime (Node or any other application stack). To prevent your application from crashing when the runtime gets updated during a new Docker build, you should specify the version of Node you want running on your production platform.
Here is a gist containing basic files for a dockerized Node application that uses ES6 and Babel as a transpiler (https://gist.github.com/guillaumejacquart/676627dd862e70fd6e45e8361f513abf):
2. Normalize environments with Docker Compose
Docker Compose is a tool by Docker which allows you to define your whole application stack (app services, databases, cache layer, …) as containers inside a single file (docker-compose.yml), and manage the state of these containers as well as the underlying resources (volumes, networks) using a CLI.
What is cool about docker-compose, in my opinion, is that it makes it easy to run a full production-like environment on your development machine.
Let’s imagine you have an application that consists of the following components:
- An API in Node.JS
- Talking to a MySQL Database
- Using Redis as a cache and session layer
- Traefik as a reverse proxy for your API
By the way, if you don’t know Traefik, I would recommend you check it out: it is a dynamic reverse proxy that can inspect your running web containers and reverse-proxy them on the fly.
Docker Compose allows you to set up this stack for all your environments (dev, staging, even production if the ops team feels like it) quite easily and in a somewhat factorized way.
Here are the steps I came up with to facilitate an iso-production setup and configuration factorization between environments:
- Use a configuration library for your Node.js app
This allows you to store your configuration in a centralized place, and make it overridable in multiple ways, such as dotenv files or environment variables. Personally I find convict (by Mozilla) to do the job fine.
By doing so, the only thing that should change in your Node.js app when running it in different environments is a dotenv file or a list of environment variables.
In our example, the configuration should contain at least the MySQL and Redis connection information.
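For instance, a configuration module in the spirit of convict boils down to something like this sketch, here using only Node’s built-in process.env so it stays dependency-free; all variable names (MYSQL_HOST, REDIS_PORT, …) are hypothetical:

```javascript
// config.js - minimal, convict-like configuration loader.
// Every value can be overridden by an environment variable,
// with a sensible default for development.
function loadConfig(env = process.env) {
  return {
    mysql: {
      host: env.MYSQL_HOST || 'localhost',
      port: parseInt(env.MYSQL_PORT || '3306', 10),
      user: env.MYSQL_USER || 'root',
    },
    redis: {
      host: env.REDIS_HOST || 'localhost',
      port: parseInt(env.REDIS_PORT || '6379', 10),
    },
  };
}

module.exports = { loadConfig };
```

The same code then runs unchanged in every environment; only the variables injected by docker-compose (or a dotenv file) differ.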
- Define your whole stack configuration in a single place
This can be in a sourced environment script, or in a .env file (which makes it easier as it can be read by docker-compose).
In our example, this file should contain the same variables as for the configuration file in the Node.js app.
- Create your docker-compose.yml file using variables
Docker Compose can substitute environment variables in the configuration file (see https://docs.docker.com/compose/environment-variables/). This is convenient to have a single docker-compose file in all the environments.
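For example, a fragment of the docker-compose.yml can reference the variables defined in the .env file (MYSQL_DATABASE and MYSQL_PASSWORD are hypothetical names here):

```yaml
services:
  db:
    image: mysql:5.7
    environment:
      # Substituted from the .env file or the shell environment
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
```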
The only difference between dev and prod is that in development I am using a different Dockerfile for the Node.js app, so that nodemon can live-reload changes to my code (mounted inside a Docker volume).
Here are the docker-compose.yml and docker-compose.dev.yml files, the .env file and the Dockerfile for development:
The Dockerfile:

FROM node:9-alpine

WORKDIR /home/node/app

# Install deps
COPY ./package* ./
RUN npm install && \
    npm cache clean --force

COPY . .

# Expose ports (for orchestrators and dynamic reverse proxies)
EXPOSE 3000

# Start the app
CMD npm start
docker-compose.yml file:

services:
  traefik:
    image: traefik # The official Traefik docker image
    command: --api --docker.exposedbydefault=false # Enables the web UI and tells Traefik to listen to docker, without exposing by default
    ports:
      - "80:80" # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
  db:
    # … (the db, redis and app services follow the same pattern)
The docker-compose.dev.yml file:

services:
  app:
    command: npm run dev
You can see in the “app” section of the docker-compose.yml file that I am using localhost.tv, which is a nice remote DNS service that binds all *.localhost.tv to your localhost. I use it to avoid using relative paths for application endpoints (like localhost/api), which always come with undesirable side-effects when moving to a subdomain in production (embedded links for instance, inner routing, stuff like that).
The separate Dockerfile for the development image is a bit annoying, as it makes the development configuration differ from the production one, and so introduces some work (and thus some risk) when deploying the app to another environment. So far the only solution I’ve come up with is to use a templating system (a simple script, or more evolved provisioning tools such as Ansible) to make the Dockerfile dynamic.
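A minimal version of that templating approach can be a one-line sed substitution over a Dockerfile template (the Dockerfile.tpl file and the __CMD__ placeholder are hypothetical conventions):

```shell
# Create a Dockerfile template with a per-environment placeholder
printf 'FROM node:9-alpine\nCMD %s\n' '__CMD__' > Dockerfile.tpl

# Render it for development (nodemon) or production (npm start)
sed 's/__CMD__/npm run dev/' Dockerfile.tpl > Dockerfile.dev
sed 's/__CMD__/npm start/'   Dockerfile.tpl > Dockerfile.prod
```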
With all these files set up, you can use the following commands to run your stack in the development environment:
First, build your app container from the Dockerfile-dev file:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml build
Then, run your stack with the following:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
You now have a dockerized, reverse-proxied, iso-production development environment running with live-reloading in Node.js.
You can find the full example app here :
3. Smooth out delivery and integration with CI/CD
Now that you have a portable and customizable app environment, you can use it for all the steps of the continuous integration and deployment.
Here is what I try to do for each project in terms of tests when using Docker with Node.js:
- Run unit tests when building the Docker image. You can also build a custom image for this, such as:
# Use the builder image as a base image (the image name is project-specific)
FROM my-node-app

# Copy the test files
COPY tests tests

# Override the NODE_ENV environment variable to 'dev', in order to get required test packages
ENV NODE_ENV dev

# 1. Get test packages; AND
# 2. Install our test framework - mocha
RUN npm update && \
    npm install -g mocha

# Override the command, to run the tests instead of the application
CMD ["mocha", "tests/test.js", "--reporter", "spec"]
You can check the exit code of the docker run command to determine whether the CI pipeline can go on or not.
- Run integration tests using docker-compose inside the CI tool, for example running docker-compose up so that the full stack is operational, then calling a special endpoint to check that the Node.js app can correctly access its required components (the database and Redis in the example)
- Run real API tests using docker-compose inside the CI tool, and tools such as fixtures in Sequelize to populate the database before running the tests.
You can run all these steps inside your CI provider (Jenkins, GitLab CI, Travis) if it can run a dockerized environment. For example in GitLab CI you can use this image: https://hub.docker.com/r/gitlab/dind/, which is a Docker-in-Docker image that includes docker-compose.
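As a sketch (the job name and the scripts are assumptions about your project), a .gitlab-ci.yml using that image could look like:

```yaml
# .gitlab-ci.yml - sketch of a pipeline using Docker-in-Docker
image: gitlab/dind

stages:
  - test

integration_tests:
  stage: test
  script:
    # Bring the full stack up, run the test suite against it, tear down
    - docker-compose up -d
    - docker-compose run app npm test
    - docker-compose down
```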
I hope these insights will be helpful to anyone who considers using Docker for Node.js application development or deployment.
They are by no means a complete list of requirements, but rather aim to offer a view on how to use new container tools to improve the making of modern apps.
Feel free to share other practices of Docker and Node.js you use in the comments section.