Docker best practices with Node.js
Welcome to our comprehensive list of Docker best practices, illustrated in the context of Node.js.
Collected, curated and written by: Yoni Goldberg, Bruno Scheufler, Kevyn Bruyere and Kyle Martin
A few words before we start
Note that each and every bullet has a link to detailed information and code examples. The entire list can be found in our repository, Node.js Best Practices. It covers the basics but goes all the way to strategic decisions like how much and where to limit the container’s memory, how to prevent secrets from sticking to the image, and whether a process manager is needed as the top process or Node can act as PID 1.
🏅 Many thanks to Bret Fisher from whom we learned many insightful Docker best practices
✅ 1. Use multi-stage builds for leaner and more secure Docker images
📘 TL;DR: Use multi-stage builds to copy only the necessary production artifacts. Many build-time dependencies and files are not needed for running your application. With multi-stage builds these resources can be used during build while the runtime environment contains only what’s necessary. Multi-stage builds are an easy way to get rid of excess weight and security threats
🚩 Otherwise: Larger images will take longer to build and ship, build-only tools might contain vulnerabilities, and secrets meant only for the build phase might be leaked.
✍🏽 Code Example — Dockerfile for multi-stage builds
FROM node:14.4.0 AS build

COPY . .
RUN npm install && npm run build

FROM node:14.4.0-slim

USER node
EXPOSE 8080
COPY --from=build /home/node/app/dist /home/node/app/package.json /home/node/app/package-lock.json ./
RUN npm install --production

CMD ["node", "dist/app.js"]
🔗 More examples and further explanations
✅ 2. Bootstrap using ‘node’ command, avoid npm start
📘 TL;DR: Use CMD ["node", "server.js"] to start your app, avoid using npm scripts which don't pass OS signals to the code. This prevents problems with child-processes, signal handling, graceful shutdown and zombie processes.
🚩 Otherwise: When no signals are passed, your code will never be notified about shutdowns. Without this, it will lose its chance to close properly, possibly losing current requests and/or data.
✍🏽 Code example — Bootstrapping using Node
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci --production && npm cache clean --force
CMD ["node", "server.js"]
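✍🏽 Anti-pattern — Bootstrapping via npm (a minimal illustration of what to avoid)
CMD ["npm", "start"]
# npm becomes the top process and does not forward OS signals such as SIGTERM to Node, so the app never gets the chance to shut down gracefully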
🔗 More examples and further explanations
✅ 3. Let the Docker runtime handle replication and uptime
📘 TL;DR: When using a Docker runtime orchestrator (e.g., Kubernetes), invoke the Node.js process directly, without intermediate process managers or custom code that replicates the process (e.g. PM2, Cluster module). The runtime platform has the most data and visibility for making placement decisions — it knows best how many processes are needed, how to spread them and what to do in case of crashes
🚩 Otherwise: A container that keeps crashing due to a lack of resources will get restarted indefinitely by the process manager. Should Kubernetes be aware of that, it could relocate it to a different, roomier instance
✍🏽 Code Example — Invoking Node.js directly without intermediate tools
FROM node:12-slim
# The build logic comes here
CMD ["node", "index.js"]
🔗 More examples and further explanations
✅ 4. Use .dockerignore to prevent leaking secrets
📘 TL;DR: Include a .dockerignore file that filters out common secret files and development artifacts. By doing so, you might prevent secrets from leaking into the image. As a bonus, the build time will significantly decrease. Also, ensure not to copy all files recursively but rather explicitly choose what should be copied to Docker (see the sketch after the ignore file below)
🚩 Otherwise: Common personal secret files like .env, .aws and .npmrc will be shared with anybody with access to the image (e.g. the Docker repository)
✍🏽 Code Example — A good default .dockerignore for Node.js
**/node_modules/
**/.git
**/README.md
**/LICENSE
**/.vscode
**/npm-debug.log
**/coverage
**/.env
**/.editorconfig
**/.aws
**/dist
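✍🏽 Code Example — Copying explicitly instead of recursively (a minimal sketch; the paths and file names are illustrative)
FROM node:14.4.0-slim
WORKDIR /usr/src/app
# Only what is listed here can end up in the image, regardless of what the build context contains
COPY package.json package-lock.json ./
RUN npm ci --production
COPY src ./src
CMD ["node", "src/app.js"]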
🔗 More examples and further explanations
✅ 5. Clean-up dependencies before production
📘 TL;DR: Although DevDependencies are sometimes needed during the build and test life-cycle, eventually the image that is shipped to production should be minimal and clean of development dependencies. Doing so guarantees that only necessary code is shipped and the amount of potential attacks (i.e. the attack surface) is minimized. When using a multi-stage build (see dedicated bullet) this can be achieved by installing all dependencies first and finally running ‘npm ci --production’
🚩 Otherwise: Many of the infamous npm security breaches were found within development packages (e.g. eslint-scope)
✍🏽 Code Example — Installing for production
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci --production && npm cache clean --force
# The rest comes here
🔗 More examples and further explanations
✅ 6. Shutdown smartly and gracefully
📘 TL;DR: Handle the process SIGTERM event and clean up all existing connections and resources. This should be done while responding to ongoing requests. In Dockerized runtimes, shutting down containers is not a rare event but a frequent occurrence that happens as part of routine work. Achieving this demands some thoughtful code to orchestrate several moving parts: the load balancer, keep-alive connections, the HTTP server and other resources
🚩 Otherwise: Dying immediately means not responding to thousands of disappointed users
✍🏽 Code Example — Placing Node.js as the root process allows passing signals to the code
FROM node:12-slim
# Build logic comes here
CMD ["node", "index.js"]
# The CMD line above will make Node.js the root process (PID1)
✍🏽 Code Example — Using the Tini process manager to forward signals to Node
FROM node:12-slim
# Build logic comes here
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
CMD ["node", "index.js"]
# Now Node will run as a sub-process of Tini, which acts as PID1
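✍🏽 Code Example — Handling SIGTERM in the application code (a minimal sketch; the port and clean-up steps are illustrative)
const http = require('http');
const server = http.createServer((req, res) => res.end('ok'));
server.listen(8080);
process.on('SIGTERM', () => {
  // Stop accepting new connections; in-flight requests are allowed to finish
  server.close(() => {
    // Clean up other resources here (DB connections, message queues, etc.)
    process.exit(0);
  });
});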
🔗 More examples and further explanations
✅ 7. Set memory limits using both Docker and v8
📘 TL;DR: Always configure a memory limit using both Docker and the JavaScript runtime flags: set v8’s old-space memory limit to be a bit less than the container limit
🚩 Otherwise: The Docker definition is needed to perform thoughtful scaling decisions and prevent starving other citizens. Without also defining v8’s limits, it will underutilize the container resources — without explicit instructions, Node crashes when utilizing ~50–60% of its host resources
✍🏽 Code Example — Memory limit with Docker
docker run --memory 512m my-node-app
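✍🏽 Code Example — Combining both limits in plain Docker (a sketch; index.js and the 450MB value are illustrative, leaving some headroom below the 512MB container limit, and assuming index.js sits in the image's working directory)
docker run --memory 512m my-node-app node --max-old-space-size=450 index.js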
✍🏽 Code Example — Memory limit with Kubernetes and v8
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
spec:
  containers:
    - name: my-node-app
      image: my-node-app
      resources:
        requests:
          memory: "400Mi"
        limits:
          memory: "500Mi"
      command: ["node", "--max-old-space-size=450", "index.js"]
🔗 More examples and further explanations
✅ 8. Plan for efficient caching
📘 TL;DR: Rebuilding a whole Docker image from cache can be nearly instantaneous if done correctly. Instructions that change rarely should be at the top of your Dockerfile and the ones constantly changing (like app code) should be at the bottom.
🚩 Otherwise: Docker builds will be very long and consume a lot of resources even when making tiny changes
✍🏽 Code Example — Dependencies install first, then code
COPY "package.json" "package-lock.json" "./"
RUN npm ci
COPY ./app ./app"
✍🏽 Anti-pattern — Dynamic labels
# Beginning of the file
FROM node:10.22.0-alpine3.11 as builder
# Don't do that here!
LABEL build_number="483"
# ... Rest of the Dockerfile
✍🏽 Code Example — Install “system” packages first
It is recommended to create a base Docker image that has all the system packages you use. If you really need to install packages using apt, yum, apk or the like, this should be one of the first instructions. You don't want to reinstall make, gcc or g++ every time you build your Node app. Do not install packages only for convenience; this is a production app.
FROM node:10.22.0-alpine3.11 as builder

RUN apk add --no-cache \
    build-base \
    gcc \
    g++ \
    make

COPY "package.json" "package-lock.json" "./"
RUN npm ci --production
COPY . "./"

FROM node as app

USER node
WORKDIR /app
COPY --from=builder /app/ "./"
RUN npm prune --production

CMD ["node", "dist/server.js"]
🔗 More examples and further explanations
✅ 9. Use explicit image reference, avoid the ‘latest’ tag
📘 TL;DR: Specify an explicit image digest or versioned label, never refer to ‘latest’. Developers are often led to believe that specifying the ‘latest’ tag will provide them with the most recent image in the repository, however this is not the case. Using a digest guarantees that every instance of the service is running exactly the same code.
In addition, referring to an image tag means that the base image is subject to change, as image tags cannot be relied upon for a deterministic install. Instead, if a deterministic install is expected, a SHA256 digest can be used to reference an exact image.
🚩 Otherwise: A new version of a base image could be deployed into production with breaking changes, causing unintended application behavior.
✍🏽 Code example — Right vs wrong
$ docker build -t company/image_name:0.1 .
# 👍🏼 Immutable
$ docker build -t company/image_name .
# 👎 Mutable
$ docker build -t company/image_name:0.2 .
# 👍🏼 Immutable
$ docker build -t company/image_name:latest .
# 👎 Mutable
$ docker pull ubuntu@sha256:45b23dee
# 👍🏼 Immutable
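✍🏽 Code Example — Pinning a base image by digest in a Dockerfile (the digest is the truncated placeholder from the example above; use the full 64-character sha256 in practice)
FROM ubuntu@sha256:45b23dee
# 👍🏼 Immutable — this exact image is pulled no matter how tags move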
🔗 More examples and further explanations
✅ 10. Prefer smaller Docker base images
📘 TL;DR: Large images lead to higher exposure to vulnerabilities and increased resource consumption. Using leaner Docker images, such as Slim and Alpine Linux variants, mitigates this issue.
🚩 Otherwise: Building, pushing, and pulling images will take longer, unknown attack vectors can be used by malicious actors and more resources are consumed.
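✍🏽 Code Example — Choosing a leaner base image (a sketch; the tags are illustrative, pick one of the alternatives below)
# Full Debian-based image: largest, ships with many build tools
FROM node:14.4.0
# Slim variant: Debian-based but stripped of most extra packages
FROM node:14.4.0-slim
# Alpine variant: smallest footprint, uses musl libc (test native modules carefully)
FROM node:14.4.0-alpine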
🔗 More examples and further explanations
✅ 11. Clean-out build-time secrets, avoid secrets in args
📘 TL;DR: Avoid secrets leaking from the Docker build environment. A Docker image is typically shared in multiple environments like CI and a registry that are not as sanitized as production. A typical example is an npm token which is usually passed to a Dockerfile as an argument. This token stays within the image long after it is needed and allows an attacker indefinite access to a private npm registry. This can be avoided by copying a secret file like .npmrc and then removing it using a multi-stage build (beware, the build history should be deleted as well) or by using the Docker BuildKit secret feature which leaves zero traces
🚩 Otherwise: Everyone with access to the CI and docker registry will also get access to some precious organization secrets as a bonus
✍🏽 Code Example — Using Docker mounted secrets (experimental but stable)
# syntax = docker/dockerfile:1.0-experimental
FROM node:12-slim
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN --mount=type=secret,id=npm,target=/root/.npmrc npm ci
# The rest comes here
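✍🏽 Code Example — Passing the secret at build time (assuming the token lives in an .npmrc file in the user's home directory; the image name is illustrative)
DOCKER_BUILDKIT=1 docker build . -t my-node-app --secret id=npm,src=$HOME/.npmrc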
✍🏽 Code Example — Building securely using multi-stage build
FROM node:12-slim AS build
ARG NPM_TOKEN
WORKDIR /usr/src/app
COPY . /dist
RUN echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > .npmrc && \
    npm ci --production && \
    rm -f .npmrc

FROM build as prod

COPY --from=build /dist /dist
CMD ["node", "index.js"]
# The ARG and .npmrc won't appear in the final image but can be found in the Docker daemon's un-tagged images list - make sure to delete those
🔗 More examples and further explanations
✅ 12. Scan images for multi-layers of vulnerabilities
📘 TL;DR: Besides checking for vulnerable code dependencies, also scan the final image that is shipped to production. Docker image scanners check the code dependencies but also the OS binaries. This E2E security scan covers more ground and verifies that no bad guy injected bad things during the build. Consequently, it is recommended to run this as the last step before deployment. There are a handful of free and commercial scanners that also provide CI/CD plugins
🚩 Otherwise: Your code might be entirely free from vulnerabilities. However, it might still get hacked due to vulnerable versions of OS-level binaries (e.g. OpenSSL, TarBall) that are commonly used by applications
✍🏽 Code Example — Scanning with Trivy
$ sudo apt-get install rpm
$ wget https://github.com/aquasecurity/trivy/releases/download/{TRIVY_VERSION}/trivy_{TRIVY_VERSION}_Linux-64bit.deb
$ sudo dpkg -i trivy_{TRIVY_VERSION}_Linux-64bit.deb
$ trivy image [YOUR_IMAGE_NAME]
🔗 More examples and further explanations
✅ 13. Clean NODE_MODULE cache
📘 TL;DR: After installing dependencies in a container, remove the local cache. It doesn’t make any sense to duplicate the dependencies for faster future installs since there won’t be any further installs — a Docker image is immutable. Using a single line of code, tens of MB (typically 10–50% of the image size) are shaved off
🚩 Otherwise: The image that will get shipped to production will weigh 30% more due to files that will never get used
✍🏽 Code Example — Clean cache
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci --production && npm cache clean --force
# The rest comes here
🔗 More examples and further explanations
✅ 14. Generic Docker practices
📘 TL;DR: This is a collection of Docker advice that is not related directly to Node.js — the Node implementation is not much different than in any other language:
✓ Prefer COPY over ADD command
TL;DR: COPY is safer as it copies local files only, while ADD supports fancier fetches like downloading binaries from remote sites
✓ Avoid updating the base OS
TL;DR: Updating the local binaries during build (e.g. apt-get update) creates inconsistent images every time it runs and also demands elevated privileges. Instead use base images that are updated frequently
✓ Classify images using labels
TL;DR: Providing metadata for each image might help Ops professionals treat it adequately. For example, include the maintainer name, build date and other information that might prove useful when someone needs to reason about an image (see the sketch after this list)
✓ Use unprivileged containers
TL;DR: Privileged containers have the same permissions and capabilities as the root user over the host machine. This is rarely needed and, as a rule of thumb, one should use the ‘node’ user that is created within official Node images (see the sketch after this list)
✓ Inspect and verify the final result
TL;DR: Sometimes it’s easy to overlook side effects in the build process like leaked secrets or unnecessary files. Inspecting the produced image using tools like Dive can easily help to identify such issues
✓ Perform integrity check
TL;DR: While pulling base or final images, the network might be misled and redirected to download malicious images. Nothing in the standard Docker protocol prevents this unless you sign and verify the content. Docker Notary is one of the tools to achieve this
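✍🏽 Code Example — Labels and an unprivileged user in practice (a minimal sketch; the label keys and values are illustrative)
FROM node:14.4.0-slim
# Classify the image with metadata that Ops can inspect later
LABEL maintainer="team@example.com"
LABEL build-date="2020-08-01"
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci --production
COPY . .
# Drop root privileges; the official Node images ship with a dedicated 'node' user
USER node
CMD ["node", "index.js"]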
🔗 More examples and further explanations
✅ 15. Lint your Dockerfile
📘 TL;DR: Linting your Dockerfile is an important step to identify issues in it that differ from best practices. By checking for potential flaws using a specialized Docker linter, performance and security improvements can be easily identified, saving countless hours of wasted time and preventing security issues in production code.
🚩 Otherwise: Mistakenly, the Dockerfile creator left root as the production user and also used an image from an unknown source repository. This could be avoided with just a simple linter.
✍🏽 Code example — Inspecting a Dockerfile using hadolint
hadolint production.Dockerfile
hadolint --ignore DL3003 --ignore DL3006 <Dockerfile> # exclude specific rules
hadolint --trusted-registry my-company.com:500 <Dockerfile> # Warn when using untrusted FROM images
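✍🏽 Code example — Running hadolint without installing it (via its official Docker image)
docker run --rm -i hadolint/hadolint < production.Dockerfile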