Docker image size optimization for your Node.js app in 3 easy steps
Author: Yaroslav Rozum (Developer)
Docker is a software platform designed to make it easier to create, deploy, and run applications using containers, and it is one of the most in-demand technologies in modern development. That is why we run almost all of our projects through it.
Our business task for the smart home project was to run as many Docker containers as possible on an embedded device (Raspberry Pi / Orange Pi). The problem is that standard Docker images are usually really big (~600 MB), and the average size of our images was over 700 MB. So today I would like to share our experience with you and go through all the steps from the starting point of our optimization to where we are now.
Steps we will go through:
- Step 1: Simple image size optimization
- Step 2: Get rid of unnecessary dependencies
- Step 3: Compile your app to a single bundle using NCC
Image size optimization step by step
The first thing that comes to mind when we do not think about optimization is using a standard image.
Here is an example of Dockerfile that we were using:
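A minimal sketch of such an unoptimized Dockerfile (the `index.js` entry point is illustrative):

```dockerfile
# Full-size official Node.js base image (~900 MB at the time of writing)
FROM node:10

WORKDIR /app

# Install all dependencies, including devDependencies
COPY package*.json ./
RUN npm install

# Copy the application source
COPY . .

CMD ["node", "index.js"]
```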
A build via this Dockerfile looks acceptable, but there is still much work to do:
1. Do not specify `FROM node:<version>` as your base image.
The full Node.js image adds a significant number of tools, such as:
- source control software, such as git
- runtime libraries and build tools
- libraries and API sets
That’s why the full Node.js image usually starts at 600 MB. At the time of writing this post, the official full Node.js image is 900 MB, and node:10 is 899 MB.
2. Node modules that we do not need (we will talk about this in the next steps).
From here, we have ~100 MB of Node modules and 600 MB for the base image alone.
The total size of our images was ~700 MB.
Yes, we had a Layered Node.js image, but 600 MB was still unacceptable for us.
Step 1: Simple image size optimization
Take a smaller initial Node image. They are easy to find on Docker Hub, and there are a few small-sized ones:
- `alpine` - that's what I strongly recommend.
Alpine is the best choice for a base image: it is the smallest one (only ~70 MB). We tried it, but it did not work for us because of the processor architecture of the target platform: official Node `alpine` images were not available for it. So we decided to use `slim` (~100 MB), but we still keep in mind that we can build a new image based on `alpine` Linux for our target platform.
After this step, our Dockerfile looked like:
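A sketch of the slim-based version (same illustrative entry point as before):

```dockerfile
# Slim official Node.js base image (~100 MB)
FROM node:10-slim

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

CMD ["node", "index.js"]
```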
So the delta now is 100 MB for the base image plus ~100 MB of `node_modules` for each image.
Step 2: Get rid of unnecessary dependencies
Install only production dependencies
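In practice, that means installing with npm's production flag, which skips everything under `devDependencies`:

```shell
# Installs only "dependencies", skipping "devDependencies"
npm install --production
```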
After this small improvement, our delta for `node_modules` was ~60 MB.
Node.js Process Managers
A process manager is a tool that provides the ability to control the application lifecycle and monitor the running services to maintain your project's operability; popular examples include PM2 and Forever.
When we do not use Docker, we use PM2 to run our app in production. The great thing about Docker is that you don't need a process manager inside the container; you can use the Docker restart policy instead:
```shell
docker run --restart on-failure <your image>
```
Also, Docker provides other restart policies: `no`, `always`, and `unless-stopped`.
What did we get from it?
Before getting rid of PM2:
- containers' RAM usage: ~80 MB
- delta for `node_modules`: ~60 MB

After:
- containers' RAM usage: ~50 MB
- delta for `node_modules`: noticeably smaller, since PM2 and its dependencies are gone
So the Dockerfile is:
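A sketch of the Dockerfile at this point, with production-only dependencies and no process manager (it assumes PM2 has been removed from `package.json`):

```dockerfile
FROM node:10-slim

WORKDIR /app

# Only production dependencies end up in the image
COPY package*.json ./
RUN npm install --production

COPY . .

# No PM2: Docker's restart policy handles restarts
CMD ["node", "index.js"]
```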
Step 3: Compile your app to a single bundle using NCC
NCC is a simple CLI for compiling a Node.js app into a single file, together with all its dependencies. Moreover, it is easy to use:
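A sketch of the typical workflow (the package is published as `@vercel/ncc` today; at the time of the post it was `@zeit/ncc`):

```shell
# Install the compiler globally
npm i -g @vercel/ncc

# Compile the app entry point into a single file
ncc build index.js -o dist
```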
The last command will build your app and store it in the `dist` directory.
Combining Docker and NCC
Now, let’s get back to Docker.
Every time you use `ADD` in your Dockerfile, Docker creates a new layer, which directly increases the size of the build, and caches it. So if you do:
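A sketch of that anti-pattern (cleanup in a separate `RUN` does not help, because `node_modules` already live in an earlier layer):

```dockerfile
FROM node:10-slim
WORKDIR /app

COPY package*.json ./
RUN npm install && npm install -g @vercel/ncc

COPY . .
RUN ncc build index.js -o dist

# Too late: node_modules are already baked into a previous layer
RUN rm -rf node_modules

CMD ["node", "dist/index.js"]
```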
your image size will still be as if `node_modules` were included, and Docker will cache all the layers, so if you build another image via this Dockerfile, it will reuse the files and `node_modules` from the old one.
To prevent caching, use:

```shell
docker build -t myimage . --no-cache
```
To prevent size increasing, we have 3 options:
1. Build locally and copy.
It will work, but you have to run bash commands each time you want to build a new image.
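Option 1 might look like this: compile the bundle on the host, then have the Dockerfile copy only the result (names are illustrative):

```shell
# On the host: compile everything into dist/index.js
npx @vercel/ncc build index.js -o dist

# Then build an image whose Dockerfile copies only dist/
docker build -t myimage .
```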
2. Run everything in one command.
It will also work, but you need to `rm -rf node_modules` each time before the build, and besides the `index.js` file, the image will still contain unused app files.
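Option 2 can be sketched as a single `RUN` chain: files created and deleted within the same `RUN` instruction never persist into a committed layer, so `node_modules` do not bloat the image (though the copied source files still do):

```dockerfile
FROM node:10-slim
WORKDIR /app

COPY . .

# One layer: install, bundle, clean up - node_modules never survive it
RUN npm install && npx @vercel/ncc build index.js -o dist && rm -rf node_modules

CMD ["node", "dist/index.js"]
```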
3. Multi-stage Docker builds, which is the best one!
With multi-stage builds, you use multiple `FROM` statements in your Dockerfile. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artefacts from one stage to another, leaving behind everything you don't want in the final image.
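A sketch of such a multi-stage Dockerfile (entry-point name is illustrative):

```dockerfile
# --- Build stage: full dependency tree plus the ncc compiler ---
FROM node:10-slim AS build
WORKDIR /app

COPY package*.json ./
RUN npm install && npm install -g @vercel/ncc

COPY . .
RUN ncc build index.js -o dist

# --- Final stage: only the compiled bundle on top of slim ---
FROM node:10-slim
WORKDIR /app

COPY --from=build /app/dist/index.js ./index.js

CMD ["node", "index.js"]
```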
Combining `ncc` with multi-stage builds, our largest build was 2.5 MB; before, it was 70 MB.
We've finally got the smallest image.
We didn't use any binary dependencies in our applications, so we could build our images on `x86` and run them on `armv7`. If you use binary dependencies and your target platform's processor architecture is different, build your images on a machine with the same architecture, or use a virtual machine that emulates the architecture of the processor you need.
I hope you have learned at least some useful information from this post. In the end, I want to share our final results with you:
Delta for our images before any optimization:
- 600 MB base node image
- ~100 MB for each new image
Delta, after all the steps:
- 100 MB node image
- ~2.5 MB for each new image
As I mentioned at the beginning, our goal for the smart home project was to run as many containers as possible on an embedded device (Raspberry Pi / Orange Pi). For example, to run 7 containers we used to need ~1.3 GB of disk space for images on start, plus an overhead of ~30 MB of RAM for each container because of PM2; now it is only 117.5 MB of disk space and 210 MB less RAM usage.
Originally published at https://blog.webbylab.com.