Develop That Node.js App Inside a Docker Container
Mirroring developer environments across a team is a good practice to minimize “works on my system” type bugs. A development environment that runs the app from within a Docker container can virtually eliminate this class of bugs by standardizing, via version-controlled configuration, the environment in which a Node.js application is developed.
TLDR
Why dockerize an app development environment?
Dockerize That Development Environment
Helpful Dockerized Node.js Development Environment Tips
Example docker-compose Files for Mocking Service Dependencies
This post was peer reviewed by Felipe Amorim
Why dockerize an app development environment?
Running an application via a Docker container will require additional system resources (e.g. RAM) and will generally increase a given application’s complexity. Even with these downsides, dockerizing an application can prove to be a net positive due to several benefits:
- Fewer production-only bugs due to development and deployment using as similar configurations as possible (see part 1 of this post for production environment configuration)
- Fewer “works on my system” type bugs that can be caused by team members having dissimilar development environments
- Simpler development environment setup, including the provisioning/mocking of upstream service dependencies (e.g. database, message broker, other microservices, etc.)
Dockerize That Development Environment
Modifying a Node.js application to run from a Docker container is relatively simple and should not affect the app’s functionality. To demonstrate this, I will start by forking an application that resembles the complexity one might expect of a production application under development. Specifically, I want to show an example of an app that includes relatively significant backend and frontend logic and also provides advanced development tooling. An open-source project starter kit that meets these requirements is React Universally.
The file changes mentioned in this post can be viewed on GitHub.
The initial step in configuring a developer workstation to run a Node.js app from within a Docker container is to install both Docker and Docker Compose.
Add Dockerfile and .dockerignore
After Docker and Docker Compose have been set up on the developer’s workstation, add the `Dockerfile` from Part 1 to the project’s root directory:
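The original post links to the Part 1 `Dockerfile` rather than inlining it. As a rough sketch only (the actual file from Part 1 may differ), it would look something like the following, consistent with the details referenced later in this post: an `/opt/app` working directory, port `80`, and a `CMD` that performs deployment actions:

```dockerfile
# Sketch of a production-style Dockerfile — the actual Part 1 file may differ.
FROM node:8

# All project files live under /opt/app inside the container.
WORKDIR /opt/app

# Copy dependency manifests first so the npm install layer is cached
# between builds when package.json/package-lock.json are unchanged.
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the project (filtered by .dockerignore).
COPY . .

# The app listens for client requests on port 80.
EXPOSE 80

# Deployment action: start the app (overridden for local development
# by the entrypoint in docker-compose.yml).
CMD ["npm", "start"]
```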
Also, add the `.dockerignore` from Part 1 to the project’s root directory:
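The `.dockerignore` from Part 1 is likewise linked rather than inlined; a typical version for a Node.js project (illustrative, not the exact file) would be:

```
node_modules
npm-debug.log
.git
.dockerignore
Dockerfile
```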
The `Dockerfile` will define the Docker container we will be using for local development, while the `.dockerignore` will ensure we only copy necessary project files to that Docker container.
Add docker-compose.yml
Next, we will configure the development environment. Specifically, we need to enumerate all the services/apps that need to be running to enable local development. To start this walkthrough simply, we’ll assume the development environment consists of a single service: our Node.js app.
We’ll define and manage this environment with Docker Compose. Docker Compose is an official Docker tool “for defining and running multi-container Docker applications”. When provided with a configuration file that lists one or more Docker images, `Dockerfile`s, and some options, Docker Compose will build, start, and manage a network of Docker containers.
Create a file named `docker-compose.yml` in the project’s root directory with the following contents:
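The original post embeds this file as a gist. Reconstructed from the settings described in the remainder of this section (the `version` value is an assumption), it looks like this:

```yaml
version: '2'
services:
  app:
    build: .
    ports:
      - '1337:80'    # app: container port 80 -> host localhost:1337
      - '7331:7331'  # React hot reloading (react-universally)
      - '5858:5858'  # legacy debug port
      - '9229:9229'  # new inspect port (Chrome DevTools / VS Code)
    volumes:
      - .:/opt/app                      # mirror project files into the container
      - reserved:/opt/app/node_modules/ # shield node_modules from the host
    entrypoint: npm run develop
volumes:
  reserved:
```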
In this YAML file, there are three top-level settings (check out the official documentation for the full list of available settings):
`version` — specifies the structure of the file. This determines which Docker Compose configuration keys are supported.
`services` — a hash that defines the Docker containers. Each immediate child of the `services` field represents a Docker container that Docker Compose should create and manage. In this simple example, we have a single service, `app`. The key (`app`) should be set to a useful shorthand identifier for the service. It is used to identify the container — e.g. Docker Compose connects the containers on a network, and this identifier serves as the host address for each container on that network, so `app:3000` resolves to port 3000 on the container associated with the `app` service.
`volumes` — a Docker container does not provide persistent storage by default. In other words, only data copied over or configured during Docker image creation will exist after the associated container is stopped and restarted. Volumes provide a mechanism to persist data. We define a named volume here, `reserved`, to be used by our Node.js app’s `node_modules` directory (more on this in a bit).
Let’s go a level deeper and understand the `app` service’s configuration (tip: with some exceptions, these keys are functionally equivalent to the similarly-named instructions used in a `Dockerfile`):
`build` — including the `build` option instructs Docker Compose to build an image for this container. The value specifies the directory containing the `Dockerfile` to use to build the image. Each service configuration must include either a `build` or an `image` option.
`ports` — the `ports` option allows network ports on the service container to be mapped to ports on the host. This allows addressing ports on a service’s container from the host system via `localhost`.
In this example, we are mapping the container’s port `80` (defined in the previously discussed `Dockerfile` as the port on which the app listens for client requests) to the host system’s port `1337`. Consequently, the app will be available on the host system via `localhost:1337`.
We’ve set up some other port mappings as well: `7331` (React hot reloading as configured in the `react-universally` project), `5858` (the legacy debug port), and `9229` (the new inspect port for host-based debugging tools like Chrome Developer Tools or VS Code).
`volumes` — similar to the top-level `volumes` setting, this service-level setting allows for defining one or more persistent storage mappings. There are multiple types of volumes that can be configured here, and the semantics can be complicated! In this example, we are configuring two different types of volumes:
- `.:/opt/app` maps the Node.js app’s project directory on the host filesystem to the `/opt/app` directory on the Docker container’s filesystem. The takeaway here is that any changes to the host’s project directory will be reflected on the Docker container’s filesystem (because they’re the exact same files). This enables developers to edit project files on the host and have the changes reflected inside the Docker container. This setting also persists all local changes when the container is stopped and then restarted.
- `reserved:/opt/app/node_modules/` is a bit of a hack to exclude the `node_modules` directory from the mapping effect of the `.:/opt/app` volume configuration. We don’t want the `node_modules` directory on the host system to ever influence the `node_modules` directory in the Docker container, because then we lose one of the benefits of using Docker for local development: a reliably consistent app execution environment. On every Docker container start, we want `node_modules` to be either freshly downloaded and installed from the `package.json`/`package-lock.json`, or refreshed from the cached Docker layer from a previous install. This is identical to the behavior that occurs during Dockerized deployments (as configured in Part 1), which should allow any npm-related bugs to be caught locally.
`entrypoint` — this setting allows a `docker-compose.yml` to override the `CMD` instruction of the associated `Dockerfile`. The `CMD` instruction defines a command to be executed after the image is created, e.g. the commands that initialize and start the app. The `Dockerfile` we’re using defines a `CMD` that performs deployment actions, but for local development we require a different series of actions. Consequently, we set this option to `npm run develop`, which starts the app in development mode.
That’s it! Now, a developer can execute `docker-compose up`, and Docker Compose will build and start the container and the app.
Helpful Dockerized Node.js Development Environment Tips
Installing npm dependencies — due to the project’s `node_modules` directory being “owned” by the app’s Docker container, interacting with the project’s npm dependencies must be performed from inside the container. The following command starts a bash prompt inside the container associated with the docker-compose service named `DOCKER_COMPOSE_SERVICE_NAME`:
docker-compose exec DOCKER_COMPOSE_SERVICE_NAME bash
From our example `docker-compose.yml` above, this command would be:
docker-compose exec app bash
Now just use the `npm` commands (`npm i -S`, `npm update`, etc.) the way you normally would.
git use will be the same as in the non-dockerized app — although `npm` commands must be invoked from within the Docker container, git commands can be invoked as normal from a command prompt on the host machine.
Useful Docker commands — sometimes the Docker abstraction leaks, and you’ll have to get your hands a little dirty. For example, shutting down a running docker-compose via `CMD + c`/`CTRL + c` will sometimes fail to stop all running containers associated with the docker-compose (an error will be displayed in this case). In such cases, it can be helpful to know a few Docker commands:
- `docker ps` — see info (e.g. IDs, state) on running Docker containers; useful with the `docker stop` command.
- `docker stop <container-id>` — force a Docker container to stop; useful if the app becomes unresponsive.
- `docker system prune` — clean up orphaned images/containers/volumes/etc.
Example docker-compose Files for Mocking Service Dependencies
An application will normally depend on one or more upstream services to function properly, e.g. a web API, a database, a message broker, etc. Now that our development environment has been dockerized, we can provide local versions of upstream dependencies via the `docker-compose.yml`.
The idea here is that one can find publicly available Docker images via Docker Hub (or build Docker images locally) for just about any upstream service a Node.js app could depend on. After finding or building the image(s), simply add a new direct child to the `services` section of the project’s `docker-compose.yml` with an appropriate configuration, and then set the Node.js app’s environment variables/run-time configuration to target the service using the local address and credentials defined in the `docker-compose.yml`.
For example, if we wanted to provision a PostgreSQL relational database for the project, we could modify the `docker-compose.yml` like this:
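A sketch of the modified file, assuming the `sameersbn/postgresql:9.6-2` image mentioned in this section (the environment variables follow that image’s conventions, but the specific credential values here are illustrative):

```yaml
version: '2'
services:
  app:
    build: .
    ports:
      - '1337:80'
    volumes:
      - .:/opt/app
      - reserved:/opt/app/node_modules/
    entrypoint: npm run develop
  postgres:
    image: sameersbn/postgresql:9.6-2
    ports:
      - '5432:5432'      # also expose the DB to the host for convenience
    environment:
      - DB_USER=dbuser   # illustrative credentials — use your own values
      - DB_PASS=dbpass
      - DB_NAME=appdb
volumes:
  reserved:
```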
This `docker-compose.yml` supplements the one from earlier with an additional service, `postgres`. Aside from the name of the service, this configuration was copy/pasted from the `docker-compose.yml` found in the GitHub repo for the associated image, `sameersbn/postgresql:9.6-2`. To configure the Node.js application to connect to this service, use the service name from the `docker-compose.yml` as the host along with the container’s port: `postgres:5432`
We might also want to provide developers with a GUI client for the DB for debugging. A quick Google search reveals that there is a pgAdmin PostgreSQL client Docker image on Docker Hub. Let’s add that to the `docker-compose.yml`:
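The post doesn’t name the specific image. As a sketch, assuming the `dpage/pgadmin4` image (one of several pgAdmin images on Docker Hub), the new entry added to the `services` section might look like:

```yaml
  pgadmin:
    image: dpage/pgadmin4   # assumed image choice — the post's may differ
    ports:
      - '5050:80'           # pgAdmin UI available at localhost:5050
    environment:
      - PGADMIN_DEFAULT_EMAIL=dev@example.com   # illustrative login
      - PGADMIN_DEFAULT_PASSWORD=devpassword
    depends_on:
      - postgres
```

From the pgAdmin UI, register a server using host `postgres` and port `5432`, matching the service name defined above.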
Running this `docker-compose.yml` starts three services: the Node.js app, a PostgreSQL DB, and a PostgreSQL GUI client. Now, each member of the team can have confidence that their code changes are correct before connecting to the production DB. It works like magic!