Debugging Node apps in Docker containers through WebStorm

You can find plenty of examples of how to set up a Node app in a Docker container, or how to use Docker from WebStorm, but what I struggle with most is getting the whole orchestra to play together. I.e. I want to:

  • host multiple instances of a Node app in Docker containers
  • automatically restart the app on code changes, while developing
  • be able to set breakpoints and take advantage of the full WebStorm goodie bag for debugging
  • differentiate between development and production environments, especially in terms of installed dependencies
  • run our code in a Docker container only, i.e. no local installs (since those partially defeat the purpose of using Docker)

What do we need?

  • The example repo, to be able to follow along.
  • Docker, obviously. You need to familiarise yourself with creating containers and images and with Dockerfile syntax (just the basics are enough, though).
  • WebStorm, if you’re interested in how to set up a good workflow for node+Docker in everyone’s favourite IDE.

The road ahead

  1. we’ll create a small node app which serves some HTML.
  2. we’ll create Dockerfiles to declare our dependencies and installation steps for our dev and production environments.
  3. we’ll create docker-compose files to configure and combine these Dockerfiles into easily usable configurations.
  4. we’ll set up and configure WebStorm for debugging purposes.

The Node App

I chose to use WebStorm to scaffold a Node.js+Express app for me: File > New > Project… > Node.js Express App

It’s a very simple app which serves a single HTML page saying “Welcome to Express”.

If you can’t be bothered with generating one yourself, just use the one from the example repo; all of its code resides under app.

Just to make sure it runs and does what it should, we run:

λ npm i && node app/bin/www

Now point your browser to http://localhost:3000. You should see the same welcome message as above.
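If you’d rather check from the terminal, a quick curl works too (the grep pattern assumes the generated app serves the default “Welcome to Express” page):

λ curl -s http://localhost:3000 | grep "Welcome to Express"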

Our base Dockerfile

Next we want to create a Dockerfile, which will inform Docker of what software is needed and how it should be structured. Node publishes its own Docker images, and we’re going to base our core setup on one of these:

# file: df.base
FROM node:4.4.5

We’re writing this into df.base (short for docker-file.base). The file will serve as the basis from which we will extend our environment specific Dockerfiles.

I’m using a handy trick I found on http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/ which avoids re-building the node modules any time there’s a change in our app directory, yet does force re-installation when our package.json file changes.

ADD package.json /tmp/package.json
RUN cd /tmp && npm install --production
RUN mkdir -p /home/deb && cp -a /tmp/node_modules /home/deb/

/home/deb is the destination for our code (deb stands for docker-example-backend)

Next we copy our package.json to our destination directory and change the CWD:

ADD package.json /home/deb/package.json
WORKDIR /home/deb
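Putting it all together, the complete df.base reads:

# file: df.base
FROM node:4.4.5

# install the production dependencies in /tmp, so this layer is only
# rebuilt when package.json changes (the caching trick from above)
ADD package.json /tmp/package.json
RUN cd /tmp && npm install --production
RUN mkdir -p /home/deb && cp -a /tmp/node_modules /home/deb/

# place package.json in our destination directory and make it the CWD
ADD package.json /home/deb/package.json
WORKDIR /home/deb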

That’s about it for our base file. As you can see, we don’t copy any of our application code, nor do we expose any ports. The reasons are:

  1. We’ll use a different mechanism for accessing our code in development containers than in production
  2. We want to keep any environment-dependent stuff out of our Dockerfiles

Our base docker-compose file

We’re using a docker-compose file called dc.base.yml (short for docker-compose.base.yml) to declare the stuff that each specific environment can override.

# file: dc.base.yml
services:
  backend:
    build:
      context: .
      dockerfile: df.base

What it boils down to is: we’re declaring a backend service and telling it to use the directory the file resides in (i.e. “.”) as its build context, following the instructions from df.base.

Next we name the generated image deb:base, which will allow us to refer to this image by name.

image: deb:base

We pass a PORT variable to our container environment (i.e. it will be retrievable with process.env.PORT inside our code) with value 3000

environment:
  PORT: "3000"

Obviously we could hardcode 3000 in our code instead (and in fact it is, as a fallback), but I like to keep a clear overview of what is running where, both internally and externally.

Next we’ll define an entry point for our app:

entrypoint: ["npm", "start"]

Which translates to this script in our package.json:

"start": "node ./app/bin/www",

Our dev environment

Yeah, we’re not there yet. To run our dev env we need two more things:

  1. we need to tell Docker where the application code is, since we neglected to do so (with good reason)
  2. we need to make sure our devDependencies are installed as well, since in df.base we used npm install --production

We create a dc.dev.yml file which will take care of 1. and additionally some other stuff:

# see dc.dev.yml for the full configuration of the image name etc.
environment:
  NODE_ENV: "development"

We make sure the NODE_ENV env var inside the container is set to “development”.

Then we’ll hook up our app directory as a data volume:

volumes:
  - "./app:/home/deb/app"

This maps ./app in the current directory to /home/deb/app inside the container. This is useful during development since any changes made to our app directory will be immediately synchronised to our container. This way we won’t have to build a new Docker image for every change.

Next we need to tell Docker what ports should be exposed externally:

ports:
  - "3100:3000"
  - "56745:56745"

I.e. our internal port 3000 is exposed on the host as port 3100, and furthermore we expose port 56745 on both container and host, which we’ll use to connect our node debugger later on.

Lastly, we overwrite our entrypoint from dc.base.yml with a new npm script call:

entrypoint: ["npm", "run", "debug"]

Which translates to:

nodemon --debug=56745 ./app/bin/www

For development we use nodemon which will automatically reboot the application when any changes are made to the code. Here we pass the same debugger port value to make sure everything hooks up nicely later on.
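For reference, the relevant part of the scripts section in our package.json now looks like this:

"scripts": {
  "start": "node ./app/bin/www",
  "debug": "nodemon --debug=56745 ./app/bin/www"
}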

So, this fixes 1. but what about 2. (installing the devDependencies, e.g. nodemon)?

We’ll create a separate Dockerfile which instructs Docker to npm install all dependencies and we base it on our base image:

# file: df.dev
FROM deb:base
RUN npm install

Yeah, there’s nothing more to it.
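For reference, here’s roughly what the full dc.dev.yml boils down to. The build and image lines are my best guess following the pattern of the other files (check the example repo for the exact file); the container name dev.backend matches the log output further down:

# file: dc.dev.yml (sketch, see the example repo for the exact file)
services:
  backend:
    build:
      context: .
      dockerfile: df.dev
    extends:
      file: dc.base.yml
      service: backend
    image: deb:dev
    container_name: dev.backend
    environment:
      NODE_ENV: "development"
    volumes:
      - "./app:/home/deb/app"
    ports:
      - "3100:3000"
      - "56745:56745"
    entrypoint: ["npm", "run", "debug"]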

So, how the hell do we run this?

We need to build the images and then the containers, and we’ll run everything from the command line (we’ll get to WebStorm later on!).

Since our dc.dev.yml extends dc.base.yml we need to make sure the base image is built first, before it can be used to base our dev image on:

λ docker-compose -f dc.base.yml build

The -f flag tells the CLI to use dc.base.yml as the compose file instead of the default docker-compose.yml.

This should spew a lot of output, ending with:

Successfully built

Once it’s done we can fire up our dev environment, with:

λ docker-compose -f dc.dev.yml up

Note we’re using up here: it will build the images and containers and run them.

This should give us something like:

Starting dev.backend
Attaching to dev.backend
dev.backend | npm info it worked if it ends with ok
dev.backend | npm info using npm@2.15.5
dev.backend | npm info using node@v4.4.5
dev.backend | npm info predebug docker-example-backend@0.0.0
dev.backend | npm info debug docker-example-backend@0.0.0
dev.backend |
dev.backend | > docker-example-backend@0.0.0 debug /home/deb
dev.backend | > nodemon --debug=56745 ./app/bin/www
dev.backend |
dev.backend | [nodemon] 1.10.2
dev.backend | [nodemon] to restart at any time, enter `rs`
dev.backend | [nodemon] watching: *.*
dev.backend | [nodemon] starting `node --debug=56745 ./app/bin/www`
dev.backend | Debugger listening on port 56745
dev.backend | Listening on port 3000

To make sure everything’s up and running, we’ll visit http://localhost:3100 (we remapped port 3000 from the container to port 3100 on the host, remember) and we should see our “Welcome to Express” message again.

If you make any changes to app.js, for instance, the node app should immediately reboot, since we’re running it under nodemon.
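You can also double-check the resulting port mappings from the host side with docker port, which lists every exposed container port and the host address it is bound to (dev.backend is the container name visible in the log output above):

λ docker port dev.backend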

Let’s debug…

So, now all we need to do is configure WebStorm to connect to the debugger port we defined in our npm debug script and dc.dev.yml, namely 56745.

We’re going to create a “Node.js Remote Debug” configuration, as described in the WebStorm help files.

If you don’t see this option, you probably haven’t installed the node plugin yet. See the above link on how to fix this.

In the dialog that opens, all you need to do is give the configuration a name (“Docker debug” in this example) and define the host and port.

Now let’s put a breakpoint somewhere and test it out. I chose the 404 handler in app/app.js.

Next we start our WebStorm debugger by clicking the small bug icon in the toolbar or status bar.

If we visit http://localhost:3100/notfound, it should stop execution at our breakpoint.

Hurray.

Our Production environment

The production environment differs from the development environment in two ways:

  1. We want to ADD our application code to the image. This freezes the code as-is: even if we change something in our code locally, it won’t be reflected in the running containers, and we can push the image to Docker Hub, allowing for really easy deployment practically anywhere.
  2. We want a dynamic port mapping from container to host, since we want to spin up multiple instances.

Again, we will tackle 1. in our Dockerfile:

# file: df.prod
FROM deb:base

ADD app /home/deb/app

Yeah, it’s too easy, I know.

And for 2. we’ll create a compose file:

# file: dc.prod.yml
services:
  backend:
    build:
      dockerfile: df.prod
    extends:
      file: dc.base.yml
      service: backend
    image: deb:production
    container_name: ${APP_NAME}
    environment:
      NODE_ENV: "production"
    ports:
      - "${PORT}:3000"

No big surprises here, except that we use an APP_NAME env var to set the container name, and a PORT env var to declare the host port on the command line.

For our convenience I wrote a small shell script, which allows us to easily spin up various instances with:

λ ./up.sh foo 3200

will create a container named “foo” and map it to http://localhost:3200.
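The script itself is essentially a one-liner. A minimal sketch, assuming the variable substitution from dc.prod.yml above (the real script lives in the example repo):

#!/usr/bin/env bash
# a minimal sketch of up.sh (see the example repo for the real script)
# usage: ./up.sh <name> <port>
APP_NAME=$1 PORT=$2 docker-compose -f dc.prod.yml -p "$1" up -d

The -p flag gives every instance its own project name, so the containers of different instances don’t clash.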

Run your dev environment from WebStorm

If you don’t feel like executing

docker-compose -f dc.dev.yml up

every time you start developing, WebStorm has a Docker plugin you can use as well. Make sure you’ve downloaded and enabled it in your preferences.

CAVEAT: I didn’t get this to work with the Docker Mac OS X app, only with the Docker toolbox.

Create a Run Configuration and select Docker Deployment.

For some mysterious reason you can’t use any file other than “Dockerfile” or “docker-compose.yml”, i.e. you have to use one of those exact names.

For our example I chose to create a docker-compose.yml, which just serves as a proxy for our dc.dev.yml:

services:
  backend:
    extends:
      file: dc.dev.yml
      service: backend

Once you run the “Docker Serve” Run Configuration, it will build and run your dev container.

The Docker panel allows you to inspect your containers and their configuration and mappings.

And with that we have achieved all of the goals we outlined above.

Thanks for getting this far and I hope it was useful to you!