Remote Debugging Node.js in Docker


Nick Warlen
Aug 2, 2016

I can still remember the first time we were shown how to debug code in school. It was around the third week of my Intro to Computer Science class, and up until this point nothing we had written was complex enough to warrant a debugger. By this, I mean our programs contained mostly simple math and string manipulation, and any problems could easily be found by printing out a few statements.

At first, the debugger seemed heavy, like it would take longer to get the hang of it than to just figure out the problem with some good ol’ print statements. I acknowledged the existence of proper debugging tools and promptly discarded them for the next several weeks.

Along came the final week of the class, and the projects had ramped up in difficulty. Our programs were turning into full blown applications, and print statements were simply not enough. It was late, the night before a large assignment was due, and I was desperate for answers to my broken code. I decided to try out the debugger (I think we were using BlueJ and its built in debugger at this point). It took a few minutes to set up, which seemed like an eternity at 3 am, but I was finally ready to go. A few breakpoints and test runs later, I had found the problem. A small typo that had cost me hours. The debugger had proved its worth.

Fast forward a few years to my time at Cascade Energy as a Software Development Intern. I had been given my first coding task: a data stream processing concept. I worked on it for a few days and made great progress; I only had a few more features to add and I would be done. Then everything came grinding to a halt: I couldn’t figure out the problem. I was writing in a language I had never seen before (PHP), in an IDE I had never used before (PHPStorm). I tried to solve the problem for over a day using simple print statements and var_dump, to no avail. I finally swallowed my pride and asked a more senior developer for help.

Senior Developer: Have you tried the debugger?

I felt foolish and my mind went back to freshman year at 3 am. I admitted that I didn’t know how to debug my code in this new environment. The senior developer sat down and we set up PHPStorm’s debugger and got everything running. A few minutes later, we had tracked down the bug, something so small I can’t remember the details. It wasn’t the debugger that found the problem, nor the senior developer, but rather the combination of a good developer and a good tool that uncovered the issue. This only reinforced what I had already learned: A good debugging environment is a crucial tool in a good developer’s tool belt.


Fast forward again from my days as an intern to this year and a few things have changed. Almost all of our development at Cascade has transitioned to Node.js, all of our new development uses Docker, and our system is built around distributed microservices. These changes meant that we no longer had a comfortable debugging environment for our applications, which led to heavy use of console.log.

It wasn’t that we liked debugging with console.log, but we were developing at such a rapid pace in a fairly new environment that we fell into the same trap I had fallen into in my freshman year of college:

Figuring out a debugging environment will take too much time.

After a few months of this, we decided to figure out how to remote debug our Node applications running in Docker containers.

We had a few things going for us: Node’s built-in --debug flag is extremely useful, and our IDE of choice, PHPStorm, has solid remote debugging capabilities.

But we still had a number of challenges:

Connecting a Remote Debugger to a Node application inside a Container

This was the most straightforward of the challenges we faced. I decided the easiest way to do this was to have the remote debugger think it was connecting directly to the remote machine and ignore the fact that there was a container. This necessitated a small change to our Dockerfile and a change to the command used to run the container.


FROM node:6
...
EXPOSE <application-port>

# Expose node debug port
EXPOSE 5858

Old Docker run command

$ docker run -d -p <application-port>:<application-port> image

New Docker run command

$ docker run -d -p <application-port>:<application-port> -p 5858:5858 image

The changes in the Dockerfile and the run command meant that port 5858 inside the container was mapped to port 5858 on the host machine. This allowed us to specify the host’s public IP address (or DNS entry) as our remote host for debugging in PHPStorm.

Updating Code in a Container without Re-Building the Container Constantly

The changes above only succeeded in providing a way to remote-debug our applications, but did not address the more subtle issue of creating a manageable workflow. We needed a way to easily fit debugging into our development process.

Developing with Docker has several advantages, but one drawback is that even a small update to your code requires a full rebuild of the container. This is annoying during development and debugging alike. We first noticed the issue during our review life-cycle, where we would review code changes as well as a running copy of the updated code. As review comments and changes were addressed, the container holding the code had to be manually rebuilt each time so that the code and the running example stayed in sync. This was a poor workflow. The solution: Docker volumes and Nodemon.

Nodemon is a tool that watches for file changes and restarts a specified Node process when it sees one. Docker volumes let you map a host directory or file to a location inside a container. Together, these two tools would let the Node process inside the running container restart every time code changes were pushed to the remote host, rather than requiring a rebuild of the entire container.
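As an aside (not part of our original setup), the same watch-and-restart behavior can also be expressed in a nodemon.json config file rather than command-line flags. Here is a sketch, where app.js is a hypothetical stand-in for the real entrypoint; Nodemon’s legacyWatch option is worth knowing about here, since filesystem change events often don’t propagate into mounted Docker volumes, and legacy (polling) mode works around that:

```json
{
  "watch": ["."],
  "ext": "js,json",
  "legacyWatch": true,
  "exec": "node --debug app.js"
}
```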

The changes required involved edits to both the Dockerfile and the Docker run command:


FROM node:6
...
RUN npm install -g nodemon
...

# Old way
# ENTRYPOINT node <application-entrypoint.js>

# New way
CMD ["node", "<application-entrypoint.js>"]

First, we installed Nodemon in the container, and, more subtly, we switched from using ENTRYPOINT to CMD. The difference between the two is that CMD provides a default that can be overridden at docker run time. This change meant that when we deployed the application there was no change to our build or run process, as the default behavior was identical to what it was previously.

Docker Run Command

$ docker run \
-d \
-p 5858:5858 \
... \
-v <location of code on host>:<location of code in container> \
image-name \
nodemon --debug <application-entrypoint.js>

The volume flag and the new command are the changes we made. First, we mounted the code directory on the host to the code directory in the container as a volume. Next, we overrode the default command to run the application under Nodemon with the debug flag built into Node.js.

At this point we could:

  • Set breakpoints and debug our code
  • Upload code changes to the remote host, automatically triggering a process restart inside the container

We could have stopped here, but something wasn’t quite perfect yet: the built-in debugger in PHPStorm was pretty slow. This wasn’t a big deal, but we decided to spend a few more hours finding a better solution.

Improving Debug Workflow

I could sum up this section with one tool: Visual Studio Code. VSCode has a built-in debugger that was easier to set up and much faster than PHPStorm’s. Although VSCode is marketed as a text editor, I find that it strikes an almost perfect balance between useful features and staying lightweight and unopinionated. The debug workflow was awesome in VSCode, but…
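For reference, attaching VSCode to a remote Node process of this vintage needs only a small launch configuration. This is a sketch rather than our exact config — the host address and the two path values are placeholders you would fill in to match the docker run command above:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Docker",
      "type": "node",
      "request": "attach",
      "address": "<remote-host>",
      "port": 5858,
      "localRoot": "${workspaceRoot}",
      "remoteRoot": "<location of code in container>"
    }
  ]
}
```

The localRoot/remoteRoot pair is what lets breakpoints set against local files map onto the paths the process sees inside the container.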

There is one feature not included in VSCode that is critical to our daily development: auto FTP. We move files between our local machines and remote development machines so often that the lack of this feature ruled out an otherwise perfect tool.

I still go to VSCode if I have a complex bug to hunt down, but I can’t give up my IDE (yet).

To conclude this section, VSCode is awesome and with one more feature, I would gladly make it the only editor on my machine, but that one feature is too mission critical for daily development at Cascade.


We developed our own workflow and methodology for remote debugging and developing Node applications running in Docker containers. This has greatly improved our bug-hunting abilities as well as our general development and review processes. The process is not perfect, but it is much improved from where we were a few months ago, with console.log statements and mounting frustration. We are slowly integrating these new tools and processes into our daily development, and we are excited to find other ways to improve our lives as developers in a new and cutting-edge ecosystem.


If you found this post interesting and would like to work with a small team using cutting edge technology to monitor energy consumption, feel free to submit an application or shoot over an e-mail with any questions. Thanks!

Nicholas Warlen
Software Engineer @ Cascade Energy Inc.

SENSEI Developer Blog

Software development blog from Cascade Energy, Inc’s SENSEI…
