Docker Kata 003

>>> Live Debugging

Jesse White
7 min read · Sep 7, 2016

At a recent DockerNYC Meetup event, fellow organizer Doug Masiero and I presented a DockerCon 2016 recap. Doug walked through a fantastic demo of Docker Swarm, and I showed off the live debugging demo that Aanand Prasad gave during his session at DockerCon 2016.

I’ll run through our Meetup demo here, and I’d also recommend checking out Aanand’s original guide, which heavily inspired this kata. I’ll also ping my fellow presenter and co-organizer Doug to get his Docker Swarm demo up on the Docker Labs repository, or posted as another Medium story.

In order to complete this kata, you’ll need the following:

  • Docker: I recommend Docker for Mac or Windows.
  • An IDE which supports Node.js remote debugging: I used Visual Studio Code.
  • A Node.js application: We will create a simple one as part of this tutorial.

In future katas we’ll also be moving between different Docker engines fairly often, so I recommend getting familiar with the following command.

$ docker-machine config <machine_name>

This command outputs the set of connection flags that the docker command can use for remote execution. Here are a few examples that run commands against the Docker Engine on several different machines.

$ docker $(docker-machine config master) node ls
$ docker $(docker-machine config master) swarm join-token worker -q
$ token=$(docker $(docker-machine config master) swarm join-token worker -q)

And finally, we can string all of the examples together into a larger, more complex command that takes advantage of remote execution and a shell variable.

$ docker $(docker-machine config node1) swarm join \
--token $token \
$(docker-machine ip master):2377

You can see that each example builds on the last, saving you a bit of time and space when building out commands for your scripts and orchestration tools. As the examples grow in complexity, spend some time building out your own workflow.

Now, let’s build a simple application that we can debug.

An Example Node.js Application

Create a directory to work from:

$ mkdir node-example
$ cd node-example

To get our app running, we’ll need 5 files:

  • A JavaScript file to contain the actual app code
  • A package.json that defines npm dependencies
  • An HTML template
  • A Dockerfile to define our application inside of the container
  • A Compose file to set up our dev environment

We can create a file called app.js with the following code.

https://gist.github.com/anonymuse/06af66e183245fdd1eac0e299d237eae
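Since the gist itself isn’t embedded here, the sketch below gives a rough idea of what app.js looks like. It is not Aanand’s original code, but it’s consistent with the LINES array, the lineIndex variable, and the bug we fix later in this kata.

// app.js -- a minimal sketch, not the original gist
var fs = require('fs');
var http = require('http');

// The lines we cycle through; the real demo uses its own messages.
var LINES = [
  'Hello from inside a container!',
  'You can edit this code on your laptop...',
  '...and nodemon will restart the app for you.'
];

var lineIndex = 0;

http.createServer(function (req, res) {
  if (req.url === '/message') {
    // Wait two seconds, then send the current line back to the browser.
    setTimeout(function () {
      var message = LINES[lineIndex]; // a good spot for a breakpoint later

      lineIndex += 1;
      if (lineIndex > LINES.length) { // spoiler: this comparison is the bug
        lineIndex = 0;
      }

      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(message);
    }, 2000);
  } else {
    // Everything else gets the HTML template.
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(fs.readFileSync('index.html'));
  }
}).listen(8000);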

This creates a simple Node.js web server that prints a series of messages back to the client after a 2 second delay. Spoiler alert: there’s a bug in the application!

Let’s set up the application’s main script and dependencies in package.json.
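The package.json gist isn’t reproduced here either; a minimal version, assuming the file names from the app.js sketch above, might look like this (nodemon itself gets installed in the image by the Dockerfile we create below):

{
  "name": "node-example",
  "version": "1.0.0",
  "description": "Simple Node.js app for the live-debugging kata",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}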

We’ll also need a template for our webpage.
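The template gist isn’t shown here; as a sketch, an index.html that polls the hypothetical /message endpoint from the app.js sketch above could look like this:

<!DOCTYPE html>
<html>
  <head>
    <title>node-example</title>
  </head>
  <body>
    <h1 id="message">Loading...</h1>
    <script>
      // Ask the server for the next line, display it, then ask again.
      // The server holds each response for two seconds.
      function poll() {
        fetch('/message')
          .then(function (res) { return res.text(); })
          .then(function (text) {
            document.getElementById('message').textContent = text;
            poll();
          });
      }
      poll();
    </script>
  </body>
</html>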

The Docker framework

Next, let’s create a Dockerfile.
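The Dockerfile gist isn’t embedded here; a sketch along these lines should work, with the base image tag and the /usr/src/app path being my assumptions rather than the original’s:

# Dockerfile -- a sketch; the base image tag and paths are assumptions
FROM node:6-slim

# Install nodemon so the Compose file can use it for live reload.
RUN npm install -g nodemon

WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between code changes.
COPY package.json /usr/src/app/
RUN npm install

# Copy the application code itself.
COPY . /usr/src/app

EXPOSE 8000
CMD ["npm", "start"]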

If you’ve had just a little bit of experience with Dockerfiles, you might be wondering: why CMD instead of RUN, ENTRYPOINT, or another Dockerfile instruction?

Basically, it boils down to a few different options. RUN executes commands in a new layer on top of the current image, which is useful for installing software packages. The main purpose of CMD is to provide sensible defaults for an executing container; those defaults can include an executable, and if you omit the executable you’ll need to specify an ENTRYPOINT as well. I’d recommend reading over the Dockerfile reference if you have any questions.

Let’s create a very simple Compose file in order to build the service.
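The Compose file gist isn’t embedded here either; a sketch that matches the description below (and assumes the /usr/src/app path from the Dockerfile sketch) might be:

version: "2"
services:
  web:
    build: .
    # Override the Dockerfile's CMD so the legacy Node.js debugger
    # listens on all interfaces inside the container.
    command: nodemon --debug=0.0.0.0:5858
    # Mount the local code over the code baked into the image.
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"
      - "5858:5858"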

There are a number of moving pieces inside the Compose file, which Aanand describes:

It defines a service called “web”, which uses the image built from the Dockerfile in the current directory.

It overrides the command specified in the Dockerfile to enable the remote debugging feature built into Node.js. We do that here because when you ship this application’s container image to production, you don’t want the debugger enabled — it’s a development-only override.

It overwrites the application code in the container by mounting the current directory as a volume. This means that the code inside the running container will update whenever you update the local files on your hard drive. This is very useful, as it means you don’t have to rebuild the image every time you make a change to the application.

It maps port 8000 inside the container to port 8000 on localhost, so you can actually visit the application.

Finally, it maps port 5858 inside the container to the same port on localhost, so you can connect to the remote debugger.

So, now let’s start the application up.

$ docker-compose up -d
Creating network "nodeexample_default" with the default driver
Creating nodeexample_web_1
$

Docker Compose will build the application and start the nodemon server in the background.

We can now open up http://localhost:8000 to see the Node.js application running through a series of statements.

What’s the problem here?

The problem is obvious: we’re outputting a blank message at the end before cycling back to the first line. It’s time to debug!

Remote Debugging

We’ll need to open up the application in Visual Studio Code. Click the “Open Folder” button to make sure you open your full project directory.

Next, click the bug icon in the left-hand sidebar.

Here we’ll create a boilerplate launch configuration for use with Node.js.

Click the gear icon and select Node.js in the dropdown.

This will create a template, which we can replace with the following code.
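The launch configuration gist isn’t reproduced here; an attach-style configuration along these lines should behave as described below, assuming the /usr/src/app path from the earlier sketches (launch.json tolerates comments, so a couple are included):

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach",
      "type": "node",
      "request": "attach",
      "port": 5858,
      "address": "localhost",
      // Re-attach automatically when nodemon restarts the app.
      "restart": true,
      // Map the code inside the container back to this folder on the host.
      "localRoot": "${workspaceRoot}",
      "remoteRoot": "/usr/src/app"
    }
  ]
}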

There are a couple of important things to consider for those unfamiliar with VSCode. First, the “Launch” config has been deleted, since we’re going to run our app with Compose, not VSCode. We also want to make sure the debugger reconnects when the application restarts. Finally, we point to the code directory inside the container, which is easier to pin down than the code on your laptop, which could live in any directory.

Let’s start up the debugger by clicking the play icon.

Now we can go hunt for the bug. A good place to start is around where we initialize the message variable. Let’s put a breakpoint there.

Peek back at http://localhost:8000/. Now that we’ve set a breakpoint, the application will pause instead of running through all of the responses. You may need to refresh the page. You can then click the Play button at the top repeatedly to resume.

Hit the Play button to step through each of the lines in the array. We’ll see VSCode hit the breakpoint every 2 seconds, the pace at which the application is set to refresh the browser. It’ll cycle successfully through each line until it hits an undefined variable.

What happened? Let’s take a peek at VSCode’s variable debugging. Look in the VARIABLES section of the debug sidebar and check the Closure section. The following is a healthy variable.

After four steps through the array, we’ve found a less than optimal message!

We can see here that the variable ‘lineIndex’ has incremented to 4 against a 3-element array. That’s not going to work!

Let’s get healthy

To fix this, we’ll replace the > with >= in the conditional that follows the increment. The result should look like this.

lineIndex += 1;
if (lineIndex >= LINES.length) {
  lineIndex = 0;
}

Save the file, which will cause the debugger to reattach. The yellow line should blink out and back into existence at this point. You may need to refresh your browser if you’re getting an error.

This is the exciting part that native Docker on Mac or Windows enables. With close integration into the host OS filesystem, Docker detects the filesystem change and proxies it through to the container. Nodemon does its job by detecting the change and restarting the application, and VSCode picks up the last step in the chain by re-attaching the remote debugger.

Exciting!

If you’d like, keep stepping through with the debugger and you’ll see that there aren’t any more undefined variable messages.

You can now remove the breakpoint and stop the debugger, which will let the application go back to cycling on its own.

I personally learned a lot reproducing Aanand’s work here, and I hope you find similar value in running through this tutorial. Here’s his live demo, and I’d also recommend the whole keynote. This sort of debugging is going to ease a ton of pain for anyone who’s tried to debug code in remote containers, and an integrated development and operations toolset like this will keep speeding up the development process. As we learn more about Swarm, we’ll see how this handoff works.

In future posts, I look forward to debugging services, jobs, DABs, and other features of the new advanced Docker orchestration tools included in Docker 1.12 with Swarm and Docker Datacenter.

Thanks for reading, and as ever — click the heart below if this helped!
