Dockerize your projects for faster setup and development — Part 2

Ishtiaque Zafar
7 min read · Mar 16, 2020


Overcome your development hurdles. Photo by MAX LIBERTINE on Unsplash

This article is part of a series where I’ll get a full-fledged Node.js REST API server containerized and running in as little time as possible.

Part 1: https://medium.com/@ishtiaque/dockerize-you-projects-for-faster-setup-and-development-part-1-a502169f97fc

Part 2: This article

In my previous article (please go through it once, if you haven’t already), we discussed some pain points in software development: the problems that come with setting up a new project, and how we can avoid all that hassle by using containers. We saw how to use Docker to spin up a container running a simple API server written in Node.js and Express.

Although we sped up the setup time, development was still a problem: we had to rebuild the image and restart the container every time we made a code change. This was the only way to get new code into the container. Not a very useful setup, eh?

What if there were a way to share code between our system (the host) and the container? Any code changes we make on the host would be instantly reflected in the container, with no need to rebuild the image or restart the container.

There is: DOCKER VOLUMES

Docker docs define volumes as: Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
They are complex and have a lot of uses, but for now, for the sake of simplicity, let’s just think of volumes as a way to share data between host and container.
There are two ways we can do that:

  1. We can mount the volume at runtime using the CLI
  2. We can use a docker-compose file (this is much easier)

Our target is to share our project root folder from our machine with the container’s current working directory. In order to achieve that, we will need to make a couple of changes.

First, we need to configure our API server to restart on code changes. This can be done easily using libraries like nodemon, pm2 or forever. For simplicity, we will use nodemon; when running in production, we can use pm2.
So let’s install nodemon as a dev dependency:

npm install nodemon --save-dev

Once that is done, let’s modify our npm scripts a little. This will allow us to run our own commands like npm run start and npm run start-dev. Open package.json and add two lines to the "scripts" section:

"start": "node server.js",
"start-dev": "nodemon server.js"

So, your new package.json should look like this:
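The original gist is not reproduced here, so this is only a sketch of what the file might contain at this point; the package name, description, and version numbers are assumptions:

```json
{
  "name": "docker-node-api",
  "version": "1.0.0",
  "description": "A simple Node.js/Express REST API server, containerized with Docker",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "start-dev": "nodemon server.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "nodemon": "^2.0.2"
  }
}
```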

Now we are good to explore options 1 and 2 mentioned above. Let’s try option 1: using the CLI to mount volumes at runtime.

Option 1:

Step 1:

Open your Dockerfile and modify the final CMD instruction. Change it
from CMD [ "node", "server.js" ]
to CMD [ "npm", "run", "start-dev" ]

The Dockerfile should now look like this:
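The embedded gist isn’t reproduced here; below is a minimal sketch reconstructed from the steps in Part 1 (the exact layer order and exposed port are assumptions):

```dockerfile
# A sketch of the Dockerfile after the change above; details may differ from Part 1
FROM node:10

# Create the app directory inside the container
WORKDIR /usr/src/app

# Install dependencies first, so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the application source
COPY . .

EXPOSE 8001

# Run in watch mode via nodemon instead of plain `node server.js`
CMD [ "npm", "run", "start-dev" ]
```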

Step 2:

Build

docker build -t izafar/docker-node-api .

Step 3:

Run

# For unix-like environments, macos, linux
docker run -p 8001:8001 -d -v $(pwd):/usr/src/app izafar/docker-node-api
# For Windows
docker run -p 8001:8001 -d -v %cd%:/usr/src/app izafar/docker-node-api

You will notice a new argument in the command: -v $(pwd):/usr/src/app. This tells Docker to create a volume from the current working directory (the root folder of the project) and mount it at the working directory of the container, which is /usr/src/app.

Now the code is shared between host and container.

Let’s see the logs and try hitting our API server for a response.

# Get container ID
$ docker ps

# Print app output
$ docker logs <container_id>

The logs should look like:

Logs from the API server running inside the container

Open up a browser and navigate to localhost:8001/api/v1. You should see something like:

Response from API server

Step 4:

Now let’s test whether our code changes are reflected instantly. Change some code in server.js to:

Step 5:

Check the logs again; they should now look something like this:

Updated logs from the API server running inside the container

If you hit the URL again, it will return an updated response like:

Updated API response based on the code change we did just now

This shows that our Express API server restarted on detecting the code change! Our method is working well, and the code changes on our system are being reflected in the container instantly!

Are you happy with what you have? I am not. I have to mount the volume every time I run the container, and to do that, I have to remember a very long command. Let’s have a look at option 2, using a docker-compose file, which alleviates that problem as well.

NOTE: Changes still not getting reflected? Here’s the fix:

While trying out this tutorial on a Windows 10 host, my changes were not getting reflected even though it was working perfectly on my Mac. Upon inspection, I found out that my file changes were getting reflected, but nodemon was not able to detect the change. This is a known issue in nodemon. To fix it, open your package.json file and change the start-dev script to: nodemon -L server.js
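With that change, the scripts section would look like this:

```json
"scripts": {
  "start": "node server.js",
  "start-dev": "nodemon -L server.js"
}
```

The -L flag enables legacy watch mode, which polls for file changes instead of relying on filesystem events that may not propagate through the mounted volume on Windows.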

Option 2

This method is the easiest and fastest way to get up and running. Using a docker-compose.yml file allows us to write all the configuration inside it, without needing a Dockerfile at all in this case.

Step 1:

Delete the Dockerfile

Step 2:

Create a docker-compose.yml file in the root folder and paste the following contents:
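The original file is embedded as a gist in the article; here is a sketch matching the line-by-line explanation that follows. The compose file version and the environment variable are assumptions (note that the earlier CLI example mapped port 8001, while this section uses 8080):

```yaml
version: "3"
services:
  api-server:
    # Use the official node:10 image as the base
    image: node:10
    # Share the project root with the container's working directory
    volumes:
      - ./:/usr/src/app
    working_dir: /usr/src/app
    # Map host port 8080 to container port 8080
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=development
    # Install dependencies, then start the server in watch mode
    command: bash -c "npm install && npm run start-dev"
```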

The file does pretty much the same thing as the steps above; it’s just that everything is now written inside a single file, and we don’t have to remember and run any special commands at runtime. Let me explain a few lines in the file:

  • The first line declares a service called api-server
  • Next, we tell docker to use the node:10 image as the base image
  • Next, we map the root (./) folder from the host to /usr/src/app in the container
  • Set the working directory to /usr/src/app in the container
  • The ports entry 8080:8080 maps host port 8080 to port 8080 of the container
  • Next up we define some environment variables
  • Finally, the last line tells the command to execute when the container is ready. The command installs all the dependencies and then executes the script start-dev defined in our package.json file, which starts the server in watch mode using nodemon.

Step 3:

Start the api-server service:

docker-compose up -d

Run a docker ps command to see whether your container is up. You should see something like this:

Next, check the logs by running docker logs <container_id> . It should look something like:

Step 4:

Change some code in server.js file like we did previously. Save the file and check the logs again. It should now look something like this:

You can see that nodemon detected the code changes and restarted the server automatically. This means the method is working well!!!

NOTE: If nodemon is not restarting on code change, please see my note written above.

Your final project structure should look like:

docker-node-api
| - node_modules
| - .dockerignore
| - docker-compose.yml
| - package-lock.json
| - package.json
| - server.js

This is it, folks! That’s all you need to get up and running with Docker and the technology you want to try out. It may seem to have a bit of a learning curve, especially if you don’t know Docker, but believe me, it will pay off. It will save you countless hours of installing everything from scratch, resolving conflicts, and setting up tools and libraries.

It also provides an easier way to clean up. Just run docker-compose down and it will stop all the containers and remove them as well! No clutter from leftover installations, tools, etc.


Ishtiaque Zafar

Founding Engineer @ Skyflow. Using my tech powers to solve real-life problems. Dreamer, doer, go-getter.