A Development Environment for Micro-Services with Docker and Node.js

In this article we will explore how we at blogfoster set up our development environment with Docker. If you’re already familiar with Docker you can jump directly to our setup and skip this short overview.

TL;DR

Use one shared Docker network and unique container names.

Docker

If you haven't heard about Docker yet or you're not familiar with its concepts, I would recommend reading the amazing Docker docs and a few tutorials online.

Docker provides a way to run applications securely isolated in a container, packaged with all its dependencies and libraries. [https://docs.docker.com]

So what is Docker? It helps you prepare a reproducible, encapsulated environment for your application in which only the dependencies necessary to start it exist. For Node.js applications this would mean the node executable, your source code and your npm dependencies (and maybe some C/C++ tooling).

Why do we need it? Have you ever experienced “it works on my machine”? You tested it locally, all tests pass, but on your colleague's machine it's failing (or worse, it's failing in production)? With Docker we can create an isolated environment which is reproducible on other machines (though of course, you cannot run the x86 node executable on an ARM machine).

In addition to creating an encapsulated environment, Docker can also create images of this environment. Imagine you download node, install your npm dependencies and then, together with your source code, create a tarball (or zip archive). This image can then be shipped to any other machine and started there. This saves the overhead of, e.g., running npm install on every machine, which makes it much faster. Also, Docker uses a layered file system for its images, which means only the parts that changed will be sent over the network when the image is updated.
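
To make this concrete, here is roughly what that workflow looks like on the command line (the image name my-app and the registry address are placeholders):

```bash
# build an image from the Dockerfile in the current directory
docker build -t my-app:1.0.0 .

# export the image as a tarball and load it on another machine ...
docker save my-app:1.0.0 | gzip > my-app-1.0.0.tar.gz
docker load < my-app-1.0.0.tar.gz

# ... or push it to a registry; only the layers that changed since
# the last push are actually transferred
docker tag my-app:1.0.0 registry.example.com/my-app:1.0.0
docker push registry.example.com/my-app:1.0.0
```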

Docker-Compose

OK, we've covered some basic functions of Docker, but what do we do if our service needs a database? To solve this problem, we can use docker-compose, which easily orchestrates many services using configuration files. All the common things you can do with Docker on the command line can also be expressed in a docker-compose.yml configuration file.
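
As a rough illustration (service names and images are placeholders), a docker-compose.yml describing a service together with its database could look like this; docker-compose up -d would then start both containers:

```yaml
version: '2'

services:
  app:
    build: .
    ports:
      - '8080:8080'
    depends_on:
      - db

  db:
    image: mysql:5.7
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
```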

The blogfoster Setup

We at blogfoster love to write JavaScript, so all our backend services are Node.js applications and for our frontend applications we’re using React. We’re running a couple of micro-services in production. Some of them need to talk to each other, all of them have at least one database connection and others need to communicate with external services.

So imagine you're working on a new feature in the front-end that involves interactions with multiple services, their databases and even an external service. Should we start all the services on the local machine? And how do we install the databases without messing with the local system?

Thinking back a few years (and still today), people were using Vagrant with VirtualBox to spawn an independent machine per service. Each service had its own instance, and each of these instances was provisioned with e.g. Chef-Solo. Starting this setup from scratch easily took more than 20 minutes. This was just too long for “Generation Internet”, which loses focus after 5 minutes, so I actually forgot what I wanted to do even before the initial setup finished :D.

When it comes to setup speed, Docker is just amazing. To be fair, it doesn't have anywhere near the capabilities Chef has: there is no Ruby DSL, nothing fancy, just plain shell commands. It's also not as encapsulated as a virtual machine. But it's so fast. No really, it's amazing! Currently, spawning all the services in Docker takes me no longer than 2 minutes, about 45 seconds of which I spend just opening all the terminals and running the correct Docker commands.

How we organize code

Before we look at some code examples I’d like to explain how we organize our code. All of our services have their own git / GitHub repository.

Why does it matter? When we started setting up our dev environment, we of course searched for solutions others had described, but splitting up your code-base unfortunately makes it a little harder to connect all your services. At first glance this seems strange, as Docker was built for micro-services, but since docker-compose is the go-to tool for local orchestration, how should one docker-compose file know about your other services? Each repository has its own separate docker-compose file to orchestrate its databases, but you couldn't easily link a service described in a different file whose exact location you didn't know. Some of the proposed solutions create a top-level docker-compose file which knows about all the other services, but this just looked awkward. So we came up with our own solution.

The final solution was to use one shared network. That sounds simple and it is, but finding good examples was hard, so I hope this article helps spread the word.

OK, so how do we get there? Let’s recap what we need in our development environment.

  • Whenever I change my code it should be reflected in the dev environment
  • It should be convenient
  • Other environments should be as close as possible to this setup, so deploying to production carries fewer risks

Unfortunately, I'm not going to tackle the last point in this article, but I will cover the other two.

The first point (that code changes should be reflected) is quite easy when using Volumes. The following code snippet is a Dockerfile for the imaginary web service iron:
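
A minimal sketch of it (everything beyond the node:6 base image, the /opt/iron volume and the entrypoint is illustrative):

```dockerfile
FROM node:6

# the application code is mounted here
WORKDIR /opt/iron
VOLUME /opt/iron

# the entrypoint installs npm dependencies on every container start
# and then hands over to whatever command was passed to the container
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```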

Here we're declaring node:6 as our base image, marking /opt/iron as a volume and defining an entrypoint script.

We’re using the entrypoint.sh script to install npm dependencies whenever a Docker container is spawned. This helps to always keep your dependencies up to date when multiple people work on the same project.

The following code shows the entrypoint script we use:
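
A sketch of it (the checksum-based caching detail is illustrative; the important parts are the npm install and the final exec):

```bash
#!/bin/bash
set -e

# node_modules lives on the volume, so it survives container deletion;
# only run npm install when package.json changed since the last run
CHECKSUM_FILE="node_modules/.package-json.md5"

if [ ! -f "$CHECKSUM_FILE" ] || ! md5sum --check --status "$CHECKSUM_FILE"; then
  npm install
  md5sum package.json > "$CHECKSUM_FILE"
fi

# run the given command as PID 1 so it receives signals properly
exec "$@"
```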

As you can see, we're using a simple “caching” technique here. Remember that we use a volume? It keeps our node_modules folder persistent across deletions of the Docker container, but to gain further speed improvements we don't even call the npm executable if the package.json didn't change between two runs of this entrypoint.sh script.

Next, for managing Docker containers, we're using docker-compose, and the following code shows an example docker-compose.yml file for the iron service:
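
A sketch of it (image versions, port numbers and concrete environment values are illustrative):

```yaml
version: '2'

services:
  iron:
    build: .
    container_name: iron.api.blogfoster.local
    command: 'true'   # noop; the service itself is started later from a shell
    ports:
      - '8080:8080'
    volumes:
      - '.:/opt/iron'
    environment:
      - MYSQL_HOST=iron.mysql.blogfoster.local
      - REDIS_HOST=iron.redis.blogfoster.local
      - AURUM_URL=http://aurum.api.blogfoster.local:8084
    depends_on:
      - mysql
      - redis

  mysql:
    image: mysql:5.7
    container_name: iron.mysql.blogfoster.local
    ports:
      - '3380:3306'
    volumes:
      - 'mysql-data:/var/lib/mysql'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=iron

  redis:
    image: redis:3
    container_name: iron.redis.blogfoster.local
    ports:
      - '6380:6379'

volumes:
  mysql-data:

networks:
  default:
    external:
      name: blogfoster
```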

As you can see, we're declaring a node service (iron), forwarding port 8080 and giving the service some environment variables. The depends_on attribute tells docker-compose to also start the other services when this service is started. This is not strictly necessary when calling docker-compose up -d, but you need it when spawning a one-off container with docker-compose run. One more interesting fact: we're using true as the container's default command, which is a noop (no operation) command that exits immediately. We do this so that docker-compose up -d spawns the databases but does not actually run our service, which we start separately later.

Next to the iron service we define a mysql and a redis service, plus one named volume (for mysql only, as we don't need persistent redis data), so that removing the mysql container does not delete your data. For convenience we forward the mysql and redis ports to host ports ending in 80, just like the node service.

Another thing to note here is the networks section. Here we're telling docker-compose to start all the mentioned services in the given blogfoster network. By default docker-compose itself will make sure there are no naming conflicts for multiple containers in the same network.

Finally, we define unique container_names. These names can now be used as DNS names to access another service. Check the environment section and you'll see that we tell our application to access redis through REDIS_HOST=iron.redis.blogfoster.local. We're also passing AURUM_URL=http://aurum.api.blogfoster.local:8084. Think of aurum as another service, started independently from another terminal. Since it's running in the same network and has a well-known, unique name, we can access it from within the iron container.
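
Inside the application this is nothing special: a hypothetical connection setup (using the redis client package just as an example) simply reads those environment variables.

```js
const redis = require('redis');

// iron.redis.blogfoster.local resolves to the redis container
// inside the shared Docker network
const client = redis.createClient({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT || 6379,
});

client.on('ready', () => console.log('connected to redis'));
```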

The last script I want to show you now is a small script that creates the default network:
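
A sketch of it (the network name matches the one used in the compose file):

```bash
#!/bin/bash
set -e

# create the shared network once; every service of every repository joins it
if ! docker network inspect blogfoster > /dev/null 2>&1; then
  docker network create blogfoster
fi
```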

As you see this is a small one. It just creates the network if it doesn’t exist already.

Remembering the docker-compose commands can become tricky over time. To simplify our lives we’re using npm scripts:
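
A sketch of the scripts section of our package.json (the script names match the ones discussed below; the exact commands, in particular the d:prepare fixture call and the path to the network script, are illustrative):

```json
{
  "scripts": {
    "d:network": "./scripts/create-network.sh",
    "d:build": "npm run d:network && docker-compose build && docker-compose run --rm iron 'true' && docker-compose up -d && docker-compose rm -fv iron",
    "d:prepare": "docker-compose run --rm iron npm run db:fixtures",
    "d:login": "docker-compose run --rm iron /bin/bash",
    "d:clean": "docker-compose stop && docker-compose rm -f",
    "d:cleanDb": "docker-compose down -v"
  }
}
```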

To create a prepared environment you only need to type npm run d:build. As you can see, this calls the network script, then calls docker-compose build, which builds your initial Docker image (for us this means downloading the base image and marking the code directory as a volume). The next command might look strange: docker-compose run --rm iron 'true'. This spins up a one-off iron container that executes the true command. As mentioned before, true is a noop command; what really matters is that the entrypoint script runs first. You might remember: we use the entrypoint.sh script to install npm dependencies, so this step updates our npm dependencies in an attached shell, where the developer can see what's going on. The next command, docker-compose up -d, spawns all the other services defined in the docker-compose.yml file (databases, etc.). The command npm run d:prepare can be used to prepare initial database fixtures, but it's optional. Last but not least, docker-compose rm -fv iron removes the exited iron containers which are no longer needed.

The next npm script is npm run d:login. Although this sounds like logging into some running machine, it's not; we chose the name on purpose. What really happens is that we start a new interactive Docker container running /bin/bash, which gives us the feeling of logging into something. From this bash session you can now start your actual project using node . or npm start.

To clean up your setup we define two more commands. The npm run d:clean command stops all of your containers and removes them. This leaves named volumes untouched, so any changes to your database will remain.

To clean up the database we also defined npm run d:cleanDb.

Conclusion

Wow, this was a long journey. We learned a bit about Docker and docker-compose. We saw that Docker containers spawned with docker-compose can easily communicate with each other if they are spawned in the same network and are given unique DNS names. Additionally, I showed you the scripts we use here at blogfoster. I hope you could follow my thoughts and it wasn't too confusing. If you liked the article, tweet about it! If you have any questions feel free to send me an email at alexander.springer@blogfoster.com. I would love to hear your feedback.

2017-03-24

  • changed entrypoint script to use exec so that the executed command runs as PID 1, thanks to @puneeth_mysore
