Docker Containers For Development Environment: The Good, The Bad and The Ugly

Alane Pontes · Beakyn · Mar 11, 2020

Disclaimer: We’re using React, so our examples are all based on it.


Introduction

This post assumes that you are familiar with Docker and Docker Compose, but you don’t need to be an expert or even to work with them already. In the following, I will explain how to choose a Node Docker image, how to handle node_modules properly, how to inject environment variables, how to achieve live-reload, and how to configure a React application with Docker and run it using HTTPS.

The problem

Our team is growing and new developers are joining us. We have many applications, each with different tools and versions, so the time to set up a development environment is quite substantial. Besides, we use node-sass, which compiles to a native binary tied to a particular machine architecture and operating system; since different devs run different OSes, this increases our cross-platform issues.

As developers generally pick whatever tools and versions they like, we started to get more unexpected errors during deploy, because in most cases local machines are slightly different from production. Another problem is that we use an external application to manage environment variables, one per application, so during development it is a pain to manage, sync, or even keep them updated on a daily basis.

Last but not least, we need to run our application using HTTPS.

In a nutshell, the problems are:

  1. Time wasted setting up dev environments.
  2. Env vars going out of date between stage and prod environments.
  3. Breakage during deploy.
  4. Cross-platform compatibility issues.
  5. The need to run all applications using HTTPS.

The Good

Using Docker we can abstract the host environment away from applications and make environment setup minimal and fast, because we can describe and configure everything in a Dockerfile, and starting it is as easy as running a single command.

Another good point is that we guarantee our development environment is equal to our production one, with the dependencies and binaries necessary to run on both sides. This way, we avoid the cross-platform problems and decrease the crashes during deploy.

Using Docker we can also inject environment variables when containers start up, so we completely remove from developers the hellish task of managing and keeping environment variables updated.

As we use Create React App for our frontends, integrating them with Docker and making them use HTTPS was a trivial task.

The Bad

Working with Docker may seem trivial at first but, once you start encountering the real-world, day-to-day issues, things get a lot more complicated, if I’m being completely honest.

As soon as we started using it, things such as:

  • Choosing the right Docker image
  • Handling mapped volumes properly
  • Which user to use
  • Getting bidirectional sync that excludes specific folders
  • Achieving live-reload

weren’t so clear. And since we need all of these, and we work with shared folders containing our dependencies, every one of them is relevant to us.

The Ugly

For me, the most ridiculous part was getting Docker properly installed and running on macOS; a quick Google search already turns up a lot of articles about it. There are performance problems, network issues, and even problems when binding volumes.

In case you need it, the official documentation covers installation on both Debian and macOS.

Solutions

To give some context, suppose we have two React applications: one is responsible for authorization and the other is the core application. It’s a requirement to run both at the same time, because one depends on the other, and we need live-reload on both.

Next, I will explain how to solve each of these problems and, at the end, share the Dockerfile and docker-compose.yml that put it all together.

Choosing the right Docker image

I think that picking the right image is tricky because, in the beginning, it is difficult to determine what “right” even means.

I generally use the following:

  • What image is used in my production environment? Pin that image to a specific version.
  • Image size matters, but it isn’t necessarily the main reason to choose one over another. Still, staying aware of it avoids a lot of headaches.
  • Pick the one that reduces your rate of change, is stable, and is closest to your real environment.

We use the node slim image. Let’s have a look at the README in the documentation:

This image does not contain the common packages contained in the default tag and only contains the minimal packages needed to run node. Unless you are working in an environment where only the node image will be deployed and you have space constraints, we highly recommend using the default image of this repository.

Despite this note, we picked the slim because it has just what’s necessary to run React applications, so it matches our case.
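In the Dockerfile, this comes down to a single pinned line (the exact version tag below is just illustrative; pin whatever your production runs):

FROM node:12-slim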

Keep in mind that no decision here is final.

Bidirectional sync except for node modules

Any change on the host machine must be propagated to the container, to avoid making the development process feel unnatural.

We can achieve this using a bind mount, which means, in short, that the content on the host overwrites the content in the container, so that the two sides show the same data.

Although this is a great feature for us, using it along with node_modules is painful. Why?

First, our hosts and containers run different OS distributions, so if we completely overwrite the host’s or the container’s node_modules folder, we will certainly have cross-platform problems.

Furthermore, if we use only the container’s node_modules and exclude the host’s, we make development on the host feel less natural, because the IDE will warn about missing dependencies or even report lint errors.

So, how do we fix it?

We need to keep both: the node_modules created inside the container and the node_modules created by developers on the host.

It’s important to prevent the container’s node_modules from being overwritten at run-time by the host’s, and also to prevent modifications made within the container from breaking the host’s node_modules.

We achieve this by doing the following within docker-compose.yml:

volumes:
  # Bind mount the whole project ("delegated" relaxes write consistency,
  # which improves performance on macOS)
  - .:/opt/node_app/app:delegated
  - ./package.json:/opt/node_app/package.json
  # Anonymous volume that masks the host's node_modules inside the container
  - /opt/node_app/app/node_modules/

Add a .dockerignore at the root of our tree:

**/node_modules
node_modules

And within the Dockerfile, we need to put the container’s node_modules in a parent directory, set NODE_PATH to point at that node_modules, and add its .bin folder to PATH. This way, it won’t be overwritten by the host’s node_modules.

RUN mkdir /opt/node_app
WORKDIR /opt/node_app

# Install dependencies one level above the app directory,
# out of reach of the bind mount
COPY package.json yarn.lock /opt/node_app/
ENV NODE_PATH=/opt/node_app/node_modules
ENV PATH=$PATH:/opt/node_app/node_modules/.bin
RUN yarn

# The application itself lives (and is mounted) one level below
WORKDIR /opt/node_app/app
COPY . .

Using HTTPS

Because we use Create React App, all we need to do is set the HTTPS environment variable to true, if you want to use the self-signed certificate provided by the development server.

So within the package.json, you can use:

"scripts": {
"dev:ssl": "HTTPS=true react-scripts start
}

If you need to use a custom SSL certificate, you can use:

"scripts": {
"dev:ssl": "HTTPS=true SSL_CRT_FILE=cert.crt SSL_KEY_FILE=cert.key react-scripts start
}

Be aware of this alert in the documentation:

Note: this feature is available with react-scripts@0.4.0 and higher.
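Alternatively, since our apps run inside containers, the same flag could be set through Compose instead of the script. A minimal sketch, assuming the auth service from the docker-compose.yml shown later:

environment:
  # Tells Create React App's dev server to serve over HTTPS
  - HTTPS=true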

Up-to-date environment variables

We assume that containers are stateless and immutable; that’s why we don’t bake our external environment variables into the Docker image.

We do that during the docker run-time through a shell script that creates an env file in each of our projects.

Then, we inject them into our running containers by using:

version: "3"
services:
auth:
env_file:
- ./.env
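A minimal sketch of such a script, assuming a hypothetical fetch-remote-env command standing in for whatever client your environment-variable manager provides:

#!/bin/sh
# Hypothetical helper: pull the current variables from the external
# manager and write them out as KEY=VALUE pairs before starting the stack.
# Run from each project's root; "auth" is the project name here.
fetch-remote-env --project auth > ./.env
docker-compose up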

Live-reload

Like I said before, we’re using Create React App, and it doesn’t support hot reloading of components. Instead, it sends a signal to the browser through WebSockets whenever a local file changes; we manage this using custom ports.

Please, take a look at this piece of the documentation:

WDS_SOCKET_PORT:

When set, Create React App will run the development server with a custom websocket port for hot module reloading. Normally, webpack-dev-server defaults to window.location.port for the SockJS port. You may use this variable to start local development on more than one Create React App project at a time.

We achieve it by doing the following in our docker-compose.yml:

ports:
  - "9000:9000"
  - "35729:35729"
environment:
  - PORT=9000
  - WDS_SOCKET_PORT=35729
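With this in place, a file saved on the host propagates into the container through the bind mount, the dev server rebuilds, and the browser is notified over the WebSocket exposed on port 35729, so the page served on port 9000 reloads automatically.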

As I said before, here are the files, assembled from the snippets above. Anything not shown earlier (the exact Node version, the build context, the startup command) is illustrative, so adjust it to your own project:

Dockerfile
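# Pinned slim base image (12-slim here is illustrative; use your
# production version)
FROM node:12-slim

RUN mkdir /opt/node_app
WORKDIR /opt/node_app

# Dependencies are installed one level above the app directory,
# out of reach of the bind mount
COPY package.json yarn.lock /opt/node_app/
ENV NODE_PATH=/opt/node_app/node_modules
ENV PATH=$PATH:/opt/node_app/node_modules/.bin
RUN yarn

WORKDIR /opt/node_app/app
COPY . .

# Start the dev server with HTTPS enabled (the script defined earlier)
CMD ["yarn", "dev:ssl"]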

docker-compose.yml
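version: "3"
services:
  auth:
    build: .
    env_file:
      - ./.env
    environment:
      - PORT=9000
      - WDS_SOCKET_PORT=35729
    ports:
      - "9000:9000"
      - "35729:35729"
    volumes:
      - .:/opt/node_app/app:delegated
      - ./package.json:/opt/node_app/package.json
      - /opt/node_app/app/node_modules/
  # The core application would be a second service configured the same
  # way, on its own ports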

Conclusion

There are some tradeoffs here; we now have to handle some new steps in our development workflow:

  • Docker becomes part of our stack, so we need to guarantee that the developers on the team understand it well, which steepens our learning curve.
  • Commands that developers previously ran on the host now need to run inside a Docker environment, so something like yarn install turns into: docker-compose run <service> yarn install.
  • Generally, Docker commands are slower than host commands.

But we did solve our initial problems:

  • Decrease the time of setting up dev environments.
  • Keep updated env vars between stage and prod environments.
  • Reduce crashes during deploy.
  • Eliminate cross-platform compatibility issues.
  • Run all applications using HTTPS.

With all that being said, I can tell you that Docker fits the bill, and I’m happy with the features it brings us.

There are a lot of other good things to implement with Docker, such as multi-stage builds, a non-root user, logging, improving npm install performance, separate builds for dev and production, and more. We won’t be using those for now, but we will soon.

Lastly, feedback is always welcome; if you have any suggestions to improve our solution, please leave us a comment.

Thanks to João Faulhaber, Bruno Lazaro, Juan Pujol, and all of Beakyn’s team for your support :)
