React and Flux: A Docker Development Workflow

Aaron Tribou
7 min read · Aug 2, 2015


Docker development environments are quickly becoming the de facto standard for productive programming. Perhaps Vagrant ignited the inspiration with its remarkable ability to share server environments across development workspaces. However, Docker’s simple Dockerfiles and fast build times, along with inheritable build images, appear to be giving it a dominant position in the development ecosystem. In this post, I would like to create a Docker-powered React and Flux development workflow for my previous Todo app example.

The challenge with this endeavor is finding a workflow that allows my React and Flux assets to be rapidly rebuilt with each iteration while still being served from inside the container to keep it as “prod-like” as possible. Unfortunately, this can get very complex if these two goals are combined into one über Docker development environment. For instance, certain features like mounted volumes with symlinks or file watching become less compatible across operating systems such as Windows. Consequently, I’m breaking this workflow into two parts: the accelerated development and the container testing. An overview of the workflow looks like the following:

  1. Develop the app locally with nodemon and Webpack for quick feedback to complete a feature.
  2. Run the app and/or a test suite locally inside a Docker container for “prod-like” feedback.
  3. Push a commit to a remote Git repository for automatic continuous integration (CI) tests, user acceptance tests, etc.

The good news about this workflow is that I can simply skip to step two if my workstation’s local environment does not support my application’s environment or if I don’t want to install all of my app’s dependencies locally. Since step two only requires Docker to be installed for any container-compatible development environment, I can switch among developing in Node.js, Ruby, Python, Go, Java, and more without having to install each environment locally. If you haven’t installed Docker, head over here to get started.

The Docker Files

In order to bring this example closer to a real-world scenario, I’m going to add both web and database containers. However, I’m only going to need a Dockerfile for my web container because the database container should be a standard, pre-built image usable by many applications. Consequently, I should be able to pull the database image directly from a public or private Docker registry. This helps to ensure that my application is not too tightly coupled with the database environment. For more information, see the Backing Services factor of a Twelve-Factor app.

The Dockerfile for the web container looks like the following:
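A minimal sketch of that file, matching the steps reviewed below, might look like this (the Node base image tag, app directory, port, and npm script names are assumptions that should be adapted to the project):

# Pull from the official Node.js image on Docker Hub
FROM node:0.12

# Create the web app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Copy the local workspace into the container and install dependencies
COPY . /usr/src/app
RUN npm install

# Run the build scripts to bundle the client-side assets
RUN npm run build

# Expose the app's port and start the server
EXPOSE 8000
CMD ["npm", "start"]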

Having a Dockerfile in my code repository makes it easier for automated build tools to recreate the same container in a different environment. To quickly review the contents, I’m pulling from the official Node.js image on Docker Hub, creating a web app directory, copying my local workspace to the container, installing dependencies, running build scripts, exposing the port, and starting the app. These steps allow me to create a build artifact that can be deployed locally or remotely to a CI server.

In addition, I’m going to add the following .dockerignore file:
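A minimal sketch of that file might look like this; node_modules is the entry discussed below, and the other entries are common additions I’d expect in a Node.js project:

node_modules
.git
npm-debug.log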

Each line allows me to ignore one or more files or folders when Docker uploads my workspace as the build context. Perhaps the most important entry is the node_modules folder, which ensures I don’t upload dependencies that were installed in my local environment. Especially if I were developing on a Windows workstation, uploading pre-built dependencies would reduce my dev-prod parity and increase the chance of an environment-related bug being deployed to QA or production.

Step 1: The Local Workflow

For local development with quick feedback, I’m going to run Webpack to watch my assets directory and rebuild my client-side assets bundle when those files change. I’m also going to run nodemon to watch my server files and restart the app when those files change. This part is identical to the workflow specified in my previous post. However, with Docker, I can now run a database container if needed as well. Here is a diagram to help visualize the different parts of this local workflow:

The local development workflow enables faster rebuilds of server and client assets.

Notice that if my app needed a database, I would need to use a DB_HOST environment variable to account for the changing database IP address across the local, CI, and production environments. To start up a MongoDB database container, I’ll run the following docker commands to have it running in the background:

# Start a MongoDB database
docker run -d --name db mongo
# Check that it's running
docker ps
# Stop it when I'm done
docker stop db
# List all previously stopped containers
docker ps -a

Next, I can export my DB_HOST variable (if needed) and run my npm dev script to start my app, which auto-restarts upon file changes:

# Export the usual Boot2Docker IP
# Running `boot2docker ip` can double-check this
export DB_HOST=192.168.59.103
# Start the app and file watching
npm run dev
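
For reference, here is a sketch of how the server code might read that variable when building the MongoDB connection string (this isn’t the actual server.js from the example; the database name and localhost fallback are assumptions):

// Read the database host from the environment, falling back to localhost
var dbHost = process.env.DB_HOST || 'localhost';
var dbUrl = 'mongodb://' + dbHost + ':27017/todo';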

Step 2: Running in a Container

Building and testing an app in a container may take longer to get feedback, but it is more cross-compatible among operating systems.

Now on to the container-based development workflow, which should be cross-compatible as long as Docker can be installed. Since I’ve already created a Dockerfile, I only need to build an image and run a container using the image. Keep in mind, if I’m using Boot2Docker, the docker binary will connect to the Docker daemon on my locally running virtual machine and build the image there. However, I can also configure Docker to connect to a remote host which will automatically upload my workspace to build and run on a target server. Either way, to build my image, I’ll run:

# Build the Dockerfile into a Docker image
docker build -t todo .
# List all local images
docker images

The -t flag assigns the repository name todo to my image. If I wanted to be more conventional, I could pass a repository name and tag such as tribou/todo to label my image. The final parameter is “.” which tells Docker to look in my local directory for the build context and a Dockerfile. Once the image is built, I can run it to test with the following command:

docker run --rm --name web -it --link db:db -e DB_HOST=db -p 8000:8000 todo

With this command, I’m auto-removing the container when it stops, naming it web, ensuring I can see output and pass commands, linking my database container, passing db as my DB_HOST variable, exposing my app’s port, and using my todo image to make the container. Since I’m linking my db container, Docker will automatically add a “db” alias entry with the appropriate IP address to the /etc/hosts file on my web app container. That’s why I don’t have to pass the actual IP address for the db container in this command.
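For example, the web container’s /etc/hosts would gain an entry along these lines (the IP address shown is only illustrative):

172.17.0.5    db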

At this point, I had to edit the server.js file to change the host property for the Hapi.js server from “localhost” to “0.0.0.0” to be able to access the web app externally on the Docker container. Afterwards, if my app starts up correctly, I should be able to access it at my Boot2Docker IP, which makes the URL http://192.168.59.103:8000. In addition, if I set up a test script such as npm test, I could pass that command at the end of the previous docker run command to execute the tests as well. Running and testing the app with this method gives me a much higher probability that my app will work when deployed to a CI or production environment.
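The change itself is small; a sketch of the relevant server.js lines might look like this (the surrounding setup is assumed from a typical Hapi 8 app and isn’t taken from the example code):

var Hapi = require('hapi');
var server = new Hapi.Server();

server.connection({
  host: '0.0.0.0', // previously 'localhost'; binds to all interfaces so the mapped container port is reachable
  port: process.env.PORT || 8000
});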

Increasing Efficiency: Inheritable Builds

Right now, it takes a minute or more for my Docker image to build. Fortunately, there are some things I can do to make Step 2 more efficient. First, I could use two build images instead of one. The first image could install dependencies, and the second could inherit from the first, add the workspace again, and build and run the app. That way, I’m not installing dependencies over and over when testing the app. If I build the first image using the repository name todo like I did above, this could be the second Dockerfile, which I’ll call Dockerfile-dev:
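A sketch of that second Dockerfile might look like this, assuming the first image was built and tagged as todo and uses the same app directory as above:

# Inherit from the image that already has dependencies installed
FROM todo

# Copy the workspace again so code changes are included
COPY . /usr/src/app

# Rebuild the client-side assets and start the app
RUN npm run build
EXPOSE 8000
CMD ["npm", "start"]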

To build this Dockerfile, I’ll have to pass the filename along with the normal docker build parameters:

docker build -t dev -f Dockerfile-dev .

Rebuilding this second Dockerfile is MUCH faster, which provides a more tolerable development experience. However, I now have to remember to rebuild the first todo image if I install any new dependencies during development.

Increasing Efficiency: Docker Compose

A second way to increase efficiency is to take advantage of Docker Compose (not on Windows yet). Instead of having to remember or copy/paste complex docker run commands to recreate my Docker environment, Docker Compose (which comes with Docker on OSX/Linux) essentially stores all of the command-line arguments for creating one or more containers in a YAML file. For instance, I’m going to add this docker-compose.yml file:
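A sketch of that file, matching the containers described below and written in the version 1 Compose format that was current at the time, might look like this (service names and ports are assumptions):

web:
  build: .
  environment:
    - DB_HOST=db
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: mongo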

The above file will build my web container with the Dockerfile in the current directory, pass the DB_HOST variable, forward the specified ports, and link it with my db container, which it will create from the mongo image. This environment is then created with the following command:

docker-compose up

Using the second development Dockerfile with Docker Compose is as easy as adding the following line under the web container properties:

 dockerfile: Dockerfile-dev

Notice: make sure you are using at least version 1.3 of Docker Compose to use the dockerfile property.

Conclusion

When developing an application, it’s one thing to get it running properly on a local workstation, but it’s quite another to get the app running in every required environment. Fortunately, Docker removes much of the guesswork and configuration management skill needed to deploy an application to multiple environments, including production. By taking the initiative to make an app Twelve-Factor or container-compatible, it will be much more accessible to a CI environment, an ops team, and the open source community.

A working example of this workflow can be found on GitHub.

If you found this post helpful, please feel free to share it!
