Using Docker Compose for Rapid Local Development and Onboarding

Travis Wingo
Splunk Engineering
5 min read · Jun 25, 2019


Splunk is hiring at its fastest rate yet. With new hires coming in weekly, an efficient developer onboarding process is vital: the time from hire to first pull request must be as short as possible.

To reduce local development environment setup time and increase reliability, my team relies heavily on Docker and Docker Compose. We have cut new-developer environment setup from days to minutes; it is now a single command: docker-compose up.

What is Docker?

Docker (https://www.docker.com) provides a simple way to run applications inside containers: securely isolated, with all dependencies baked into a binary image, packaged with everything the application needs to run. A Docker container isolates the application from the rest of the operating system, running as its own process on the host kernel. This means Docker can run most Unix-based applications on any computer without installing any dependencies on the host machine.

For details on how to use Docker and create a Dockerfile, check out the Docker documentation (https://docs.docker.com).
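
To give a flavor of what one of these Dockerfiles can look like, here is a minimal sketch of a Node.js service image. It is illustrative only: the base image tag and the start:dev script are assumptions, not our actual file.

# Dockerfile_UI (illustrative sketch, not our actual file)
FROM node:10-alpine

# Work out of the same path the compose file mounts into
WORKDIR /home/splunk/app

# Bake all dependencies into the image so the host needs nothing installed
COPY package.json package-lock.json ./
RUN npm install

# Copy in the application source
COPY . .

# Webpack Dev Server port
EXPOSE 3000

CMD ["npm", "run", "start:dev"]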

Using Docker Compose

Docker Compose takes the Docker containerization concept and simplifies multi-container orchestration into one command: docker-compose up. From Docker:

“Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.”

At Splunk, our applications consist of multiple services all working together to achieve a common goal. Each one of those services can be defined as a Docker container, and all are needed for proper local development. This is where Docker Compose comes in.

Below is a sample docker-compose.yml file used by my team at Splunk. It has been tuned for rapid local development: it defines all the services needed to bring the entire application to life, with automatic reload on code changes and code linting.

version: '2.1'

services:
  nginx:
    image: nginx
    volumes:
      - ./config/nginx/nginx.dev.conf:/etc/nginx/nginx.conf:ro
    links:
      - ui
      - api
    ports:
      - "3000:3000"
      - "3001:3001"
      - "3002:3002"
  ui:
    build:
      context: .
      dockerfile: Dockerfile_UI
    volumes:
      - .:/home/splunk/app
      - /home/splunk/app/node_modules
      - ../common-ui:/home/splunk/app/node_modules/common-ui
    environment:
      - NODE_ENV=development
    depends_on:
      - common-ui
    ports:
      - 3000
      - 3001
      - 3002
  common-ui:
    build:
      context: ../common-ui
      dockerfile: Dockerfile
    volumes:
      - ../common-ui:/home/splunk/app
      - /home/splunk/app/node_modules
    command: npm run start:dev
  api:
    build:
      context: .
      dockerfile: Dockerfile_API
    volumes:
      - .:/home/splunk/app
      - /home/splunk/app/node_modules
      - /home/splunk/app/.git
    environment:
      - NODE_ENV=development
    ports:
      - 8080
    depends_on:
      - db
      - redis
  db:
    image: postgres:9.6-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=our_user
      - POSTGRES_PASSWORD=our_password
      - POSTGRES_DB=our_db
    volumes:
      - ./sql/migrations:/docker-entrypoint-initdb.d
  redis:
    image: redis:4.0.1
    ports:
      - 6379

While this file is small compared to those of many applications, it still warrants an explanation for anyone new to Docker Compose.

This application consists of six services: nginx, ui, common-ui, api, db and redis. The user interacts with a React frontend UI, which contains many shared React components defined in a common-ui node module. Requests from the UI go through an nginx reverse proxy to a Node.js backend, which uses Redis for session and cache management and stores data in a PostgreSQL database.

Docker Volume Mounting

We rely on Docker volume mounting to stay highly productive while working within containers: we can still use our favorite text editors (Sublime, VSCode, Vim, etc.), and changes are instantly reflected inside the Docker containers.

For the UI service, for example, we have the following configuration in the docker-compose.yml file:

volumes:
  - .:/home/splunk/app
  - /home/splunk/app/node_modules
  - ../common-ui:/home/splunk/app/node_modules/common-ui

This tells our Docker environment three things:

  • The code in the /home/splunk/app directory in the container should load from our current directory.
  • Rather than use the node_modules/ directory from our current directory, keep the one built inside the container (the anonymous volume shields it from the first mount).
  • Override the common-ui module inside the container's node_modules/ with the module one directory level up from our current local directory.

By configuring our volumes this way, we can work in a contained environment yet still write code locally, with changes applied to the running container. The ui service runs a Webpack Dev Server and a Browsersync instance that watch every file in the UI project and reload everything on a change. Because our local files are mounted into the Docker container, we can edit files locally while still benefiting from the isolation of containers, without worrying about messy dependencies on our host machine.
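
The compose file alone does not provide the reload; the dev server has to watch the mounted files. Below is a rough sketch of the relevant webpack-dev-server settings (our actual configuration differs, and the polling interval is an assumption). Inside a container the server must listen on all interfaces, and file watching often needs polling because filesystem events do not always cross a volume mount:

// webpack.config.js (illustrative sketch, not our actual file)
module.exports = {
  // ...entry, output, and loaders elided...
  devServer: {
    // listen on all interfaces so the nginx container can reach the dev server
    host: '0.0.0.0',
    port: 3000,
    watchOptions: {
      // assumption: poll every second, since host file events may not reach the container
      poll: 1000
    }
  }
};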

We use a similar approach with our API service:

volumes:
  - .:/home/splunk/app
  - /home/splunk/app/node_modules
  - /home/splunk/app/.git

This time we also mounted the .git folder to take advantage of git's pre-commit hook for linting and unit testing: before each commit, our code is automatically linted and unit tested within the container. Any failure aborts the commit, which cuts the time engineers spend reviewing code syntax. Reviewers can focus on logic and design choices rather than syntax, and never need to wonder whether the unit tests passed.
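
As an illustration of how such a hook can be wired up, here is a sketch (this is our example, not the team's actual hook, and the npm script names are assumptions):

#!/bin/sh
# .git/hooks/pre-commit (illustrative sketch)
# Lint and unit test inside the api container; any failure aborts the commit.
set -e

docker-compose exec -T api npm run lint
docker-compose exec -T api npm test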

Other notable configurations are depends_on, which ensures dependent services start in the right order, and links, which lets our nginx service find our other services by name on the Docker network created when the project runs.
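
To make the links behavior concrete, here is a sketch of what a reverse-proxy nginx.dev.conf for this setup could look like. The routes are assumptions on our part, though the ports mirror the compose file above:

# config/nginx/nginx.dev.conf (illustrative sketch)
events {}

http {
  server {
    listen 3000;

    # "ui" and "api" resolve to the linked containers on the compose network
    location /api/ {
      proxy_pass http://api:8080;
    }

    location / {
      proxy_pass http://ui:3000;
    }
  }
}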

Docker and git are the only packages installed locally

A major benefit of using Docker for development is that it eliminates the need to install software packages locally. Our team does not have Node.js installed locally, yet all of these services are Node.js based.

Before we enforced running npm commands only within the container, we regularly ran into mixed Node.js versions, conflicting package-lock.json files, and packages built for a different architecture than the one the developer was running (e.g. OS X vs. Linux). To manage npm dependencies in our projects, we run docker-compose exec service_name bash to get inside the container and run npm install (or any other command) from there. Because our local files are mounted into the container, the command also updates the respective package.json and package-lock.json files locally, so there is never a versioning conflict between two developers. Now everything just works, and we can focus 100% of our attention on shipping code.
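
For example, adding a dependency to the api service might look like this (lodash is just a placeholder package):

# open a shell inside the running api container
docker-compose exec api bash

# inside the container: install and save the package
npm install --save lodash

# back on the host, package.json and package-lock.json now reflect the change
exit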

A Decrease in Onboarding Time and an Increase in Developer Efficiency

The entire application defined above becomes available and ready to work on simply by typing docker-compose up into the dev console. With npm packages installed and defined only within containers, we no longer find ourselves with broken builds after a git pull, and the only software a brand-new developer needs to install is Docker. After that, they are ready to work. Setting up a brand-new development environment for our team takes minutes rather than days. And less time debugging inconsistencies between environments and developers means many hours saved per developer per week, hours much better spent on bug fixing, application enhancements, automation, and shipping code.

Footnote: We thank the countless engineers who have written about similar topics, and we believe Docker and Docker Compose should be part of the standard developer toolchain for most engineering teams.
