Local Development With docker-compose
Hacking override files for a low-effort development environment in a microservices architecture
If you have ever worked on an application powered by microservices, you know the pain of opening a new terminal session and starting each dependent service one by one. It’s even worse when you discover that the bug you’ve been hunting only happens locally because you were running incompatible versions of the services.
To avoid these issues, we want to create a local environment that is quick to start up, reproducible, and easy to develop in. These criteria will encourage developers to follow the workflow and also ease their fear of shutting down their machines. The workflow we want to achieve is to run every service we are not working on in docker containers and run the ones we’re working on from the terminal like we usually do.
To achieve this goal, we are going to leverage the power of docker-compose and override files.
What is an override file?
Override files accompany docker-compose files as an extension to the base file, adding new configuration or overriding existing configuration. While we can’t use an override file to exclude a service, we can override the service’s `entrypoint` to prevent it from starting.
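As a minimal sketch of the mechanics (service name, image, and values here are illustrative), a base file might define a service:

```yaml
# docker-compose.yml — the base file
version: "3"
services:
  charter:
    image: example/charter:1.0.0
```

A second file layered on top of it can then add to or replace that configuration:

```yaml
# docker-compose.override.yml — merged on top of the base file;
# picked up automatically by `docker-compose up`, or passed
# explicitly with a second -f flag.
version: "3"
services:
  charter:
    environment:
      LOG_LEVEL: debug
```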
In this case, our override file does three things:

- Override the `entrypoint` so the container doesn’t start the service.
- Prevent the container from restarting.
- Reroute upstream services to find the dependency on localhost.

The replacement `entrypoint` is a no-op:

```yaml
entrypoint: ["echo", "charter is running locally"]
```
Dispatcher and Charter are services in our application, and Dispatcher makes HTTP requests to Charter. So we set Charter’s URL for Dispatcher to `host.docker.internal`, which resolves to the computer’s localhost, instead of using `localhost`, which would be the container’s own localhost.
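Putting the three pieces together, a Charter override file might look like the following sketch (the environment variable name and port are assumptions for illustration):

```yaml
# docker-compose.charter.yml — run everything except Charter in Docker
version: "3"
services:
  charter:
    # No-op entrypoint: the container echoes a message and exits
    # instead of starting the service.
    entrypoint: ["echo", "charter is running locally"]
    # Don't restart the exited container. Note the quotes: an
    # unquoted no is parsed as a YAML boolean.
    restart: "no"
  dispatcher:
    environment:
      # Point Dispatcher at the Charter instance running on the
      # host machine rather than at the (stopped) container.
      CHARTER_URL: http://host.docker.internal:8080
```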
After we create an override file for each service, we can start everything with `docker-compose up`. Or, if we are working on Charter, we can run `docker-compose -f docker-compose.yml -f docker-compose.charter.yml up`. This command starts every service except Charter, and we then start Charter from the terminal like we usually do.
Which version do we run?
We wanted our local services to be as close to production as possible, but how do we know which version is right? There may be compatibility issues between versions, so we want the Docker images that are verified to be working in production. Using the `latest` tag seemed promising, but what happens when we create a broken release that is not yet verified in production? Then everyone’s local environment is broken. To prevent this, we built a wrapper around the docker-compose command that fetches the version number of each service from the status API in production (the status API is an endpoint built into all our services that returns the service’s current version). It then updates our local `.env` file, which docker-compose picks up to fetch the correct image.
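A minimal sketch of such a wrapper is below. The status endpoint path, JSON shape, service names, and environment variable names are all assumptions; the real wrapper would match whatever the services actually expose. It pairs with compose entries like `image: example/charter:${CHARTER_VERSION}`.

```python
import json
import urllib.request

# Map each service to the variable interpolated in docker-compose.yml,
# e.g.  image: example/charter:${CHARTER_VERSION}
# (service and variable names are hypothetical)
SERVICES = {
    "charter": "CHARTER_VERSION",
    "dispatcher": "DISPATCHER_VERSION",
}

def fetch_version(base_url: str, service: str) -> str:
    """Ask the service's production status endpoint for its version.
    Assumes a JSON response like {"version": "1.2.3"}."""
    with urllib.request.urlopen(f"{base_url}/{service}/status") as resp:
        return json.load(resp)["version"]

def render_env(versions: dict) -> str:
    """Render .env contents pinning each service to a known-good version."""
    return "".join(f"{var}={versions[svc]}\n" for svc, var in SERVICES.items())

def update_env_file(base_url: str, path: str = ".env") -> None:
    """Fetch every service's production version and rewrite the .env file."""
    versions = {svc: fetch_version(base_url, svc) for svc in SERVICES}
    with open(path, "w") as f:
        f.write(render_env(versions))

# Usage (hits production): update_env_file("https://api.example.com")
```

After the wrapper runs, a plain `docker-compose up` pulls exactly the images that are verified in production.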
Our new workflow allows us to have confidence in our local environment. It is always up-to-date and starts with one command. It removes the need for us to go through each service and figure out if we left it on a branch last time we worked on it. If someone broke master, we aren’t blocked for half a day waiting for a revert. We only need to focus on the service we are working on.