Centralizing Your Docker Dependencies

Daniel Orner
Flipp Engineering
Jul 26, 2021 · 5 min read

Developing with Docker is a great way to standardize your dependencies and make the development process less painful. Docker lets you create containers: lightweight, virtual-machine-like environments that run programs from a known state.

Docker Compose builds on Docker: it’s a neat tool that allows you to spin up an entire application with all its dependencies in a single command.

There are a few different ways you can use Docker Compose in your local development. I’m going to present two ways I’ve seen, and then show you a third way which I feel gives you the best of both worlds.

The Kitchen Sink approach

Probably the most common way of using Docker Compose is to put both your application and all its dependencies in a single Compose file. For a Rails app with a Postgres database, it might look something like this:

version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    expose:
      - "5432"
  web:
    build: .
    command: bash -c "bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db

Now, by running docker compose up -d, you start up both the database and your app in their own mini-network. The app uses ports to publish its port 3000 to the local machine, while the database uses expose to make port 5432 available only within the Docker network; it’s encapsulated away and unreachable from your local host.
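
To make the distinction concrete, here’s a minimal sketch of checking both ports from your host machine, assuming the Compose file above (nc is the netcat utility; its flags can vary by platform):

# Start the whole stack in the background
docker compose up -d

# Port 3000 is published to the host, so the app answers:
curl http://localhost:3000

# Port 5432 is only exposed inside the Compose network,
# so a connection attempt from the host fails:
nc -z localhost 5432 || echo "5432 is not reachable from the host"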

The “What You Need” approach

A second way of using Docker is to use Docker Compose only for dependencies and leave the app itself out of your configuration. On the one hand, you lose the convenience of “one-click” development. On the other hand, it’s much easier to do things like hot reloading and running IDE debuggers if you don’t have to squeeze past Docker to do it. There are also performance concerns, especially on macOS.

In this case, your file would only have the DB entry and not the web app:

version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"

Here, your app runs on your local host, outside the closed network. So the database has to publish its port 5432 to the host itself, using the ports declaration, so your app can reach it.
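
With the database published to the host, your locally running app connects to it like any other local service. A minimal sketch, assuming the psql client is installed and the default postgres superuser (the database name below is hypothetical):

# Bring up just the dependencies
docker compose up -d

# Connect from the host with the psql client (password is "password")
psql -h 127.0.0.1 -p 5432 -U postgres

# Or point a locally running Rails app at it via DATABASE_URL
export DATABASE_URL="postgres://postgres:password@127.0.0.1:5432/myapp_development"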

Working with Multiple Repositories

When you’re in the microservice world (assuming you have multiple repositories), using Docker Compose helps standardize your development process. However, if your application lives inside its own Docker network (the Kitchen Sink technique), you end up with multiple copies of all your dependencies, especially if you work with many apps that resemble each other.

Taking the above examples, if you are working on two different services, each of which wants to talk to a database, you need to spin up two databases, one per service, and neither can talk to the other. If you have more dependencies (such as Redis, or a messaging platform like Kafka), you have more and more containers to manage, gobbling up resources on your computer.

However, if you use the “What You Need” approach, where all ports are published to the host, you run into port conflicts. If the Docker Compose file belonging to System 1 tries to claim port 5432, and so does the one belonging to System 2, one of them simply won’t start.
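
You can see the conflict for yourself by bringing both projects up back to back (the paths here are hypothetical):

# System 1 claims host port 5432
cd ~/projects/system-one && docker compose up -d

# System 2 tries to claim the same port and fails to start,
# reporting that the port is already allocated
cd ~/projects/system-two && docker compose up -d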

Another important note is that neither of these two methods allows some kinds of integration testing. For example, if you use Kafka as a message bus, you have no way of having your first app send a message that your second app receives, because in both methods each app has its own Kafka cluster that doesn’t talk to the other one.

Centralizing Your Dependencies

What you really need is to recognize that there is a limited set of dependencies across all your services, and to create a single, centralized Docker Compose file outside all of your project folders (perhaps in your home directory). This file can include every dependency any project might need, and you can bring them all up with a single docker compose up. All your apps can then connect to the published ports normally and transparently.
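
A minimal sketch of such a centralized file, here covering just Postgres and Redis (extend it with Kafka or whatever else your projects need):

version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
  redis:
    image: redis
    ports:
      - "6379:6379"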

Now, both apps can share a database (and any other resources they may need) on your local machine. You have one “global” dependency that all your applications can use since all ports are exposed. To ensure isolation, each app can (for example) create a separate database schema in the shared database.
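
For example, with the shared Postgres container above, each app can claim its own database, assuming the createdb client is installed locally (the database names are hypothetical):

# One database per app inside the single shared Postgres container
createdb -h 127.0.0.1 -U postgres service_one_development
createdb -h 127.0.0.1 -U postgres service_two_development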

However, this puts the onus on each developer individually. Since the file lives outside any individual repo, there needs to be some kind of documentation telling new team members how to get it all started.

Tool-erizing the Solution

As with many things in life, a process becomes easier when you build a tool around it. In this case, we created a tool in Go called global-docker-compose (I know, original). It can be installed with Homebrew, so any dev can set it up with a single command, and it acts as a centralized dependency repository.

The idea behind this is that while all dependencies are defined in the tool, each service decides which dependencies it cares about. It does this via a single shell file, living in the repo, that does nothing but call the global-docker-compose tool with a set of arguments representing the services it cares about. A sample file might look like:

#!/bin/bash
# gdc
global_docker_compose --services=postgres,kafka "$@"

Once this file is created, the development process is as simple as calling gdc up, gdc down, etc. If you have multiple projects, running gdc up will bring up only the containers each project cares about. There are additional convenience commands for starting a MySQL or Redis client, for example. The tool is really not much more than a wrapper around Docker Compose, so it can merge config files, show logs, and more.
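
Day to day, that looks something like this (assuming the gdc script above sits at the root of the repo and is executable):

# Start only the services this project asked for (postgres and kafka)
./gdc up

# Tear them down when you're done
./gdc down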

Comments?

We made this tool specifically for the dependencies that we use in our group at Flipp. However, you can fork it and update its Compose file to more closely match what your own team uses.

What do you think? Would your team benefit from a workflow like this?
