How Our Team Solved Environment Configuration Across Our Projects

Christopher Malloy
Published in 7Factor Software
Mar 19, 2019 · 5 min read

The Story

In our projects we like to use docker-compose for local development. It provides a relatively simple wrapper on top of Docker that supports features such as stringing together companion database containers, assigning ports, and specifying extra volumes to mount. All of this is possible with native Docker, but instead of writing arcane docker run scripts, you can specify everything in a YAML file that is clear, revisionable, and source controlled.
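For illustration, here's a rough sketch of what such a compose file might look like (the service names and images are hypothetical, not taken from our projects):

version: "3"
services:
  api:
    build: .
    ports:
      - "8080:8080"   # assign a host port
    env_file: .env    # the dotenv file discussed below
    depends_on:
      - db            # string together a companion database container
  db:
    image: postgres:11
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # an extra volume to mount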

In this situation, the easiest way to pass environment configuration such as database connection strings is with a dotenv file of the following form:

A=B
PASSWORD=a_password

This works great for local environments. With this configuration and setup placed into source control, devs simply need to pull down the repo, run docker-compose up, and they're off to the races.

Using this in cloud environments can work as well, for example if you wish to run these containers on a raw EC2 instance. But it turns out to be a difficult process to automate. Our beloved CI system, Concourse, doesn't play nicely with docker-compose, and even if it did, there would still be the issue of clustering to solve.

You see, using docker-compose to spin up a single "stack" of containers, so you have a running instance of your API and all its dependencies, is perfectly easy. But what if you need to horizontally scale? You would need to spin up more of these containers, then port map and load balance them. For these reasons and more, this is a difficult problem to solve by hand, and some form of managed clustering solution is needed. The most famous of these is of course Google's Kubernetes, but that's a bit over-engineered and heavy-handed for our purposes; it's best left for when you need to scale massively, i.e., Facebook level.

So instead we chose Amazon's Elastic Container Service, or ECS. It's a fantastic managed service that is solid at running single-region deployments of container clusters. When you use the EC2 launch type, it gives you easy access to the underlying infrastructure, so you can remote into the instances that run your containers and poke around if you need to. Yet as long as the configurations are correct and the apps are running, you can usually just focus on the containers and forget about the underlying hardware. You get the benefits of a managed service along with the comfort of granular control over the system, a refreshing change of pace from the overly heavy-handed "fully managed services" that "do everything you need" but not what you want, and give you hardly any control.

So enough of the Kool-Aid, on to the problem at hand. As we began automating our deployments to ECS, we wanted to reuse our environment configuration files. The only problem: ECS container definitions are in JSON format, like the following:

[
  { "name": "A", "value": "B" },
  { "name": "PASSWORD", "value": "a_password" }
]

So we couldn't just drop our dotenv files into ECS and walk away. We could certainly input each name/value pair manually, one at a time, but it would be a headache to manage multiple configuration sources. Which one is the source of truth? Did they both get updated properly? Not to mention, what if there are 100 or more name/value pairs? At best this would require enough documentation to onboard new devs and make sure they understood the system, and that sounds like a process that would rot and break down over time.

If only there was some kind of program that could convert the dotenv files to JSON and smash it into the container definition…

Oh that's right, we're software engineers, aren't we? We could just write the program ourselves! And so that's what we did. We chose golang as our weapon of choice because it's a flexible language that can lift heavy boulders, yet also produce the tiny standalone binary a tool like this calls for. We chose to dockerize the tool since our CI system, Concourse, is already dockerized, making it trivial to pull the tool down as a Docker image and run it as needed. This also saves us from worrying about publishing the binary as a downloadable executable; pushing and pulling Docker images to DockerHub, and automating those processes, is a trivial matter.
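To give you an idea of what's happening under the hood, here's a minimal sketch of what such a dotenv-to-ECS converter might look like in Go. The names and structure below are illustrative, not our actual implementation:

// env-to-ecs: convert a dotenv file into an ECS-style environment JSON array.
package main

import (
	"encoding/json"
	"flag"
	"log"
	"os"
	"strings"
)

// envVar mirrors the name/value objects in an ECS container
// definition's "environment" array.
type envVar struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

func main() {
	input := flag.String("input", "", "path to the dotenv file")
	output := flag.String("output", "", "path to write the ECS JSON blob")
	flag.Parse()

	data, err := os.ReadFile(*input)
	if err != nil {
		log.Fatalf("reading %s: %v", *input, err)
	}

	var vars []envVar
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		// Skip blank lines and comments.
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		// Split on the first '=' so values may themselves contain '='.
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		vars = append(vars, envVar{Name: parts[0], Value: parts[1]})
	}

	blob, err := json.MarshalIndent(vars, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// os.WriteFile creates the output file if it doesn't already exist.
	if err := os.WriteFile(*output, blob, 0644); err != nil {
		log.Fatalf("writing %s: %v", *output, err)
	}
}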

How it Works

So let's take a look at this in action, shall we?

Here is an example of one of our application pipelines:

[Image: example pipeline]

This represents the pipeline for a golang hello world app, but that doesn't really matter; this solution will work for any containerized system. Each green square is a "job" in the pipeline. The jobs that are important for our discussion are the deploy-stage and deploy-prod jobs. They deploy the built Docker image to ECS. The steps in the job look like this:

- name: deploy-stage
  plan:
    - aggregate:
        - get: golang-starter-src
        - get: golang-starter-terraform
          trigger: true
        - get: golang-starter-image
          passed: [build-rc]
          trigger: true
          params:
            skip_download: true
    - task: env-to-ecs
      file: golang-starter-src/ci/tasks/env-to-ecs.yml
      params:
        ENV: stage

That's quite a chunk, and if you aren't familiar with Concourse it's probably overwhelming. All you need to know is that there is a "plan" block, and each value in that block is an individual step in the job. Look closely and you'll see the task step called "env-to-ecs". During that step, Concourse pulls down the Docker image containing our conversion tool and runs the binary with some arguments, like so:

/go/bin/cmd --input golang-starter-src/env/${ENV}/${ENV}.env --output ecs-env-blob/output.json

We invoke the binary by calling the cmd path, passing it the input file with --input and declaring an output file with --output. It will even create the output file if it doesn't exist. A quick note: Concourse resolves the variable ${ENV} to stage during the deploy-stage job and prod during deploy-prod, based on how we've set up the pipeline. And voila, the conversion is done! Now we have an elegant solution that "just works™".
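We don't show env-to-ecs.yml itself here, but to give a sense of what that task file might contain, here's a rough sketch of a Concourse task wrapping the invocation above (the image repository name is hypothetical):

platform: linux

image_resource:
  type: docker-image
  source:
    repository: 7factor/env-to-ecs   # hypothetical image name for the tool

inputs:
  - name: golang-starter-src   # provides the dotenv files

outputs:
  - name: ecs-env-blob         # receives output.json

params:
  ENV:                         # set to stage or prod by the pipeline

run:
  path: sh
  args:
    - -ec
    - /go/bin/cmd --input golang-starter-src/env/${ENV}/${ENV}.env --output ecs-env-blob/output.json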

So that's how we created a custom solution in the endlessly customizable world of Concourse. If you're into continuous delivery and you've never used Concourse, I highly recommend you check it out. It has a slightly steeper learning curve than most CI/CD systems, but the rewards are much greater: you can pretty much build whatever you want.

Thanks for the read. Leave a clap or 50 if you learned something useful. We are 7Factor, and you can check us out here. We're building good things every day.
