Developing locally with Vagrant and Docker

Tech @ Airtime · Airtime Platform · Jul 26, 2016

by Abby Fuller

“But Abby, all of my 💩 is in the ☁️, I can’t test that locally!”

We’ve fielded a few questions in the last couple of weeks around how we handle local dev environments. Thanks for the awesome question, Pasha Riger!

Intelligent, easy-to-work-with development environments are the gift that keeps on giving, from day one as a new developer to QAing a feature right before release.

A few requirements we had when we started this project:

  • Should support developer productivity: Needs to be easy to spin up a consistent, working local environment.
  • Should have a low learning curve for contribution: Need to test out a service locally? Adding it yourself should be straightforward, and not blocked by waiting for ops.
  • Should not require knowledge of the backend services themselves. To explain: dev environments should allow engineers on the client teams, like iOS or Android, to start a working, local copy of the backend environment. It should not require them to know how to work with Docker containers.
  • Should be repeatable, and self-contained. The environment should set up everything that it requires to run properly as part of the startup process.
  • The environment should mimic the actual staging and production environments as closely as possible. This increases developer velocity, and cuts down on QA/Ops hassle at the end of the process.
Major 🔑

TL;DR: we build and test everything locally, and so should you. We build local environments with Vagrant, using Ansible as the provisioner.

Back to Basics

Vagrant allows us to quickly describe our development environments (resources, exposed ports, etc.), and run a provisioner that sets up the service requirements. In our case, this means installing some dependencies, and pulling/starting the Docker containers that make up our backend architecture.

Getting started with Vagrant just means adding a Vagrantfile to the root of the project. Ours looks something like this:
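An illustrative sketch of the shape it takes (the box, resources, ports, and the PLAYBOOK switch are placeholder values, not our real ones):

```ruby
# Vagrantfile (sketch): describe the VM, then hand provisioning off to Ansible.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Give the guest enough headroom to run all of the containers.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus   = 2
  end

  # Expose the backend services to the host.
  config.vm.network "forwarded_port", guest: 8080, host: 8080

  # One of a couple of provisioning options, picked via an environment variable,
  # e.g. PLAYBOOK=minimal vagrant up (hypothetical option names).
  playbook = ENV.fetch("PLAYBOOK", "full")

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/#{playbook}.yml"

    # Use a vault password file if one exists; otherwise have Ansible prompt for it.
    if File.exist?(".vault_password")
      ansible.vault_password_file = ".vault_password"
    else
      ansible.ask_vault_pass = true
    end
  end
end
```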

A couple of things to call out here:

  • We use Ansible Vault to manage secrets. More on that later.
  • Since we have a lot of services (#microservicesproblems), we offer a couple of different provisioning options.

On to the next one

Ansible is an IT automation tool that lets us use YAML to define tasks. In our case, we use Ansible to define the requirements for our local environment, and then start the containers. A (much abbreviated) version of ours looks something like this:
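A sketch along those lines, with placeholder image names, ports, and a hypothetical ecr_registry variable:

```yaml
# playbook.yml (sketch): install requirements, start dependencies, then start
# the service containers pulled from ECR.
- hosts: all
  become: true
  vars_files:
    - secrets.yml                      # encrypted with Ansible Vault (more below)

  tasks:
    - name: install environment requirements
      apt:
        name: "{{ item }}"
        state: present
        update_cache: yes
      with_items:
        - docker.io
        - nodejs
        - npm

    - name: pull and start the dependency containers
      docker_container:
        name: "{{ item.name }}"
        image: "{{ item.image }}"
        state: started
      with_items:
        - { name: redis, image: "redis:3" }
        - { name: mongo, image: "mongo:3" }

    - name: pull and start a service container from ECR
      docker_container:
        name: api                      # placeholder service name
        image: "{{ ecr_registry }}/api:latest"
        state: started
        pull: true
        ports:
          - "8080:8080"

    - name: run npm install inside the service container
      command: docker exec api npm install

    - name: run npm install on the host
      command: npm install
      args:
        chdir: /vagrant

    - name: grab the api container IP for cross-container linking
      # the inner quotes keep Docker's Go template away from Jinja
      command: docker inspect -f "{{ '{{ .NetworkSettings.IPAddress }}' }}" api
      register: api_ip
```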

This takes care of a few things for us: it installs the requirements we need for the environment, pulls and starts our dependency containers (Redis and MongoDB), pulls our service containers from Amazon ECR and starts them, runs npm install for both the containers and the host, and grabs a container IP that we can use for cross-container linking 🎉

A quick note on Ansible Vault and secrets

Since one of our goals was to keep local environments as similar as possible to our staging and production environments, we need to use our real-life container images. This means we need to authenticate with AWS and ECR, which in turn requires real-life secret keys. Ansible Vault lets us safely commit those keys to GitHub. You can see where we referenced the password for our Vault in the Vagrantfile:
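It's the same fragment from the sketch above (the .vault_password filename is a placeholder):

```ruby
# Use a vault password file if one exists; otherwise have Ansible prompt for it.
if File.exist?(".vault_password")
  ansible.vault_password_file = ".vault_password"
else
  ansible.ask_vault_pass = true
end
```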

Check for a vault_password_file; if there isn’t one, prompt for the password.

And where we referenced the vars file itself in the provisioner:
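A sketch of that reference (secrets.yml is a placeholder name for the encrypted vars file):

```yaml
  vars_files:
    - secrets.yml        # created and encrypted with the ansible-vault CLI
```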

Tell Ansible where we hide our secrets 🙈

Finally, we reference values from the vault like this:
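A sketch with hypothetical variable names (the real values only exist inside the encrypted vault file):

```yaml
- name: add the AWS access key to the CLI config
  command: aws configure set aws_access_key_id {{ vault_aws_access_key_id }}
  no_log: true           # keep the secret out of Ansible's output

- name: add the AWS secret key to the CLI config
  command: aws configure set aws_secret_access_key {{ vault_aws_secret_access_key }}
  no_log: true
```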

Add AWS secrets during the CLI setup process.

You can learn more about how Ansible Vault works in the official Ansible docs.

Setting up the AWS CLI

In preparation for pulling our actual images from ECR, we have to authenticate local environments with AWS:
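A sketch of the extra provisioner tasks involved (the credential tasks above run alongside these):

```yaml
- name: install the AWS CLI
  pip:
    name: awscli
    state: present

- name: set a default region for the CLI
  command: aws configure set default.region us-east-1
```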

This is what allows us to a) use AWS CLI commands to work with our AWS resources, and b) use actual, real-live versions of our containers. Remember when we said that developer environments should be as close as possible to staging and production? This is how we achieve that: the same copy of a service can be run locally, on staging, and on production.

Authenticating with ECR
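Together with the CLI setup above, the login step boils down to a one-liner (a sketch of the sort of shell shim involved):

```bash
# `aws ecr get-login` prints a ready-made `docker login` command with a
# temporary token; eval-ing its output logs the Docker daemon into our registry.
eval "$(aws ecr get-login --region us-east-1)"
```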

hack city, b***h, hack hack city b***h

This lets us do a few things: install the AWS CLI, set some defaults (like us-east-1), and handle logging into our registry on ECR.

Working with the Vagrant environment

Working with Vagrant itself is easy peasy. Just “vagrant up” from the directory containing the Vagrantfile, and away we go:
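For example (the repo name is a placeholder):

```bash
cd our-backend-repo    # wherever the Vagrantfile lives
vagrant up
```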

Vagrant up.

I don’t have a vault_password_file, so the provisioner prompts me for the vault password. Once I enter that, it starts running all of my setup tasks.

I can’t fight this feeling anymore, I’ve forgotten what I started provisioning for 🎶

If you’re not making changes to the backend services themselves, you can stop here: the provisioner has created a fully functional copy of the backend that you can test against. If you are working with the backend, you can SSH into the Vagrant guest to access the containers:
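For example (the service name is a placeholder):

```bash
vagrant ssh             # drop into the Vagrant guest
docker ps               # the backend containers are all running in here
docker logs api         # peek at a service container's output
```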

Coming $HOME again

So let’s recap. Here’s what we were hoping to achieve with our local developer environments:

  • Support developer productivity

Developers can start working on features quickly. Onboarding a new developer is as simple as pulling a repository and running vagrant up. Most importantly, testing everything locally in a consistent environment helps us keep our velocity high: tests catch errors locally before a feature makes it to staging, and developers spend more time developing and less time trying to fix their local environments. Win-win.

  • Low learning curve for contribution

We use Ansible as our provisioner, and its tasks are written in YAML. The syntax is easy to work with, and since each new service is a Docker container, you can follow the existing template to add a new service quickly.

  • Should not require knowledge of the backend services themselves

Client teams can use the Vagrant environment as-is: containers are pulled directly from ECR, and you can test against them locally. No Docker, Bash, Vagrant, or Ansible knowledge is required to run a local copy of the backend services. It just works.

  • Should be repeatable, and self-contained

Vagrant environments can be destroyed and rebuilt pretty easily, and the provisioner includes everything required to bootstrap a new environment. You can just as easily run the Vagrant environment on a new, wiped laptop as you can on a custom machine.

  • The environment should mimic the actual staging and production environments as closely as possible

We use the same containers locally as we do on staging and production. We authenticate with our actual ECR repositories, and run the containers the same way. This allows us to develop features accurately before they even make it to staging, which cuts down on QA time down the line. Everything still needs to be tested (duh), but we can eliminate some of the “but it works on MY machine!” situations.

Coming up next

Building better Docker containers, and custom monitoring with Cloudwatch.

❤ what we do? Come join us.

Originally published at https://techblog.airtime.com on July 26, 2016.
