Spacepods: Beyond the Cloud

By: Kyler Stole and Wayne Pichotta

TrueCar has spent roughly the past three years migrating out of our data center while moving to an agile development process with CI/CD at its core. In our previous post about CI/CD, we referenced Spacepods but didn't dive into the details (if you haven't read it, it's worth taking a few minutes to understand why we started down this path). In this post, we introduce Spacepods and talk about the impetus behind its development.

“First we build the tools, then they build us.” — Marshall McLuhan

Reproducibility is an essential characteristic of the development process. How many times have you struggled to track down an issue only to hear "it works on my machine"? Local development is great for speed, but it rarely mirrors the production setup. Our move to the cloud allowed us to leverage cloud infrastructure for reproducible development environments.

Single Pane of Glass

Spacepods is the internal product we built with Ruby on Rails to manage our AWS infrastructure. It enables our engineers to create their own isolated, production-like environments on demand. It is also how we deploy to our higher-level environments in a controlled, repeatable, and collaborative way. Combining those two pieces, it provides the backbone of our CI/CD workflow and is the centerpiece of the developer toolset at TrueCar.

As the centerpiece of development, Spacepods is the glue connecting many integrations. It communicates with build machines, testing machines, and AWS services. It also ties in with our Docker registry, sends messages to Slack and responds to Slack commands, and accesses GitHub and Jira.

Encapsulation in a Pod

Spacepods manages several types of resources that are needed to run an application in AWS. Compute and storage resources — such as RDS database instances, EC2 compute instances, and S3 object storage — are obvious examples. We also need security groups, VPC information, and any other permissions-related information. Finally, there are the applications themselves. The applications and all their dependencies get combined into an abstraction called a Pod.
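
To make the abstraction concrete, here is a minimal sketch of what a Pod bundles together. The class and method names are hypothetical illustrations, not Spacepods' actual schema.

```ruby
# Hypothetical sketch: a Pod bundles applications with the AWS
# resources they need. Names are illustrative, not Spacepods' schema.
class Pod
  attr_reader :name, :applications

  def initialize(name:, applications:)
    @name = name                  # e.g. "pod-kyler-feature-branch"
    @applications = applications
  end

  # Everything the Pod must provision: the apps themselves plus their
  # databases, security groups, and other per-app resources.
  def resources
    applications.flat_map { |app| app.resource_dependencies }.uniq
  end
end
```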

Pods help us manage the long-lived resources for our higher-level environments: QA, staging, and production. Pods also come in a personal flavor: software engineers can create their own short-lived, personal Pods. This fixes the contention that arises when a team shares a finite collection of integration environments. By using Pods for both development and production infrastructure, we do away with the "it works on my machine" annoyance that so often plagues local development.

When a Pod is created, Spacepods takes the list of requested applications and recursively resolves the dependency tree for each application. This produces a full list of infrastructure components, which is used to generate Terraform behind the scenes. The generated Terraform describes the necessary resources, leveraging the "infrastructure as code" paradigm for reproducible AWS infrastructure. With the AWS resources provisioned, Docker containers for the applications are deployed to a running ECS cluster, where load balancers begin sending them traffic.
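
A minimal sketch of that recursive resolution, assuming each application object exposes its direct dependencies (the `dependencies` method and surrounding names are assumptions for illustration):

```ruby
require "set"

# Walk each requested application's dependency tree, collecting every
# application the Pod will need. Cycles are guarded by the `resolved` set.
def resolve(app, resolved = Set.new)
  return resolved if resolved.include?(app)

  resolved << app
  app.dependencies.each { |dep| resolve(dep, resolved) }
  resolved
end

# The union across all requested apps becomes the input for the
# generated Terraform.
pod_apps = requested_apps.reduce(Set.new) { |acc, app| resolve(app, acc) }
```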

Personal Pods

When a developer needs to run a particular version of their application in AWS, they can easily create a personal Pod.

Creating a new Pod

This Pod will have everything necessary for the application to run in AWS, including:

  • Dependencies (other applications) — for example, a backend API for a frontend app
  • RDS database — This could be a new instance loaded with seed data or freshly restored from a snapshot of data from another environment.
  • Routing — Each application in the Pod gets its own DNS entry in Route 53, a listener rule in an Application Load Balancer (ALB), and placement as a target in a target group (sketched below).
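
As a rough illustration of those three routing pieces, here is how they might be created with the AWS SDK for Ruby. This is a sketch of the general approach, not Spacepods' actual code: the ARNs, IDs, hostnames, and bindings like `vpc_id`, `listener_arn`, `next_priority`, `zone_id`, and `alb_dns_name` are all placeholders.

```ruby
require "aws-sdk-elasticloadbalancingv2"
require "aws-sdk-route53"

elb = Aws::ElasticLoadBalancingV2::Client.new
r53 = Aws::Route53::Client.new

# 1. A target group the app's containers will register with.
tg = elb.create_target_group(name: "myapp-pod-kyler", protocol: "HTTP",
                             port: 3000, vpc_id: vpc_id)
        .target_groups.first

# 2. A host-based listener rule on the shared ALB.
elb.create_rule(
  listener_arn: listener_arn, priority: next_priority,
  conditions: [{ field: "host-header",
                 values: ["myapp.pod-kyler.example.com"] }],
  actions: [{ type: "forward", target_group_arn: tg.target_group_arn }]
)

# 3. A DNS record pointing the hostname at the ALB.
r53.change_resource_record_sets(
  hosted_zone_id: zone_id,
  change_batch: { changes: [{
    action: "UPSERT",
    resource_record_set: { name: "myapp.pod-kyler.example.com",
                           type: "CNAME", ttl: 60,
                           resource_records: [{ value: alb_dns_name }] }
  }] }
)
```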

With all the necessary pieces in place, Spacepods deploys the application containers to an ECS (Elastic Container Service) cluster, providing them with environment variables and secrets to inject at runtime.
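
Environment variables and secrets travel with the ECS task definition. A trimmed-down sketch of that pattern (required fields like IAM roles are omitted, and all names and values are placeholders, not Spacepods' real configuration):

```ruby
require "aws-sdk-ecs"

ecs = Aws::ECS::Client.new

# Register a task definition whose container receives plain environment
# variables directly and secrets by reference, resolved by ECS at runtime.
ecs.register_task_definition(
  family: "myapp-pod-kyler",
  container_definitions: [{
    name: "myapp",
    image: "registry.example.com/myapp:1234",
    memory: 512,
    environment: [
      { name: "RAILS_ENV",    value: "production" },
      { name: "DATABASE_URL", value: database_url } # assumed binding
    ],
    secrets: [
      { name: "API_KEY",
        value_from: "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/API_KEY" }
    ]
  }]
)
```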

Within minutes of creating the Pod, the developer will have a fully functioning environment in the cloud. Spacepods displays the pertinent Pod information: deploy details to see what is running, links to application logs for debugging, and, of course, links to the running applications to see them in action.

Add the applications you need for your particular purpose

Application Management in Spacepods

Applications are the basic components of Spacepods. Neither application code nor compiled builds are stored in Spacepods; instead, it manages all the metadata needed to coordinate deployments. Each Spacepods application is tied to a GitHub repository, which is used for continuous integration and for fetching some information directly from GitHub. Spacepods also hosts basic application settings that get used by various integrations. Once an application is properly configured, Spacepods records each new build (a Docker image stored in a separate registry) so it can be used in Pods and application deployments.
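
In Rails terms, that metadata might look like a pair of models along these lines (the columns and names are illustrative, not the real schema):

```ruby
# Spacepods stores coordinates for code and images, never the
# artifacts themselves.
class Application < ApplicationRecord
  has_many :builds
  # illustrative columns: name, github_repo, settings (jsonb)
end

class Build < ApplicationRecord
  belongs_to :application
  # illustrative columns: number, git_sha, docker_image_tag
end
```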

Spacepods also provides an interface for stored environment variables and secrets. Applications get a separate set of environment variables and secrets for each supported environment. Without any special configuration, these automatically include a standard set of variables, such as the environment name, the build number, and the URL for the database that the application will use. On top of that, developers can add custom environment variables, which are stored directly in Spacepods, as well as secrets, which are stored in our secrets store and hidden from basic users. This integrated manager lets us edit environment variables and secrets no matter which secret store we are using (we have used Vault by HashiCorp, AWS Parameter Store, and AWS Secrets Manager). Since deployments are handled by the same platform, it is easy to send environment variables along with the application containers when we run them in ECS.

Easy access to environment variables
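
Supporting several secret stores behind one interface suggests an adapter pattern. A sketch of what that could look like with AWS Parameter Store as the backend (the class name and path convention are assumptions for illustration):

```ruby
require "aws-sdk-ssm"

# Each backend implements the same read/write contract, so the UI and
# deploy code don't care which store sits underneath.
class ParameterStoreBackend
  def initialize(client: Aws::SSM::Client.new)
    @client = client
  end

  def read(app, env, key)
    @client.get_parameter(name: path(app, env, key), with_decryption: true)
           .parameter.value
  end

  def write(app, env, key, value)
    @client.put_parameter(name: path(app, env, key), value: value,
                          type: "SecureString", overwrite: true)
  end

  private

  def path(app, env, key)
    "/#{app}/#{env}/#{key}"
  end
end
```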

Applications Containerized with Docker

Not too many years ago, deploys required static data centers and dense runbooks to get code into production. The advent of Docker application containers is a huge reason why we are able to use Spacepods the way we do.

When new commits are available on a branch, GitHub notifies our build machines, which begin the build process. They pull dependencies, run unit tests and other CI tasks, and finally build a Docker image for all the code in the branch. That image gets tagged with a build number and pushed to our Docker registry, after which Spacepods is notified of the successful build and stores the new build as a version of the application that can be used for deployments.
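
A simplified sketch of the final stage of that build flow (the registry URL and Spacepods endpoint are placeholders; dependency installation and test runs would happen before this):

```ruby
require "net/http"
require "json"
require "uri"

def build_and_record(repo:, branch:, build_number:)
  image = "registry.example.com/#{repo}:#{build_number}"

  # Build and publish the Docker image for the branch.
  system("docker", "build", "-t", image, ".") or raise "docker build failed"
  system("docker", "push", image)             or raise "docker push failed"

  # Notify Spacepods so it records the build for future deployments.
  uri = URI("https://spacepods.example.com/api/builds")
  Net::HTTP.post(uri,
                 { repo: repo, branch: branch,
                   number: build_number, image: image }.to_json,
                 "Content-Type" => "application/json")
end
```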

Master Deploys

Deploys to higher-level environments (which we call master environments) go through a slightly different process than Pod deploys. Whereas a personal Pod creates an entire environment to run the application, master environments are not ephemeral — the infrastructure is already provisioned and we just need to replace the running build of the application. We do use Pods to deploy infrastructure for master environments, but they rarely require updates. Master deploys also present an interesting requirement: the running application must maintain availability while we replace it with the new version. That is not the case for Pod deploys, where we start from a clean slate every time. (Even for Pod updates, we start by wiping any running applications.)

Despite the varied requirements of master and Pod deploys, there is a lot of commonality. Master deploys skip the Terraform steps that Pod deploys start with, but both use the same deploy script after that. It locates load balancers (ALBs), sets up the database (performing any necessary migrations), and rolls out the application in stages to ensure a smooth rollover. Even though Pods rely less on some of the fancier steps in the deploy script, using the same script ensures parity between the two deploy processes. At a high level, the script does the following (a simplified sketch follows the list):

  1. Clear any outdated ECS services, in case something got left in a bad state from a previous deploy failure.
  2. Register an ECS task definition with information about the application, a lot of which comes from the environment variables.
  3. Run an ECS task for database setup (e.g., running migrations).
  4. Create the service. At this step, we locate a load balancer for the application and configure any missing routing components. Then we create a new ECS service for the build that is being deployed and register it with the appropriate target group.
  5. Roll out the service. Rollout is broken into stages where the new service’s desired task count is scaled up and the old service is scaled down. At each rollout stage, we poll ECS to ensure the running task count reaches the desired count. At the end, the old service’s desired count reaches zero and the old service is deleted. For auxiliary application containers like Sidekiq workers, we skip the slow rollout. If something fails along the way, then the old service desired count is restored and the new service is discarded.
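
Here is a heavily simplified sketch of steps 4 and 5, assuming a task definition has already been registered and that bindings such as `cluster`, `app_name`, `build_number`, `task_definition_arn`, `target_group_arn`, `old_service_name`, and `total_count` exist (all names are placeholders for illustration, not Spacepods' actual deploy script):

```ruby
require "aws-sdk-ecs"

ecs = Aws::ECS::Client.new

# Step 4: create a new service for the incoming build, attached to the
# application's target group so the ALB can route traffic to it.
new_service = ecs.create_service(
  cluster: cluster,
  service_name: "#{app_name}-#{build_number}",
  task_definition: task_definition_arn,
  desired_count: 0,
  load_balancers: [{ target_group_arn: target_group_arn,
                     container_name: app_name, container_port: 3000 }]
).service

# Step 5: scale the new service up and the old one down in stages,
# waiting for ECS to report stability before the next stage.
[25, 50, 100].each do |percent|
  target = (total_count * percent / 100.0).ceil
  ecs.update_service(cluster: cluster, service: new_service.service_name,
                     desired_count: target)
  ecs.update_service(cluster: cluster, service: old_service_name,
                     desired_count: total_count - target)
  ecs.wait_until(:services_stable, cluster: cluster,
                 services: [new_service.service_name])
end

# The old service is now at zero desired tasks and can be removed.
ecs.delete_service(cluster: cluster, service: old_service_name)
```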

Spacepods generates automated release notes for each deploy. The notes show which software engineers contributed and which Jira tickets were part of the deploy. They also include a link to GitHub to compare the deployed code with the previous version, links to pre-filtered logs, and a link to the results of automated testing (which we call Gatekeeper).

Easily compare what you deployed to a previous deployment

Pods Enable Continuous Integration

Our implementation of continuous integration (CI) brought with it another use for Pods: CI Pods. This special type of Pod is created automatically in the CI process with each new build for an open GitHub pull request. CI Pods take advantage of their ephemeral nature to run automated integration tests against a production-like application setup.
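
Conceptually, the trigger is simple; a hypothetical handler might look like this (every name here is assumed for illustration, not Spacepods' API):

```ruby
# When CI records a new build for an open pull request, create a CI Pod
# and run the integration test suite against it.
def on_build_recorded(build)
  return unless build.pull_request_open?

  pod = Pod.create_for_ci!(application: build.application, build: build)
  Gatekeeper.run_integration_tests(pod)
end
```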

Master Deploys Enable Continuous Deployment

When a CI Pod has been created and tested successfully, a developer may merge their changes into the default branch. With every new build of the default branch, Spacepods initiates the continuous deployment (CD) pipeline for that application, automatically deploying to supported master environments and running automated tests at each stage. Spacepods coordinates every step of the CD process, so changes will be in production in minutes without any manual interaction. Spacepods updates statuses along the way and will stop the pipeline and notify developers if any deploys or tests fail.

Easily view and understand the status of your deployment
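
The orchestration loop amounts to something like this conceptual sketch, where `deploy!`, `gatekeeper_passed?`, and `notify_slack` stand in for Spacepods internals:

```ruby
MASTER_ENVIRONMENTS = %w[qa staging production].freeze

# Promote a build through each master environment in order, gating on
# automated tests and halting the pipeline on the first failure.
def continuous_deploy(app, build)
  MASTER_ENVIRONMENTS.each do |env|
    deploy!(app, build, env)
    next if gatekeeper_passed?(app, build, env)

    notify_slack("#{app} build #{build} failed in #{env}; pipeline stopped.")
    return
  end
end
```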

Spacepods Gets Our Code Deployed Faster

This post only scratches the surface of how Spacepods defines and orchestrates our CI/CD process and environments. Before we had a platform like this to deploy our code and trigger tests, we struggled to keep deployments consistent from development to production. Spacepods provides ephemeral environments as Pods that make development consistent, and a single-pane-of-glass architecture that powers our CI/CD workflow. This lets development be driven by code rather than by the deploy process, removing distractions and freeing our software engineers to focus on building a better car buying experience. With this tool, TrueCar was able to execute on plans to migrate to the cloud and become truly agile while greatly increasing the frequency of deploys, which ultimately speeds up innovation.

We hope you enjoyed this post introducing Spacepods. We are already working on a new version of Spacepods that we will discuss in subsequent posts. Stay tuned for info on how we use Terraform, more on our CI/CD pipeline (including how we test everything), dealing with multiple databases, the data needed to simulate production, our integration with Slack via SpaceBot, and more!

We are hiring! If you love solving problems, please reach out; we would love to have you join us!
