AWS Transit Gateway Routing in Multiple Accounts

Published in Driven by Code · May 13, 2019

By: Regis Wilson

Introduction

TrueCar has been migrating our operations out of two legacy datacenters, starting around 2015 and finally finishing late in 2018. A few remaining pieces of critical infrastructure needed to be decommissioned before we could turn out the lights and power down the entire datacenter footprint. The last bits to turn off were actually some of the most interwoven and complex pieces of infrastructure: our corporate and production networks. It should have been very easy to simply turn off the production networks, since we no longer run any workloads from the datacenter, except that our employees still used it to VPN in and reach AWS over Direct Connect links.

AWS’s Direct Connect product was a great feature released at a time when we needed it. We were able to link the networks in our legacy datacenter to the new Virtual Private Cloud (VPC) networks that were being created in AWS for our platform. This convenient and helpful feature allowed us to move data across a private link between our legacy and AWS platforms, with all the security and privacy you expect from on-premises networks. But the feature actually became like the supposed ancient Chinese proverb: “Be careful what you wish for, lest it come true.”

Ultimately, the last vestiges of our datacenter dependencies lay in the network routing, which is central even to the corporate office networking! In the original production network design, corporate networks were actually terminated and routed in the datacenter. Thus, moving our computing workloads, storage, and data pipelines to AWS was only about 90% of the effort. We also needed to turn off all the networking equipment in the datacenter without causing our corporate offices and our access to our AWS VPCs to go dark. But there’s an even darker side to our datacenter-centric networking: the Direct Connect links to our AWS VPCs form a set of “spokes” that connect to each other and route via the “hub” at the datacenter.

The way to best describe this dependency is that it’s similar to sewing together two bodies to create a Siamese Frankenstein monster by moving the brain, limbs, organs, and blood over to the new body, but then realizing you also need to move over the heart that is still beating in the old torso. We somehow needed to move from Figure A to Figure B, and then to Figure C.

Original state with only Direct Connect routing in place
Transit Gateway with Direct Connect

The Design

Of course, our actual network design is much more complex than that and includes several accounts, each tied to several “environments.” Some environments may be similar to others (say, “development” and “QA”), while others should be entirely separate (say, “production”). Still other environments are shared by all (namely, “common” or “services”). Part of this design is a legacy ported over from our datacenter operations, but some parts make sense and allow us to stay flexible rather than rigid (such as treating “QA” and “development” as similar).

Fortunately, AWS released the Transit Gateway product, which allowed us to remove the dependency on our legacy routers and Direct Connect links so that we could tie all our environments together in one place without creating a mesh of peers. We could also create routing policies for each of our types of environments: for example, “preprod” and “prod” could share “common,” but not have direct routes to each other. Security groups inside each environment could protect traffic at the application and group level.

Our proposed design to include Transit Gateway appears in Figure D.

It only looks complicated…

This somewhat complicated design required a central configuration repository or codebase, such as CloudFormation, to keep everything consistent and up to date. We would never attempt to maintain this design manually by clicking around in the console management interface. Not only would the design entail multiple copies of configuration that must all be kept in sync, but the configuration would be spread across multiple accounts, forcing an administrator to log in and out of each account, clicking around and hoping that everything was correct and lined up properly.

Fortunately, we use Terraform at TrueCar for our infrastructure management, so all of our existing VPC structures were already configured in “code” (or at least version-controlled configuration template documents). Since we didn’t find many existing use cases or documents on the internet that outlined how people were using AWS Transit Gateway in this way, we set about building the proposed architecture ourselves and are sharing our results here with you, the internet reader.

Getting Started

The first part of creating an AWS Transit Gateway that works between accounts is to enable AWS Resource Access Manager and share the gateway with all of your accounts. We have a fairly formidable list of a dozen or so accounts, and we were already using the new Resource Access Manager to share common cross-account resources. We adopted the “hub-and-spoke” model because it scales well enough for our purposes and allows us to group resources together in a way that makes sense for our business. (Keep in mind that AWS Transit Gateway does not currently cross regions without a VPN connection; when it does, we might introduce “hub-to-hub” connections to bring multiple regions together.)

Configuring the Hub

Here is a snippet that creates a Transit Gateway and Resource Access Manager (RAM) share:
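A minimal sketch of that snippet might look like the following; the resource labels are illustrative, and var.allow_external_principals holds the list of spoke account IDs described below.

```hcl
# Spoke account IDs that may use the shared Transit Gateway (illustrative variable).
variable "allow_external_principals" {
  type = list(string)
}

resource "aws_ec2_transit_gateway" "hub" {
  description = "Central hub Transit Gateway"

  # Accept spoke attachments automatically so no manual handshake is needed.
  auto_accept_shared_attachments = "enable"

  # We create and associate route tables explicitly, so the defaults are off.
  default_route_table_association = "disable"
  default_route_table_propagation = "disable"
}

resource "aws_ram_resource_share" "tgw" {
  name                      = "transit-gateway"
  allow_external_principals = true
}

resource "aws_ram_resource_association" "tgw" {
  resource_arn       = aws_ec2_transit_gateway.hub.arn
  resource_share_arn = aws_ram_resource_share.tgw.arn
}

# Share the Transit Gateway with each spoke account.
resource "aws_ram_principal_association" "spokes" {
  count              = length(var.allow_external_principals)
  principal          = var.allow_external_principals[count.index]
  resource_share_arn = aws_ram_resource_share.tgw.arn
}
```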

By default, we turn ON “auto_accept_shared_attachments” (for automation) and turn OFF “default_route_table_*” (because we explicitly create association and propagation tables). We set “var.allow_external_principals” to a list of account IDs.

Next, we configure three routing tables for the Transit Gateway. These three routing configurations will hold the routing policies for our three types of environments, namely “common” (shared), “prod” (anything production), and “preprod” (anything that isn’t shared or production). We create a hard-coded list of the routes that are reachable in “common”; this was the easiest way to drop routes into every routing table. Here is just one snippet showing “common”; the same pattern is repeated for “preprod” and “prod.”
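A sketch of that snippet; var.common_routes (the hard-coded route list) and var.common_attachment_id (the attachment that carries those routes) are illustrative names:

```hcl
# One Transit Gateway route table per environment type; this is the "common" one.
# The same pair of stanzas is repeated for "preprod" and "prod".
resource "aws_ec2_transit_gateway_route_table" "common" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id

  tags = {
    Name = "common"
  }
}

# Hard-coded summary routes for the networks reachable in "common", dropped into
# this routing table (and repeated in the other two).
resource "aws_ec2_transit_gateway_route" "common" {
  count                          = length(var.common_routes)
  destination_cidr_block         = var.common_routes[count.index]
  transit_gateway_attachment_id  = var.common_attachment_id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.common.id
}
```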

Each of the routing tables associated with the Transit Gateway acts like a Virtual Routing and Forwarding (VRF) table. We can then associate each environment’s spoke VPCs with a particular routing domain and also export, or “propagate,” the routes from spokes to other routing domains for reachability as needed.

One of the ways we simplify management of the various routes and routing domains is to use summary routes to describe our different environments. The heuristic that we use for creating environments and routing domains is to always align on network boundaries that are powers of two, and to always start a network group on an even boundary. To show you what we mean in a bit more detail, here’s an example table that shows how we can partition the network boundaries:

Our network partition boundaries
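As a purely hypothetical illustration (these CIDRs are not our actual ranges), such a partition can be captured as a simple map of summary routes, one per routing domain, each sized to a power of two and starting on an even boundary:

```hcl
# Hypothetical address plan for illustration only.
locals {
  summary_routes = {
    common  = "10.0.0.0/14"  # 10.0.0.0  - 10.3.255.255,  shared services
    preprod = "10.16.0.0/13" # 10.16.0.0 - 10.23.255.255, dev, QA, etc.
    prod    = "10.32.0.0/13" # 10.32.0.0 - 10.39.255.255, production
  }
}
```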

This is just an example designed to show how routing by summary routes could be set up. We encourage you to spend some time mapping out these boundaries before you build your network, so that you can add new routes and environments later without changing ACLs or routing tables everywhere.

Configuring the Spoke(s)

As a prerequisite to starting on a spoke, we need a few pieces of information, which we probably already have or can gather easily:

  • The name of the environment (“QA”)
  • The type of environment (“preprod”)
  • The unique (non-overlapping) network CIDR for the VPC (“192.168.16.0/20”)
  • At least one subnet to attach to the Transit Gateway for routing (we typically use three, since we default to creating three subnets in separate Availability Zones, but you are only required to use a single subnet for all traffic to route in and out via Transit Gateway)
  • At least one route table that includes the subnet(s) we use for routing so that we can add routes out to the hub

The best way to gather the subnet information above is to look for subnets with a particular tag (we use the tag name “Resource” and then look for tag values such as, say, “Core”). Using tags allows us to query for subnets that will be swept up and attached at runtime, rather than hard-coding or writing complex logic to find interesting subnets in interesting Availability Zones. It’s then easy to find out which routing tables are attached to those subnets, which gives us most of the information we need at runtime without any hard-coding.

(There’s one small piece of hard-coded logic in which we rely on there being exactly three such subnets in each environment. Since we use the same Terraform module for each VPC creation, this is a safe assumption.)

Terraform has a neat facility, data sources, for querying and filtering existing AWS resources so that the results can be used later. Here’s how we do it in this case:
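A sketch of those lookups, assuming the VPC ID is passed into the module as var.vpc_id:

```hcl
# Find this VPC's "Core" subnets by tag rather than hard-coding subnet IDs.
data "aws_subnet_ids" "core" {
  vpc_id = var.vpc_id

  tags = {
    Resource = "Core"
  }
}

# Find the route table associated with those subnets so we can add routes later.
data "aws_route_table" "core" {
  subnet_id = tolist(data.aws_subnet_ids.core.ids)[0]
}

locals {
  core_subnet_ids = tolist(data.aws_subnet_ids.core.ids)
}
```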

Now that we’ve gathered enough information to use, we merely need to associate the VPC with the Transit Gateway routing table and add any needed routes to draw traffic out (using summarized routes). Lastly, we need to add a propagation to each Transit Gateway Routing Table so we can export routes to the appropriate domain. In our use case, every environment wants to export its routes to “common” and, similarly, “common” wants to export all of its routes to every environment.

Going step by step, this next section adds the VPC attachment on the spoke side and the corresponding hub side, and the “handshake” allows both sides to connect across accounts via the previously created Resource Access Manager share. We also add a “null resource” that updates the “Name” tag for the attachment in the hub account (because, at the time of this writing, tags do not transfer across accounts).
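A sketch of the spoke-side resources; the input names (var.transit_gateway_id, var.environment_name, var.summary_routes) and the “hub” CLI profile used for tagging are illustrative. Because the hub turns on auto_accept_shared_attachments, the hub side accepts the attachment automatically (otherwise an aws_ec2_transit_gateway_vpc_attachment_accepter resource would handle the hub-side handshake).

```hcl
# Spoke side: attach this VPC to the shared Transit Gateway. Indexing the three
# "Core" subnets explicitly is the hard-coded three-subnet assumption noted below.
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
  transit_gateway_id = var.transit_gateway_id # shared from the hub account via RAM
  vpc_id             = var.vpc_id

  subnet_ids = [
    local.core_subnet_ids[0],
    local.core_subnet_ids[1],
    local.core_subnet_ids[2],
  ]

  tags = {
    Name = var.environment_name
  }
}

# Draw traffic for the summarized networks out of the VPC toward the Transit Gateway.
resource "aws_route" "to_tgw" {
  count                  = length(var.summary_routes)
  route_table_id         = data.aws_route_table.core.id
  destination_cidr_block = var.summary_routes[count.index]
  transit_gateway_id     = var.transit_gateway_id
}

# Tags do not transfer across accounts, so set the attachment's Name tag in the
# hub account out of band (the "hub" CLI profile name is an assumption).
resource "null_resource" "attachment_name_tag" {
  triggers = {
    attachment_id = aws_ec2_transit_gateway_vpc_attachment.spoke.id
  }

  provisioner "local-exec" {
    command = "aws ec2 create-tags --profile hub --resources ${aws_ec2_transit_gateway_vpc_attachment.spoke.id} --tags Key=Name,Value=${var.environment_name}"
  }
}
```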

(Sharp-eyed detectives may notice the hard-coded dependency of using exactly three subnets above).

The next part adds one route table association per environment type and also adds the bidirectional propagation for the “common” type. This code is hard to read because there are a lot of duplicated (but slightly different) stanzas.
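A sketch of one group of those stanzas, assuming an aws.hub provider alias pointed at the hub account and illustrative variables for the route table IDs:

```hcl
# Associate the spoke attachment with the routing domain for its environment type.
# Shown for "preprod"; the "prod" and "common" stanzas are near-duplicates.
resource "aws_ec2_transit_gateway_route_table_association" "preprod" {
  provider = aws.hub
  count    = var.environment_type == "preprod" ? 1 : 0

  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.spoke.id
  transit_gateway_route_table_id = var.preprod_route_table_id
}

# Every non-"common" environment exports (propagates) its routes into "common"...
resource "aws_ec2_transit_gateway_route_table_propagation" "to_common" {
  provider = aws.hub
  count    = var.environment_type == "common" ? 0 : 1

  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.spoke.id
  transit_gateway_route_table_id = var.common_route_table_id
}

# ...and a "common" spoke exports its routes into every environment's table
# (a matching "common_to_prod" stanza propagates into the "prod" table).
resource "aws_ec2_transit_gateway_route_table_propagation" "common_to_preprod" {
  provider = aws.hub
  count    = var.environment_type == "common" ? 1 : 0

  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.spoke.id
  transit_gateway_route_table_id = var.preprod_route_table_id
}
```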

This hard-to-read section is just a bunch of copy-pasted templates that execute the logic described above. Please note also that each route table association and propagation is implemented in the hub account via the “provider” block, not via the spoke account.

Conclusion

Transit Gateway has proven to be a flexible and valuable service that allows us to migrate from our legacy routing infrastructure to the cloud. Utilizing a hub-and-spoke model rather than a mesh of peers gives us improved visibility and greater control over our routing policies. We can also capture all of the policies and connections in “code” via Terraform. Here is a final snapshot of a simplified logical view of two VPCs connected with all the components listed in the code above.

We are hiring! If you love solving problems, please reach out; we would love to have you join us!
