Automating Ember.js App Deployment on AWS

Piotr Steininger
19 min read · Sep 4, 2018


About a year ago, one of my clients, LaborVoices, was accepted into and successfully graduated from the 500 Startups program. As one of the perks, the company received some AWS credits, which was one of the motivations to consider, yet again, moving the systems directly onto AWS (they currently run on Heroku). At the same time, as LaborVoices CTO, I was weighing options for rebuilding the internal systems, which handle many functions: content creation and deployment, IVR systems, data collection, verification, analysis and more.

I knew I wanted to use Ember again for the frontend, as it had proven successful with one of our more recent products, Symphony (a supply chain visibility product). For the backend, I chose Elixir and the Phoenix framework, because I simply couldn't resist the temptation of learning them. This story, however, will be strictly Ember related. There will be a whole other series on the backend, as it is far more complex, frustrating and involved than the frontend work.

I knew that comprehending and wrangling all the big and small parts of AWS would be a tough task. For that I secured the help of a local AWS guru, Renato Losio, and together we chose Terraform to manage the initial and ongoing complexity of the AWS infrastructure we were about to create.

The End Goal

What I really wanted was a setup similar to Heroku's, where I, as a developer, can push to the master branch and have it auto-deploy to staging, and, conversely, push to the production branch and have that deploy to the production environment (this is one way of using Heroku's Pipelines feature). I wanted it to run cheaply, efficiently, and relatively fast. Keeping not only the application code but also the infrastructure setup comprehensible was another priority. Last, but not least, we wanted to set up a DNS entry in Cloudflare pointing to the host URL (be it S3 or Cloudfront), because Cloudflare was already in use at LaborVoices.

The Setup

There are two parts to the setup: the easy and the messy. The easy part is the installation of Ember add-ons and the configuration of ember-cli-deploy plugins to be driven by environment variables. The messy part is all of the Terraform setup, as well as the manual and Terraform-driven pieces of the AWS puzzle.

The Easy Part

Ember has an awesome set of add-ons which make configuring Ember for deployment to AWS easy. There are several variants for using AWS services for deployment, but I decided on the following add-ons:

  • ember-cli-deploy-s3-pack, which includes the s3, s3-index and other nifty plugins

  • ember-cli-deploy-cloudfront, which takes care of CF invalidations. I opted for Cloudfront not just for the speed of content delivery worldwide, but also because I really don't like to use "#" in URLs (see the sketch just after this list).
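
Dropping the "#" means the app relies on Ember's history-based location type. Here is a sketch of the relevant setting in config/environment.js (illustrative only; newer Ember apps default to a locationType of "auto", which already uses the history API where supported, and the module prefix below is hypothetical):

// config/environment.js (excerpt), illustrative sketch
module.exports = function (environment) {
  let ENV = {
    modulePrefix: 'ember-deploy-app', // hypothetical app name
    environment,
    rootURL: '/',
    locationType: 'history', // real URLs like /posts/1 instead of /#/posts/1
    APP: {}
  };

  return ENV;
};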

The (rather) Complex Part

What the ember-cli-deploy ecosystem does not provide is a facility to run automated builds and push the artifacts to S3; on its own, this setup only allows for manual deploys.
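
For context, with the add-ons above configured, a manual deploy is a single command per environment (ember-cli-deploy reads the target name and picks the matching settings from config/deploy.js):

ember deploy staging
ember deploy production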

In order to automate this process, I wanted a one-stop shop for setting up infrastructure, not just for deployment but also for CI. Why not CodeShip, CircleCI or Travis? Well, they are more expensive than CodeBuild, and I wanted to milk our AWS credits. So I identified the following stack as what we need in order to enjoy a Heroku-like experience:

  • S3 bucket for assets and index (same bucket is enough, so let’s keep it simple)
  • Cloudfront Distribution, using the above bucket as origin
  • CodeBuild to run the build, test and deploy commands
  • Cloudflare for DNS
  • Terraform to put it all together, and easily maintain both staging and production

Piecing it all together — Step By Step

Step 1 — Ember Application setup

For the purpose of this post, I created a repo with a very basic Ember app and installed the necessary add-ons. I also went ahead and simplified the configuration so as to avoid repetition. In real life you may have separate AWS accounts for each environment you deploy to (some bigger teams do that to prevent dumb mistakes), but this will not be necessary in our case. I will use the same IAM user and credentials for both production and staging. The region will also be the same; only the bucket names and CF distribution names will differ. The details can be viewed in this repo.

Here are the commands you would run in your application (this has already been done in the example app):

ember install ember-cli-deploy
ember install ember-cli-deploy-s3-pack
ember install ember-cli-deploy-cloudfront

Then, reconfigure config/deploy.js to be similar or identical to the one in the repo.
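
For orientation, here is a minimal sketch of a config/deploy.js driven by environment variables; the one in the repo may differ in detail, and the variable names below anticipate the ones we will later set on the CodeBuild project:

// config/deploy.js, a minimal sketch driven by environment variables
module.exports = function (deployTarget) {
  let target = deployTarget.toUpperCase(); // "STAGING" or "PRODUCTION"

  // AWS credentials are expected to come from the environment or the build role
  // (the s3 plugins fall back to the AWS SDK's default credential chain).
  let ENV = {
    build: {
      environment: 'production' // always ship a production build
    },

    s3: {
      bucket: process.env[`ASSETS_${target}_BUCKET`],
      region: process.env.AWS_REGION
    },

    's3-index': {
      bucket: process.env[`INDEX_${target}_BUCKET`],
      region: process.env.AWS_REGION,
      allowOverwrite: true
    },

    cloudfront: {
      distribution: process.env[`CLOUDFRONT_DISTRIBUTION_ID_${target}`]
    }
  };

  return ENV;
};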

Step 2 — AWS IAM User for Terraform to use

Log into your AWS console and create a user with full management permissions. You might want to tighten the permissions later, but for simplicity, I would recommend starting this way.

Step 3 — Setup your local environment (AWS CLI, Terraform, etc)

If you're on a Mac, the easiest way to get things set up is through Homebrew. Once you have Homebrew installed, install the AWS CLI and the Terraform client:

brew install awscli
brew install terraform

Then, create an AWS profile with the credentials of the user you just added. Using a profile with AWS CLI is a neat way to ensure you’re not doing the wrong thing in the wrong place. It is particularly useful for AWS consultants working on many different client accounts. For us, it will simplify the Terraform Provider setup. So let’s create the profile:

aws configure --profile emberjs

and follow the prompts. After you're done, you can check your work and see how the profile is stored:

> cat ~/.aws/credentials
...
[emberjs]
aws_access_key_id = *****
aws_secret_access_key = *****
...

Now, if you use Cloudflare, log in and get your API token; you will need it in the following step. Also, go ahead and generate a Personal Access Token on GitHub. You will need it in the next step to configure the GitHub provider and to set up post-commit hooks between GitHub and CodeBuild. Once you have generated the two tokens, set them aside for a minute.

Step 4 — Terraform Project Setup

Now that we have most of the access concerns taken care of, and we have a repo to which we will push our Ember.js code and from which CodeBuild will build and deploy, it's time to set up the Terraform project. Create a local folder (e.g. ember-deploy-aws), initialize a git repo there, and set up a corresponding repo on GitHub.

You can see how I set the repo up and follow the story commit-by-commit in this repo.

Now that you have your infrastructure repo and local directory, it is time to get started. First, I recommend setting up a variables.tf file.

It's worth noting that the naming of the files is arbitrary. I looked at the work of others and adopted some conventions, but Terraform doesn't care what your files are named: it will load all .tf and .tf.json files it finds. I like separating complexity, hence the naming.

The purpose of variables.tf in the root of the project is to define variables that Terraform can use in the code without hard-coding the values and committing them; they just need to be provided at runtime. For example:

#variables.tf
variable "github_token" {
  type = "string"
}

Terraform will look for an environment variable named TF_VAR_github_token. If it doesn't find one, it will prompt you for the value each time you run terraform init|plan|apply and it encounters a reference to that variable in code, as we do in providers.tf, the file we'll designate for setting up the providers. For example:

#providers.tf
provider "github" {
  organization = "psteininger"
  token        = "${var.github_token}"
  version      = "~> 1.1"
}

So to set up the variables, you’d have to run the following lines in the terminal you’ll use to run terraform commands:

export TF_VAR_cloudflare_api_token=<your_cf_token>
export TF_VAR_cloudflare_email=<your_cf_email_address>
export TF_VAR_github_token=<your_github_token>

Alternatively, you can add these exports to your profile file (for example ~/.bash_profile), or put them in a shell script and run source env.sh to set them up temporarily.

So let's put it all together and see what our variables.tf and providers.tf look like:

#variables.tf
variable "cloudflare_email" {
  type = "string"
}

variable "cloudflare_api_token" {
  type = "string"
}

variable "github_token" {
  type = "string"
}

#providers.tf
provider "aws" {
  region  = "eu-central-1"
  profile = "emberjs"
  version = "~> 1.23"
}

provider "cloudflare" {
  email   = "${var.cloudflare_email}"
  token   = "${var.cloudflare_api_token}"
  version = "~> 1.0"
}

provider "github" {
  organization = "psteininger"
  token        = "${var.github_token}"
  version      = "~> 1.1"
}

You will notice above in providers.tf that I hard-coded the region and the AWS CLI profile I created previously. The reason each provider has its own version constraint is that each provider is an independent open-source project with its own release path. Pinning versions also guards against accidental upgrades and potential breaking changes.

Now we should be able to initialize local state in our project directory by running terraform init. This will download the provider code and validate the setup.

There is one optional sub-step left in the Terraform setup: the creation of a remote-state store. You can read more about Terraform's remote state and its applications, but for the purpose of this blog it's enough to understand that it allows developers to collaborate: it keeps things in sync and prevents them from stepping on each other's toes by running terraform plan or terraform apply at the same time. There are a number of ways to handle remote state, but one of the easiest is a combination of an S3 bucket and a DynamoDB table. You will have to create these either via the console or the CLI; just make sure the user and credentials you set up earlier have proper access to both. Here is how you could configure the remote state backend:

#state.tf
terraform {
  backend "s3" {
    bucket         = "my-ember-tf-state"
    key            = "state"
    region         = "eu-central-1"
    profile        = "emberjs"
    dynamodb_table = "my-ember-tf-state"
  }
}

The only requirement for the DynamoDB table is a LockID primary key of type string. You can read more about state locking with S3/DynamoDB here.

Note that the backend definition does NOT support any interpolation, so the values for the S3 bucket and DynamoDB table must be hard-coded. I also highly advise making sure the S3 bucket is encrypted, as the state contains sensitive data (e.g. the GitHub token).
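
If you prefer the CLI over the console, something along these lines should create both (a sketch only; the bucket and table names must match state.tf, and the encryption step covers the advice above):

aws s3api create-bucket \
  --bucket my-ember-tf-state \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1 \
  --profile emberjs

aws s3api put-bucket-encryption \
  --bucket my-ember-tf-state \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}' \
  --profile emberjs

aws dynamodb create-table \
  --table-name my-ember-tf-state \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
  --region eu-central-1 \
  --profile emberjs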

After the bucket and the DynamoDB table are created, run terraform init and you should be good to go.

One final, though not optional, step is to create workspaces in Terraform. A workspace simply delineates a distinct environment. We will create two of them: staging and production. We will use these names to drive some naming conventions and logic (e.g. the master branch deploying to the staging environment). So let's create our workspaces:

terraform workspace new production
terraform workspace new staging

Now we can check all of our workspaces:

$ terraform workspace list
default
production
* staging

We are all set, and the staging workspace is selected. We will set up our hosting first and then build out the CI/CD portion. So, let's get to work.

Hosting on S3

Terraform allows us to break the complexity of our infrastructure out into modules. These can be reused and even shared via GitHub. Sometimes it's nice to divide them purely for organizational reasons, although in many cases simply splitting the code up into separate files may be enough. A module may be overkill here, but I do want to demonstrate this feature of Terraform.

In order to create a module, one has to create a directory and add a *.tf file. The convention is to use a file called main.tf for the resources, variables.tf for variables to be passed into the module, and data.tf for defining local and remote data elements (e.g. IAM policy documents, existing AWS resources, etc.), but you may choose something different.

mkdir hosting
touch hosting/main.tf
touch hosting/variables.tf
touch hosting/data.tf

Variables

We are going to define several variables: the name of your project/app and your domain name, and from those we'll calculate the bucket name to be something like app.example.com for production and app-staging.example.com for staging. So let's get to it:

#hosting/variables.tf
variable "name" {
  description = "name of the hosted app"
  type        = "string"
  default     = "app" # or "app-staging"
}

variable "app_domain" {
  description = "The domain prefix for the bucket name, which comes after either `s3-asset-path`"
}

locals {
  bucket_name = "${var.name}.${var.app_domain}"
}

You can see that we defined two variables and one local value, whose scope is limited to this module. The bucket_name local will save us from duplicating this logic in many places. We will pass in name and app_domain to feed it, as well as other resources.

The Bucket

The next step is to define our S3 bucket and give it some configuration. Let’s put the following in main.tf:

resource "aws_s3_bucket" "frontend" {
  bucket = "${local.bucket_name}"
  acl    = "public-read"

  website {
    index_document = "index.html"
  }

  cors_rule {
    allowed_headers = [
      "*",
    ]

    allowed_methods = [
      "GET",
      "HEAD",
    ]

    allowed_origins = [
      "http://${local.bucket_name}",
      "https://${local.bucket_name}",
    ]

    expose_headers = [
      "ETag",
    ]

    max_age_seconds = 3000
  }

  tags {
    Name        = "${var.name}"
    Environment = "${terraform.workspace}"
  }
}

You can see that we used our pre-calculated bucket_name in several places, as well as referenced the app name and the name of the workspace in the tags section. There is also a CORS rule, which may or may not be necessary, though it will not hurt. The acl of public-read is optional if you will definitely use Cloudfront, but, again, it does not hurt.

Access Policy

The next step is to define an IAM Policy document to allow access to the S3 bucket. In Terraform, you will see that a Policy document is not necessarily tied to a resource like an S3 bucket. So let’s open up data.tf and paste the following in:

#hosting/data.tf
data "aws_iam_policy_document" "bucket_policy" {
  statement {
    effect = "Allow"

    principals {
      identifiers = [
        "*",
      ]

      type = "AWS"
    }

    actions = [
      "s3:GetObject",
    ]

    resources = [
      "${aws_s3_bucket.frontend.arn}/*",
    ]
  }
}

The policy will allow anyone to read any file in the bucket Terraform will create; the bucket ARN will be interpolated by Terraform. But this isn't enough: the policy still needs to be attached to the bucket. So let's do that:

#hosting/main.tf
...
resource "aws_s3_bucket_policy" "frontend" {
  bucket = "${aws_s3_bucket.frontend.id}"
  policy = "${data.aws_iam_policy_document.bucket_policy.json}"
}
...

Here we create an actual S3 bucket policy based on the JSON representation Terraform generates from the IAM policy document we just added. At this point we are done with the S3 setup.

Cloudfront Distribution

We can now continue with the Cloudfront distribution setup in main.tf. This is the largest chunk of code thus far, so hang in there with me. It also takes the longest to instantiate, because the distribution has to propagate globally. Here's the code:

#hosting/main.tf
resource "aws_cloudfront_distribution" "frontend" {
  origin {
    domain_name = "${aws_s3_bucket.frontend.bucket_regional_domain_name}"
    origin_id   = "${aws_s3_bucket.frontend.bucket}"
  }

  default_cache_behavior {
    allowed_methods = [
      "GET",
      "HEAD",
    ]

    cached_methods = [
      "GET",
      "HEAD",
    ]

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    target_origin_id       = "${aws_s3_bucket.frontend.bucket}"
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  custom_error_response {
    error_code         = "404"
    response_page_path = "/index.html"
    response_code      = "200"
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  aliases = [
    "${aws_s3_bucket.frontend.bucket}",
  ]

  price_class = "PriceClass_200"

  tags {
    Name        = "${var.name}"
    Environment = "${terraform.workspace}"
  }
}

So let’s step through it. First you will see the following block:

origin {
  domain_name = "${aws_s3_bucket.frontend.bucket_regional_domain_name}"
  origin_id   = "${aws_s3_bucket.frontend.bucket}"
}

We make a reference to the S3 bucket we defined and are able to get a bucket_regional_domain_name from it. This is an S3 bucket address that includes the region, which limits the number of redirects and thus speeds up delivery. The origin_id is the name of the bucket itself. Both are necessary.

The next chunk specifies cache behavior; we won't analyze it in depth. You can read more about the settings, including Price Class, on the AWS Cloudfront docs site.

custom_error_response {
  error_code         = "404"
  response_page_path = "/index.html"
  response_code      = "200"
}

The next notable chunk deals with custom error responses. The block above instructs CF to serve index.html on a 404 and return status 200, which lets the Ember app handle the route. This is what gives us history-based navigation (so no pesky "#" in the URL). That is it for the Cloudfront setup. The last item is to set up the DNS entry. We'll use Cloudflare for that (note that we already set up the provider, so this will be easy).

Cloudflare DNS entry

This is where Terraform proves fantastic. We can not only document our AWS infrastructure in code, but link it with resources elsewhere, like on Cloudflare. So let’s open up main.tf and add the necessary code.

resource "cloudflare_record" "frontend-dns" {
  domain  = "${var.app_domain}"
  name    = "${var.name}"
  type    = "CNAME"
  value   = "${aws_cloudfront_distribution.frontend.domain_name}"
  proxied = true
}

Here, we use the domain name, our app name (which may include "-staging"), and the URL of the Cloudfront distribution. We set proxied to true because Cloudflare provides us with a free wildcard certificate, and we used the default certificate on Cloudfront, for simplicity and to cut costs.

Making it run

At this point, if you ran terraform plan, nothing would happen; that is because the module is essentially orphaned. We need to reference it from the root directory of the project and provide it with variables. Before we do that, we should set up some outputs. Outputs are values that the hosting module will expose to the root of the project, allowing you to pass them into the CI/CD portion.

Outputs

Let’s set up an outputs.tf file, like so:

output "host_bucket" {
  value = "${aws_s3_bucket.frontend.bucket}"
}

output "distribution_id" {
  value = "${aws_cloudfront_distribution.frontend.id}"
}

Instantiating the Module

Before we instantiate the hosting module, let's set up a local value to decide on the subdomain of our application ("app" or "app-staging"). So let's open up variables.tf and add:

variable "name" {
  type    = "string"
  default = "app"
}

locals {
  env_suffix = "${terraform.workspace == "production" ? "" : join("", list("-", terraform.workspace))}"
  name       = "${var.name}${local.env_suffix}"
}

We set up the name variable globally, with a default, and then use Terraform's (admittedly convoluted) conditional syntax to calculate an environment-specific suffix, which we append to form local.name; that value, in turn, is what we will pass into the hosting module. In the staging workspace, local.name works out to "app-staging"; in production, the suffix is empty and it stays "app".

Now is the time to add a main.tf file to the root directory and instantiate the hosting module:

module "hosting" {
  source     = "./hosting"
  app_domain = "${var.app_domain}"
  name       = "${local.name}"
}

Yes, that is REALLY it :). Now, run terraform init and, if all looks good, terraform plan. The latter command should tell you that it will create the resources we have defined in our code so far. Take the time to read through the output and familiarize yourself with it. Our next step is to set up CI/CD with CodeBuild.

Continuous Integration with CodeBuild

Before we dive into the Terraform code that sets up the CI/CD part, let's take a moment to understand what CodeBuild is and how it works. CodeBuild uses a docker image to spin up an instance that runs your build per the details in a buildspec.yml file, which you will find in the example app repo. As such, it needs a service role with appropriate "assume role" and "permissions" policies. The first simply allows the instance to assume the service role; the latter drives which permissions that service role has, and thus which external resources the build instance can access and what it can do with them.
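
For orientation, a buildspec.yml for this kind of project can be as small as the sketch below; the actual file in the example app repo may differ, and EMBER_CLI_DEPLOY_TARGET is one of the environment variables we will set on the CodeBuild project:

# buildspec.yml (sketch)
version: 0.2

phases:
  install:
    commands:
      - npm install
  build:
    commands:
      # the deploy target is "staging" or "production", injected by Terraform below
      - node_modules/.bin/ember deploy $EMBER_CLI_DEPLOY_TARGET --activate --verbose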

We will create another module, called ci, and follow the same conventions as before.

Service Role and Permissions

So let's start by creating the first policy document (assume role), which we will then attach to the role:

#ci/data.tf
data "aws_iam_policy_document" "assume-role" {
  statement {
    effect = "Allow"

    principals {
      identifiers = [
        "codebuild.amazonaws.com",
      ]

      type = "Service"
    }

    actions = [
      "sts:AssumeRole",
    ]
  }
}

Then let’s create the role and link this policy:

#ci/main.tf
resource "aws_iam_role" "ci" {
  name               = "${var.name}-ci"
  assume_role_policy = "${data.aws_iam_policy_document.assume-role.json}"
}

So far so good. Now, let’s take a look at the access policy:

#ci/data.tf
...
data "aws_region" "current" {}

data "aws_iam_policy_document" "ci-access" {
  statement {
    effect = "Allow"

    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]

    resources = [
      "arn:aws:logs:${data.aws_region.current.name}::log-group:/aws/codebuild/${var.name}",
      "arn:aws:logs:${data.aws_region.current.name}::log-group:/aws/codebuild/${var.name}:*",
    ]
  }

  statement {
    effect = "Allow"

    actions = [
      "ssm:GetParameters",
    ]

    resources = [
      "arn:aws:ssm:${data.aws_region.current.name}::parameter/CodeBuild/*",
      "arn:aws:ssm:${data.aws_region.current.name}::parameter/${var.name}/*",
    ]
  }

  statement {
    effect = "Allow"

    actions = [
      "s3:GetObject",
      "s3:PutObject",
      "s3:PutObjectACL",
      "s3:ListBucket",
    ]

    resources = [
      "arn:aws:s3:::${var.host_bucket}",
      "arn:aws:s3:::${var.host_bucket}/*",
    ]
  }

  statement {
    effect = "Allow"

    actions = [
      "cloudfront:CreateInvalidation",
    ]

    resources = [
      "*",
    ]
  }
}
...

In the above document, the first thing we do is create a lookup for the current AWS region. Then we use the looked-up value (data.aws_region.current.name) to restrict the resources these permissions apply to. It is always good practice to follow the principle of least authority when designing and building systems, especially when automation is involved. The above policy document allows the following:

  • CodeBuild to capture and store the build logs
  • Access to parameters kept in Parameter Store (SSM) under the CodeBuild path and under your custom path
  • Access to S3, which allows the deployment and activation
  • Creating a Cloudfront invalidation, so visitors see the new version of the code

Now that we have the access policy in place, let’s attach it to the role:

#ci/main.tf
...
resource "aws_iam_role_policy" "ci" {
  policy = "${data.aws_iam_policy_document.ci-access.json}"
  role   = "${aws_iam_role.ci.id}"
}

Now, let’s get to the meat and bones of it all, the CodeBuild Project.

CodeBuild Project

Aside from the permissions we set up above, the CodeBuild project will need to know the following:

  • the source of the code to build (specifically the HTTP URL for cloning), and an OAuth token if needed to access it
  • whether or not to create an artifact bundle (in our case we will skip it)
  • the docker image to use (AWS provides a few that will suffice)
  • the environment variables to be interpolated at build time (e.g. the CF distribution ID, host buckets, and even URL(s) for your backend APIs)

Since version 1.2, the GitHub provider allows you to look up an existing repo (this was previously a headache), so let's add a lookup reference:

#ci/data.tf
...
data "github_repository" "app-repo" {
  full_name = "${var.app_repo}"
}

The actual name of the repo is pulled out into a module variable, with a default. We will set up the other important variables here as well:

#ci/variables.tf
...
variable "app_repo" {
  description = "repository <org_or_user/repo_name> with the code of the Ember app being deployed"
  type        = "string"
  default     = "psteininger/ember-deploy-app"
}

variable "github_token" {
  type = "string"
}

variable "backend_api_base_url" {
  type = "string"
}

variable "cf_distribution_id" {
  type = "string"
}

Now, your project may well have more than one API endpoint; you can set up more variables and decide how to derive them.

We now should have enough to add our CodeBuild Project:

#ci/main.tf
...
resource "aws_codebuild_project" "ci" {
  name         = "${var.name}"
  service_role = "${aws_iam_role.ci.arn}"

  source {
    type            = "GITHUB"
    location        = "${data.github_repository.app-repo.http_clone_url}"
    git_clone_depth = 1

    auth = {
      type     = "OAUTH"
      resource = "${var.github_token}"
    }

    buildspec = "buildspec.yml"
  }

  artifacts {
    type = "NO_ARTIFACTS"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/nodejs:8.11.0"
    type         = "LINUX_CONTAINER"

    environment_variable {
      name  = "API_URL"
      value = "${var.backend_api_base_url}"
    }

    environment_variable {
      name  = "AWS_REGION"
      value = "${data.aws_region.current.name}"
    }

    environment_variable {
      name  = "INDEX_${upper(terraform.workspace)}_BUCKET"
      value = "${var.host_bucket}"
    }

    environment_variable {
      name  = "ASSETS_${upper(terraform.workspace)}_BUCKET"
      value = "${var.host_bucket}"
    }

    environment_variable {
      name  = "CLOUDFRONT_DISTRIBUTION_ID_${upper(terraform.workspace)}"
      value = "${var.cf_distribution_id}"
    }

    environment_variable {
      name  = "EMBER_CLI_DEPLOY_TARGET"
      value = "${terraform.workspace}"
    }
  }

  tags {
    Name        = "${var.name}"
    Environment = "${terraform.workspace}"
  }
}

We attached the IAM role to the project so it can run. Then we set up the source of the code and tell CodeBuild not to produce an artifact bundle. Finally, we configure the Docker build image and provide all the ENV variables that ember-cli-deploy needs, as well as any other variable required at build time (e.g. the API URL).

But we aren’t done just yet. We still need to set up the post-commit hooks so that it can all work like magic.

Github and CodeBuild Hooks

In order for our commits to trigger a CodeBuild run, we need to set up a webhook on the CodeBuild side and a post-commit webhook on GitHub. The former provides a URL and a secret for the latter to make requests to. We will also set up a branch filter, so that we listen to one branch, which will depend on the terraform.workspace we are running our code from. So let's do it:

#ci/main.tf
...
resource "aws_codebuild_webhook" "ci" {
  project_name  = "${aws_codebuild_project.ci.name}"
  branch_filter = "${terraform.workspace == "staging" ? "master" : terraform.workspace}"
}

resource "github_repository_webhook" "ci" {
  active = true

  events = [
    "push",
  ]

  name       = "web"
  repository = "${data.github_repository.app-repo.full_name}"

  configuration {
    url          = "${aws_codebuild_webhook.ci.payload_url}"
    secret       = "${aws_codebuild_webhook.ci.secret}"
    content_type = "json"
    insecure_ssl = false
  }
}

If you have found the Terraform code a bit confusing or overwhelming so far, remember that I have created a GitHub repo with each step committed separately, so you can read through it step by step. Also, I highly recommend you download (and eventually purchase) one of the JetBrains IDEs; they have a ton of useful plugins, including one for Terraform. I personally use RubyMine for all my Ruby, Elixir and Ember work.

Wrapping Up

Our work is not done yet. Now that we have our ci module complete, it's time to instantiate it by adding it to the main.tf in the root of our project. Let's open it up and add the following:

#main.tf
module "ci" {
  source               = "./ci"
  name                 = "${local.name}"
  host_bucket          = "${module.hosting.host_bucket}"
  cf_distribution_id   = "${module.hosting.distribution_id}"
  backend_api_base_url = "${var.backend_api_base_url}"
  github_token         = "${var.github_token}"
}

As you can see, we reference output values from the hosting module and pass them into the ci module. We also pulled some input variables out into top-level ones, controlled via variables.tf:

#variables.tf
...
variable "app_domain" {
  description = "The domain prefix for the bucket name, which comes after either `s3-asset-path`"
  type        = "string"
  default     = "example.com"
}

variable "backend_api_base_url" {
  description = "The URL pointing to your backend API"
  type        = "string"
  default     = "api.example.com"
}
...

This is pretty much it. At this point we can run terraform init and then terraform plan. If you're ready to test this out on your own, feel free to clone the repo with all the code, change the variable defaults to your own values, and run terraform apply, which will run a plan and ask you to type yes to confirm.

This will build the infrastructure for staging. When you want to create the same for production, simply select the production workspace and run terraform apply:
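
terraform workspace select production
terraform apply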

Conclusion and Caveats

I hope you're still reading. When I embarked on documenting this part of our experience, I had no idea that this blog post would wind up this long, or that it would take so much time to revisit, document and open-source our learnings. I sincerely hope you enjoyed reading through it and can benefit from it.

While Terraform is not perfect, I found a few things fascinating:

  • one can capture, document and maintain infrastructure as code, and thus build governance around change management
  • one can tie multiple infrastructure services into a cohesive solution, all in one place, with the above-mentioned benefits
  • one can maintain parity between staging and production environments, while being able to prove out changes in staging before promoting them to production
  • one can not only collaborate with other engineers on building infrastructure, with the help of remote state storage, but also subdivide the infrastructure into projects, query the remote state of another project, and make references to those pieces of infrastructure

Caveats

My journey with Terraform was not always smooth. There were numerous times when things didn't work as envisioned and expected. One common problem is the following: the init and plan commands work fine, but when running apply, the provider APIs return errors. Luckily, most of the time there is enough error detail, including a reference to the file and location, to make it easy to start troubleshooting. Most of the issues we faced were around AWS resources that cannot simply be modified in place and need to go through a destroy-create cycle; for these, I learned to use name_prefix instead of name when naming resources.

Thanks and Feedback Welcome

I tried to cover things in as much detail as I thought necessary. This is my first big blog post in a long time, so I am not sure if I hit the nail on the head. Please feel free to point out any issues and errors; I will be glad to iron things out and update this post.
