Multi-Region AWS App Deployments with Terraform Modules

When I started down the path of trying to deploy an app in multiple regions with Terraform, I hit a wall. The Terraform configuration language just isn’t set up to easily express “here’s a definition of an app, now go deploy it in this list of AWS regions”. I can only surmise that the vast majority of people are still deploying their app into different AZs of the same region rather than doing multi-region deployments; otherwise this type of functionality would be built into Terraform.

I ended up creating a module that deploys the app in a specific region and takes the region name as a parameter, so from the higher-level Terraform file you can simply invoke the module once for each region you want to deploy to. Some notes:

  • I’ve stripped a bunch of stuff out, so these scripts essentially deploy one instance in each region with a specific version of Logstash installed.
  • Each region gets a new security group (SG) configured to allow ICMP, so that these instances can all ping each other; hence the name “pinger”.
  • I’m using EC2-Classic just to avoid the complication of creating a VPC, subnets, routing, bastion hosts, etc. It’s easy enough to add that in, but it distracts from the purpose of this example, which is the multi-region aspect.
  • I install Node.js here for use with the rest of my app, but you could just as easily install Python or Ruby if you’re more old school.
  • I also have it set up to store my Terraform state file in an S3 bucket.
  • The storage configuration in my bootstrap.sh is specific to the m3.large instance type, which is what I fill in the ec2_instance_type variable with. You may need to change that for other instance types, which have different numbers of local storage volumes available.
  • I choose to pass in the variables using the -var command-line option to terraform apply, but you could just define a default inside each variable block if you prefer.
  • You might want to use different instance types in different regions, so you can split those out into separate variables if that makes sense for your environment; I’m keeping it simple here to highlight the multi-region use case.
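For concreteness, the apply step ends up looking roughly like this. All of the values here are placeholders for your own environment; the variable names match the vars.tf file below:

```shell
# Placeholder values throughout; substitute your own.
terraform apply \
  -var "name=pinger" \
  -var "aws_access_key=$AWS_ACCESS_KEY_ID" \
  -var "aws_secret_key=$AWS_SECRET_ACCESS_KEY" \
  -var "unixid=jdoe" \
  -var "email=jdoe@example.com" \
  -var "ssh_public_key=$(cat ~/.ssh/id_rsa.pub)" \
  -var "ec2_instance_type=m3.large" \
  -var "ec2_volume_size_root=50" \
  -var "logstash_version=2.3"
```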

Here’s what my files look like:

vars.tf

variable "name" {
  description = "The environment name; used as a prefix when naming resources."
}
variable "aws_access_key" {
  description = "AWS Access Key"
}
variable "aws_secret_key" {
  description = "AWS Secret Key"
}
variable "unixid" {
  description = "Unix Username"
}
variable "email" {
  description = "Email"
}
variable "ssh_public_key" {
  description = "Add ssh keys to the cluster"
}
variable "ec2_instance_type" {
  description = "The EC2 instance type to use"
}
variable "ec2_volume_size_root" {
  description = "Volume size in GB"
}
variable "logstash_version" {
  description = "The version of logstash to install"
}
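
As noted above, you can bake a default into a variable block instead of passing -var on the command line; a quick sketch (the m3.large value is just my own choice):

```hcl
variable "ec2_instance_type" {
  description = "The EC2 instance type to use"
  default     = "m3.large"
}
```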

main.tf

data "terraform_remote_state" "aws_global" {
  backend = "s3"
  config {
    region = "us-east-1"
    bucket = "com.example.bucketname"
    key    = "${var.unixid}/terraform/env/${var.name}/terraform.tfstate"
  }
}
module "launcher-us-east-1" {
  source               = "./launcher"
  region               = "us-east-1"
  name                 = "${var.name}"
  email                = "${var.email}"
  ssh_public_key       = "${var.ssh_public_key}"
  ec2_instance_type    = "${var.ec2_instance_type}"
  ec2_volume_size_root = "${var.ec2_volume_size_root}"
  logstash_version     = "${var.logstash_version}"
}
module "launcher-us-west-2" {
  source               = "./launcher"
  region               = "us-west-2"
  name                 = "${var.name}"
  email                = "${var.email}"
  ssh_public_key       = "${var.ssh_public_key}"
  ec2_instance_type    = "${var.ec2_instance_type}"
  ec2_volume_size_root = "${var.ec2_volume_size_root}"
  logstash_version     = "${var.logstash_version}"
}
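
One operational note: after adding or editing module blocks like these (for example, to add a third region), Terraform needs to fetch the module sources before it can plan:

```shell
terraform get    # fetch module sources into .terraform/
terraform plan
```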

launcher/main.tf

variable "region" {}
variable "name" {}
variable "email" {}
variable "ssh_public_key" {}
variable "ec2_instance_type" {}
variable "ec2_volume_size_root" {}
variable "logstash_version" {}
variable "image_id" {
  default = ""
}
module "ami" {
  source       = "github.com/terraform-community-modules/tf_aws_ubuntu_ami"
  region       = "${var.region}"
  distribution = "wily"
  architecture = "amd64"
  virttype     = "hvm"
  storagetype  = "ebs"
}
provider "aws" {
  alias  = "myregion"
  region = "${var.region}"
}
resource "aws_security_group" "allow_all_pingers" {
  provider    = "aws.myregion"
  name        = "${var.name}"
  description = "Allow inbound ICMP and SSH traffic"

  # Allows ICMP echo request (type 8, code 0); for ICMP rules,
  # from_port is the ICMP type and to_port is the ICMP code.
  # https://github.com/hashicorp/terraform/issues/1313
  ingress {
    from_port   = 8
    to_port     = 0
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
resource "aws_key_pair" "pinger" {
  provider   = "aws.myregion"
  key_name   = "${var.name}"
  public_key = "${var.ssh_public_key}"
}
resource "aws_instance" "pinger" {
  count                       = 1
  provider                    = "aws.myregion"
  instance_type               = "${var.ec2_instance_type}"
  ami                         = "${coalesce(var.image_id, module.ami.ami_id)}"
  key_name                    = "${aws_key_pair.pinger.key_name}"
  security_groups             = ["${aws_security_group.allow_all_pingers.name}"]
  associate_public_ip_address = true

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "${var.ec2_volume_size_root}"
    delete_on_termination = true
  }
  ephemeral_block_device {
    device_name  = "/dev/sdb"
    virtual_name = "ephemeral0"
  }
  tags {
    Name  = "${var.name}"
    Owner = "${var.email}"
  }
  lifecycle {
    create_before_destroy = true
  }
  provisioner "remote-exec" {
    inline = "${template_file.bootstrap.rendered}"
    connection {
      user = "ubuntu"
    }
  }
}
resource "template_file" "bootstrap" {
  template = "${file("${path.module}/bootstrap.sh")}"

  vars {
    logstash_version = "${var.logstash_version}"
    region           = "${var.region}"
    ssh_public_key   = "${var.ssh_public_key}"
  }
  lifecycle {
    create_before_destroy = true
  }
}
output "public_dns" {
  value = "${join(",", aws_instance.pinger.*.public_dns)}"
}
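
The public_dns output above is scoped to the launcher module, so if you want a plain terraform output at the root to show the DNS names, you can re-export it from the top-level main.tf. A sketch (the output names here are my own invention):

```hcl
# In the top-level main.tf: surface each launcher module's output.
output "public_dns_us_east_1" {
  value = "${module.launcher-us-east-1.public_dns}"
}
output "public_dns_us_west_2" {
  value = "${module.launcher-us-west-2.public_dns}"
}
```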

launcher/bootstrap.sh:

#!/bin/bash -e
NODE_NAME=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
if [ -z "$NODE_NAME" ]; then
  NODE_NAME=$(hostname)
fi
echo -e "${ssh_public_key}" >> /home/ubuntu/.ssh/authorized_keys
# Swap
sudo umount /dev/xvdb || true
sudo mkswap /dev/xvdb
sudo swapon /dev/xvdb
# Rewrite the fstab entry if present, otherwise append it.
# (tee is needed: `sudo echo ... >> /etc/fstab` redirects as the
# unprivileged calling user, not root.)
if grep -q '^/dev/xvdb' /etc/fstab; then
  sudo sed -i 's|^/dev/xvdb.*|/dev/xvdb none swap sw 0 0|' /etc/fstab
else
  echo '/dev/xvdb none swap sw 0 0' | sudo tee -a /etc/fstab
fi
# turn down kswapd activity
echo vm.swappiness=0 | sudo tee -a /etc/sysctl.conf
echo 1 | sudo tee /proc/sys/vm/drop_caches
# set hostname
curl -s http://169.254.169.254/latest/meta-data/public-hostname | sudo tee /etc/hostname
sudo hostname "$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)"
curl -s https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/logstash/${logstash_version}/debian stable main" | sudo tee /etc/apt/sources.list.d/logstash.list
# Auto-accept the Oracle JDK license
yes '' | sudo add-apt-repository -y ppa:webupd8team/java
echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get update
sudo apt-get install -y oracle-java8-installer oracle-java8-set-default logstash build-essential
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo service logstash restart
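
As an aside, the fstab add-or-rewrite step in the script is a pattern worth exercising in isolation before trusting it on a live box. Here's a standalone sketch, where FSTAB is a scratch file standing in for /etc/fstab (GNU sed is assumed), showing that the logic is idempotent:

```shell
# Scratch file standing in for /etc/fstab.
FSTAB=$(mktemp)
printf '%s\n' 'LABEL=root / ext4 defaults 0 1' > "$FSTAB"

add_swap_entry() {
  # Rewrite an existing /dev/xvdb line in place, otherwise append one.
  if grep -q '^/dev/xvdb' "$FSTAB"; then
    sed -i 's|^/dev/xvdb.*|/dev/xvdb none swap sw 0 0|' "$FSTAB"
  else
    echo '/dev/xvdb none swap sw 0 0' >> "$FSTAB"
  fi
}

add_swap_entry   # first run appends the entry
add_swap_entry   # second run rewrites it; no duplicate is added
grep -c '^/dev/xvdb' "$FSTAB"   # -> 1
```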