Let’s scale up with Terraform.io

Diego Ferreira
5 min read · Dec 14, 2016


Continuing our Terraform.io discovery, we’d like to travel outwards and go beyond simple server provisioning. Let’s code ourselves some infra!

Our Earth is mesmerizing, but the sky is unfathomable.

The tool’s name undersells its scope: Terraform.io builds much more than single instances. We can code away whole environments and all the infra assets we need, across multiple providers, and, if you wish, include additional ones.

This post expands on my first one, and I hope to give some insight into how Terraform can control all the infra your endeavor needs to reach sky high.

If you’re in a hurry, grab the source material from my repo =]

https://github.com/katesclau/terraform-cluster-tutorial

The journey, so far

We’ve discussed here how Terraform.io can help you version control your infrastructure, covering the basics of setting up a single instance on AWS.

If you’ve followed that, you know we already have our key pair and server created on AWS from our templates, and that we can manage them through the state file. You also know you’ll need an “aws_credentials.tf” file set up with your API access and secret keys.
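In case you skipped that part, here’s a minimal sketch of what that file can look like (the region is a placeholder of mine; keep this file out of version control, it holds your keys):

    # aws_credentials.tf -- holds your API keys, so add it to .gitignore!
    provider "aws" {
      access_key = "YOUR_ACCESS_KEY"
      secret_key = "YOUR_SECRET_KEY"
      region     = "us-east-1" # placeholder region
    }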

We are here, literally and figuratively — source Pale Blue Dot.

Now, off to our main dish (Saturn’s dish pun intended!)

Expanding

A new adventure usually starts small and grows from there, sometimes really fast. It’s good to be prepared for growth, and the Cloud provides us just that: the ability to grow as needed, with a few restraints ($$), but with wondrous possibilities.

So, the first thing we’ll do is create a new repository and copy some of our .tf files into it. We’d like to bring with us the files from the first post: the provider credentials (aws_credentials.tf), the key pair and the instance templates.

Now we can start changing it and creating additional assets. While lots of static applications and websites can be delivered without a DB, mine usually use one, so let’s get ourselves some RDS.

RDS — Relational Database Service

Using a managed service is much better than deploying an instance and spending tons of hours on configuration, and sometimes cheaper too! We’re going to create a new database module in main.tf for starters.
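A sketch of how that declaration might look (the module and variable names here are my own; we’ll create the module files next):

    # main.tf -- declare the database module; we only pass in what
    # differs from the module defaults (names are illustrative)
    module "database" {
      source         = "./rds"
      instance_class = "${var.db_instance_class}"
      db_name        = "${var.db_name}"
      db_user        = "${var.db_user}"
      db_password    = "${var.db_password}"
    }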

Our configuration will be stored in the repo root variables.tf file.
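Something along these lines, assuming the hypothetical names from the module declaration above (db_user and db_password come from a separate credentials file, described in a moment):

    # variables.tf (repo root) -- database settings for this environment
    variable "db_name" {
      default = "appdb"
    }

    # Overrides the module's default instance size when needed
    variable "db_instance_class" {
      default = "db.t2.micro"
    }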

After that, we’ll create the module directory and touch rds.tf and variables.tf in it. Check the aws_db_instance docs to get a glimpse of how to customize your rds module further.

In rds.tf we establish the base configuration for our module. Sure, we could just stick all the configs in here, but I want to reuse this module somewhere else, so we keep default values in variables.tf and set only references in the rds.tf file.
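A sketch of what I mean, using the stock aws_db_instance resource; every argument is just a reference, and the variables themselves are declared in rds/variables.tf (sketched in the next step):

    # rds/rds.tf -- base configuration; values come from rds/variables.tf
    # or are passed in from main.tf
    resource "aws_db_instance" "default" {
      identifier        = "${var.identifier}"
      engine            = "${var.engine}"
      engine_version    = "${var.engine_version}"
      instance_class    = "${var.instance_class}"
      allocated_storage = "${var.allocated_storage}"
      name              = "${var.db_name}"
      username          = "${var.db_user}"
      password          = "${var.db_password}"
    }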

Notice how some variables have defaults and others are blank: values with a default set within rds/variables.tf may be overridden by passing values in the main.tf module declaration, while blank ones must always be provided there. I use this to keep the module declaration short, changing only what we need for each case.
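To make the split concrete, here’s a sketch of rds/variables.tf; the engine and sizes are placeholders of mine:

    # rds/variables.tf -- variables with a default can be overridden from
    # the module declaration in main.tf; blank ones MUST be passed in
    variable "identifier" {
      default = "tutorial-db"
    }

    variable "engine" {
      default = "mysql"
    }

    variable "engine_version" {
      default = "5.6.27"
    }

    variable "instance_class" {
      default = "db.t2.micro"
    }

    variable "allocated_storage" {
      default = 10
    }

    variable "db_name" {}
    variable "db_user" {}
    variable "db_password" {}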

Don’t forget to add a database_credentials.tf file to your repo root, setting up the user and password for the database.
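Something as simple as this sketch will do; the values are placeholders, and the file belongs in .gitignore:

    # database_credentials.tf -- database user and password; keep this
    # file out of version control!
    variable "db_user" {
      default = "dbadmin"
    }

    variable "db_password" {
      default = "change-me-please"
    }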

This is our database configuration; we can plan && apply it against our provider and watch it run. I won’t bother with those steps here. There’s more to see.

Accelerating

Cloud is about provisioning and scalability, and our applications desperately need that. We need to grow and contract as demand shifts, to make better use of our resources and to provide reliable services. One instance won’t do; we need a structure that can scale and still provide a single endpoint for users.

I’ll cut some steps that we’ve already seen in the first part. Let’s first get the single endpoint set up.

ELB — Elastic Load Balancer

Create the module in main.tf, create a new folder with the module configuration (elb/elb.tf) and an elb/variables.tf file. We will hold the configuration in the repo root variables.tf file; snippet below.

This will be a classic ELB that redirects traffic from the ELB endpoint on port 80 to port 8080 on the instances. It will also health check the instances and only route traffic to those that pass.
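Here’s a sketch of the elb/elb.tf side of that (the zones are placeholders of mine, and the security group wiring is left out for brevity):

    # elb/elb.tf -- a classic ELB forwarding port 80 to port 8080 on the
    # instances, with a simple HTTP health check
    resource "aws_elb" "web" {
      name               = "${var.elb_name}"
      availability_zones = ["us-east-1a", "us-east-1b"] # placeholder zones

      listener {
        lb_port           = 80
        lb_protocol       = "http"
        instance_port     = 8080
        instance_protocol = "http"
      }

      health_check {
        target              = "HTTP:8080/"
        interval            = 30
        timeout             = 5
        healthy_threshold   = 2
        unhealthy_threshold = 2
      }
    }

    # elb/variables.tf
    variable "elb_name" {
      default = "web-elb"
    }

    # Exposed so the ASG module can grab the ELB name later
    output "elb_name" {
      value = "${aws_elb.web.name}"
    }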

If you’d like to know more about session persistence and HTTPS on ELB, follow me; I’ll dedicate a post to those topics in a few weeks.

ASG — Auto Scaling Groups

On AWS we use ASGs for this: they can deploy multiple instances of an application/system to meet demand. Transforming our instance configuration into a launch configuration should be easy.

Pointing out two things here (both are sketched after this list):

  1. We use module.load_balancer.elb_name to pass the name of the ELB we’ve configured before into the ASG module’s load_balancer variable; inside the ASG configuration, we then declare it as the single member of the list the load_balancers argument requires.
  2. I’ve changed the name of the file that described our single instance. When you plan this configuration, Terraform will warn you about its destruction.
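Here’s a sketch of both points together; the AMI, sizes and zones are placeholders, and every module name other than load_balancer is my own:

    # main.tf -- wiring the ELB module's output into the ASG module
    module "auto_scaling" {
      source        = "./asg"
      load_balancer = "${module.load_balancer.elb_name}"
    }

    # asg/asg.tf -- the launch configuration that replaced our single
    # instance, plus the scaling group itself
    resource "aws_launch_configuration" "web" {
      image_id      = "YOUR_AMI_ID" # placeholder
      instance_type = "t2.micro"

      lifecycle {
        create_before_destroy = true
      }
    }

    resource "aws_autoscaling_group" "web" {
      launch_configuration = "${aws_launch_configuration.web.name}"
      availability_zones   = ["us-east-1a", "us-east-1b"] # placeholders
      min_size             = 2
      max_size             = 4

      # The single-member list the load_balancers argument requires,
      # built from the ELB name passed in from main.tf
      load_balancers = ["${var.load_balancer}"]
    }

    # asg/variables.tf
    variable "load_balancer" {}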

That’s it: update your files, then get && plan && apply once again. This is where we stand.

Now you just have to pick a strategy to deploy your application to all nodes (Chef, Puppet, building your own AMI with Packer, using provisioners…).

Hopefully now you can expand your infra faster, and chill out more.

What’s next?

So now we have infra for scalable applications, with a database and a load balancer to manage all connections. But wait, there’s more!

  • EFS — we might need to use a scalable shared file system, if our application uses any resources stored in files.
  • SSL Certificates — to enable https we need a certificate entry on ACM. We’ll see to that too.
  • Sessions — Users might be annoyed by being disconnected every time our ELB sends them to another node. Let’s configure some session stickiness.
  • Create ASG scale automation — to let the ASG itself grow or shrink on demand.
  • Using remote modules — create repos of preconfigured/configurable modules for assets that you use frequently.

We have 4 modules, separated by the assets we’ve created. Some people create modules per environment (QA, Staging, Production) instead; imagine if we could create this whole infra as a module…

There is a universe of things we can expand here, and we’ve only used the AWS provider so far. I’ll keep looking into it, and if you try Terraform yourself, I’m sure you’ll do just the same.
