How to version control infra with Terraform.io
If you are a control freak who likes things organized and easily maintainable, let me walk you through a discovery I’ve made over the last month — Terraform.io
Before I got into Terraform.io, I always deployed infra manually through the PaaS console, and depending on how much infra I needed, this used to be a pain. Well, not anymore =]
Terraform is a great open-source tool that lets us create, version control, plan and manage infrastructure as code, across multiple PaaS providers — and that’s how I fell in love with it. Scott Lowe explains here in more depth why you might want to stick with Terraform.
This article is meant for people who are just getting into infrastructure as code. Comments and suggestions are always welcome.
So let me share some love…
Baby Steps — Create a server instance
The simplest thing you can do is to deploy a server instance on a PaaS provider. For this demonstration I chose AWS.
Before we start
- Install Terraform.io;
- Get your AWS credentials key;
Getting started
Create a repo on GitHub to store our configuration, and clone it locally
$ git clone https://github.com/<user>/<repo>.git
We’ll create three .tf files:
- main.tf — holds our infra configuration;
- variables.tf — stores variables that we might change later;
- aws_credentials.tf — your AWS access key goes in here;
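As a rough sketch (the variable names and defaults here are just examples, not the original gist), the two supporting files could look like this:

```hcl
# variables.tf -- values we might want to change later
variable "region" {
  default = "us-east-1"
}

variable "instance_type" {
  default = "t2.micro"
}

# aws_credentials.tf -- declare credentials as variables;
# never commit the real values to a public repo!
variable "access_key" {}
variable "secret_key" {}
```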
Here’s a simple instance configuration
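A minimal main.tf for a single instance could look like this (the AMI ID is a placeholder; pick an Ubuntu AMI for your region):

```hcl
# main.tf -- configure the AWS provider and one EC2 instance
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx" # placeholder: use an Ubuntu AMI for your region
  instance_type = "${var.instance_type}"

  tags {
    Name = "terraform-tutorial"
  }
}
```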
And now, we just run terraform plan on our project
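The plan output looks roughly like this (abridged; exact attributes vary by Terraform version):

```
$ terraform plan
+ aws_instance.web
    ami:               "ami-xxxxxxxx"
    availability_zone: "<computed>"
    instance_type:     "t2.micro"
    key_name:          "<computed>"
    ...

Plan: 1 to add, 0 to change, 0 to destroy.
```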
Check out all those “<computed>” values — we can set many of them explicitly to suit our needs.
On its last line, the plan shows how many resources will be added, changed and destroyed. Since some configuration changes to resources are destructive, we should always check those numbers.
Let’s terraform apply the plan, shall we?
Now you can check your AWS console and see the instance up and running, which is great, and you can commit this to your repo and grab a cup of coffee.
$ git add * && git commit -m "genesis 1:1" && git push
And here is our instance on AWS
WHAT? I cannot access it? No key pair? Let’s solve this…
Updating — Creating and organizing resources
So, we forgot a simple need for instances in AWS — key pairs to access them.
No problem, let’s update the instance configuration and add a new configuration to it. We need a key pair for this, and we can go both ways here:
- Create a key pair on the AWS console and download it, using its name for reference;
- Create a key pair with ssh-keygen and use the public key in the resource configuration;
We’ll go with the latter… open your favorite console and create the key.
$ ssh-keygen
Give it a name, “aws.pem” for example, and a passphrase. Bam! Now we have aws.pem and aws.pem.pub. Don’t lose them! Let’s open aws.pem.pub and configure the aws_key_pair resource. Check the docs on key pairs for reference here.
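A sketch of that resource, assuming aws.pem.pub sits next to the configuration (the key and resource names are illustrative):

```hcl
# key.tf -- register our public key with AWS as a key pair
resource "aws_key_pair" "deployer" {
  key_name   = "aws_access"
  public_key = "${file("aws.pem.pub")}" # path relative to where you run terraform
}
```

The instance can then reference it with key_name = "${aws_key_pair.deployer.key_name}".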
We want to modularize the configuration so we can manage it better. Plus, modules can pull from different sources, even remote repositories. Check the module usage docs to dive deeper.
So let’s take a look at how we do it using our existing resource (the instance) and what’s new. The basic idea is to separate each resource into its own module, and we can export what we need to use in other modules too.
Take a look at how we define a module in main.tf and how it interacts with the source folder and variables.
At 1, we are declaring the module “servers”, which will contain our EC2 instances on AWS.
At 2, we add to ec2/variables.tf the variables that main.tf passes to the “servers” module.
And using them on 3 to assign values to our ec2 instance resource.
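Put together, those three steps might look like this (paths, file names and values are illustrative, not the original gist):

```hcl
# main.tf -- (1) declare the "servers" module and pass values in
module "servers" {
  source        = "./ec2"
  instance_type = "t2.micro"
  key_name      = "${module.keys.key_name}"
}

# ec2/variables.tf -- (2) declare the variables the module accepts
variable "instance_type" {}
variable "key_name" {}

# ec2/servers.tf -- (3) use them in the instance resource
resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx" # placeholder AMI
  instance_type = "${var.instance_type}"
  key_name      = "${var.key_name}"
}
```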
“Wait, what’s that module.keys.key_name?”
Another thing we can do is export data from modules to use across the configuration; check how main.tf and keys/key.tf work together.
Declare the “keys” module at 1, configure and output key_name at 2, and use it with the module prefix at 3.
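A sketch of that wiring (resource and output names are illustrative):

```hcl
# main.tf -- (1) declare the "keys" module
module "keys" {
  source = "./keys"
}

# keys/key.tf -- (2) create the key pair and output its name
resource "aws_key_pair" "deployer" {
  key_name   = "aws_access"
  public_key = "${file("${path.module}/aws.pem.pub")}"
}

output "key_name" {
  value = "${aws_key_pair.deployer.key_name}"
}
```

Back in main.tf, other modules can then reference the exported value as "${module.keys.key_name}" (3).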
Now we need to run terraform get to retrieve the modules — this makes more sense when we are using remote sources — then plan to check what we are doing, and apply to change the infrastructure setup.
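In sequence:

```shell
$ terraform get    # pull module sources into .terraform/
$ terraform plan   # preview what will change
$ terraform apply  # make it so
```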
Notice that when you plan this update, you will be notified
key_name: "" => "aws_access" (forces new resource)
and
Plan: 2 to add, 0 to change, 1 to destroy.
This means that our current instance will be terminated and terraform will create a new one. This might happen when we change critical parameters — be cautious and plan resources ahead!
After Terraform updates your infra, test access using the public_ip that is output after running terraform apply, or run terraform show to display the current infra configuration. We are using an Ubuntu AMI, so we’ll log in as the ubuntu user
$ ssh -i aws.pem ubuntu@<public_ip>
Profit!
$ git commit -m "now with keys" && git push
Conclusion
So by now you should know the basics of terraforming and may start to create your own worlds.
Terraform is meant to version control the infra, so do it!
Organize it with modules and use data providers.
Remember to always plan before apply ;)
FAQ
- So, why bother with a third-party tool if you can do it with AWS CloudFormation?
Because we’d like to version control the infrastructure configuration, and we’d also have the ability to deploy to multiple services if we wanted to. Plus, it’s way easier to manage changes and scale infra with it. Check Scott Lowe’s article for a more detailed “why”.
- What do you use to edit Terraform files?
I use Emacs with terraform-mode.
- Where do I find this example?
Fork it on https://github.com/katesclau/terraform-tutorial
- I need more knowledge, where to?