DevOps101 — First Steps on Terraform: Terraform + OpenStack + Ansible
I’m currently a teaching assistant 👨💻 @ Técnico Lisboa, working closely with Professor Rui Cruz 👨🏫 on IT Infrastructure Management and Administration 🖥, a master’s-level course @ Técnico Lisboa.
This tutorial aims to provide you with an introduction to Terraform. Terraform will handle the deployment of an infrastructure on OpenStack, and Ansible will provision the deployed machines. We will use Vagrant to manage the machine on which Terraform and Ansible are installed.
In DevOps101 — Infrastructure as Code With Vagrant (LAMP Stack), I’ve introduced the Agile DevOps movement. It aims to automate work and allow you to launch faster.
If you are planning to start a business or side-project, something that you have to take into account is DevOps. Why should you care? As your business grows, your infrastructure needs will change. If you build everything the right way from the beginning, the cost of changing and upgrading will be lower. Without further ado, let’s introduce:
Terraform provides a workflow to manage infrastructure. It allows you to define your infrastructure as code, so that changes can be made and deployed at minimal cost.
This scheme synthesizes how Terraform works:
Terraform relies on .tf files that define the infrastructure (e.g., terraform-instances.tf). Terraform knows what is deployed through the state file. This state is stored by default in a local file named “terraform.tfstate”, but it can also be stored remotely, which works better in a team environment. Terraform supports several providers, normally initialized in a file called terraform-provider.tf. If you want to see how it works in more detail, check this introduction video, by Terraform.
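To make this concrete, here is a minimal sketch of what a .tf file can look like. The resource, image, and flavor names below are hypothetical, not the ones from the support files:

```hcl
# Hypothetical sketch of a .tf file. Terraform reads every *.tf file
# in the working directory and records the resources it creates in
# terraform.tfstate.
resource "openstack_compute_instance_v2" "web" {
  name        = "web-1"
  image_name  = "ubuntu-18.04"   # assumed image name
  flavor_name = "m1.small"       # assumed flavor name

  network {
    name = "frontend-network"
  }
}
```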
I strongly recommend you to use Vagrant. If you don’t know what Vagrant is or how it works, this article is suited for you (it even explains step by step how to configure the tool and its dependencies).
Before we start…
Step 1: Know your resources.
If you open the support files, you will see several filetypes: .tf, .yml, .cfg, .sh and a Vagrant file.
We can see three logical groups: files associated with Vagrant, Ansible, and Terraform. Vagrant provides you with a virtual machine with the software required to run Terraform. Alternatively, if you don’t want to use a virtual machine, you can install Terraform on your host machine.
Ansible is an open-source tool that aims to ease provisioning. Ansible will connect to the instances deployed by Terraform and then install the required software.
Finally, Terraform will be responsible for deploying the infrastructure.
Step 2: Initialize the virtual machine.
In the same directory as your support files, run vagrant up and wait for the machine to boot. Note that Vagrant synchronizes the current folder with terraform-experiment.
Now we are ready for the next step.
Step 3: Understand the Terraform support files.
As you can see, there are several variable declarations in this file. These variables are going to be passed to our provider.
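For reference, a Terraform variable declaration looks like this. The variable names below are illustrative and may differ from the ones in the support files:

```hcl
# Illustrative variable declarations; the actual names in the
# support files may differ.
variable "user_name" {
  description = "OpenStack user name"
}

variable "password" {
  description = "OpenStack password"
}

variable "auth_url" {
  description = "OpenStack Identity (Keystone) endpoint URL"
}
```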
Terraform providers are initialized in different ways, as they have different APIs. In other words, you need a different set of variables for each provider you are using. The variables you need for OpenStack are listed here. For this experiment, I’m going to use my university’s private cloud, based on OpenStack. You can use, for example, SUSE OpenStack Cloud, OpenTelekomCloud, or any of these OpenStack-based private clouds. You can also install OpenStack locally and operate from there (the cheapest solution).
Next on DevOps101 ☄️, a tutorial on how to deploy an infrastructure on Google Cloud Engine using Terraform, stay tuned! 🦄
To access your provider, you need to fill the required variables:
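The provider block then consumes those variables. A sketch of a typical OpenStack provider configuration (terraform-provider.tf), with illustrative variable names:

```hcl
# Sketch of an OpenStack provider configuration. The variable
# names are illustrative; use the ones your support files define.
provider "openstack" {
  user_name   = "${var.user_name}"
  tenant_name = "${var.tenant_name}"
  password    = "${var.password}"
  auth_url    = "${var.auth_url}"
}
```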
Step 4: Understand the infrastructure you want to deploy
Now, study the terraform-instances.tf and terraform-networks.tf file. What are they defining? How do they achieve that?
In terraform-networks.tf we create a frontend network, a subnet, and a security group. The security group allows inbound connections on port 443. The file terraform-instances.tf defines two web servers and one load balancer, assigning them to a network, a security group, and a subnet.
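As a sketch of what terraform-networks.tf might contain (resource names and the CIDR below are assumptions, not necessarily the exact ones in the support files):

```hcl
# Hypothetical sketch of a frontend network, its subnet, and a
# security group allowing inbound HTTPS (port 443).
resource "openstack_networking_network_v2" "frontend" {
  name           = "frontend-network"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "frontend" {
  name       = "frontend-subnet"
  network_id = "${openstack_networking_network_v2.frontend.id}"
  cidr       = "192.168.0.0/24"   # assumed CIDR
  ip_version = 4
}

resource "openstack_compute_secgroup_v2" "https" {
  name        = "https"
  description = "Allow inbound HTTPS"

  rule {
    from_port   = 443
    to_port     = 443
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}
```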
Let’s get to action!
Generate a public/private keypair. This keypair will be stored under ~/.ssh/. You can generate a key with (skip all prompts by pressing ENTER):
$ ssh-keygen -t rsa -b 2048
Your keypair will be imported by Terraform (see terraform.tfvars).
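In Terraform, importing the public key typically looks like the sketch below. The resource and key names are illustrative assumptions:

```hcl
# Sketch: register the public key generated above so the deployed
# instances can be reached over SSH. Names are illustrative.
resource "openstack_compute_keypair_v2" "deploy" {
  name       = "deploy-key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}
```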
Next, we want to apply our plan to the real world. Note that Terraform has five essential commands that allow us to deal with an end-to-end workflow:
- terraform init: This command is used to initialize a working directory containing Terraform configuration files.
- terraform refresh: This command is used to reconcile the state Terraform knows about (via its state file) with the real-world infrastructure.
- terraform plan: Creates an execution plan. Terraform performs a refresh, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
- terraform apply: Apply the changes required to reach the desired state of the configuration.
- terraform destroy: Destroys Terraform-managed infrastructure.
First, we need to initialize the working directory. This is the first command that should be run after writing a new Terraform configuration. Run:
$ terraform init
It should output something similar to:
To see which changes Terraform will make on the provider, run:
$ terraform plan
The output specifies the plan:
It’s now time to deploy the infrastructure. Run:
$ terraform apply
By now, your infrastructure has been deployed.
To provision the machines, we are going to use Ansible. I used this collection of Ansible roles, by Oliver Louvignes. To tell Ansible which machines to target, replace the IPs of the machines in the hosts file. If you cannot find them, run terraform output. Now, run the playbook that installs Node with:
$ ansible-playbook site-servers-setup-all.yml
That’s it. You now have a system composed of two web servers (without any load balancing) running Node on an OpenStack provider. Next, on “DevOps101 ☄️”, we will deploy a system composed of two web servers and one load balancer.
Ending the experiment
Two commands end the experiment and shut down the virtual machine:
$ terraform destroy -auto-approve
$ vagrant halt
If you want to destroy the virtual machine you created, run:
$ vagrant destroy
Congratulations! 💯 You reached the end! 🦄