Terraforming an AWS VPC
Step by step journal — AWS best practice, terraform and more …
If you would like to explore on your own, all the project files are available here: https://github.com/lightninglife/Terraforming-an-AWS-VPC
Use Case:
Your team recently launched a brand new project, and everything needs to be built in AWS from scratch. Since you have a good command of the AWS platform, you are tasked with setting up all the resources needed in AWS. None of your team members should have access to it prior to launch. It seems complex and challenging, since you are the captain in charge of the project. However, you are confident because you have a good understanding of hosting servers in AWS. With a click of a button, you can wrap it up with ease!
Time to get your hands dirty! You logged in to the AWS console, ready to build a VPC. You built the VPC, along with an internet gateway and a public subnet in every AZ. A couple of days later, a developer from your team informed you that the next application would be for internal use only, so you set up a private subnet for the internal apps. Down the road, other requirements came along, and you handled them like a guru thanks to your exposure to AWS.
Abruptly, a tester reached out to you for a UAT environment and demanded it be identical to the DEV environment. After a few seconds, you came to the conclusion that you simply had to redo the project from scratch.
Sounds STUNNING and SCARY?
There we go! Infrastructure as Code with Terraform!
Infrastructure as Code
Infrastructure as code is a technique for provisioning infrastructure configuration by means of code. It allows you to control and implement changes to your environment through code changes pushed to a source repository, resulting in a more maintainable and predictable infrastructure.
Terraform is an open-source IaC tool written in Go by HashiCorp.
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
The key features of Terraform are:
- Infrastructure as Code
- Execution Plans
- Resource Graph
- Change Automation
Terraform is agnostic to the underlying platform; it has providers that drive the API interaction with different IaaS, PaaS, or SaaS services. For this article, we will be using Terraform’s AWS Provider.
Installing Terraform
For macOS, install Terraform using Homebrew. Check the version to confirm if it is installed correctly.
$ brew install terraform
$ terraform --version
Note: Homebrew needs to be preinstalled.
To install Linuxbrew on your Linux distribution, first you need to install the following dependencies as shown.
--------- On Debian/Ubuntu ---------
$ sudo apt-get install build-essential curl file git
--------- On Fedora 22+ ---------
$ sudo dnf groupinstall 'Development Tools' && sudo dnf install curl file git
--------- On CentOS/RHEL ---------
$ sudo yum groupinstall 'Development Tools' && sudo yum install curl file git
Once the dependencies are installed, you can use the following script to install the Linuxbrew package in /home/linuxbrew/.linuxbrew (or in your home directory at ~/.linuxbrew) as shown.
$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"
Next, you need to add the directories /home/linuxbrew/.linuxbrew/bin (or ~/.linuxbrew/bin) and /home/linuxbrew/.linuxbrew/sbin (or ~/.linuxbrew/sbin) to your PATH and to your bash shell initialization script ~/.bashrc as shown.
$ echo 'export PATH="/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin/:$PATH"' >>~/.bashrc
$ echo 'export MANPATH="/home/linuxbrew/.linuxbrew/share/man:$MANPATH"' >>~/.bashrc
$ echo 'export INFOPATH="/home/linuxbrew/.linuxbrew/share/info:$INFOPATH"' >>~/.bashrc
Then source the ~/.bashrc file for the recent changes to take effect.
$ source ~/.bashrc
Check the version to confirm if it is installed correctly.
$ brew --version
For Windows, install Terraform using Chocolatey. Check the version to confirm if it is installed correctly.
$ choco install terraform
$ terraform --version
Manual installation for macOS: find the appropriate package for your system, download it as a zip archive to ~/Downloads, and unzip it. Then move the terraform binary to
/usr/local/bin/terraform
Finally, open /etc/profile with vim and add the line below at the end of the file. Check the version to confirm if it is installed correctly.
$ unzip ~/Downloads/terraform_*.zip -d ~/Downloads
$ mv ~/Downloads/terraform /usr/local/bin/terraform
$ sudo vim /etc/profile
Profile
export PATH="$PATH:/usr/local/bin"
Check the version to confirm if it is installed correctly.
$ terraform --version
Manual installation for Windows: this Stack Overflow article contains instructions for setting the PATH on Windows through the user interface.
Check the version to confirm if it is installed correctly.
$ terraform --version
Creating user and configure AWS
Based on AWS best practice, the root user should not be used to perform everyday tasks. So we log in as the root user once and create an IAM user with the policies required for VPC and EC2.
Terraform Hands-on
First, make sure that you have AWS credentials with access to provision resources in your AWS account. The easiest way to do this is to install the AWS CLI and run aws configure.
Let’s create our first Terraform project. Create a new directory, then inside it create three empty files.
my-terraform-aws-vpc
├── main.tf
├── outputs.tf
└── variables.tf
Open main.tf in vim and add the following code. It tells Terraform to configure an AWS provider and set the AWS region to Sydney (ap-southeast-2). It will also create an AWS VPC with a CIDR block of 10.0.0.0/16.
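The embedded snippet is not reproduced in this copy; a minimal main.tf matching the description might look like this (the resource name aws_vpc.main matches the plan output shown in this article; syntax follows the Terraform 0.11-era interpolation style used throughout):

```hcl
# Configure the AWS provider, pointed at the Sydney region
provider "aws" {
  region = "ap-southeast-2"
}

# Create the VPC with a /16 CIDR block
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags {
    Name = "my-terraform-aws-vpc"
  }
}
```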
In your terminal, go inside the created directory and run terraform init. This will download and initialise the AWS provider set in your main.tf file.
$ terraform init
Initializing provider plugins...
- Checking for available provider plugins..
- Downloading plugin for provider "aws" (1.5.0)...
...
Terraform has been successfully initialized!
...
Run terraform plan and see what happens!
$ terraform plan
...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

+ aws_vpc.main
id: <computed>
assign_generated_ipv6_cidr_block: "false"
cidr_block: "10.0.0.0/16"
...
Plan: 1 to add, 0 to change, 0 to destroy.
...
Notice that it generates an execution plan with 1 VPC resource to be added. The command terraform plan is a way to see an overview of what resource will be added, changed, or destroyed in relation to the code changes, without applying the changes to your infrastructure.
After reviewing the execution plan, it’s time to implement it by the conveniently named command terraform apply.
$ terraform apply
...
aws_vpc.main: Creating...
assign_generated_ipv6_cidr_block: "" => "false"
cidr_block: "" => "10.0.0.0/16"
tags.Name: "" => "my-terraform-aws-vpc"
aws_vpc.main: Creation complete after 3s (ID: vpc-8b1cfcec)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Verify VPC created from AWS console
Login to the AWS console to view the created VPC or run this AWS CLI command.
$ aws ec2 describe-vpcs --filters 'Name=tag:Name,Values=my-terraform-aws-vpc'
To demonstrate how easy it is to spin up your infrastructure via Terraform, we will destroy the created VPC using the command terraform destroy.
$ terraform destroy
...
Terraform will perform the following actions:

- aws_vpc.main

Plan: 0 to add, 0 to change, 1 to destroy.
...
aws_vpc.main: Destroying... (ID: vpc-8b1cfcec)
aws_vpc.main: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.
Then build it again using terraform plan and terraform apply.
Note: This is how Terraform delivers infrastructure as code. You can create and tear down a VPC seamlessly with a few commands!
State File
Terraform saves the state in a terraform.tfstate file in JSON format. This file contains the details of the managed infrastructure.
my-terraform-aws-vpc
├── main.tf
├── terraform.tfstate
└── terraform.tfstate.backup
Here’s how it looks.
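The state file itself is not reproduced in this copy; a trimmed sketch of what terraform.tfstate might contain is shown below (the VPC ID matches the one created earlier; exact fields vary by Terraform version):

```json
{
  "version": 3,
  "terraform_version": "0.11.14",
  "modules": [
    {
      "path": ["root"],
      "resources": {
        "aws_vpc.main": {
          "type": "aws_vpc",
          "primary": {
            "id": "vpc-8b1cfcec",
            "attributes": {
              "cidr_block": "10.0.0.0/16",
              "tags.Name": "my-terraform-aws-vpc"
            }
          }
        }
      }
    }
  ]
}
```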
By default, the state file is saved locally. However, when working with a team, this should be saved in a remote location to have a synchronized state across all terraform users.
So far, we have created our first Terraform project and run basic commands to generate an execution plan and provision our first AWS VPC. In the following sections, we will continue updating our script to complete our VPC.
Remote State
By default, Terraform saves the state file terraform.tfstate locally. Remember that the state file keeps a record of your infrastructure. If several users are updating the same environment, each user would generate a state file locally, and modifications made by one user would not be visible to the others. Fortunately, Terraform supports remote state storage.
In the image above, the state file is saved to an S3 bucket and is shared across all users. A team member with the proper IAM credentials can access the state file and update it through Terraform, since Terraform will know which resources have been modified, added, or destroyed.
Note: You must attach the appropriate policy to the IAM credentials in use.
Update the main.tf file to configure Terraform to create an S3 bucket.
Note: Keep in mind that the name of an S3 bucket must be globally unique, so pick a name of your own!
Then edit main.tf to tell Terraform to save the state file to the S3 bucket rather than storing it locally. After running terraform init again, a new vpc.tfstate file should be uploaded to your S3 bucket.
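The snippet is not reproduced in this copy; a sketch of the two pieces described above might look like this (the bucket name matches the one used later in this article; the region is an assumption based on the dev environment):

```hcl
# Create the bucket that will hold the remote state
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-bucket-12345"
  acl    = "private"

  versioning {
    enabled = true
  }
}

# Point Terraform's backend at that bucket
terraform {
  backend "s3" {
    bucket = "my-terraform-bucket-12345"
    key    = "vpc.tfstate"
    region = "ap-southeast-2"
  }
}
```

Note that the bucket must exist before the backend can use it, so apply the bucket resource first (or create the bucket by hand), then add the backend block and re-run terraform init.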
terraform init
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform Workspace
The ability to reproduce your infrastructure for any environment, in any region, by running just a few commands is one of the key features of Terraform. It’s worth keeping this in mind as we design our scripts, before we move further and add more code. Terraform supports multiple workspaces to provision different configurable environments requiring similar resources.
Run terraform workspace list, it will show us the list of all workspaces.
$ terraform workspace list
* default
As expected, we only have the default workspace in the list, and it is the currently selected workspace as indicated by the *. What we want is to create a VPC for these environments:
- DEV
- UAT
- PROD
Let’s create a new workspace by running terraform workspace new dev.
$ terraform workspace new dev
Created and switched to workspace "dev"!

You're now on a new, empty workspace. Workspaces isolate their state, so if you run "terraform plan" Terraform will not see any existing state for this configuration.

$ terraform workspace list
default
* dev
We are automatically switched to the new dev workspace. Do the same for the uat and prod workspaces.
View the remote S3 bucket to see how Terraform stores separate state files for these 3 environments.
$ aws s3api list-objects --bucket my-terraform-bucket-12345 --query 'Contents[].{Key: Key}'
[
{
"Key": "env:/dev/vpc.tfstate"
},
{
"Key": "env:/prod/vpc.tfstate"
},
{
"Key": "env:/uat/vpc.tfstate"
},
{
"Key": "vpc.tfstate"
}
]
There are 3 newly created state files, each persisting one environment’s state. We can ignore the vpc.tfstate in the root directory, as it was created by the default workspace; for multiple environments, prefer the per-workspace state files.
Change back to the dev workspace using the command terraform workspace select dev and create a VPC in this new environment. Follow the same procedure for the uat and prod workspace.
$ terraform workspace select dev
Switched to workspace "dev".

$ terraform plan
...
$ terraform apply
...
Configurable Environment
Different environments have different needs. A production environment needs more compute power than a development environment. You also need to consider the location where you provision your environment. For example, if your development team is in Australia, your testers are in Singapore, and your clients are in Japan, then you should build your infrastructure closest to whoever will be using it; this lowers both latency and cost.
To achieve this, we need to modify our script and add variables. Let’s first start by using variables to set the AWS region where we will provision our VPC, the VPC CIDR block, and resource tag.
Your project should have main.tf and variables.tf files. Create a new directory env, and under it create the subdirectories dev, uat, and prod. In each subdirectory, create a file vpc.tfvars to hold the variable values distinct to each environment.
Set the variables values for each environment. The VPC CIDR block would remain the same for simplicity.
env/dev/vpc.tfvars — Asia Pacific (Sydney) region
aws_region = "ap-southeast-2"
vpc_cidr_block = "10.0.0.0/16"
env/uat/vpc.tfvars — Asia Pacific (Singapore) region
aws_region = "ap-southeast-1"
vpc_cidr_block = "10.0.0.0/16"
env/prod/vpc.tfvars — Asia Pacific (Tokyo) region
aws_region = "ap-northeast-1"
vpc_cidr_block = "10.0.0.0/16"
Open variables.tf and define the following variables.
variable "aws_region" {
  description = "AWS Region"
}

variable "vpc_cidr_block" {
  description = "Main VPC CIDR Block"
}
Go to the main.tf file and use these variables in the format ${var.variable_name}.
The variable ${terraform.workspace} is an interpolation sequence to get the current workspace name to set in our configuration.
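The updated snippet is not reproduced in this copy; a sketch of main.tf using the variables and the workspace name might look like this (the Environment tag name is an assumption for illustration):

```hcl
provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_vpc" "main" {
  cidr_block = "${var.vpc_cidr_block}"

  tags {
    Name        = "my-terraform-aws-vpc"
    Environment = "${terraform.workspace}"
  }
}
```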
Update the dev environment by selecting the correct workspace, then running terraform plan and terraform apply with the -var-file flag to tell Terraform which values to substitute for the variables. Do the same for uat and prod, selecting the appropriate workspace and changing the vpc.tfvars location. Go to your AWS console, select the region, and search for the created VPC.
Completing the VPC 🏁
Now that we have set up a remote state file and made the code reusable for different environments, it’s time to finish up our VPC. To do this, we will add these resources:
- AWS VPC (done)
- Internet Gateway
- 3x Public subnet — one for each AZ
- 3x Private subnet — one for each AZ
- 3x Database subnet — one for each AZ
- Public subnet route table
- Private subnet route table
- Database subnet route table
- EC2 Bastion Host
- Elastic IP Address
- NAT Gateway
The diagram below shows us how we want to setup the VPC.
Subnets
Add the AZs and public subnet CIDR block variables to variables.tf.
variable "availability_zones" {
  type        = "list"
  description = "AWS Region Availability Zones"
}

variable "public_subnet_cidr_block" {
  type        = "list"
  description = "Public Subnet CIDR Block"
}
In env/dev/vpc.tfvars, set the values.
availability_zones = ["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"]
public_subnet_cidr_block = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
The Asia Pacific (Sydney) region has three availability zones as of this writing. We will create a public subnet in each AZ. Then add the subnet resource to main.tf.
We are using several pieces of Terraform’s interpolation syntax; this creates one public subnet in each of the three AZs and assigns each the CIDR block defined in the variable.
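The subnet snippet is not reproduced in this copy; a sketch of the resource using count and element() might look like this (the resource and tag names are assumptions for illustration):

```hcl
# One public subnet per availability zone
resource "aws_subnet" "public" {
  count                   = "${length(var.availability_zones)}"
  vpc_id                  = "${aws_vpc.main.id}"
  availability_zone       = "${element(var.availability_zones, count.index)}"
  cidr_block              = "${element(var.public_subnet_cidr_block, count.index)}"
  map_public_ip_on_launch = true

  tags {
    Name = "public-subnet-${terraform.workspace}-${count.index}"
  }
}
```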
Creating the private and database subnets is trivial, as we just need to copy what we did for the public subnet.
Add this code to variables.tf.
variable "private_subnet_cidr_block" {
  type        = "list"
  description = "Private Subnet CIDR Block"
}

variable "database_subnet_cidr_block" {
  type        = "list"
  description = "Database Subnet CIDR Block"
}
In env/dev/vpc.tfvars.
private_subnet_cidr_block = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
database_subnet_cidr_block = ["10.0.201.0/24", "10.0.202.0/24", "10.0.203.0/24"]
And finally in main.tf
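The snippet is not reproduced in this copy; the private and database subnets mirror the public one, so a sketch might look like this (resource names are assumptions for illustration):

```hcl
# One private subnet per availability zone
resource "aws_subnet" "private" {
  count             = "${length(var.availability_zones)}"
  vpc_id            = "${aws_vpc.main.id}"
  availability_zone = "${element(var.availability_zones, count.index)}"
  cidr_block        = "${element(var.private_subnet_cidr_block, count.index)}"
}

# One database subnet per availability zone
resource "aws_subnet" "database" {
  count             = "${length(var.availability_zones)}"
  vpc_id            = "${aws_vpc.main.id}"
  availability_zone = "${element(var.availability_zones, count.index)}"
  cidr_block        = "${element(var.database_subnet_cidr_block, count.index)}"
}
```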
Again, let’s test it immediately while our script is not that complex yet. Run plan and apply, this time we specify the environment var-file for the set values to propagate to the variables in the script.
$ terraform plan -var-file=env/dev/vpc.tfvars
...
$ terraform apply -var-file=env/dev/vpc.tfvars
...
Internet Gateway, NAT Gateway, and Route Tables
Traffic in our subnets is managed by the VPC router through route tables, the internet gateway, and the NAT gateway. The public subnet (DMZ) is where we will deploy our internet-facing servers (e.g. the bastion host); this is done by routing inbound and outbound traffic through an Internet Gateway. The database and private subnets, in contrast, are limited to outbound traffic through the NAT Gateway. We could add more routing rules through a Network ACL for stateless traffic, but that discussion is best left to its own blog post.
Add this to the main.tf file. It creates an Internet Gateway, and a NAT Gateway with an allocated EIP.
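The snippet is not reproduced in this copy; a sketch of the three resources might look like this (it assumes the public subnets were declared as aws_subnet.public; placing the NAT Gateway in the first public subnet is a design choice for illustration):

```hcl
resource "aws_internet_gateway" "main" {
  vpc_id = "${aws_vpc.main.id}"
}

# Elastic IP allocated for the NAT Gateway
resource "aws_eip" "nat" {
  vpc = true
}

# The NAT Gateway lives in a public subnet so it can reach the internet
resource "aws_nat_gateway" "main" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.public.0.id}"
}
```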
Once this is set up, we associate the route tables with the subnets. A subnet can have only one route table, while a single route table can be associated with multiple subnets.
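The route table snippet is not reproduced in this copy; a sketch for the public and private subnets might look like this (the database route table follows the same pattern as the private one; resource names assume the subnets and gateways from earlier sketches):

```hcl
# Public route table: default route out through the Internet Gateway
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.main.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.main.id}"
  }
}

resource "aws_route_table_association" "public" {
  count          = "${length(var.availability_zones)}"
  subnet_id      = "${element(aws_subnet.public.*.id, count.index)}"
  route_table_id = "${aws_route_table.public.id}"
}

# Private route table: outbound-only through the NAT Gateway
resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.main.id}"

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.main.id}"
  }
}

resource "aws_route_table_association" "private" {
  count          = "${length(var.availability_zones)}"
  subnet_id      = "${element(aws_subnet.private.*.id, count.index)}"
  route_table_id = "${aws_route_table.private.id}"
}
```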
Bastion Host
A VPC is a logically isolated section of the AWS cloud that gives admins complete control of the virtual network. EC2 instances launched in the private subnet are inaccessible from the internet. You might ask how we are going to access these private EC2 instances; the answer is through the bastion host.
Be aware that Terraform is not able to create a key pair for us, so we need to generate our own. Generate a key pair using ssh-keygen or your preferred tool.
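For example, with ssh-keygen (the file name is an arbitrary choice; an empty passphrase keeps the demo simple, but use one in practice):

```shell
# Generate a 4096-bit RSA key pair for the bastion host.
# -N "" sets an empty passphrase; -f sets the output file name.
ssh-keygen -t rsa -b 4096 -N "" -f ./bastion_host_key
```

This produces the private key bastion_host_key and the public key bastion_host_key.pub, which Terraform will upload.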
Create a variable and set the public key location. Define it as a required variable in variables.tf.
variable "bastion_host_public_key" {
description = "Bastion host public key"
}
Set the file location in env/dev/vpc.tfvars. In this example, I’m using my generated public key stored in ~/.ssh/id_rsa.pub.
bastion_host_public_key = "~/.ssh/id_rsa.pub"
Then update the main.tf file with this code.
We’ve added three resources:
- Public key to allow SSH access to the bastion host EC2 instance
- Security group (firewall)
- EC2 instance using the latest Ubuntu AMI
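The snippet is not reproduced in this copy; a sketch of the three resources might look like this (resource names, the t2.micro instance type, and the wide-open SSH rule are assumptions for illustration; it also assumes the public subnets were declared as aws_subnet.public):

```hcl
# Latest Ubuntu 16.04 AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }
}

# Public key uploaded to AWS to allow SSH access
resource "aws_key_pair" "bastion" {
  key_name   = "bastion-host-key"
  public_key = "${file(var.bastion_host_public_key)}"
}

# Security group (firewall): allow inbound SSH, all outbound.
# Tighten cidr_blocks to your own IP in practice.
resource "aws_security_group" "bastion" {
  name   = "bastion-sg"
  vpc_id = "${aws_vpc.main.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Bastion host in the first public subnet, with a public IP
resource "aws_instance" "bastion" {
  ami                         = "${data.aws_ami.ubuntu.id}"
  instance_type               = "t2.micro"
  subnet_id                   = "${aws_subnet.public.0.id}"
  key_name                    = "${aws_key_pair.bastion.key_name}"
  vpc_security_group_ids      = ["${aws_security_group.bastion.id}"]
  associate_public_ip_address = true
}
```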
Build your infrastructure
The final and most rewarding step is bringing our code to life by building the infrastructure. Run plan and apply and we’re all set! Then try to SSH to your bastion host.
$ terraform plan -var-file=env/dev/vpc.tfvars
...
$ terraform apply -var-file=env/dev/vpc.tfvars
...
Log in to the bastion host EC2 instance.
$ ssh -i "bastion_host_key.pem" ubuntu@13.211.176.156
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-1105-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-10-0-1-174:~$
Verify security group
EC2 created using latest Ubuntu AMI
My takeaways from this project:
This project covers almost every aspect of using Terraform to create a VPC and related resources.
In terms of Terraform, it gives us IaC (Infrastructure as Code), which lays a solid foundation for documentation and future edits. As we discussed at the very beginning, rebuilding infrastructure from scratch is neither cost-effective nor quick. With Terraform, every file and element can be brought back into use in a few minutes. Apart from this, terraform validate allows us to identify issues before the VPC is built. Errors with detailed descriptions give us a good sense of the problems we may encounter, serving as an effective debugging tool and contributing to the ultimate solution.
In terms of AWS, this project reaffirms why it is the leading power in the cloud industry. Using the AWS CLI, we are able to inspect our VPC infrastructure seamlessly. Besides, by following AWS best practice, security is made a priority. For instance, we do not log in as root on a daily basis; instead, users attached to appropriate policies are in place to guarantee that no user has access to more resources than he or she requires. With that, intentional or unintentional deletion can be averted. Moreover, by using the CLI and Terraform, we significantly reduce the time needed to build our infrastructure: rather than clicking through dozens of screens, we only type a few commands, so the project can be fulfilled in a timely and effective manner.
For your convenience, all the files required for this project are available at https://github.com/lightninglife/Terraforming-an-AWS-VPC