AWS Infrastructure as Code (IaC) with Terraform

Surjeet Singh
Sep 7, 2018 · 10 min read

This blog walks through provisioning infrastructure as code (IaC) using Terraform, which we will use to provision AWS infrastructure (VPC, EC2, IAM and their respective sub-components).

Pre-requisites:

a. An AWS account with credentials (AWS access keys) that have permission to create the infrastructure.

b. A basic understanding of Terraform.

By the end of this blog:

a. You should be able to provision your own public and private networks.

b. Create a bastion (aka jumpbox) instance within a public subnet of the VPC, with only SSH access from the internet.

c. Create a microservice instance in a private subnet of the VPC with no inbound access from the internet. Outbound access to the internet is provided through a NAT gateway, and SSH access is possible only through the bastion instance via an SSH proxy.

The below network diagram shows how we are creating our infra in AWS:

PS: The diagram depicts 2 AZs (better to use 3 availability zones for HA)

We have the terraform modules as described below for provisioning the aws infra. These modules are generic and can be invoked to provision multiple environments by just maintaining the specific variable file per environment.

VPC module for creating the network:

  • Public and Private subnets
  • Elastic IP(EIP)
  • Internet Gateway
  • NAT Gateways
  • Route Tables and defined routes
  • Route Table Associations
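As a rough sketch, invoking such a module for a given environment might look like the following. The module path and variable names here are illustrative, not the exact ones from the repo; the values would normally come from an environment-specific .tfvars file:

```hcl
# Hypothetical invocation of the VPC module for a "dev" environment.
# Variable names and the module path are assumptions for illustration.
module "aws_vpc" {
  source = "./modules/vpc"

  name            = "myproduct-dev-vpc"
  cidr_block      = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  public_subnets  = ["10.0.1.0/24", "10.0.3.0/24", "10.0.5.0/24"]
  private_subnets = ["10.0.2.0/24", "10.0.4.0/24", "10.0.6.0/24"]
}
```

A second environment (say, prod) would reuse the same module with a different variable file, which is what makes the modules generic.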

BASTION module for creating the public instance:

  • EIP
  • Security groups and rules
  • user_data templates
  • Launch configuration
  • Autoscaling group

ServiceOne module for creating the private instance:

  • Security groups and rules
  • Launch configuration
  • Autoscaling group

IAM module for creating the instance roles for bastion and serviceOne modules:

  • IAM policy
  • IAM instance profile
  • IAM instance role
  • IAM policy attachments

Before we dive in deep, let’s take a quick look at the terraform and AWS components and understand briefly about them:

Terraform is a tool for automating infrastructure management. It can be used for simple tasks like managing a single application instance, or more complex ones like managing an entire data center or virtual cloud. The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, and others. It is a great tool to have in a DevOps environment, and I find it very powerful yet simple to use when it comes to managing infrastructure as code (IaC). Moreover, the tool is cloud-agnostic, meaning you can use the same tool for multiple cloud providers like AWS, DigitalOcean, OpenStack, Microsoft Azure, Google Cloud, etc. The latest version available at the time of writing this blog is 0.10.30
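To give a feel for what Terraform code looks like, here is a minimal configuration that launches a single EC2 instance. The AMI ID is a placeholder; substitute a real one for your region:

```hcl
# Minimal Terraform configuration: one provider, one resource.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxx"  # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
```

Running terraform init, terraform plan and terraform apply against this file would create the instance; the rest of this post applies the same workflow to a full VPC layout.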

To install Terraform, follow the simple steps on the install web page Getting Started.

VPC stands for Virtual Private Cloud which basically provides a way to structure your own network the way you want in AWS. Before we dive in let’s get familiar with some basic VPC components and concepts:

  • CIDR — Classless Inter-Domain Routing lets us choose the address range that will represent our VPC (10.0.0.0/16), which in binary format is 0000 1010.0000 0000.0000 0000.0000 0000. The slash 16 (/16) indicates that the first 16 bits are fixed and represent the network, and the remaining 16 bits represent the hosts. I would recommend using a /16 range, which gives 65,536 IP addresses, nice for carving out some large subnets.
  • Subnets: In this tutorial we will set up six subnets in total, three public ( 10.0.1.0/24, 10.0.3.0/24, 10.0.5.0/24 ) and three private ( 10.0.2.0/24, 10.0.4.0/24, 10.0.6.0/24 ), so that a public/private pair is available in each of the three availability zones. The public subnets are reachable from the internet, while the private subnets are not. Each /24 subnet provides 251 usable IP addresses (AWS reserves five per subnet).
  • Internet Gateway: An Internet Gateway is a VPC component that allows communication between your VPC and the internet. It is attached to the VPC, and all the public subnets route internet-bound traffic through it.
  • NAT Gateway: A Network Address Translation gateway allows the instances in the private subnets to connect to the internet, but not the other way around; for example, it prevents the internet from initiating a connection with the instances in the private subnets. We will create 3 NAT gateways (one per availability zone, placed in the public subnets).
  • Route Table: A route table contains rules that determine where packets should go. In our case we will have 4 route tables in total: one for all the public subnets, whose internet-bound traffic goes through the Internet Gateway, and 3 route tables (one per subnet) for the private subnets, whose internet-bound traffic goes through a NAT Gateway.
  • Elastic IP (EIP): An EIP is a static IP address associated with resources in a public subnet. We will associate one with each NAT Gateway and attach one to the bastion box as well (so that if your instance goes down and a new one comes up, the IP stays the same).
  • Route Table Association: Associates your subnets with the route tables you have defined, so that the rules in a route table apply to its associated subnets.
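The components above map almost one-to-one onto Terraform resources. A simplified sketch for a single AZ follows (names and counts are trimmed for brevity; the actual module iterates over all three AZs):

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = "${aws_vpc.main.id}"
}

# The NAT gateway lives in the public subnet and gets its own EIP
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.public.id}"
}

# Public route table: default route to the internet gateway
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.main.id}"
}

resource "aws_route" "public_internet" {
  route_table_id         = "${aws_route_table.public.id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.igw.id}"
}

resource "aws_route_table_association" "public" {
  subnet_id      = "${aws_subnet.public.id}"
  route_table_id = "${aws_route_table.public.id}"
}
```

The private route table is analogous, with its 0.0.0.0/0 route pointing at the NAT gateway instead of the internet gateway.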

The bastion box here is nothing but a Linux host in one of the public subnets with an Elastic IP address that allows inbound Secure Shell (SSH) access to EC2 instances in the public and private subnets. For high availability, multiple boxes can be provisioned across the 3 AZs. Including bastion hosts in your VPC environment enables you to securely connect to your Linux instances without exposing your environment to the internet. It also lets you lock down the security groups of your private instances, since all incoming connectivity comes only from the bastion host instead of a range of IP addresses. After you set up your bastion hosts, you can access the other instances in your VPC through Secure Shell (SSH) connections. Bastion hosts are also configured with security groups to provide fine-grained ingress control.
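A security group for the bastion along these lines (resource names are illustrative) would allow SSH in from the internet and anything out:

```hcl
resource "aws_security_group" "bastion" {
  name   = "bastion-sg"
  vpc_id = "${aws_vpc.main.id}"

  # SSH in from anywhere; tighten to your office CIDR if possible
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # All outbound traffic allowed
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```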

The serviceOne module is nothing but an autoscaled EC2 instance in a private subnet. This instance can be treated as your microservice app instance; it has no inbound access from the internet and has its SSH port open only to the bastion box. One can log in to this instance from the local machine only via an SSH proxy through the bastion, provided the keys are available on the local box.
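The serviceOne security group can then reference the bastion security group as the only allowed SSH source, instead of an IP range. A sketch, with illustrative names and assuming a bastion security group resource exists:

```hcl
resource "aws_security_group" "service_one" {
  name   = "serviceone-sg"
  vpc_id = "${aws_vpc.main.id}"

  # SSH only from instances in the bastion security group
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = ["${aws_security_group.bastion.id}"]
  }

  # Outbound traffic (leaves the VPC via the NAT gateway)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Referencing a security group ID rather than CIDR blocks is what lets the rule keep working even as bastion instances are replaced by the autoscaling group.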

Both the bastion and serviceOne modules launch their instances in autoscaling groups with launch configurations.

IAM — The Identity and Access Management service allows you to set up users, groups, policies and roles to control who has the authorization to access resources and perform certain tasks within your AWS account.

Roles within IAM can be assigned to users as well as an EC2 resource. This functionality allows the EC2 instance to make an API request to another service without having to have a hardcoded username and password in the application making the call. For example, you may have an application that needs write access to a specific bucket in S3, assigning a role with this access to the EC2 resource is all that’s needed.
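Wiring a role to an EC2 instance in Terraform takes three pieces: the role with an EC2 trust policy, an instance profile wrapping the role, and a reference from the launch configuration. A sketch with illustrative names:

```hcl
resource "aws_iam_role" "bastion" {
  name = "bastion-instance-role"

  # Trust policy: allow EC2 instances to assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_instance_profile" "bastion" {
  name = "bastion-instance-profile"
  role = "${aws_iam_role.bastion.name}"
}

# The launch configuration then references the instance profile:
#   iam_instance_profile = "${aws_iam_instance_profile.bastion.name}"
```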

Here, we are just creating two simple IAM roles:

bastion-instance-role ( assigned to bastion box in public subnet)

serviceOne-instance-role ( assigned to serviceOne box in private subnet)

For example, the IAM role for the bastion EC2 instance uses the IAM actions described in the iam-policy below. This policy gives the bastion instance access to get its instance ID and associate the Elastic IP with it.

For the serviceOne IAM role, since we are not deploying anything as such on this instance, we have not added any IAM actions to the policy. However, this can be customized as per the need.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AssociateAddress",
        "ec2:DescribeAddresses",
        "ec2:AllocateAddress",
        "ec2:DisassociateAddress"
      ],
      "Resource": "*"
    }
  ]
}
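In Terraform, a JSON document like this is typically fed to an aws_iam_policy resource and attached to the role. A sketch, where the resource names and the JSON file path are illustrative:

```hcl
resource "aws_iam_policy" "bastion_eip" {
  name   = "bastion-eip-policy"
  policy = "${file("policies/bastion-eip.json")}"  # the JSON document shown above
}

resource "aws_iam_policy_attachment" "bastion_eip" {
  name       = "bastion-eip-attachment"
  policy_arn = "${aws_iam_policy.bastion_eip.arn}"
  roles      = ["bastion-instance-role"]
}
```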

Code repo: GitHub

Please see the README on the source repo for how to get started with creating the AWS infra through Terraform before proceeding.

Executing the below script will output the complete plan of the AWS resources that will be created, as seen below. The entire plan is stored in a file plan.out, which is used as input during terraform apply.

Note: Please make sure that you are using Terraform version 0.10.0 or above

./tfplan.sh

This script is nothing but a wrapper that sets up the AWS credentials for Terraform and executes multiple Terraform commands in one go, as seen below.

AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
export AWS_SHARED_CREDENTIALS_FILE
terraform init
terraform get
terraform plan -out ./plan.out -lock=false

The above will generate the Terraform plan.

Once the above plan is created, we can go ahead and execute

./tfapply.sh

which is again nothing but a wrapper around the terraform apply command, as seen below. This will start creating the infra, beginning with the VPC module and so on.

AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
export AWS_SHARED_CREDENTIALS_FILE
terraform apply -lock=false ./plan.out

The following output appears after executing ./tfapply.sh (shown for reference; these are not the complete logs of terraform apply):

➜  terraform-aws-vpc git:(master) ./tfapply.sh
+ AWS_SHARED_CREDENTIALS_FILE=/Users/surjeetsingh/.aws/credentials
+ export AWS_SHARED_CREDENTIALS_FILE
+ terraform apply -lock=false ./plan.out
module.aws_vpc.aws_vpc.main: Creating...
assign_generated_ipv6_cidr_block: "" => "false"
cidr_block: "" => "10.0.0.0/16"
default_network_acl_id: "" => "<computed>"
default_route_table_id: "" => "<computed>"
default_security_group_id: "" => "<computed>"
dhcp_options_id: "" => "<computed>"
enable_classiclink: "" => "<computed>"
enable_classiclink_dns_support: "" => "<computed>"
enable_dns_hostnames: "" => "true"
enable_dns_support: "" => "true"
instance_tenancy: "" => "default"
ipv6_association_id: "" => "<computed>"
ipv6_cidr_block: "" => "<computed>"
main_route_table_id: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "myproduct-dev-vpc"
module.iam.aws_iam_policy.serviceOne-instance-policy: Creating...
arn: "" => "<computed>"
.
.
.
.
.
module.aws_vpc.aws_route.route-natgw.1: Creation complete (ID: r-rtb-cb0104b01080289494)
module.aws_vpc.aws_route.route-natgw.0: Creation complete (ID: r-rtb-1f0104641080289494)
module.aws_vpc.aws_route.route-natgw.2: Creation complete (ID: r-rtb-480603331080289494)
Apply complete! Resources: 48 added, 0 changed, 0 destroyed.

Once the apply completes successfully, the state of your infrastructure is stored in the Terraform state file terraform.tfstate. Terraform uses this state to map real-world resources to your configuration, keep track of metadata, and improve performance for large infrastructures. By default the state is stored in a local file named "terraform.tfstate", but it can also be stored remotely.
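For teams, storing the state remotely (for example in S3) avoids conflicting local copies. A backend block along these lines does it; the bucket name and key here are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"        # placeholder bucket name
    key    = "myproduct/dev/terraform.tfstate"  # placeholder state path
    region = "us-east-1"
  }
}
```

After adding this block, rerun terraform init so Terraform can migrate the local state to the backend.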

Now, Let’s take a look at the AWS console and see what all resources have been created after apply.

VPC module:

Here’s the VPC that we have created

6 subnets (3 public and 3 private spread across 3 AZs)

5 route tables (1 public and 3 private, plus the default route table created along with the VPC). The default route table of a VPC is always captioned as main, as seen captioned 'yes'; however, we will not use the default route table in our example. The single public route table is associated with all 3 public subnets, whereas the 3 private route tables are associated with the 3 private subnets.

An Internet Gateway, which facilitates internet connectivity from our environment.

3 NAT gateways, one per availability zone. Each NAT gateway has an Elastic IP address attached.

Bastion and serviceOne Module:

Launch configurations for bastion and serviceOne instances:

Autoscaling groups which will use the respective launch configurations:

EC2 instances for bastion and serviceOne:

IAM module:

We now have the IAM roles created one for bastion and another for serviceOne.

Looking further at the bastion IAM role will show you the policy attached to it.

You should have all the infra up in AWS by this time.

You can now SSH to the bastion instance using the below command. Since we have attached the EIP to the bastion instance, we will use that public IP, as seen in the screenshot below.

ssh -i mykey.pem ec2-user@<bastion-public-ip>

You can now log in to the serviceOne instance via an SSH proxy through the bastion box. Copy the private IP of the serviceOne instance and use the below command to tunnel into the private service box via the bastion:

ssh -i mykey.pem -o ProxyCommand="ssh -i mykey.pem ec2-user@<bastion-public-ip> nc %h %p" ec2-user@<serviceOne-private-ip>
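Instead of typing the ProxyCommand each time, you can put the same proxying into ~/.ssh/config. The host aliases below are made up, and the IPs and key path are placeholders:

```
Host bastion
    HostName <bastion-public-ip>
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem

Host serviceone
    HostName <serviceOne-private-ip>
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem
    ProxyCommand ssh -W %h:%p bastion
```

With this in place, ssh serviceone tunnels through the bastion automatically (-W is OpenSSH's built-in equivalent of the nc trick above).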

Hope you have learned from this article. Please feel free to give feedback.
