Terraform (IaC) to provision Auto Scaling groups and an Application Load Balancer

Marcrinemm
9 min read · Apr 10, 2023

Introduction

Most DevOps engineers prefer using infrastructure as code (IaC) to deploy resources on the different cloud platforms (GCP, AWS, and Azure), because it is easier and faster than using the console. IaC templates are also reusable and can deploy multiple resources with a single command.

Terraform is a great tool to learn because it can deploy resources to any cloud vendor, unlike tools such as CloudFormation, which only works with AWS. It is good for an engineer to learn several tools in order to understand their differences.

In this article, we are going to provision resources on AWS using Terraform. Below is the architecture of a highly available, secure, and scalable application on AWS.

architecture

Terraform steps

1. Create the VPC and networking resources

Create a vpc.tf file and add the following code, which defines the provider to use and your preferred region.

provider "aws" {
  region = "us-east-1"
}

Now you can go ahead and create a VPC by adding the code below to the vpc.tf file. It defines the CIDR range for the VPC.

# creating vpc
resource "aws_vpc" "cloudforce_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "cloudforce_vpc"
  }
}

Add the code below to the vpc.tf file. It creates the subnets: in this case, two public subnets for the web servers and two private subnets for the application servers.

# creating public subnet 1
resource "aws_subnet" "cloudforce_publicA" {
  vpc_id            = aws_vpc.cloudforce_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "cloudforce_publicA"
  }
}

# creating private subnet 1
resource "aws_subnet" "cloudforce_privateA" {
  vpc_id            = aws_vpc.cloudforce_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "cloudforce_privateA"
  }
}

# creating public subnet 2
resource "aws_subnet" "cloudforce_publicB" {
  vpc_id            = aws_vpc.cloudforce_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name = "cloudforce_publicB"
  }
}

# creating private subnet 2
resource "aws_subnet" "cloudforce_privateB" {
  vpc_id            = aws_vpc.cloudforce_vpc.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name = "cloudforce_privateB"
  }
}

An internet gateway allows the resources in the public subnets to reach the internet. Add the code below to the vpc.tf file to create it.

# creating an internet gateway
resource "aws_internet_gateway" "cloudforce_igw" {
  vpc_id = aws_vpc.cloudforce_vpc.id

  tags = {
    Name = "cloudforce_igw"
  }
}

Create a route table and a default route, and associate the table with the public subnets using the code below. Add it to vpc.tf.

# creating a route table
resource "aws_route_table" "cloudforce_rtb" {
  vpc_id = aws_vpc.cloudforce_vpc.id

  tags = {
    Name = "cloudforce_rtb"
  }
}

# creating a route
resource "aws_route" "cloudforce_rt" {
  route_table_id         = aws_route_table.cloudforce_rtb.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.cloudforce_igw.id
}

# associating the route table to public subnet 1
resource "aws_route_table_association" "cloudforce_rtb_assoc1" {
  subnet_id      = aws_subnet.cloudforce_publicA.id
  route_table_id = aws_route_table.cloudforce_rtb.id
}

# associating the route table to public subnet 2
resource "aws_route_table_association" "cloudforce_rtb_assoc2" {
  subnet_id      = aws_subnet.cloudforce_publicB.id
  route_table_id = aws_route_table.cloudforce_rtb.id
}

Create a NAT gateway to allow instances in the private subnets to reach the internet.

# Creating a NAT gateway in public subnet 1
resource "aws_nat_gateway" "cloudNAT" {
  depends_on = [
    aws_eip.Nat-Gateway-EIP
  ]

  # Allocating the Elastic IP to the NAT Gateway
  allocation_id = aws_eip.Nat-Gateway-EIP.id

  # Placing it in the public subnet
  subnet_id = aws_subnet.cloudforce_publicA.id

  tags = {
    Name = "NAT gateway 1"
  }
}
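Note that the NAT gateway above references an Elastic IP (aws_eip.Nat-Gateway-EIP) that is not defined anywhere in the article. A minimal definition (named to match the reference, and also added to vpc.tf) would be:

```hcl
# Elastic IP for the NAT gateway
resource "aws_eip" "Nat-Gateway-EIP" {
  domain = "vpc"
}
```

On AWS provider versions before v5, use `vpc = true` instead of `domain = "vpc"`.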

# Creating a route table for the NAT gateway
resource "aws_route_table" "NAT-Gateway-RT" {
  depends_on = [
    aws_nat_gateway.cloudNAT
  ]

  vpc_id = aws_vpc.cloudforce_vpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.cloudNAT.id
  }

  tags = {
    Name = "Route Table for NAT Gateway"
  }
}


# Associating the NAT gateway route table to private subnet A
resource "aws_route_table_association" "Nat-Gateway-RT-Association-A" {
  depends_on = [
    aws_route_table.NAT-Gateway-RT
  ]

  subnet_id      = aws_subnet.cloudforce_privateA.id
  route_table_id = aws_route_table.NAT-Gateway-RT.id
}

# Associating the NAT gateway route table to private subnet B
resource "aws_route_table_association" "Nat-Gateway-RT-Association-B" {
  depends_on = [
    aws_route_table.NAT-Gateway-RT
  ]

  subnet_id      = aws_subnet.cloudforce_privateB.id
  route_table_id = aws_route_table.NAT-Gateway-RT.id
}

2. Create a security group for EC2 instances

Create a file instancesSG.tf and add the code below. The security group allows inbound traffic on ports 80, 3000, and 22, and allows all outgoing traffic.

resource "aws_security_group" "instancesg" {
  name   = "instancesg"
  vpc_id = aws_vpc.cloudforce_vpc.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.lbsecuritygroupB.id]
    cidr_blocks     = ["0.0.0.0/0"]
  }

  ingress {
    from_port       = 3000
    to_port         = 3000
    protocol        = "tcp"
    security_groups = [aws_security_group.lbsecuritygroupB.id]
    cidr_blocks     = ["0.0.0.0/0"]
  }

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.lbsecuritygroupB.id]
    cidr_blocks     = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

3. Create a launch template for web servers.

The template below contains the image ID, which can be for either Amazon Linux or Ubuntu depending on your preference; here it is an Ubuntu AMI (note that AMI IDs are region-specific). It also sets the instance type, in this case t2.micro, which is in the free tier.

Create a launchweb.tf file and add the below code.

resource "aws_launch_template" "frontend" {
  name_prefix   = "frontend"
  image_id      = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"

  network_interfaces {
    security_groups             = [aws_security_group.instancesg.id]
    associate_public_ip_address = true
  }

  user_data = base64encode(file("frontenddata.sh"))

  lifecycle {
    create_before_destroy = true
  }
}
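As an optional alternative (not in the original article), the hardcoded AMI ID can be replaced with a data source that looks up the latest Ubuntu 22.04 image, which you would then reference as data.aws_ami.ubuntu.id in the image_id argument:

```hcl
# Look up the latest Ubuntu 22.04 AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
```

This keeps the template working across regions and avoids the AMI ID going stale.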

4. Create a user data script for the web server launch template.

The user data below installs Docker, pulls the frontend image from Docker Hub, and runs it, mapping container port 3000 to host port 80. Add the code to a frontenddata.sh file.

#!/bin/bash
# Install docker
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update
apt-get install -y docker-ce
usermod -aG docker ubuntu
# pull frontend image from dockerhub
sudo docker pull marcrine/bookappfrontend
sudo docker run -d -p 80:3000 marcrine/bookappfrontend

5. Create a launch template for the application server, that is, the backend service. We will use the same configuration as the web server template. Create a launchapp.tf file and add the code below.

resource "aws_launch_template" "backend" {
  name_prefix   = "backend"
  image_id      = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"

  network_interfaces {
    security_groups = [aws_security_group.instancesg.id]

    # backend instances live in private subnets, so no public IP is needed
    associate_public_ip_address = false
  }

  user_data = base64encode(file("backenddata.sh"))

  lifecycle {
    create_before_destroy = true
  }
}

6. Create a user data script for the backend template.

The script below installs Docker, pulls the backend image from Docker Hub, and runs it, mapping container port 5000 to host port 80. Add the code to a backenddata.sh file.

#!/bin/bash
# Install docker
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update
apt-get install -y docker-ce
usermod -aG docker ubuntu
# pull backend image from dockerhub
sudo docker pull marcrine/bookappbackend
sudo docker run -d -p 80:5000 marcrine/bookappbackend

Load Balancers 🥳 🥳 😁

7. Now we create an internet-facing load balancer

This will balance traffic from users across our web servers. The code below creates an application load balancer, a target group, and a listener on port 80. It also configures a health check on port 80 for the target group; in this case the targets are the instances in the public subnets.

Create a frontendlb.tf file and add the below code.

resource "aws_lb" "frontend_lb" {
  name               = "frontend-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lbsecuritygroupB.id]

  subnets = [
    aws_subnet.cloudforce_publicA.id,
    aws_subnet.cloudforce_publicB.id
  ]
}

resource "aws_lb_target_group" "frontendTG" {
  name     = "frontendTG"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.cloudforce_vpc.id

  health_check {
    enabled             = true
    port                = 80
    protocol            = "HTTP"
    path                = "/"
    matcher             = "200"
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }
}

resource "aws_lb_listener" "frontendListener" {
  load_balancer_arn = aws_lb.frontend_lb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.frontendTG.arn
  }
}

8. Let’s create a load balancer for our backend service too.

This one is internal, and its targets are the instances in the private subnets.

Create a backendlb.tf file and add the code.

resource "aws_lb" "backend_lb" {
  name               = "backend-lb"
  internal           = true
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lbsecuritygroupB.id]

  subnets = [
    aws_subnet.cloudforce_privateA.id,
    aws_subnet.cloudforce_privateB.id
  ]
}

resource "aws_lb_target_group" "backendTG" {
  name     = "backendTG"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.cloudforce_vpc.id

  health_check {
    enabled             = true
    port                = 80
    interval            = 240
    protocol            = "HTTP"
    path                = "/health"
    matcher             = "200"
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }
}

resource "aws_lb_listener" "backendListener" {
  load_balancer_arn = aws_lb.backend_lb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.backendTG.arn
  }
}

9. Our load balancers need security groups to open some ports.

The load balancers will accept incoming traffic on ports 80 and 3000 and allow all outgoing traffic.

Create a loadbalancersg.tf file.

resource "aws_security_group" "lbsecuritygroupB" {
  name   = "lbsecuritygroupB"
  vpc_id = aws_vpc.cloudforce_vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Autoscaling groups 😎

Now, we are on the last step of creating terraform files 🥳.

10. Create an autoscaling group for the frontend service or web servers.

Create a frontendASG.tf file and add the code below.

resource "aws_autoscaling_group" "frontendASG" {
  name     = "frontendASG"
  min_size = 2
  max_size = 6

  health_check_type = "EC2"

  vpc_zone_identifier = [
    aws_subnet.cloudforce_publicA.id,
    aws_subnet.cloudforce_publicB.id
  ]

  target_group_arns = [aws_lb_target_group.frontendTG.arn]

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.frontend.id
      }
    }
  }
}

resource "aws_autoscaling_policy" "asgpolicy" {
  name                   = "asgpolicy"
  policy_type            = "TargetTrackingScaling"
  autoscaling_group_name = aws_autoscaling_group.frontendASG.name

  estimated_instance_warmup = 300

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }

    target_value = 25.0
  }
}

11. Create an autoscaling group for the backend service or application servers.

Create a backendASG.tf file and add the below code.

resource "aws_autoscaling_group" "backendASG" {
  name     = "backendASG"
  min_size = 2
  max_size = 6

  health_check_type = "EC2"

  vpc_zone_identifier = [
    aws_subnet.cloudforce_privateA.id,
    aws_subnet.cloudforce_privateB.id
  ]

  target_group_arns = [aws_lb_target_group.backendTG.arn]

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.backend.id
      }
    }
  }
}

resource "aws_autoscaling_policy" "asgpolicy2" {
  name                   = "asgpolicy2"
  policy_type            = "TargetTrackingScaling"
  autoscaling_group_name = aws_autoscaling_group.backendASG.name

  estimated_instance_warmup = 300

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }

    target_value = 25.0
  }
}

Terraform commands

Now that all the files above are in place, you can run the Terraform commands.

The first command initializes Terraform.

terraform init

The second command shows the resources that are to be deployed on AWS.

terraform plan

The last command provisions all the resources.

terraform apply

After terraform apply completes successfully, check the AWS console to confirm the resources are all running.

Confirm that the load balancers are active.

application load balancers

Check the target groups and ensure the targets are healthy. In the diagram below you can see 4 healthy targets, which means we can send requests to our load balancer.

target groups health checks

Lastly, test that the load balancer is working. Copy the load balancer's DNS name, paste it into your browser, and you should see the following page.
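To avoid copying the DNS name from the console, you could also expose it as a Terraform output (an optional addition, placed in a file such as outputs.tf) that is printed after terraform apply:

```hcl
# Print the frontend load balancer's DNS name after apply
output "frontend_lb_dns" {
  value = aws_lb.frontend_lb.dns_name
}
```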

nerdshub application

You can destroy the created resources using the command below.

terraform destroy

Find the complete code on GitHub.
