Leveraging Terraform for Deploying a Web Server Cluster and Load Balancer on AWS

Chidubem Chinwuba
13 min read · Jul 3, 2023


Crafting Resilient Cloud Infrastructures with Emphasis on Security, High Availability, and Fault Tolerance.

Introduction

Terraform holds significant importance in crafting resilient cloud infrastructures with a focus on security, high availability, and fault tolerance in AWS.

Firstly, Terraform provides a declarative approach to infrastructure provisioning, allowing users to define their AWS resources and configurations as code. This enables consistent and repeatable deployments, reducing the chances of human error and misconfigurations that could compromise security.

When it comes to security, Terraform offers features such as encryption, secure network configurations, and fine-grained access control. It allows organizations to define strict security policies and enforce them consistently across their AWS infrastructure, ensuring data protection and compliance with industry standards.

In terms of high availability, Terraform enables the creation of highly resilient architectures in AWS. It facilitates the deployment of redundant resources, load balancers, and auto-scaling groups to distribute traffic and handle increased demand. With Terraform, organizations can easily implement multi-region deployments, ensuring that their applications remain available and performant even in the face of regional outages or spikes in user traffic.

Additionally, Terraform aids in fault tolerance by providing the ability to define disaster recovery strategies. It allows for the replication of resources across different AWS availability zones and regions, ensuring business continuity and minimizing downtime in the event of failures or disasters.

Overall, Terraform empowers organizations to design, provision, and manage their AWS infrastructure in a resilient and secure manner. By treating infrastructure as code, teams can collaborate, version, and review changes, making it easier to maintain and enhance the security, high availability, and fault tolerance of their cloud infrastructures in AWS.

AWS Services Utilized

  • Amazon EC2 Instances
  • Auto Scaling Group (ASG)
  • Application Load Balancer (ALB)
  • Security Groups
  • Listeners (for load balancer)
  • Subnets
  • Virtual Private Cloud (VPC)
  • Target Group

Case Scenario

XYZ Company is a rapidly growing e-commerce platform that experiences high traffic volumes on their website. They want to ensure high availability and scalability by deploying a web server cluster and a load balancer on AWS using Terraform.

Requirements:

  1. Provision an AWS Virtual Private Cloud (VPC) to host the infrastructure.
  2. Create a load balancer that distributes incoming traffic across multiple web server instances.
  3. Deploy a cluster of web servers running on Amazon EC2 instances within the VPC.
  4. Configure auto scaling for the web server instances to handle increased traffic and maintain performance.
  5. Implement security measures such as security groups and subnets to protect the infrastructure.
  6. Utilize Terraform to define and manage the infrastructure as code.

Procedure

  1. Install Terraform
  2. Create Access Keys on AWS
  3. Create a Launch Configuration
  4. Create an Auto Scaling Group
  5. Create a VPC and Subnet
  6. Create an Application Load Balancer (ALB)
  7. Run the Terraform Workflow
  8. Verify the Resources
  9. Test the ALB DNS Name
  10. Destroy the Infrastructure

Step 1: Install Terraform

To simplify the installation process, use your operating system’s package manager. If you are a Homebrew user on macOS, you can install Terraform by running the following commands:

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/terraform

If you are a Chocolatey user on Windows, you can install Terraform by executing the following command:

$ choco install terraform

Refer to the Terraform documentation for installation instructions on different operating systems, including various Linux distributions.

Alternatively, manually install Terraform by visiting the Terraform homepage, downloading the ZIP archive for your operating system, and extracting it into your desired installation directory. Inside the archive, you will find a binary named “terraform,” which should be added to your PATH environment variable.

Terraform is now successfully installed.

Step 2: Create Access keys on AWS

To grant Terraform the necessary permissions to modify your AWS account, you need to create an IAM user with access keys and then authenticate through the AWS CLI (command line interface):

  1. Create an IAM user in the AWS console.
  2. Grant the IAM user administrative access.
  3. Create an access key for the user.
  4. Run aws configure in your terminal and enter the access key ID, secret access key, default region, and output format when prompted.

For detailed information, refer to the comprehensive guide on authenticating to AWS on the command line.
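Once the CLI is authenticated, Terraform’s AWS provider can pick up the same credentials. A minimal provider block for this project might look like the sketch below; the region is an assumption, so adjust it to match whatever you entered during aws configure:

provider "aws" {
  # Credentials are taken from the AWS CLI configuration or environment variables
  region = "us-east-1"
}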

To follow along and run the demonstration smoothly, clone the project’s repository from my GitHub using the provided link. Throughout the process, feel free to make any necessary edits to the files.

Step 3: Create a Launch Configuration

A launch configuration is a template that defines the configuration parameters for launching Amazon EC2 instances in an Auto Scaling group. It specifies the instance type, AMI (Amazon Machine Image), security groups, block device mappings, and other settings required to create EC2 instances.

resource "aws_launch_configuration" "example" {
image_id = "ami-0261755bbcb8c4a84"
instance_type = "t2.micro"
security_groups = [aws_security_group.instance.id]

user_data = <<-EOF
#!/bin/bash
echo "welcome to a resilient cloud infrastructure, with emphasis on security, high availability and fault tolerance." > index.html
nohup busybox httpd -f -p ${var.server_port} &
EOF
# Required when using a launch configuration with an ASG.
lifecycle {
create_before_destroy = true
}
}

variable "server_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 8080
}

resource "aws_security_group" "instance" {
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

Code Explanation:

  1. Resource Block:
resource "aws_launch_configuration" "example" {
// Resource configuration goes here
}

This block defines a resource of type aws_launch_configuration and names it "example". This resource will represent an AWS Launch Configuration.

2. Attribute Assignments:

image_id        = "ami-0261755bbcb8c4a84"
instance_type   = "t2.micro"
security_groups = [aws_security_group.instance.id]

These lines assign values to various attributes of the aws_launch_configuration resource. Here, it sets the AMI ID to "ami-0261755bbcb8c4a84", the instance type to "t2.micro", and the security groups to the ID of a specific security group (referenced as aws_security_group.instance.id).

3. User Data:

user_data = <<-EOF
            #!/bin/bash
            echo "welcome to a resilient cloud infrastructure, with emphasis on security, high availability and fault tolerance." > index.html
            nohup busybox httpd -f -p ${var.server_port} &
            EOF

The user_data attribute allows you to specify a script or commands to be run when launching an EC2 instance. In this case, it contains a Bash script. It writes a welcome message to an index.html file and starts an HTTP server using busybox httpd. The server is configured to listen on the port specified by the variable ${var.server_port}.

4. Lifecycle Block:

lifecycle {
  create_before_destroy = true
}

This block specifies a lifecycle configuration for the resource. The create_before_destroy attribute set to true indicates that when updating the launch configuration, Terraform should create a new configuration before destroying the existing one. This helps ensure a smooth replacement of instances when using the configuration with an Auto Scaling Group.

5. Variable Block:

variable "server_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 8080
}

Variables allow you to parameterize your Terraform code and provide flexibility. In this case:

  • The description field provides a description of the variable's purpose.
  • The type field specifies the variable's type, which is set to "number" in this case.
  • The default field sets a default value of 8080 for the variable. If no value is provided when using this code, the default value will be used (see the override example below).
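If you want the servers to listen on a different port, you can override this default without editing the code, for example through a terraform.tfvars file or a -var flag on the command line (the port shown here is purely illustrative):

# terraform.tfvars (hypothetical override; remove it to fall back to the default of 8080)
server_port = 8081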

6. Resource Block:

resource "aws_security_group" "instance" {
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

This block defines an AWS Security Group resource named “instance”. An AWS Security Group acts as a virtual firewall to control inbound and outbound traffic to your EC2 instances. Here’s what the block does:

  • The name attribute sets the name of the security group to "terraform-example-instance".
  • The ingress block specifies the inbound rules for the security group.
  • The from_port and to_port attributes are set to the value of the var.server_port variable, allowing incoming traffic on the specified port.
  • The protocol attribute is set to "tcp" to allow TCP traffic.
  • The cidr_blocks attribute defines the source IP ranges allowed to access the instances. In this case, "0.0.0.0/0" allows traffic from any IP address.
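Opening the instance port to 0.0.0.0/0 keeps the demo simple, but since all traffic should arrive through the load balancer, you could tighten this rule so that only the ALB’s security group (created in Step 6) may connect. A possible variation, not part of the original configuration:

resource "aws_security_group" "instance" {
  name = "terraform-example-instance"

  ingress {
    from_port = var.server_port
    to_port   = var.server_port
    protocol  = "tcp"
    # Accept traffic only from the load balancer's security group
    security_groups = [aws_security_group.alb.id]
  }
}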

Step 4: Create an Auto Scaling Group

An Auto Scaling group in AWS is a feature that automatically adjusts the number of EC2 instances in response to changes in demand. It helps ensure that the desired number of instances are running to handle the application’s workload. Auto Scaling groups allow you to scale your infrastructure dynamically, improving availability, performance, and cost efficiency.

resource "aws_autoscaling_group" "example" {
launch_configuration = aws_launch_configuration.example.name
vpc_zone_identifier = data.aws_subnets.default.ids
target_group_arns = [aws_lb_target_group.asg.arn]
health_check_type = "ELB"
min_size = 2
max_size = 10

tag {
key = "Name"
value = "terraform-asg-example"
propagate_at_launch = true
}
}

Code Explanation:

The code creates an AWS Auto Scaling Group named “example” with the following configuration:

  • It uses the launch configuration defined earlier, the default VPC’s subnet IDs, and the target group’s ARN.
  • The health check type is set to “ELB”.
  • The minimum number of instances is 2, and the maximum number is 10.
  • It adds a tag with the key “Name” and value “terraform-asg-example” to the Auto Scaling Group.

Overall, the code sets up an Auto Scaling Group that maintains a desired number of instances within a range, associates with a target group for load balancing, and includes a tag for identification.
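The block above only fixes the group’s size range; nothing yet tells AWS when to scale between 2 and 10 instances. If you want the group to track demand automatically, you could attach a scaling policy such as the target tracking sketch below (the policy name and the 50% CPU target are assumptions, not part of the original repository):

resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "terraform-asg-example-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.example.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      # Scale out/in to keep the group's average CPU utilization near 50%
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}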

Step 5: Create a VPC and Subnet

A VPC (Virtual Private Cloud) is a virtual network in the Amazon Web Services (AWS) cloud. It allows you to create a logically isolated section of the AWS cloud where you can launch AWS resources such as EC2 instances, RDS databases, and more. With a VPC, you have control over the network configuration, including IP addressing, subnets, routing tables, and security settings.

Subnets, on the other hand, are subdivisions of a VPC. They partition the VPC’s IP address range into smaller networks. Each subnet resides in a single Availability Zone within a region and can contain resources such as EC2 instances. Subnets enable you to organize your infrastructure, control network traffic flow, and apply different security and routing rules within a VPC.

data "aws_vpc" "default" {
default = true
}

data "aws_subnets" "default" {
filter {
name = "vpc-id"
values = [data.aws_vpc.default.id]
}
}

Code Explanation:

This code retrieves information about the default VPC and its associated subnets in AWS:

  • The data "aws_vpc" "default" block fetches details about the default VPC in your AWS account by setting default = true.
  • The data "aws_subnets" "default" block retrieves information about the subnets associated with the default VPC. It filters the subnets based on the "vpc-id" attribute, using the ID of the default VPC obtained from the previous data block (data.aws_vpc.default.id).

This code gathers information about the default VPC and its subnets so they can be referenced elsewhere in the Terraform configuration. Note that it does not create a new VPC; it reuses the default VPC that AWS provides in every account.
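If you would rather provision a dedicated VPC and subnet instead of reusing the default one, a minimal sketch looks like the following (the CIDR ranges, Availability Zone, and resource names are assumptions; a production setup would also need an internet gateway, route tables, and at least two subnets for the ALB):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "terraform-example-vpc"
  }
}

resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "terraform-example-subnet-a"
  }
}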

Step 6: Create an Application Load Balancer (ALB)

An Application Load Balancer (ALB) distributes incoming traffic across multiple targets to improve availability and performance. It operates at the application layer (layer 7), scales automatically with demand, and can handle routing rules and TLS termination so that the instances behind it don’t have to.

resource "aws_lb" "example" {
name = "terraform-asg-example"
load_balancer_type = "application"
subnets = data.aws_subnets.default.ids
security_groups = [aws_security_group.alb.id]
}

resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.example.arn
port = 80
protocol = "HTTP"

# By default, return a simple 404 page
default_action {
type = "fixed-response"

fixed_response {
content_type = "text/plain"
message_body = "404: page not found"
status_code = 404
}
}
}

resource "aws_security_group" "alb" {
name = "terraform-example-alb"
# Allow inbound HTTP requests
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

# Allow all outbound requests
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_lb_target_group" "asg" {
name = "terraform-asg-example"
port = var.server_port
protocol = "HTTP"
vpc_id = data.aws_vpc.default.id

health_check {
path = "/"
protocol = "HTTP"
matcher = "200"
interval = 15
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
}

resource "aws_lb_listener_rule" "asg" {
listener_arn = aws_lb_listener.http.arn
priority = 100

condition {
path_pattern {
values = ["*"]
}
}

action {
type = "forward"
target_group_arn = aws_lb_target_group.asg.arn
}
}

output "alb_dns_name" {
value = aws_lb.example.dns_name
description = "The domain name of the load balancer"
}

Code Explanation:

This code provisions an AWS Application Load Balancer (ALB) with an HTTP listener, a target group, and a listener rule. It also sets up a security group and defines an output variable for the ALB’s domain name.

  • The ALB is named “terraform-asg-example” and is configured as an application load balancer.
  • The ALB’s HTTP listener listens on port 80 and returns a fixed 404 response by default.
  • The ALB is associated with a security group that allows inbound HTTP requests on port 80 and allows all outbound requests.
  • The target group is named “terraform-asg-example” and listens on a variable-defined server port for HTTP protocol. It also has health checks configured.
  • The listener rule, with a priority of 100, matches any path and forwards traffic to the target group.
  • The output variable “alb_dns_name” provides the domain name of the ALB.

This code essentially creates a load balancer that distributes traffic to a group of instances or services based on the defined rules and configurations.
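The listener above serves plain HTTP on port 80. If you also want the ALB to terminate TLS, you could add an HTTPS listener along these lines (a sketch only: var.certificate_arn is a hypothetical variable that would have to point at an ACM certificate you own, and it is not part of the original repository):

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.example.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.certificate_arn  # hypothetical ACM certificate ARN

  # Forward HTTPS traffic to the same target group used by the HTTP listener rule
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.asg.arn
  }
}

For this to work you would also need to allow inbound traffic on port 443 in the aws_security_group.alb resource.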

Step 7: Run the Terraform Workflow

The fundamental process of using Terraform involves three main stages:

  1. Writing: Creating infrastructure by defining it as code.
  2. Planning: Previewing the anticipated changes before applying them.
  3. Applying: Provisioning consistent and replicable infrastructure.

The Terraform documentation gives a detailed explanation of the whole workflow.

  1. Terraform init:
    terraform init is a command that prepares a Terraform project for use. It downloads and installs the necessary provider plugins, initializes the backend for state management, and sets up the project directory. It should be run before executing any other Terraform commands.

In your terminal, execute the following command:

terraform init
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.4.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

............
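Which provider plugins init downloads is determined by the provider requirements in your configuration. If the cloned repository does not already pin one, an illustrative version constraint for the AWS provider (matching the v5.4.0 shown in the output above) looks like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"  # illustrative constraint; adjust to the version you want to pin
    }
  }
}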

2. Terraform plan

terraform plan is a command that creates an execution plan for applying changes to infrastructure. It analyzes the Terraform configuration files and the current state of the infrastructure to determine what actions will be taken when terraform apply is run. In your terminal, execute the following command:

terraform plan
data.aws_vpc.default: Reading...
data.aws_vpc.default: Read complete after 8s [id=vpc-0166ae7f4973626f3]
data.aws_subnets.default: Reading...
data.aws_subnets.default: Read complete after 1s [id=us-east-1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

# aws_autoscaling_group.example will be created
+ resource "aws_autoscaling_group" "example" {
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ default_cooldown = (known after apply)
+ desired_capacity = (known after apply)
+ force_delete = false
+ force_delete_warm_pool

...............

3. Terraform apply:

terraform apply is a command used to apply changes to your infrastructure based on the Terraform configuration. It reads the configuration files, creates or modifies the necessary resources, and updates the state file to reflect the new state of the infrastructure. The apply command prompts for confirmation before making any changes and provides a summary of the actions that will be performed. It is a crucial step in the Terraform workflow to deploy and manage infrastructure in a consistent and reproducible manner. In your terminal, execute the following command:

terraform apply
data.aws_vpc.default: Reading...
data.aws_vpc.default: Read complete after 3s [id=vpc-0166ae7f4973626f3]
data.aws_subnets.default: Reading...
data.aws_subnets.default: Read complete after 1s [id=us-east-1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

# aws_autoscaling_group.example will be created
+ resource "aws_autoscaling_group" "example" {
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ default_cooldown = (known after apply)
+ desired_capacity = (known after apply)
+ force_delete = false

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Outputs:

alb_dns_name = "terraform-asg-example-26019589.us-east-1.elb.amazonaws.com"

Step 8: Verification of Resources

In the AWS Management Console, navigate to the EC2 dashboard and verify that two instances were launched by the Auto Scaling group (ASG), matching the minimum size defined in the configuration.

EC2 instances created by the autoscaling group (ASG)

Also search for the target group and the load balancer in the console to confirm that both were created.

Step 9: Test the ALB DNS Name

Enter the ALB DNS name from the Terraform output into your browser:

Outputs:

alb_dns_name = "terraform-asg-example-26019589.us-east-1.elb.amazonaws.com"

Congratulations! You have successfully crafted a resilient cloud infrastructure with an emphasis on security, high availability, and fault tolerance.

Final Step: Destroy infrastructure

Run the following command to tear down all the resources previously provisioned with Terraform:

terraform destroy

You will be prompted to confirm the destruction of the infrastructure; type in ‘yes’ to proceed.

The End.

Chidubem Chinwuba is a dedicated Cloud/DevOps Engineer with a background in Pharmacy, holding a Bachelor’s degree. He has a deep passion for technology and its transformative potential across industries, along with strong attention to detail and a problem-solving mindset. Chidubem is driven by his aspiration to make a meaningful impact in the Cloud/DevOps domain, and he is excited to continue his professional growth and contribute to projects that shape the future of technology.
