Auto-Scaling Private EC2 Instances with Terraform

Adam Leonard · Published in Nerd For Tech · 8 min read · Mar 5, 2023

Terraform Infrastructure Objectives

Create

  1. Custom VPC with 2 public and 2 private subnets
  2. Security group in the public subnets that allows traffic from the internet, associated with the Application Load Balancer, along with an Internet Gateway and a NAT Gateway
  3. Auto-Scaling Group of EC2 instances in the private subnets running the Apache web server, allowing traffic only from the Application Load Balancer

Check

  1. Terminate one of the EC2 instances to verify the Auto-Scaling Group is working properly
  2. Output the public DNS name of the Application Load Balancer to verify you can reach the Apache web server running in the private subnets

Prerequisites

🟣Preferred IDE with Terraform and the AWS CLI installed

🟣AWS Free-Tier Account

🟣Basic Linux and Terraform Knowledge

Step 1: Specify our Providers and Variables

Specify the provider information needed for this deployment. For this example, all we need to include is the required Terraform version and the AWS provider, along with the region we will be working in.

Include data blocks to retrieve the available Availability Zones and the current region for use in main.tf.

provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  required_version = "~> 1.3.0"
}

provider "aws" {
  region = var.aws_region
}

#Retrieve the list of AZs in the current AWS region
data "aws_availability_zones" "available" {}
data "aws_region" "current" {}

Variables capture deployment-specific values so the code stays reusable. The file below includes the specifics for each resource block; point it at different values and the same configuration can be deployed elsewhere.

For this example we are using Ubuntu 20.04 and t2.micro.

#Variables

#VPC Variables
variable "aws_region" {
  description = "Region we deploy resources"
  type        = string
  default     = "us-east-1"
}

variable "vpc_name" {
  type    = string
  default = "new_vpc"
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  description = "Public subnet CIDR blocks"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidr" {
  description = "Private subnet CIDR blocks"
  type        = list(string)
  default     = ["10.0.3.0/24", "10.0.4.0/24"]
}

#EC2 Variables
variable "ami" {
  description = "AMI of EC2 instance"
  type        = string
  default     = "ami-09cd747c78a9add63"
}

variable "instance_type" {
  description = "Size of EC2 instances"
  type        = string
  default     = "t2.micro"
}

variable "user_data" {
  description = "Bootstrap script for Apache"
  type        = string
  default     = <<EOF
#!/bin/bash

# Install Apache
sudo apt update -y
sudo apt upgrade -y
sudo apt install -y apache2

sudo ufw allow 'Apache'

# Start Apache Service
sudo systemctl enable apache2
sudo systemctl start apache2

EOF
}
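If you want different values without editing variables.tf, a terraform.tfvars file can override the defaults. This is a hypothetical example; the region and CIDR values shown are placeholders you would substitute for your own.

```hcl
# terraform.tfvars — example overrides (hypothetical values)
# Note: the default AMI ID is region-specific; a different region
# needs its own Ubuntu 20.04 AMI ID as well.
aws_region          = "us-west-2"
vpc_cidr            = "10.1.0.0/16"
public_subnet_cidr  = ["10.1.1.0/24", "10.1.2.0/24"]
private_subnet_cidr = ["10.1.3.0/24", "10.1.4.0/24"]
instance_type       = "t2.micro"
```

Terraform loads terraform.tfvars automatically during plan and apply, so no extra flags are needed.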

Step 2: Create VPC and Resources

The VPC resources make up the largest part of most infrastructure. The order in which the resources are declared can vary; another plus of Terraform is that, until deployment, the layout of an initial configuration can usually follow the individual's preference.

Start with the new VPC CIDR block and name.

Then create 2 public subnets and 2 private subnets. The CIDR information is declared in the variables.tf file; not hard-coding it helps keep the file reusable.

Both sets of subnets use the data block specified in provider.tf to pull Availability Zone information.

main.tf

resource "aws_vpc" "vpc" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = "new-vpc"
  }
}

#Deploy Subnets
resource "aws_subnet" "public_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.public_subnet_cidr[count.index]
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available.names[count.index]
}

resource "aws_subnet" "private_subnet" {
  count             = 2
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_subnet_cidr[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

Create public and private route tables to associate with each tier of subnets. The public route table sends internet-bound traffic to the Internet Gateway; the private route table sends it to the NAT Gateway, which in turn reaches the internet through the Internet Gateway.

#Route table for public subnets
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.internet_gateway.id
  }
}

resource "aws_route_table_association" "public_route_assoc" {
  count          = 2
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public_route_table.id
}

#Route table for private subnets and associate with NAT gateway
resource "aws_route_table" "private_route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_gateway.id
  }
}

resource "aws_route_table_association" "private_route_assoc" {
  count          = 2
  subnet_id      = aws_subnet.private_subnet[count.index].id
  route_table_id = aws_route_table.private_route_table.id
}

Create the Internet Gateway, an Elastic IP for the NAT Gateway, and the NAT Gateway itself in the first public subnet.

resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "new_igw"
  }
}

resource "aws_eip" "elastic_ip" {
  vpc        = true
  depends_on = [aws_internet_gateway.internet_gateway]

  tags = {
    Name = "igw_eip"
  }
}

#Create NAT gateway in the first public subnet
resource "aws_nat_gateway" "nat_gateway" {
  allocation_id     = aws_eip.elastic_ip.id
  connectivity_type = "public"
  subnet_id         = aws_subnet.public_subnet[0].id
}

Step 3: Security Group, Auto-Scaling, and Load Balancers

First, create the security group for the Application Load Balancer to receive internet traffic. This will be the only means for the EC2 instances in the private subnets to be reached from the internet.

Next, create the security group for the Auto-Scaling Group. The only inbound traffic this group accepts comes from the Application Load Balancer.

## Security Group Resources

resource "aws_security_group" "alb_security_group" {
  name        = "alb-security-group"
  description = "ALB Security Group"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description = "HTTP from Internet"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "asg_security_group" {
  name        = "asg-security-group"
  description = "ASG Security Group"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description     = "HTTP from ALB"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_security_group.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Now create a Launch Template for the Auto-Scaling Group. Include the EC2 image and instance size needed for deployment, associate the template with the security group created for it, and add the user data script to install and start the Apache web server.

The Auto-Scaling Group will launch with 2 instances, which is also our minimum and desired capacity.

## Launch Template and Auto-Scaling Group

resource "aws_launch_template" "launch_template" {
  name          = "aws-launch-template"
  image_id      = var.ami
  instance_type = var.instance_type

  network_interfaces {
    device_index    = 0
    security_groups = [aws_security_group.asg_security_group.id]
  }

  user_data = base64encode(var.user_data)

  tags = {
    Name = "asg-ec2-template"
  }
}



resource "aws_autoscaling_group" "auto_scaling_group" {
  name                = "ec2-asg"
  desired_capacity    = 2
  max_size            = 5
  min_size            = 2
  vpc_zone_identifier = aws_subnet.private_subnet[*].id
  target_group_arns   = [aws_lb_target_group.lb_target_group.arn]

  launch_template {
    id      = aws_launch_template.launch_template.id
    version = aws_launch_template.launch_template.latest_version
  }
}
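The group can run between 2 and 5 instances, but nothing in this walkthrough triggers scaling on its own. If you want it to react to load, a target-tracking policy is one option. This is a sketch under the assumption that average CPU is a reasonable signal; the policy name and the 50% target are illustrative values, not part of the original walkthrough.

```hcl
# Hypothetical addition: scale the ASG on average CPU utilization.
# The policy name and 50% target are example values.
resource "aws_autoscaling_policy" "cpu_target_tracking" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.auto_scaling_group.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```

With target tracking, AWS adds or removes instances within the min/max bounds to hold the metric near the target, so no separate scale-in policy is needed.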

Input all of the information needed for the Application Load Balancer, including where it will receive traffic from (the internet) and where it will send traffic to (the Auto-Scaling Group).

# ALB Info

resource "aws_lb" "alb" {
  name               = "public-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_security_group.id]
  subnets            = [for i in aws_subnet.public_subnet : i.id]
}

resource "aws_lb_target_group" "lb_target_group" {
  name     = "lb-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.vpc.id
}

resource "aws_lb_listener" "alb_listener" {
  load_balancer_arn = aws_lb.alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.lb_target_group.arn
  }
}
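The target group above relies on the AWS default health check settings. If you want finer control over how quickly an unhealthy Apache instance is taken out of rotation, a health_check block can be added inside the aws_lb_target_group resource. This is a hypothetical fragment; the path and thresholds are example values, not from the original walkthrough.

```hcl
# Hypothetical health_check block to add inside lb_target_group;
# path and thresholds are example values.
health_check {
  path                = "/"
  protocol            = "HTTP"
  healthy_threshold   = 2
  unhealthy_threshold = 3
  interval            = 15
  timeout             = 5
  matcher             = "200"
}
```

With these settings the ALB probes / every 15 seconds and marks a target unhealthy after 3 consecutive failures.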

Finally, add an output block to display the Public URL of the ALB to check our connectivity.

output "alb_public_url" {
  description = "Public URL for Application Load Balancer"
  value       = aws_lb.alb.dns_name
}

Step 4: Deployment Time & Checks

These commands should be run throughout the build to ensure resources are working as you create them. This helps avoid facing a large pile of errors all at once when you are ready to deploy.

First, initialize (init) Terraform. Init should be run any time providers or modules are added or removed; if I am unsure whether a change matters, I simply run it again while building.

validate does just what it says for the code, and fmt fixes formatting issues. Both can also be run as many times as needed during the build.

terraform init
terraform fmt
terraform validate

If we are good to this point, we can run the plan command and see what resources will be created.

terraform plan

Now run the apply command to create our resources. A yes prompt will appear to make sure we are ready to deploy.

terraform apply

If everything goes as planned, you will see the Apply complete message along with the URL output we specified.

Even better, putting that URL in a browser should bring up the default Apache welcome page.

Test the Auto-Scaling Group by running the following command in the CLI.

aws ec2 terminate-instances --region <region> --instance-ids <instance id>
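To find an instance ID to terminate, and to watch the replacement appear afterward, you can list the instances the group currently manages. The group name ec2-asg matches the ASG defined earlier, and <region> is a placeholder as above.

```shell
# List instance IDs, health, and lifecycle state for the ASG created above
aws autoscaling describe-auto-scaling-groups \
  --region <region> \
  --auto-scaling-group-names ec2-asg \
  --query "AutoScalingGroups[0].Instances[].[InstanceId,HealthStatus,LifecycleState]" \
  --output table
```

Running this again a minute or two after terminating an instance should show the replacement in a Pending or InService state.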

Instance ending in ‘837d’ is shut down, and a new instance immediately spins up in its place.

This was a long walkthrough, but hopefully the process was broken down simply enough to follow along.

I enjoy Terraform since the error messages are easy to read and, in my opinion, easy to remedy.

Thank you so much and please give me a follow and connect with me on Linkedin.

https://www.linkedin.com/in/adamcleonard/

With everything completed, we can destroy all the resources and call this one a wrap!

terraform destroy
