Auto Scaling with Terraform

A Beginner’s Guide to Deploying an Auto Scaling Group

Melissa (Mel) Foster
Women in Technology
11 min read · Jul 10, 2023



If you have stumbled upon this article, I am so glad you are here! I am exploring Terraform and continuing to grow my passion for it. This is my second project, and the focus today will be deploying an auto scaling group along with a custom Apache webpage. This is a beginner’s project, and it is helpful to have a little familiarity with Terraform and the Terraform workflow. I will break the project down into achievable steps and hope you discover a desire to grow your knowledge of Terraform too!

A little background //

Terraform is an open-source infrastructure as code tool that allows DevOps Engineers to define both cloud and on-prem resources in human-readable configuration files which can be versioned, reused, and shared. It follows a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.

Terraform Workflow: Write → Plan → Apply

Terraform Basic Commands:

These are the basic commands Terraform uses during its lifecycle:

terraform init initializes the cwd (current working directory) containing Terraform configuration files. It is the first command that should be run after writing a new Terraform configuration. Note: it is safe to run this command multiple times.

terraform fmt checks formatting and corrects the spacing to the canonical HCL format.

terraform fmt -recursive will check formatting for all files in the current working directory and its subdirectories.

terraform validate verifies that the configuration is syntactically valid and internally consistent.

terraform plan creates an execution plan showing all the resources which will be created once the Terraform configuration is applied.

terraform apply executes the actions proposed in the Terraform plan to create, update, or destroy the infrastructure.

terraform destroy will destroy all remote objects managed by the particular Terraform configuration.

Note: We will be working on launching an Apache HTTP Server. Apache is free, open-source, cross-platform web server software. It is most commonly used on Linux, but is supported on a wide variety of Unix-like systems.

Scenario //

An e-commerce company needs to handle a surge in traffic during the holiday season. The company wants to ensure that their website remains available and responsive to customers even during high traffic periods.

Objective //

  • Create an S3 bucket and set it as your remote backend.
  • Launch an Auto Scaling group that spans 2 subnets in your default VPC.
  • Create a security group that allows traffic from the internet and associate it with the Auto Scaling group instances.
  • Include a script in your user data to launch an Apache webserver. The Auto Scaling group should have a min of 2 and max of 5.
  • To verify everything is working, check the public IP addresses of the two instances. Manually terminate one of the instances to verify that another one spins up to meet the minimum requirement of 2 instances.

To follow along with this project you will need //

  • Access to AWS
  • A configured AWS Cloud9 environment with the AWS CLI
  • Default VPC & default subnets
  • An optional GitHub account
  • Attention to detail

Setting up Cloud9 for Success//

Note: If you haven’t set up an AWS Cloud9 Environment, you can refer to this article, and follow the steps to create your environment.

If you have not previously configured your Cloud9, this is the best place to start. This will ensure that we do not need to hardcode any sensitive information (IAM access key or secret access key).

  • Configure AWS Credentials
  • Run the following commands separately with your information from the Cloud9 Terminal
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=

Note: If you misplaced your credentials or need to create new ones, you can follow the documentation linked here.

Add Optional GitHub Repo //

I always recommend pushing any and all code you create to GitHub. I will be continuing to build off a previously cloned repo in my established AWS Cloud9 environment. If you would like to do the same, take a moment to follow these steps.

  • Select Source Control from the left-hand menu
  • Select Clone Repo
  • Add your GitHub repo clone link
  • Create a new branch from the Source Control tab
    It is best practice to create a new branch to prevent merging files directly without verification/approval. (If you prefer the terminal, the equivalent Git commands are shown below.)
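If you would rather work from the Cloud9 terminal than the Source Control tab, the equivalent Git commands look something like this (the repo URL and branch name below are placeholders, swap in your own):

git clone https://github.com/<your-username>/<your-repo>.git
cd <your-repo>
git checkout -b wk21-terraform-asg   #example branch name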

Creating a Working Directory //

If you have been following along with my previous Terraform projects, I highly recommend continuing the best practice of creating a new directory. This will help you stay organized. Example: wk21Terraform

mkdir <NAME YOUR DIRECTORY>
cd <NAME YOUR DIRECTORY>

Building our Infrastructure //

Once in the directory you would like to use for this project, begin by following the Terraform workflow and writing our files. We will be creating main.tf, providers.tf, terraform.tf, variables.tf, and script.sh.

Creating Files //

  • Create File → New Text File → Save As → providers.tf→ Save
  • Create File → New Text File → Save As → main.tf→ Save
  • Create File → New Text File → Save As → terraform.tf→ Save
  • Create File → New Text File → Save As → variables.tf→ Save
  • Create File → New Text File → Save As → script.sh→ Save
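If you prefer the terminal, you can create the same empty files in one command:

touch providers.tf main.tf terraform.tf variables.tf script.sh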

Note: Utilize the documentation in the Helpful Resources section to help build your infrastructure. I keep all files open and rotate through them as needed while building my infrastructure. In time, you will find a rhythm that works for you. For this walk-through I will try to break things down into sections to aid with understanding our objectives.

providers.tf //

  • Enter terraform source & version
  • Enter provider & region
#providers.tf for wk21

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.6.2"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

With our providers.tf complete, we will move onto building our main.tf.

Creating & Configuring S3 Bucket in main.tf //

  • Create our S3 Bucket
    Note: AWS S3 Bucket names are globally unique
#main.tf for wk21

#Create S3 bucket PRIVATE BY DEFAULT
resource "aws_s3_bucket" "s3bucket-week21-melfoster" {
  bucket = "s3bucket-week21-melfoster"

  tags = {
    Name        = "Wk21 S3 Bucket"
    Environment = "development"
  }
}

#Enable versioning
resource "aws_s3_bucket_versioning" "s3bucket-week21-melfoster" {
  bucket = aws_s3_bucket.s3bucket-week21-melfoster.id

  versioning_configuration {
    status = "Enabled"
  }
}

#Block public access to the S3 bucket created above
resource "aws_s3_bucket_public_access_block" "s3bucket-week21-melfoster-accessblock" {
  bucket = aws_s3_bucket.s3bucket-week21-melfoster.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
  • Run the following workflow commands
terraform init

terraform fmt

terraform validate

terraform plan
  • Apply
terraform apply -auto-approve
  • Navigate to S3 in your AWS Console
  • Verify the S3 bucket was created. Access should show "Bucket and objects not public."

Since we have our S3 bucket successfully created, we want to configure it as our remote backend.

  • Configure created S3 Bucket as our backend using our terraform.tf file
  • Enter code into terraform.tf
terraform {
  backend "s3" {
    bucket = "s3bucket-week21-melfoster"
    key    = "State-files/terraform.tfstate"
    region = "us-east-1"
  }
}
  • Ctrl + S to save
  • Run the terraform init command and notice the prompt. Entering "yes" will set the S3 bucket we created as our backend
  • Enter "yes"
  • Navigate back to your S3 dashboard
  • Open your S3 bucket to verify the State-files folder and the terraform.tfstate file inside it
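If you would rather verify from the Cloud9 terminal, you can list the bucket contents with the AWS CLI (this uses the bucket name from this project; swap in your own):

aws s3 ls s3://s3bucket-week21-melfoster --recursive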

Awesome job so far!! We now have our S3 bucket successfully set up as our backend. Let’s keep building our main.tf file by setting up our VPC subnets, ASG, and security group.

Define VPC Subnets in main.tf //

# Defining subnets from my default VPC
data "aws_subnets" "selected_subnets" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
  filter {
    name   = "subnet-id"
    values = [var.subnet_id_1, var.subnet_id_2]
  }
}
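To confirm the data source actually picked up both subnets, you can add an optional output block (the output name here is my own choice) and check its value after terraform apply:

# Optional: print the subnet IDs the data source resolved
output "selected_subnet_ids" {
  value = data.aws_subnets.selected_subnets.ids
}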

Create a Launch Template in main.tf //

  • Create launch template
    This will allow us to set variables to launch our EC2s
# Create Launch Template Resource Block
resource "aws_launch_template" "asg_ec2_template" {
  name                   = var.environment
  image_id               = var.instance_ami
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.wk21_security_group.id]
  user_data              = filebase64("script.sh")

  tags = {
    Name = "wk21_instance"
  }
}

Create an Auto Scaling Group Resource Block in main.tf //

  • Configure min & max
  • Configure VPC Zone to Subnets
  • Configure EC2 Name
    Note: This is the part I got hung up on a little bit, as I wanted it to work just like it did in my last Terraform project. However, after several attempts, I found a solution in the Terraform documentation and had success. Remember, running into errors is just part of the process. Doing a deep dive on documentation and troubleshooting will only aid in your success as a DevOps Engineer.
# Create ASG Resource Block
resource "aws_autoscaling_group" "wk21asg" {
  name                = var.environment
  vpc_zone_identifier = data.aws_subnets.selected_subnets.ids
  desired_capacity    = 2
  max_size            = 5
  min_size            = 2

  tag {
    key                 = "Name"
    value               = "wk21EC2_Foster"
    propagate_at_launch = true
  }

  launch_template {
    id      = aws_launch_template.asg_ec2_template.id
    version = aws_launch_template.asg_ec2_template.latest_version
  }
}
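Our objective only asks for a fixed minimum and maximum, but if you want the group to actually scale out during a traffic surge like the one in our scenario, you could attach a scaling policy. This is an optional sketch, not part of the project requirements; the policy name is my own choice, and it scales on average CPU utilization:

# Optional: target tracking policy that keeps average CPU around 50%
resource "aws_autoscaling_policy" "wk21_cpu_target" {
  name                   = "wk21-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.wk21asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50
  }
}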
  • Configure our Security Group
# Create Security Group Block
resource "aws_security_group" "wk21_security_group" {
  name        = "wk21_security_group"
  description = "Allow web traffic"
  vpc_id      = var.vpc_id

  ingress {
    description = "Allow port 80"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow port 443"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.environment
  }
}

Note: Build your variables.tf file as you go, and run the Terraform workflow commands to check formatting and validate syntax.

variables.tf //

I mentioned at the beginning of building our infrastructure that I work with all my files open in my Cloud9 environment, building the code as I go in whichever file it belongs to. Your variables.tf should include:

  • Define Region
  • Define Availability Zones
  • Define Environment Variables
  • Define Default VPC
  • Define Instance AMI — I used Ubuntu
  • Define Instance Type
#variables.tf for wk21 main.tf

#AWS Region Variable
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

#Availability Zones
variable "availability_zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

#Environment variable
variable "environment" {
  description = "Environment name for deployment"
  type        = string
  default     = "wk21Project"
}

variable "vpc_id" {
  description = "ID of the VPC"
  default     = "ENTER DEFAULT VPC"
}

variable "vpc_cidr_block" {
  type    = string
  default = "ENTER DEFAULT VPC CIDR"
}

variable "subnet_cidr_block_1" {
  type    = string
  default = "ENTER DEFAULT SUBNET 1 CIDR"
}

variable "subnet_cidr_block_2" {
  type    = string
  default = "ENTER DEFAULT SUBNET 2 CIDR"
}

variable "subnet_id_1" {
  type    = string
  default = "ENTER YOUR PUBLIC SUBNET 1"
}

variable "subnet_id_2" {
  type    = string
  default = "ENTER YOUR DEFAULT PUBLIC SUBNET 2"
}

variable "instance_ami" {
  type        = string
  description = "AMI ID for the Ubuntu EC2 instance"
  default     = "ami-053b0d53c279acc90"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

script.sh //

Lastly, we will create our script.sh to bootstrap our EC2s at launch.

#!/bin/bash
sudo apt-get update -y
sudo apt-get install -y apache2
sudo systemctl start apache2
sudo systemctl enable apache2

echo "<html><head><title> Apache 2023 Terraform </title>
</head>
<body>
<p> Welcome Green Team!! WK21 Terraform ASG by Mel Foster 07/2023 </p>
</body>
</html>" >/var/www/html/index.html

This really reminded me of my previous project, where we launched a custom Apache webpage. I love seeing all the growth. Alright, are you ready to deploy?? We got this!

Deploy our Infrastructure //

It’s a big moment!! Let’s run our workflow commands once again.

terraform init

terraform fmt

terraform validate

terraform plan

terraform apply -auto-approve
Note: If you use the -auto-approve flag, Terraform skips the approval prompt and starts deploying the infrastructure right away. If you do not use it, you will be prompted to answer “yes.”
Success!! Note: I found a spelling error along the way and adjusted it; that is what the change in my output reflects.

Verify //

This is the bittersweet part. I always enjoy making sure things are running appropriately, but I get a little anxious each time. What if it doesn’t work? Well, you just try again! From our AWS Console we will verify all the things we created.

  • Verify the Security Group. You should see both your Cloud9 environment SG and the wk21 custom SG
  • Verify the Auto Scaling Group
  • Scroll down to verify the ASG is associated with our two subnets
  • Verify the EC2s
  • Verify the ASG is working by terminating one EC2. A new EC2 should begin launching to meet the minimum requirement of two instances (a CLI alternative follows this list)
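If you would rather terminate the instance from the Cloud9 terminal instead of the console, the AWS CLI can do it (the instance ID below is a placeholder; use one of your own instance IDs from the EC2 dashboard):

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0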

Lastly, we need to verify our webpage. My favorite part! It just always feels so good to see it live.

  • Verify our Apache webpage via the public IP of one of the EC2 instances
Note: I verified both EC2 IP addresses.
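You can paste the public IP into a browser, or check from the terminal with curl (replace the placeholder with one of your instances’ public IPs):

curl http://<EC2-PUBLIC-IP>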
Woo-hoo!! Feels good to complete another challenge!!

Optional GitHub //

As always, I highly recommend pushing all the code you create to GitHub. I have included a direct link to this project.


Clean Up Time //

Once again, it’s time to clean up and terminate what we no longer need.

  • Navigate to S3 on AWS Console
  • Select your bucket
  • Select Empty
  • Enter permanently delete
  • Select Empty
  • Navigate back to Cloud9 to run one last command from CLI
terraform destroy -auto-approve
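As an alternative to emptying the bucket in the console, you can also do it from the CLI before the destroy step (this uses this project’s bucket name; swap in your own):

aws s3 rm s3://s3bucket-week21-melfoster --recursive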

Tip //

Only use the -auto-approve flag when you are absolutely confident in what you are approving. You never want to destroy something you might still need or are still using.


Thank you so much for joining me today! If you were following along hands-on, I hope you were successful. Remember, running into errors is all part of the process. I always strive to look at failure as a stepping stone to success. Getting errors just means it’s time for troubleshooting, which results in you becoming a stronger DevOps Engineer! I look forward to sharing another project with you soon!
