Terraform: Using Local Modules to Create a Two-Tier Architecture

Dahmear Johnson
Published in Nerd For Tech
12 min read · Mar 9, 2023

In this project write-up, we will deploy a highly available two-tier architecture using a local module in Terraform. A module is a container for multiple resources that are used together. You can use modules to create lightweight abstractions, describing your infrastructure in terms of its architecture rather than directly in terms of physical objects. A module can be defined locally within a new directory or remotely using a repository hosted in the Terraform public/private registry, GitHub, Bitbucket, etc.

Any combination of resources and other constructs can be factored into a module. Still, overusing modules can make your overall Terraform configuration harder to understand and maintain. Therefore, Terraform administrators must assess their organizational needs to determine if modules are necessary for their infrastructure deployment.

For more information about Terraform Modules, I recommend reviewing official Terraform documentation.

Modules Overview — Configuration Language | Terraform | HashiCorp Developer
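As a quick illustration of the concept, a caller consumes a module through a `module` block. The example below is a generic sketch; the module name, `source` path, and input are placeholders, not the configuration we will build later in this write-up:

```hcl
# Generic shape of a module call (placeholder values)
module "network" {
  source = "./modules/network" # local path, registry address, or Git URL

  # Arguments here map to "variable" blocks declared inside the module
  vpc_cidr = "10.0.0.0/16"
}
```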

Without further ado, let’s deploy our two-tier architecture in AWS using a local Terraform module! 🚀

Objectives:

  • Create a highly available Two-Tier AWS architecture using a Local Terraform Module.
  • Create 2 Public Subnets for the Web Server Tier and 2 Private Subnets for the RDS Tier.
  • Configure routing and security groups for the Web Server and RDS Tier resources.
  • Deploy EC2 instances running Apache with custom web pages in each Public Web Tier subnet.
  • Create an Internet-facing Application Load Balancer targeting the web servers.
  • Deploy an RDS MySQL instance and a Standby instance (Multi-AZ) in the Private RDS Tier Subnets.

Prerequisites

  • Basic Terraform Knowledge
  • AWS account with Administrator Access permissions
  • AWS CLI installed and configured with your programmatic access credentials
  • Terraform installed (version ~> 1.3.0)

📝NOTE: I am using the WSL terminal in this demonstration, but you can follow along using any terminal supporting the above-mentioned prerequisites. ☝🏽

Step 1: Create Terraform Configuration Files

In this step, we will create our parent directory (root module), child-module sub-directory (child module), and Terraform configuration files.
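For reference, this is the directory layout we will end up with once all the files in this step are created:

```
two-tier/                 <- parent directory (root module)
├── main.tf
├── outputs.tf
├── variables.tf
└── child-module/         <- sub-directory (child module)
    ├── alb.tf
    ├── ec2.tf
    ├── outputs.tf
    ├── providers.tf
    ├── rds.tf
    ├── variables.tf
    └── vpc.tf
```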

Create directories.

mkdir -p ./two-tier/child-module

Change into the “child-module” sub-directory.

cd two-tier/child-module
Use “pwd” to confirm you are in your child-module sub-directory.

Create the following .tf files and copy and paste the contents below using a text editor of your choice.

alb.tf

################################################################################
# Security Group
################################################################################

resource "aws_security_group" "alb_security_group" {
  name        = "${var.env}-alb-security-group"
  description = "ALB Security Group"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description = "HTTP from Internet"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.env}-alb-security-group"
    Environment = var.env
  }
}

################################################################################
# Application Load Balancer
################################################################################

resource "aws_lb" "alb" {
  name               = "${var.env}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_security_group.id]
  subnets            = [for i in aws_subnet.public_subnet : i.id]

  tags = {
    Name        = "${var.env}-alb"
    Environment = var.env
  }
}

resource "aws_lb_target_group" "target_group" {
  name     = "${var.env}-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.vpc.id

  health_check {
    path    = "/"
    matcher = "200"
  }
}

resource "aws_lb_target_group_attachment" "target_group_attachment" {
  count = 2

  target_group_arn = aws_lb_target_group.target_group.arn
  target_id        = aws_instance.web_server[count.index].id
  port             = 80
}

resource "aws_lb_listener" "alb_listener" {
  load_balancer_arn = aws_lb.alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.target_group.arn
  }

  tags = {
    Name        = "${var.env}-alb-listener"
    Environment = var.env
  }
}

ec2.tf

################################################################################
# Security Group
################################################################################

# Obtain User Local Public IP
data "external" "myipaddr" {
  program = ["bash", "-c", "curl -s 'https://ipinfo.io/json'"]
}

resource "aws_security_group" "ec2_security_group" {
  name        = "${var.env}-ec2-security-group"
  description = "Security Group for EC2 Web Servers"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description = "Allow SSH from MY Public IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${data.external.myipaddr.result.ip}/32"]
  }

  ingress {
    description     = "HTTP from ALB Security Group"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_security_group.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.env}-ec2-security-group"
    Environment = var.env
  }
}

################################################################################
# SSH Keys
################################################################################

resource "tls_private_key" "generated" {
  algorithm = "RSA"
}

resource "local_file" "private_key_pem" {
  content         = tls_private_key.generated.private_key_pem
  filename        = "${var.ssh_key}.pem"
  file_permission = "0400"
}

resource "aws_key_pair" "generated" {
  key_name   = var.ssh_key
  public_key = tls_private_key.generated.public_key_openssh
}

################################################################################
# EC2
################################################################################

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"]
}

resource "aws_instance" "web_server" {
  count                  = 2
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public_subnet[count.index].id
  vpc_security_group_ids = [aws_security_group.ec2_security_group.id]
  user_data              = var.user_data
  key_name               = var.ssh_key

  tags = {
    Name        = "${var.env}-${var.ec2_name}-${count.index}"
    Environment = var.env
  }
}

outputs.tf

################################################################################
# Outputs
################################################################################

output "web_server_public_ip" {
  description = "Public IP of Web Servers"
  value       = aws_instance.web_server[*].public_ip
}

output "ec2_ssh_access" {
  description = "SSH Remote Access to the first EC2 instance"
  value       = "ssh -i ${aws_key_pair.generated.key_name}.pem ubuntu@${aws_instance.web_server[0].public_ip}"
}

output "db_name" {
  description = "Database Name"
  value       = aws_db_instance.db_instance.db_name
}

output "db_address" {
  description = "The hostname of the RDS instance"
  value       = aws_db_instance.db_instance.address
}

output "connect_to_database" {
  description = "Command to connect to database from EC2"
  value       = "mysql --host=${aws_db_instance.db_instance.address} --user=${var.db_username} -p"
}

output "alb_public_url" {
  description = "Public URL for Application Load Balancer"
  value       = aws_lb.alb.dns_name
}

providers.tf

################################################################################
# Terraform and Provider Blocks
################################################################################

terraform {
  required_providers {
    aws = {
      version = "~> 4.55"
      source  = "hashicorp/aws"
    }
  }

  required_version = "~> 1.3.0"
}

provider "aws" {
  region = var.aws_region
}

rds.tf

################################################################################
# Security Group
################################################################################

resource "aws_security_group" "db_security_group" {
  name        = "${var.env}-db-security-group"
  description = "Security Group for RDS instance"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description     = "MySQL traffic from Web Servers"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.ec2_security_group.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.env}-db-security-group"
    Environment = var.env
  }
}

################################################################################
# DB Subnet Group
################################################################################

resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "${var.env}-db-subnet-group"
  subnet_ids = aws_subnet.private_subnet[*].id

  tags = {
    Name        = "${var.env}-db-subnet-group"
    Environment = var.env
  }
}

################################################################################
# RDS
################################################################################

resource "aws_db_instance" "db_instance" {
  allocated_storage      = var.db_allocated_storage
  db_name                = var.db_name
  engine                 = var.db_engine
  engine_version         = var.db_engine_version
  instance_class         = var.db_instance_class
  username               = var.db_username
  password               = var.db_password
  skip_final_snapshot    = true
  multi_az               = true
  db_subnet_group_name   = aws_db_subnet_group.db_subnet_group.name
  vpc_security_group_ids = [aws_security_group.db_security_group.id]

  tags = {
    Name        = "${var.env}-db-instance"
    Environment = var.env
  }
}

variables.tf

################################################################################
# Variables
################################################################################

variable "aws_region" {
  description = "AWS deployment region"
  type        = string
}

variable "env" {
  description = "Environment Name"
  type        = string
}

variable "vpc_cidr_block" {
  description = "VPC IPv4 CIDR block"
  type        = string
}

variable "public_subnet_cidr_block" {
  description = "Public Subnet CIDR blocks"
  type        = list(string)
}

variable "private_subnet_cidr_block" {
  description = "Private Subnet CIDR blocks"
  type        = list(string)
}

variable "ec2_name" {
  description = "EC2 Web Server name"
  type        = string
}

variable "ssh_key" {
  description = "ssh key name"
  type        = string
}

variable "user_data" {
  description = "User Data Shell script for Apache installation"
  type        = string
  default     = <<EOF
#!/bin/bash

# Install Apache on Ubuntu

sudo apt update -y
sudo apt install -y apache2
sudo apt install -y mysql-client

sudo systemctl start apache2
sudo systemctl enable apache2
EC2AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo '<center><h1>This Amazon EC2 instance is located in Availability Zone: AZID </h1></center>' > /var/www/html/index.txt
sed "s/AZID/$EC2AZ/" /var/www/html/index.txt > /var/www/html/index.html
EOF
}

variable "db_allocated_storage" {
  description = "The allocated storage in gibibytes"
  type        = number
  default     = 10
}

variable "db_name" {
  description = "The database name"
  type        = string
}

variable "db_engine" {
  description = "The database engine to use"
  type        = string
  default     = "mysql"
}

variable "db_engine_version" {
  description = "The database engine version to use"
  type        = string
  default     = "8.0.32"
}

variable "db_instance_class" {
  description = "The instance type of the RDS instance"
  type        = string
  default     = "db.t2.small"
}

variable "db_username" {
  description = "The master username for the database"
  type        = string
}

variable "db_password" {
  description = "Password for the master DB user"
  type        = string
  sensitive   = true
}

vpc.tf

################################################################################
# VPC
################################################################################

resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr_block
  enable_dns_hostnames = true

  tags = {
    Name        = "${var.env}-vpc"
    Environment = var.env
  }
}

################################################################################
# Public and Private Subnets
################################################################################

data "aws_availability_zones" "az" {
  state = "available"
}

resource "aws_subnet" "public_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.public_subnet_cidr_block[count.index]
  availability_zone       = data.aws_availability_zones.az.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name        = join("-", ["${var.env}-public-subnet", data.aws_availability_zones.az.names[count.index]])
    Environment = var.env
  }
}

resource "aws_subnet" "private_subnet" {
  count             = 2
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_subnet_cidr_block[count.index]
  availability_zone = data.aws_availability_zones.az.names[count.index]

  tags = {
    Name        = join("-", ["${var.env}-private-subnet", data.aws_availability_zones.az.names[count.index]])
    Environment = var.env
  }
}

################################################################################
# Internet and NAT Gateway
################################################################################

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name        = "${var.env}-internet-gw"
    Environment = var.env
  }
}

resource "aws_eip" "elastic_ip" {
  count = 2

  tags = {
    Name        = "${var.env}-elastic-ip-${count.index}"
    Environment = var.env
  }
}

resource "aws_nat_gateway" "ngw" {
  count             = 2
  allocation_id     = aws_eip.elastic_ip[count.index].id
  connectivity_type = "public"
  subnet_id         = aws_subnet.public_subnet[count.index].id

  tags = {
    Name        = "${var.env}-nat-gw-${count.index}"
    Environment = var.env
  }
}

################################################################################
# Route Tables
################################################################################

resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name        = "${var.env}-public-route-table"
    Environment = var.env
  }
}

resource "aws_route_table" "private_route_table" {
  count  = 2
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.ngw[count.index].id
  }

  tags = {
    Name        = "${var.env}-private-route-table-${count.index}"
    Environment = var.env
  }
}

resource "aws_route_table_association" "public_rt_assoc" {
  count          = 2
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public_route_table.id
}

resource "aws_route_table_association" "private_rt_assoc" {
  count          = 2
  subnet_id      = aws_subnet.private_subnet[count.index].id
  route_table_id = aws_route_table.private_route_table[count.index].id
}

You should have the following files in your child-module directory.

List directory contents with “ls” command.

Now, let's switch to the parent directory, “two-tier”.

cd ..

Create the following tf files and copy and paste the contents below.

main.tf

################################################################################
# Module
################################################################################

module "create_two_tier_aws" {
  source = "./child-module"

  env                       = var.env
  aws_region                = var.aws_region
  vpc_cidr_block            = "172.16.0.0/16"
  public_subnet_cidr_block  = ["172.16.0.0/24", "172.16.1.0/24"]
  private_subnet_cidr_block = ["172.16.10.0/24", "172.16.11.0/24"]

  ec2_name = var.ec2_name
  ssh_key  = var.ssh_key

  db_name     = var.db_name
  db_username = var.db_username
  db_password = var.db_password
}

📝NOTE: I have chosen to hard code the VPC and public/private subnet CIDRs to show the flexibility between using variables from the variables.tf file and hard-coded values. 🙃
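If you would rather derive the subnet ranges from the VPC CIDR than hard-code them, Terraform's built-in `cidrsubnet()` function can compute them. This is an optional alternative sketch, not part of the deployment in this write-up:

```hcl
# Hypothetical alternative: carve /24 subnets out of the /16 VPC block
locals {
  vpc_cidr = "172.16.0.0/16"

  # cidrsubnet(prefix, newbits, netnum): /16 + 8 new bits => /24 networks
  public_subnet_cidr_block  = [cidrsubnet(local.vpc_cidr, 8, 0), cidrsubnet(local.vpc_cidr, 8, 1)]   # 172.16.0.0/24, 172.16.1.0/24
  private_subnet_cidr_block = [cidrsubnet(local.vpc_cidr, 8, 10), cidrsubnet(local.vpc_cidr, 8, 11)] # 172.16.10.0/24, 172.16.11.0/24
}
```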

outputs.tf

################################################################################
# Outputs
################################################################################

output "web_server_public_ip" {
  description = "Public IP of Web Servers"
  value       = module.create_two_tier_aws.web_server_public_ip
}

output "ec2_ssh_access" {
  description = "Remote Access to EC2"
  value       = module.create_two_tier_aws.ec2_ssh_access
}

output "db_name" {
  description = "Database Name"
  value       = module.create_two_tier_aws.db_name
}

output "db_address" {
  description = "The hostname of the RDS instance"
  value       = module.create_two_tier_aws.db_address
}

output "alb_public_url" {
  description = "Public URL for Application Load Balancer"
  value       = module.create_two_tier_aws.alb_public_url
}

output "connect_to_database" {
  description = "Command to connect to database from EC2"
  value       = module.create_two_tier_aws.connect_to_database
}

variables.tf

################################################################################
# Variables
################################################################################

variable "env" {
  description = "Environment Name"
  type        = string
  default     = "terraform-demo"
}

variable "aws_region" {
  description = "AWS deployment region"
  type        = string
  default     = "us-east-1"
}

variable "ec2_name" {
  description = "EC2 Web Server name"
  type        = string
  default     = "Web-Server"
}

variable "ssh_key" {
  description = "ssh key name"
  type        = string
  default     = "MySSHKey"
}

variable "db_name" {
  description = "The database name"
  type        = string
  default     = "terraformdatabase1"
}

variable "db_username" {
  description = "The master username for the database"
  type        = string
  default     = "admin"
}

variable "db_password" {
  description = "Password for the master DB user"
  type        = string
  sensitive   = true
}

You should have the following files in your current working directory.


Since we have added quite a few configuration files, I would like to highlight a few critical topics.

Within the main.tf file in our parent directory, we use the module block to reference a child module and its contents in the child-module directory. We also pass input variables (e.g., var.env) defined in our variables.tf file to arguments used by the child module (e.g., env). These arguments are also input variables within the child module.

Many of our variables are configured with default values in this deployment, but some are not. We can override the input variables in the child module by passing the root module’s input variables as arguments within this module block.
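For example, although the child module defaults `db_engine_version` to "8.0.32", the root module could override that default from its module block. This is a hypothetical override, not one used in this deployment:

```hcl
module "create_two_tier_aws" {
  source = "./child-module"

  # Overrides the child module's default of "8.0.32"
  db_engine_version = "8.0.28"

  # ...remaining required arguments as shown in main.tf...
}
```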

Next, I want to highlight the outputs.tf files in both the parent and child-module directories. To retrieve outputs generated by resources in a child module, they must first be defined in the child module’s outputs.tf file. Then, we can reference those outputs in the root module using the syntax “module.<module name>.<child module output name>”.

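Putting the two together with the “alb_public_url” output from the files above, the child module declares the output and the root module re-exports it:

```hcl
# child-module/outputs.tf
output "alb_public_url" {
  description = "Public URL for Application Load Balancer"
  value       = aws_lb.alb.dns_name
}

# outputs.tf (root module)
output "alb_public_url" {
  description = "Public URL for Application Load Balancer"
  value       = module.create_two_tier_aws.alb_public_url
}
```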

Step 2: Terraform Plan

In this step, we will perform a few steps to prepare our environment to deploy resources using Terraform.

First, let’s initialize the working directory, which sets up the backend, installs the configured child module, and downloads the provider plugins used in this deployment.

terraform init

By executing the following, we can format our Terraform code and then validate it for any syntax errors.

terraform fmt -recursive
terraform validate

📝NOTE: “terraform fmt” won’t return any output if there aren’t any formatting issues. 😊

To stay in line with best practices, we will execute the “terraform plan” command to preview the changes Terraform plans to make on our behalf.

terraform plan

Step 3: Terraform Apply

Let’s deploy our environment!

Since we are deploying an RDS MySQL database as a part of this deployment, we must specify a database password with our terraform apply command. We could have hard-coded this value as a variable in variables.tf or within the module block, but… that didn’t quite sit well with me from a security perspective. 🤓

terraform apply -auto-approve -var=db_password=<your db password>
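Alternatively, Terraform also reads variables from environment variables prefixed with TF_VAR_, which avoids typing the password on the terraform command line itself. A quick sketch, with a placeholder password value:

```shell
# Export the sensitive variable once in the current shell session
export TF_VAR_db_password='Sup3rS3cret!'   # placeholder value

# Terraform now picks up db_password automatically, so you could run:
# terraform apply -auto-approve
```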

This will take several minutes to deploy, so go for a walk 🚶🏽 or grab a snack 🍿 in the meantime!


Step 4: Verify Deployment

We will verify our deployment’s success in a few ways. First, let’s curl the Application Load Balancer URL returned in our outputs. We will do this a few times to verify we get the custom web page on both instances.

curl <alb_public_url>
Success!!! 👍🏽

📝NOTE: If you attempt to curl the web servers' public IPs directly, you will notice that you can’t. Our security group only allows HTTP traffic via the Application Load Balancer.

Next, let’s SSH to one of our Web servers. Our EC2 security group allows SSH access from our local public IP only. 💪🏽 Our “ec2_ssh_access” output value provides a command we can use.

terraform output
ssh -i MySSHKey.pem ubuntu@<EC2 public IP>
Success!!! 👍🏽

Now, we will verify that we can connect to the RDS MySQL database from our Web server. Use the command value returned for the “connect_to_database” output.

mysql --host=<database_endpoint_address> --user=<master db username> -p

You will also have to use the database password you passed to the terraform apply command in the previous step!

Success!!! 👍🏽

Once connected to the MySQL database, execute the following command to view a list of MySQL databases.

SHOW DATABASES;

This should match the “db_name” returned in your outputs.

Now, return to your local terminal, and enter “exit” twice.

Step 5: Terraform Destroy

We have reached the end of our demonstration. Before walking away with our heads held high, we must tear down our infrastructure to avoid a huge AWS bill! ☝🏽

Terminate deployed resources.

terraform destroy -auto-approve -var=db_password=<your db password>
Success!!! 👍🏽

That completes this project write-up. Take care and until next time! 👋🏽

Link to My GitHub Repo: IaC/terraform/create-two-tier-aws at master · dahjohnson/IaC (github.com)
