Creating Three-Tier Architecture with Terraform and AWS

Nick Sanders
6 min read · Feb 3, 2023


A three-tier architecture is an application architecture that organizes applications into three computing tiers:

  1. Presentation tier or user interface
  2. Application or logic tier where data is processed
  3. Database tier where data is stored

A good example of a three-tier architecture is stackoverflow.com. It consists of a presentation tier, which is the HTML and CSS viewers see; an application tier that manages user permissions and upvoting/downvoting; and a database tier that stores questions, answers, and user data.

Prerequisites

  • AWS account
  • Terraform installed
  • VSCode or other code editor

Getting Started

Our Terraform infrastructure will consist of the following components:

  • VPC
  • Subnets
  • EC2 Instances
  • Security Groups
  • Internet Gateway
  • Route Tables
  • NAT Gateway
  • Elastic IP
  • Application Load Balancer

We’ll first need to create a key pair that will be used to SSH into our EC2 instances. To create a key pair, go to the EC2 console and click “Key Pairs” under “Network & Security” on the left side. Name your key pair and make sure “Key pair type” is set to “RSA” and “Private key file format” is set to “.pem”. Once your key pair is downloaded, open up the terminal and run the following command:

chmod 600 <keypair.pem>

Make sure you are in the same directory as the key pair when you run the command. Then run the following commands to create the directory we’ll be working in, along with all the necessary files:

mkdir terraform-3-tier-architecture
cd terraform-3-tier-architecture
touch providers.tf
touch variables.tf
touch vpc.tf
touch ec2.tf
touch subnets.tf
touch sg.tf
touch igw.tf
touch route_tables.tf
touch natGW.tf
touch eip.tf
touch alb.tf
code .

Once VSCode opens, import the .pem file we created earlier.

Open the providers.tf file and add our providers. I will be using AWS profiles to set up our credentials. A tutorial on setting up an AWS profile can be found here. After setting up our providers, run “terraform init”.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

Now let’s add in some variables in our variables.tf file:

variable "cidr_block" {
type = list(any)
default = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "az" {
type = list(any)
default = ["us-east-1a", "us-east-1b"]
}

We are using the “list” type because we are assigning multiple values to each variable. The cidr_block variable holds two CIDR blocks that we’ll assign to our public subnets, and the az variable holds two availability zones, one for each public subnet.
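If you want to confirm how list indexing resolves before applying anything, you can evaluate the variables in “terraform console” (run from the project directory after “terraform init”). With the defaults above, you should see something like:

terraform console
> var.cidr_block[0]
"10.0.1.0/24"
> var.az[1]
"us-east-1b"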

Creating Our Resources

We’ll first create our VPC which houses all our other resources. Open up the vpc.tf file and paste in the following:

resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}

Next, we’ll create our subnets. We are creating two public subnets and one private subnet. Instead of creating two separate resource blocks for the public subnets, we can use the count meta-argument to specify how many copies of the resource we want. For more information about Terraform meta-arguments, Spacelift.io has a great article breaking them down. Set the VPC ID equal to the VPC we previously made, and set the CIDR block and availability zone to the variables we made earlier. For the private subnet, we’ll add in the values directly:

resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = var.cidr_block[count.index]
availability_zone = var.az[count.index]
count = 2
}

resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.3.0/24"
availability_zone = "us-east-1a"
}

data "aws_subnets" "subnet_id" {
filter {
name = "vpc-id"
values = [aws_vpc.main.id]
}
}

Now, open the sg.tf file and let’s create the security groups that will be attached to our EC2 instances. For our public instances, we’ll allow inbound traffic on port 22, which is used for SSH access, and on port 80 for HTTP traffic. For the outbound rule, we’ll set the protocol to “-1”, which allows all protocols.

resource "aws_security_group" "allow_all_tls" {
name = "allow_tls"
description = "Allows TLS inbound traffic"
vpc_id = aws_vpc.main.id

ingress {
description = "TLS from VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

For the private security group, we’ll change the second ingress rule to allow MySQL traffic on port 3306.

resource "aws_security_group" "allow_tls_db" {
name = "allow_tls_db"
description = "Allows TLS inbound traffic"
vpc_id = aws_vpc.main.id

ingress {
description = "TLS from VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "TLS from VPC"
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

Open the ec2.tf file. For our EC2 instances, we’ll be using an Amazon Linux AMI that is included in the free tier. The instance type is “t2.micro” and the key name is the key pair we created earlier. Add [count.index] to the subnet ID so the instances are spread across both public subnets, and make sure count is set to 2. Next, we’ll add a file provisioner that connects to each instance over SSH and copies our key pair onto it, which is handy for hopping from a public instance to the private one later.

resource "aws_instance" "web" {
ami = "ami-007868005aea67c54"
instance_type = "t2.micro"
key_name = "nsands"
subnet_id = aws_subnet.public[count.index].id
vpc_security_group_ids = [aws_security_group.allow_all_tls.id]
associate_public_ip_address = true
count = 2

provisioner "file" {
source = "./nsands.pem"
destination = "/home/ec2-user/nsands.pem"

connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file("./nsands.pem")
}
}
}

Lastly, let’s create our private EC2 instance that will be used to host our database. We’ll use the same AMI, instance type, and key name as before for this EC2 instance. We’ll change the subnet ID to the private subnet and the VPC security group to the private one we made previously.

resource "aws_instance" "db" {
ami = "ami-007868005aea67c54"
instance_type = "t2.micro"
key_name = "nsands"
subnet_id = aws_subnet.private.id
vpc_security_group_ids = [aws_security_group.allow_tls_db.id]

}

The internet gateway provides internet access for our public subnets. Open up the igw.tf file and paste in the following code:

resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
}

Now that the internet gateway is created, we need a route table that routes traffic from the internet into our public subnets. Once we create the route table, we need two associations that connect our public subnets to it. Again, we’ll use the count meta-argument with [count.index] to create one association per subnet. Paste the following code into route_tables.tf.

resource "aws_route_table" "rtb" {
vpc_id = aws_vpc.main.id

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
}

resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.rtb.id
count = 2
}

We must also create a default route table that connects our private subnet to the NAT gateway. The NAT gateway is how the private subnet gets internet access. We’ll be creating that shortly.

resource "aws_default_route_table" "dfltrtb" {
default_route_table_id = aws_vpc.main.default_route_table_id

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.natGW.id
}
}

In eip.tf, we’ll create an Elastic IP address that will be attached to the NAT gateway in one of our public subnets; this is the address the private subnet’s outbound traffic will use.

resource "aws_eip" "eip" {
vpc = true
}

Now let’s create the NAT gateway in natGW.tf and attach our Elastic IP. For the subnet ID, we’ll pass in [0] to choose the first public subnet. Remember, indexing starts at 0, so index 0 is the first object in the list.
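The original natGW.tf listing isn’t reproduced here, but a minimal sketch consistent with the references elsewhere in this walkthrough (the default route table points at aws_nat_gateway.natGW, and the Elastic IP comes from eip.tf) would look like this:

resource "aws_nat_gateway" "natGW" {
  allocation_id = aws_eip.eip.id          # the Elastic IP created in eip.tf
  subnet_id     = aws_subnet.public[0].id # place the NAT gateway in the first public subnet

  # Make sure the internet gateway exists before the NAT gateway is created.
  depends_on = [aws_internet_gateway.igw]
}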

The last resource we’ll create, in alb.tf, is the application load balancer. The load balancer distributes requests across multiple targets; in our case, the two public EC2 instances. We have to create the load balancer along with a target group. The target group is accessed through port 80 and its target type is set to instance. Next, we attach the instances to the target group, once again using the count meta-argument with [count.index]. Lastly, we create the load balancer listener that listens for HTTP requests on port 80.
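The alb.tf listing isn’t shown here either, so the following is only a sketch under the assumptions in the paragraph above; the resource labels and names (alb, tg, tg_attach, http, “three-tier-alb”, “three-tier-tg”) are placeholders I’ve chosen, and the load balancer is placed in the two public subnets and reuses the public security group:

resource "aws_lb" "alb" {
  name               = "three-tier-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.allow_all_tls.id]
  subnets            = aws_subnet.public[*].id # both public subnets, one per AZ
}

resource "aws_lb_target_group" "tg" {
  name        = "three-tier-tg"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.main.id
}

# Attach both public EC2 instances to the target group.
resource "aws_lb_target_group_attachment" "tg_attach" {
  count            = 2
  target_group_arn = aws_lb_target_group.tg.arn
  target_id        = aws_instance.web[count.index].id
  port             = 80
}

# Listen for HTTP requests on port 80 and forward them to the target group.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg.arn
  }
}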

Let’s run “terraform apply” to create our infrastructure. Terraform will add 21 new resources. Once the infrastructure is finished building, go to the EC2 console to get the IP addresses of the public EC2 instances. Copy one of the IP addresses and run the following command:

ssh -i <keypair.pem> ec2-user@<public-ip-address>

Congratulations! You have successfully launched a three-tier architecture using just Terraform. If you have trouble accessing your EC2 instance, I will link a few resources that may solve any problems you may face:

Make sure to run “terraform destroy” to avoid any unwanted charges to your AWS account. To automate the infrastructure’s deployment, read my article about Automating Terraform Infrastructure Using GitHub Actions.
