Tutorial: Create a Three-Tier WordPress Application in AWS with Terraform — Part One

Dan Phillips · Version 1 · Nov 1, 2023

According to a study by W3Techs, WordPress is used by 43.2% of all websites (65.2% of sites that use a Content Management System, or CMS). That's an astonishing statistic, and the same study found its usage has increased by an average of 12% per year since 2011.

If you combine this popularity with the dominance of AWS in the cloud platform field, you have a solid base to meet the needs of a myriad of applications.

With this in mind, I have created this step-by-step tutorial to help you understand how you can install and run WordPress on AWS, via the popular Infrastructure as Code (IaC) tool, Terraform.

My approach will guide you through building a secure, three-tier application stack: a private subnet tier to host our WordPress app, a private subnet tier to host our database, and a public subnet tier that lets us install our WordPress instance and manage future updates and patches.

We will create our application across two Availability Zones and implement an Application Load Balancer (ALB) to control and manage access to that application, and an Auto Scaling Group (ASG) which will enable our WordPress site to scale horizontally depending on demand or health.

We’ll also use AWS Secrets Manager to generate and store the sensitive values we need to set up and manage our database, all from within Terraform.

In this first section of the tutorial, we will build our VPC in the Network layer — the very foundation that our app will run on, and within.

Prerequisites

To follow along with this guide, I have assumed you have an AWS account, the AWS CLI configured with your access key and secret access key, an IDE (I'm using VS Code), a Bash terminal, and Terraform installed on your computer.

Architecture Overview

Before going any further, let’s understand what it is we are ultimately going to build.

Architecture diagram of a three-tier WordPress app

First, we are going to create a VPC which will house our entire application architecture. Within this, we are going to create six subnets across two Availability Zones (AZs): each AZ will contain a public subnet to house a NAT Gateway with an Elastic IP, which will allow our private subnets to access the internet; a private subnet to house our WordPress application; and a separate private subnet to house our WordPress database. In this tutorial, we're going to use Amazon Relational Database Service (RDS), AWS's excellent managed database service, to provide the MySQL database which WordPress requires.

Our VPC will also need an Internet Gateway (IGW) through which all traffic in and out of our VPC will flow.

The WordPress instances in our private subnet will not have a public IP address and will only accept traffic, via Security Group rules, that comes through our Application Load Balancer (ALB). In addition, our database subnet will only accept traffic from our private application subnet.

Our WordPress EC2 instances will be created from a launch template, which will enable our Auto Scaling Group (ASG) to add or remove instances to maintain the capacity we choose, or to scale based on demand via CloudWatch alarms. Our ALB will monitor the health of our instances, and if one or more instances are found to be unhealthy, our ASG will be triggered to create a new instance to match our conditions.

We will install a primary MySQL database instance in AZ A, which our WordPress instances will read from and write to. Thanks to RDS, this database will be replicated across to AZ B, and in the event that AZ A fails, our standby database will become the primary database for our WordPress application.

NB: I am decoupling the infrastructure for this project into two distinct layers: a Network layer for our VPC, and an Application layer. Separating the creation of a VPC into a distinct Terraform state file offers several benefits, such as modularity and reusability; clear separation of concerns between infrastructure and applications; reduced risk through a smaller blast radius for changes; the ability to parallelize deployments; granular access control; improved state locking management; and structured change management.

This decoupling practice enables teams to efficiently manage VPC configurations independently while enhancing collaboration and minimizing the potential for disruptions when making changes to the underlying networking infrastructure, ultimately leading to more scalable and maintainable AWS environments.

Pre-Build Preparation

For this project, we are going to use remote state files for our Terraform infrastructure.

Please be aware that the resources we are going to deploy in AWS for this project will incur some minor charges. To keep your costs to a minimum, do not leave resources running in AWS when they are not required.
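When you have finished with the project (or want to pause without paying for idle resources), each layer can be torn down from its own directory with Terraform's built-in command:

terraform destroy

We will run plan and apply later in this section; destroy is simply their counterpart, and it will prompt for confirmation before removing anything.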

Remote state files in Terraform offer several advantages. They enable collaborative infrastructure management by allowing multiple team members to work simultaneously while maintaining consistency and a single source of truth. They can also enhance security by protecting sensitive information and providing encryption options.

Additionally, remote state files also offer centralized management, scalability, versioning, and robust locking mechanisms, reducing the risk of conflicts and ensuring data integrity. They simplify workflow, provide auditability, and enable backup and disaster recovery, making them essential for efficient and reliable infrastructure as code practices, especially in team environments and large-scale deployments.

We will use an Amazon S3 bucket to hold our state, so before we dive into our IDE, go to your AWS console, navigate to the S3 section, and create a new bucket.

Give your bucket a unique name (and remember it) and create it in the same AWS region that you will create your VPC in. You can leave all other settings as default.
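If you'd rather script this step, the same bucket can be created with the AWS CLI. A minimal sketch, assuming you keep the bucket name wordpress-tutorial-state-store that the backend configuration below expects (the LocationConstraint is required for any region other than us-east-1):

aws s3api create-bucket \
  --bucket wordpress-tutorial-state-store \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1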


Network Layer

Time to get started! The very foundation of our application is our VPC — it will house our whole architecture and enable us to control who accesses our application, and how.

To begin, open a terminal and navigate to where you’d like to create your project, make a new directory for your project, and cd into it. You can call your directory whatever you like, but for the purposes of this tutorial, I will call mine ‘aws_wordpress_demo’.

mkdir aws_wordpress_demo && cd aws_wordpress_demo

We’re now in the root of our project, and our next step is to create a directory to hold our first layer, the network, in which we will build our VPC infrastructure:

mkdir network && cd network

Now we’re in the root of our network directory. Before we can get started creating our infrastructure resources, we need to create two files, provider.tf and backend.tf:

touch provider.tf backend.tf

The provider.tf file in Terraform is used to configure and define the providers that our configuration will use. Providers are responsible for interacting with specific infrastructure platforms or services, such as AWS, Azure, or Google Cloud. The provider file allows us to specify which providers our Terraform configuration will use, and to configure the necessary settings for them. I am also assigning my provider region as 'eu-west-1', and this is the region where all our architecture will be housed. You can change this to whichever region you prefer, but ensure the availability zones you specify later are in the region you declare in provider.tf.

# provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.13.1"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

The backend.tf file lets Terraform know where to store our state, and where it will be retrieved from when required. The code below tells Terraform to store our state file for this layer in the S3 bucket we created earlier (wordpress-tutorial-state-store), in a file called terraform.tfstate, in a network directory:

# backend.tf

terraform {
  backend "s3" {
    bucket = "wordpress-tutorial-state-store"
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}
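As an aside, if you want the state-locking benefits mentioned earlier, the S3 backend supports locking through a DynamoDB table. A sketch of that variant, assuming you have separately created a table (here hypothetically named terraform-locks) with a string partition key called LockID; this tutorial does not create one:

# backend.tf (optional locking variant)

terraform {
  backend "s3" {
    bucket         = "wordpress-tutorial-state-store"
    key            = "network/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
  }
}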

Now, to initialise Terraform, which will allow us to create plans and apply (and destroy) our infrastructure, run the following in your terminal:

terraform init

Great! Hopefully, your Terraform has been successfully initialised and we’re now ready to begin getting our feet wet in creating our app! Next, let’s create all the files we’re going to need for this network layer. In your terminal, run the following:

touch vpc.tf subnets.tf variables.tf route_tables.tf nat_gateway.tf outputs.tf

In your vpc.tf file, we’re going to create our VPC and our IGW resources. Remember, the VPC is the container for all our other resources, and the IGW allows traffic to enter and exit our VPC:

# vpc.tf

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/24"
  tags = {
    Name = "aws_wordpress_tutorial"
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "Internet Gateway"
  }
}

Describing how CIDR blocks and subnet masking work is beyond the scope of this tutorial; however, this resource creates a VPC environment called "aws_wordpress_tutorial" with 256 internal IP addresses available (AWS reserves five addresses in each subnet). These IP addresses will be allocated evenly across our subnets. The second resource in this code block creates an IGW and attaches it to our VPC.

Next, open variables.tf and enter the following:

# variables.tf

variable "public_subnet_cidrs" {
  type        = list(string)
  description = "Public Subnet CIDR Values"
  default     = ["10.0.0.0/27", "10.0.0.32/27"]
}

variable "private_application_subnet_cidrs" {
  type        = list(string)
  description = "Private Application Subnet CIDR Values"
  default     = ["10.0.0.64/27", "10.0.0.96/27"]
}

variable "private_data_subnet_cidrs" {
  type        = list(string)
  description = "Private Data Subnet CIDR Values"
  default     = ["10.0.0.128/27", "10.0.0.160/27"]
}

variable "azs" {
  type        = list(string)
  description = "Availability Zones"
  default     = ["eu-west-1a", "eu-west-1b"]
}

Here we have created non-conflicting CIDR blocks which will be applied to our six subnets, allocating 32 IP addresses to each (of which 27 are usable once AWS's five reserved addresses are accounted for).
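To make the arithmetic visible, here is how the VPC's 10.0.0.0/24 block is carved up by the defaults above:

# 10.0.0.0/27   -> public subnet, AZ A
# 10.0.0.32/27  -> public subnet, AZ B
# 10.0.0.64/27  -> private app subnet, AZ A
# 10.0.0.96/27  -> private app subnet, AZ B
# 10.0.0.128/27 -> private data subnet, AZ A
# 10.0.0.160/27 -> private data subnet, AZ B
# 10.0.0.192/27 and 10.0.0.224/27 remain unallocated for future use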

We have also created a list of two availability zones in our chosen region into which we will launch our subnets (my chosen region is ‘eu-west-1’, as declared in provider.tf).

In subnets.tf, add the following code:

# subnets.tf

# Public Subnets
resource "aws_subnet" "public_subnets" {
  depends_on              = [aws_vpc.main]
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = element(var.public_subnet_cidrs, count.index)
  availability_zone       = element(var.azs, count.index)
  map_public_ip_on_launch = true
  tags                    = { Name = "${var.azs[count.index]} Public Subnet" }
}

# Private App Subnets
resource "aws_subnet" "private_subnets_application" {
  depends_on        = [aws_vpc.main]
  count             = length(var.private_application_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.private_application_subnet_cidrs, count.index)
  availability_zone = element(var.azs, count.index)
  tags              = { Name = "${var.azs[count.index]} Private App Subnet" }
}

# Private Data Subnets for RDS
resource "aws_subnet" "private_subnets_data" {
  depends_on        = [aws_vpc.main]
  count             = length(var.private_data_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.private_data_subnet_cidrs, count.index)
  availability_zone = element(var.azs, count.index)
  tags              = { Name = "${var.azs[count.index]} Private Data Subnet" }
}

resource "aws_db_subnet_group" "rds_subnet_group" {
  name       = "wordpress-demo--rds--subnet-group"
  subnet_ids = aws_subnet.private_subnets_data[*].id
}

It looks like a lot is going on here, but in reality, three of these resources are doing identical things; they're just being applied to different subnets.

They all use depends_on to ensure our VPC is created first, before attaching themselves to that VPC with vpc_id. They each use a count value which is obtained from the length of the relevant CIDR list we created in variables.tf (var.public_subnet_cidrs, var.private_application_subnet_cidrs and var.private_data_subnet_cidrs). They loop over those CIDR values to assign a block to each subnet, in each AZ we listed in our variables.tf file. Finally, we add a relevant naming structure to each subnet with tags.
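To see what the count loop produces, the public subnet resource conceptually expands to two instances, using the default variable values:

# aws_subnet.public_subnets[0]: cidr_block = "10.0.0.0/27",  availability_zone = "eu-west-1a"
# aws_subnet.public_subnets[1]: cidr_block = "10.0.0.32/27", availability_zone = "eu-west-1b"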

We also create an aws_db_subnet_group resource, which allocates our private data subnets to house our primary and standby RDS databases.

Next, in nat_gateway.tf, enter the following:

# nat_gateway.tf

resource "aws_nat_gateway" "wordpress" {
  count             = 2
  depends_on        = [aws_eip.eip_001, aws_subnet.public_subnets[0], aws_subnet.public_subnets[1]]
  allocation_id     = aws_eip.eip_001[count.index].id
  subnet_id         = aws_subnet.public_subnets[count.index].id
  connectivity_type = "public"
  tags = {
    Name = "${var.azs[count.index]} Nat_Gateway"
  }
}

resource "aws_eip" "eip_001" {
  count  = 2
  domain = "vpc"
  tags = {
    Name = "WordPress - Nat_Gateway_EIP"
  }
}

This Terraform file creates two NAT Gateways, assigns an Elastic IP to each one, and places one NAT Gateway in each AZ's public subnet.

The purpose of a NAT Gateway is to provide outbound internet access for instances in private subnets. Each instance in a private subnet receives a private IP address from our VPC, but without a NAT Gateway, those instances cannot reach the internet. In short, our NAT Gateways allow us to install packages, patches, and WordPress itself in subnets which have no direct exposure to the internet.

So far, we have our VPC, an IGW, all of our subnets, and two NAT Gateways. However, none of our subnets yet know how they are going to reach the internet, and this is where route_tables.tf comes in:

# route_tables.tf

# Public Route Table
resource "aws_route_table" "Public_SubNet_RouteTable" {
  depends_on = [aws_internet_gateway.igw]
  vpc_id     = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
  tags = {
    Name = "Public Route Table"
  }
}

# Private Route Tables
resource "aws_route_table" "az_A__Private_SubNet_RouteTable" {
  depends_on = [aws_nat_gateway.wordpress[0]]
  vpc_id     = aws_vpc.main.id
  tags = {
    Name = "${var.azs[0]} Private Route Table"
  }
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.wordpress[0].id
  }
}

resource "aws_route_table" "az_B__Private_SubNet_RouteTable" {
  depends_on = [aws_nat_gateway.wordpress[1]]
  vpc_id     = aws_vpc.main.id
  tags = {
    Name = "${var.azs[1]} Private Route Table"
  }
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.wordpress[1].id
  }
}

# Association for Public Subnets
resource "aws_route_table_association" "public_subnet_association" {
  count          = length(var.public_subnet_cidrs)
  subnet_id      = element(aws_subnet.public_subnets[*].id, count.index)
  route_table_id = aws_route_table.Public_SubNet_RouteTable.id
}

# Association for Private App Subnet (AZ A)
resource "aws_route_table_association" "azA_private_application_subnet_association" {
  subnet_id      = aws_subnet.private_subnets_application[0].id
  route_table_id = aws_route_table.az_A__Private_SubNet_RouteTable.id
}

# Association for Private App Subnet (AZ B)
resource "aws_route_table_association" "azB_private_application_subnet_association" {
  subnet_id      = aws_subnet.private_subnets_application[1].id
  route_table_id = aws_route_table.az_B__Private_SubNet_RouteTable.id
}

Here, we define three route tables and associate them with our subnets. A route table is a fundamental networking component used to control the flow of network traffic within a VPC. It acts as a set of rules that determine how data packets should be routed between different network destinations within the VPC. Each route table contains a list of route entries ('route' blocks in our Terraform code), where each entry pairs a destination CIDR block with a target (such as an IGW, a NAT Gateway, or a virtual private gateway). In our case, we are specifying that internet-bound traffic (0.0.0.0/0) from our public subnets is routed out through the IGW, while internet-bound traffic from each private subnet is routed through the NAT Gateway in the same AZ. Note that a NAT Gateway target is referenced with the nat_gateway_id argument, not gateway_id, which the AWS provider reserves for internet and virtual private gateways.

When a packet leaves a subnet, that subnet's route table is consulted to determine the appropriate target for forwarding the packet, based on its destination IP address, allowing us to define the network paths and connectivity within our AWS environment.

We tie each subnet to its route table in the aws_route_table_association resources. For the public subnets, we again use the count attribute to loop through them (the count is taken from the length of the same CIDR list we defined in variables.tf) and attach each one to the public route table; each private application subnet is then assigned to its AZ's route table via the route_table_id argument.

Finally, in outputs.tf, we will output some of the values from the resources we have created in this tutorial. We are going to need to import these into our application layer later, to allow us to associate other resources with them:

# outputs.tf

output "vpc" {
  value = aws_vpc.main.id
}

output "public_subnets" {
  value = aws_subnet.public_subnets[*].id
}

output "private_app_subnets" {
  value = aws_subnet.private_subnets_application[*].id
}

output "db_subnet_group_name" {
  value = aws_db_subnet_group.rds_subnet_group.name
}

And that’s it! If you’ve come this far, congratulations! You now have everything in place to create the foundation for your entire application.

In a terminal, run:

terraform plan

Terraform plan is a critical command that allows us to preview the changes that will be made to our infrastructure before applying them. When executed, it compares the desired state, as defined in our Terraform configuration files, with the current state of the infrastructure (at the moment, we have no current state, as we haven't yet applied the resources we have outlined).

The command then generates an execution plan that outlines what resources will be created, modified, or destroyed to achieve the desired state. This preview provides valuable insights into the potential impact of your infrastructure changes, including any potential conflicts or errors, enabling you to assess and verify your modifications before applying them, helping to prevent unintended or destructive changes to your infrastructure.
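As a side note, if you want a guarantee that what you apply is exactly the plan you reviewed, Terraform lets you save the plan to a file and apply that saved plan. This isn't required for our tutorial, just a common pattern:

terraform plan -out=tfplan
terraform apply tfplan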

Now for the magic! In your terminal, run:

terraform apply -auto-approve

Terraform apply is a command used to execute the planned changes defined in a Terraform configuration. When we run “terraform apply,” Terraform reads the configuration files, compares the desired state specified in these files with the current state of the target infrastructure (we have no current state as this is the first time we have run apply), and then calculates the necessary actions to bring the infrastructure into the desired state.

It will create, update, or delete resources as needed to achieve this state. If we don’t include the -auto-approve flag, Terraform will ask us to confirm if we want to apply our changes, but in this case, we know we do and adding this flag cuts down on our input.

In your terminal, you can watch as Terraform creates all the resources you have defined in the files above. It can sometimes take a minute or two to provision everything, so sit back and relax while the IaC magic happens!

When your apply command has run successfully, you will get a confirmation message in your terminal ending with a summary similar to this (the resource count and IDs below are illustrative):
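Apply complete! Resources: 20 added, 0 changed, 0 destroyed.

Outputs:

db_subnet_group_name = "wordpress-demo--rds--subnet-group"
private_app_subnets = [
  "subnet-0aaa...",
  "subnet-0bbb...",
]
public_subnets = [
  "subnet-0ccc...",
  "subnet-0ddd...",
]
vpc = "vpc-0123..."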

Your id numbers will vary; however, by creating outputs in this state file (outputs.tf), we can easily access these values from other state files.
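To preview how that works: in part two, the application layer will be able to read these outputs through a terraform_remote_state data source. A minimal sketch, assuming the same bucket and key we configured in backend.tf:

# in the application layer

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "wordpress-tutorial-state-store"
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

# our VPC id is then available as:
# data.terraform_remote_state.network.outputs.vpc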

If you navigate to the VPC area of your AWS console, you will find your newly created environment, complete with a resource map which visually shows how our architecture is separated by AZs and subnets, and their respective paths within, and out of, our VPC.
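If you prefer the terminal to the console, you can also spot-check the routing with the AWS CLI; a sketch, substituting the VPC id from your own outputs:

aws ec2 describe-route-tables \
  --filters "Name=vpc-id,Values=vpc-0123..." \
  --region eu-west-1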

And that’s it for part one. Congratulations! You have created a dual availability zone, three-tier environment in IaC!

The repository of our code base at this stage is available on this branch at GitHub.

In part two of our tutorial, we’ll begin to create our application layer, starting with outlining the Security Group rules for our resources and creating an RDS MySQL instance for our WordPress application.

Feel free to connect with me on LinkedIn and GitHub.

About the author

Dan Phillips is an Associate AWS DevOps Engineer here at Version 1, based in Newcastle-upon-Tyne, UK.
