Tutorial: Create a Three-Tier WordPress Application in AWS with Terraform — Part Four

Dan Phillips · Version 1 · Nov 3, 2023 · 7 min read

Welcome to the final part of this tutorial on launching a WordPress application in AWS. We’re now on the home straight and will soon be able to apply our Terraform Infrastructure as Code (IaC) and launch our website.


Before we begin, let’s recap what we have built so far:

  • Part One: We outlined our architecture diagram and created our entire Virtual Private Cloud (VPC) environment in a specific network layer.
  • Part Two: We created all the security group rules our website requires in a new application layer, resourced an AWS Relational Database Service (RDS) instance, and introduced AWS Secrets Manager to generate and store sensitive values.
  • Part Three: We added the launch configuration resource we will use in this section, which installs the Apache web server on our application instances and installs and configures WordPress.

We only need to create one more file to finalise our project. cd into the application directory of our project. The current structure should look like this:

wordpress_demo
├── network
└── application
    ├── backend.tf
    ├── data.tf
    ├── provider.tf
    ├── rds.tf
    ├── secrets.tf
    ├── security_groups.tf
    └── ec2.tf

Presentation

In your terminal, create a presentation.tf file:

touch presentation.tf

We’ll begin by resourcing an Application Load Balancer (ALB), which will distribute incoming HTTP traffic across the WordPress EC2 instances which exist in our private application subnet:

# presentation.tf

resource "aws_lb" "wordpress_alb" {
  name               = "LoadBalancer"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.ALB_SG.id]
  subnets            = [for subnet in data.terraform_remote_state.network.outputs.public_subnets : subnet]
}
  • name = "LoadBalancer": This is the name that will be displayed in the AWS Management Console and can be used to identify this specific ALB resource.
  • internal = false: This indicates that the ALB is not an internal load balancer, meaning it is publicly accessible over the Internet. If set to true, it would be internal, and accessible only within the VPC.
  • load_balancer_type = "application": This specifies the type of load balancer as an Application Load Balancer. AWS offers different types of load balancers, such as Network Load Balancers (NLB), but here we set it to "application", which is suitable for routing HTTP and HTTPS traffic at the application layer.
  • security_groups = [aws_security_group.ALB_SG.id]: This defines the security group(s) associated with the ALB. It references the existing AWS security group resource we created earlier and uses its ID.
  • subnets = [for subnet in data.terraform_remote_state.network.outputs.public_subnets : subnet]: This line specifies the subnets where the ALB will be deployed. It uses a for expression to loop through the list of public subnets obtained from our external network Terraform state and assigns them to the ALB. The ALB will distribute traffic across any instances located in these subnets.

N.B. When our application is complete, our ALB will produce a DNS address where we can view our website. This address can be found in the AWS console, but as we’re building our application as IaC, we will get the address in our terminal by creating an output in a new outputs.tf file:

# outputs.tf

output "Website_Address" {
  value = aws_lb.wordpress_alb.dns_name
}

Back in presentation.tf, underneath our ALB resource, add the following:

# presentation.tf
...
resource "aws_autoscaling_group" "wordpress-asg" {
  min_size             = 2
  max_size             = 4
  desired_capacity     = 2
  launch_configuration = aws_launch_configuration.wordpress_ec2.name
  vpc_zone_identifier  = data.terraform_remote_state.network.outputs.private_app_subnets[*]
  name                 = "WP_AutoScalingGroup"
}

This creates an Auto Scaling Group (ASG), which is a fundamental component of our orchestration that automatically manages the scaling of a group of virtual instances to meet changing workloads and demands.

ASGs monitor the health of instances and ensure that a specified number of instances, known as the desired capacity, are running at all times. When traffic or demand increases, the ASG dynamically adds new instances, and conversely, it removes instances during periods of low demand, helping to optimize resource utilization and cost efficiency.

ASGs are commonly used in cloud environments to enhance availability, fault tolerance, and scalability, ensuring that applications can handle variable workloads effectively without manual intervention.

Here we have set the minimum number of our launch configuration EC2 instances to two, and the maximum to four. These instances will be distributed evenly across our private application subnets via the vpc_zone_identifier argument, and the ASG will aim to keep two instances running at all times, as specified by the desired_capacity argument.
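As written, the ASG holds a fixed desired capacity and will not yet scale on demand. As an illustrative sketch only (the policy name and the 50% CPU target are assumptions, not part of this tutorial's code), a target-tracking scaling policy could be attached to the ASG like this:

```hcl
# presentation.tf (optional sketch, not part of the tutorial's code)
# Scales the ASG in and out to hold average CPU utilisation near a target.
resource "aws_autoscaling_policy" "wordpress_cpu" {
  name                   = "WP-CPU-TargetTracking"   # illustrative name
  autoscaling_group_name = aws_autoscaling_group.wordpress-asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0   # illustrative target; tune for your workload
  }
}
```

With a policy like this in place, the ASG adjusts capacity automatically between the min_size and max_size bounds we set above.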

Underneath our ASG in presentation.tf, add the following resources:


# presentation.tf
...
resource "aws_lb_target_group" "private_application_tg" {
  name     = "EC2--Target-Group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.terraform_remote_state.network.outputs.vpc
}

resource "aws_autoscaling_attachment" "wordpress" {
  autoscaling_group_name = aws_autoscaling_group.wordpress-asg.id
  lb_target_group_arn    = aws_lb_target_group.private_application_tg.arn
}

In this block, we create an aws_lb_target_group resource which listens on port 80 for incoming HTTP traffic within the specified VPC. It is used to route incoming requests from the ALB to a set of instances that serve as the application’s endpoints, allowing the ALB to distribute traffic effectively across these instances.

We then create an aws_autoscaling_attachment resource, which attaches our ASG ("wordpress-asg") to our newly specified ALB target group ("private_application_tg").

This attachment allows instances launched by the ASG to be automatically registered with the target group, enabling the ALB to distribute incoming traffic to these instances based on the rules configured in the target group, such as routing requests to healthy instances or based on URL paths.
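Because we haven't specified one, the target group uses AWS's default health check settings. Terraform lets us tune these with an optional health_check block inside the target group. The values below are illustrative assumptions rather than part of this tutorial's code; the matcher deliberately accepts redirects, since a fresh WordPress install may answer with a 301/302 before set-up completes:

```hcl
# Optional sketch: add inside the aws_lb_target_group "private_application_tg"
# block to tune how the ALB decides an instance is healthy.
health_check {
  path                = "/"
  protocol            = "HTTP"
  matcher             = "200-399"  # accept redirects from a fresh WordPress install
  interval            = 30         # seconds between checks
  timeout             = 5          # seconds before a check counts as failed
  healthy_threshold   = 2          # consecutive passes to mark healthy
  unhealthy_threshold = 5          # consecutive failures to mark unhealthy
}
```

Instances that fail the check are taken out of rotation until they pass again.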

Finally, in the same presentation.tf, we will create an aws_lb_listener resource:

# presentation.tf
...
resource "aws_lb_listener" "listener" {
  load_balancer_arn = aws_lb.wordpress_alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.private_application_tg.arn
  }
}

The ALB listener acts as a traffic gateway that receives incoming requests, examines their attributes, and forwards them to the appropriate backend resources based on configured routing rules. It plays a crucial role in load balancing, traffic distribution, SSL/TLS termination, and routing to ensure high availability, scalability, and efficient request handling for applications deployed in AWS.

Here’s what our ALB listener does:

  • load_balancer_arn = aws_lb.wordpress_alb.arn: This specifies the ARN (Amazon Resource Name) of the ALB to which this listener should be associated. It references our existing ALB resource named "wordpress_alb" and retrieves its ARN. This listener will be attached to that ALB.
  • port = "80": This sets the port number (HTTP port) on which the listener will listen for incoming traffic. In this case, it's set to port 80, which is the default port for unencrypted HTTP traffic.
  • protocol = "HTTP": This defines the protocol used for communication within the listener. Here, it's set to "HTTP", indicating that the listener handles HTTP traffic.
  • default_action { … }: Here, we define the default action for the listener, specifying what should happen to incoming requests that match this listener's rules. In this configuration, type = "forward" instructs the listener to forward incoming requests to a target group, which is specified with target_group_arn = aws_lb_target_group.private_application_tg.arn.

The purpose of this listener is to route incoming HTTP traffic to instances or backend resources registered with this target group.
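The default_action handles every request, but the URL-path routing mentioned earlier is configured through separate listener rules. The sketch below is purely illustrative (the rule name, priority, and path pattern are made up, and it forwards back to the same target group just to show the mechanics); with multiple target groups, rules like this let one ALB route different paths to different backends:

```hcl
# Optional sketch: a path-based routing rule (illustrative names and values).
resource "aws_lb_listener_rule" "static_assets" {
  listener_arn = aws_lb_listener.listener.arn
  priority     = 100   # lower numbers are evaluated first

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.private_application_tg.arn
  }

  condition {
    path_pattern {
      values = ["/wp-content/*"]   # match requests for WordPress assets
    }
  }
}
```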

That’s all we need to be able to launch our application! Open your terminal and ensure you’re in the application directory of your project. Then run:

terraform plan

This will output all the configuration changes in our terminal and allow us to check that we’re happy with our changes before running:

terraform apply -auto-approve

We have created a lot of resources in our application layer, which means this stage can take a few minutes to complete. However, upon completion, your terminal will tell you that your resources have been deployed successfully, and will output the DNS address of our ALB.

Enter the DNS address into your browser address bar, and you should be greeted by the WordPress set-up screen.

N.B. Initialising the user data script in our EC2.tf file can take a few extra minutes, so if you see a gateway error or a welcome page for the Apache web server, wait a few more minutes and refresh the browser window. Eventually, you’ll see the WordPress install screen:

Congratulations! You have created a scalable, robust and secure WordPress installation across multiple availability zones! Complete the form with your details, click the Install WordPress button and give yourself a huge round of applause — you’re done!

You now have a solid base from which to explore your application, build out its features using other AWS services, and test the settings you have applied:

  • Log into your AWS console, destroy an instance, and watch as AWS automatically replaces it thanks to our ASG and ALB listener resources.
  • Attach an SSM IAM role to your instances and examine the file structure created by our user data, to see how WordPress is installed and what goes into your EC2 instances.
  • Add CloudWatch rules to your ASG to scale instances out (and in) dependent on demand (CPU utilisation, etc.).
  • Create a ‘Golden AMI’ and update the launch configuration to launch new WordPress instances from the stored AMI ID instead of using user data. A ‘Golden AMI’ is a trusted, well-configured image that serves as a starting point for new instances. Deploying this way can be faster and more efficient than configuring each instance at boot, as we do presently, and can save time during provisioning and scaling operations, which is crucial when scaling horizontally.
  • Look into restricting traffic to HTTPS (port 443) in our security group rules. You’ll need to generate an SSL/TLS certificate to do this, but the process is worth exploring.
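If you do explore HTTPS, the listener itself is a small change; most of the work is in obtaining and validating the certificate. A rough sketch, assuming a hypothetical aws_acm_certificate.wordpress resource that you would need to create and validate separately (and an updated ALB security group rule for port 443):

```hcl
# Optional sketch: an HTTPS listener for the ALB.
# aws_acm_certificate.wordpress is a hypothetical resource, not defined in
# this tutorial; you would need to request and validate it via ACM first.
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.wordpress_alb.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = aws_acm_certificate.wordpress.arn   # hypothetical

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.private_application_tg.arn
  }
}
```

TLS terminates at the ALB here, so traffic to the instances on port 80 stays unchanged.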

I hope you have enjoyed following this tutorial and it has helped you better understand some key AWS services, and how we can create and interact with them through Terraform.

You can download the full repository for this tutorial here, where each stage of our build is separated into branches to enable you to apply our IaC as you go. Feel free to connect with me on LinkedIn and GitHub.

Happy coding!

About the author

Dan Phillips is an Associate AWS DevOps Engineer at Version 1, and a DevOps and software engineer based in Newcastle-upon-Tyne, UK.