AWS Two-Tier Architecture Deployment With Terraform

Tanner Clegg
Published in Nerd For Tech
Dec 9, 2022 · 12 min read

Terraform is a tool created by HashiCorp for creating, managing, and updating your infrastructure. It helps make managing resources across multiple cloud providers — such as AWS, Azure, and Google Cloud Platform — easier than ever. With Terraform, you can create a single configuration file that describes all of your infrastructure in a format that’s both human-readable and machine-readable. Let’s take a closer look at what Terraform is and how it works.

What Does Terraform Do?

At its core, Terraform is an Infrastructure as Code (IaC) tool. That means it lets you define the entire infrastructure needed to run your applications using code instead of manually configuring each resource. This includes virtual machines (VMs), networks, storage accounts, and databases. For example, if you need to spin up ten VMs in an Azure environment for testing purposes, you can use Terraform to create those VMs with just one command instead of having to manually configure each VM individually.

In addition to simplifying infrastructure setup and management tasks, Terraform helps keep your infrastructure consistent over time. Because configurations are plain text files, they fit naturally into popular version control systems, including GitHub, GitLab, Atlassian Bitbucket, and Azure DevOps. Any changes made to the infrastructure can be tracked, so you can easily see what was changed and when. This makes it easy to roll back changes if something goes wrong or if someone accidentally deletes a resource.

How Does Terraform Work?

Terraform works by taking a set of configuration files written in the HashiCorp Configuration Language (HCL) and translating them into calls to the cloud provider’s APIs to provision resources on that platform. For example, if you want to create an EC2 instance on AWS with Terraform, you write the configuration in HCL and pass it to Terraform, which translates it into API calls sent to AWS. Terraform uses the API responses to determine whether each resource was created successfully. Any errors during this process are logged so they can be identified and fixed before continuing with the rest of the setup.

The Terraform Workflow

To further understand the functionality of Terraform, I will demonstrate how to use it to deploy a two-tier architecture from the AWS Cloud9 IDE. For many organizations, a two-tier architecture on AWS is an ideal cloud structure for scalability and flexibility. This two-tier architecture will meet the following objectives:

  1. A VPC with CIDR 10.0.0.0/16 containing 2 public subnets with CIDRs 10.0.1.0/24 and 10.0.2.0/24. Each public subnet should be in a different AZ for high availability.
  2. Two private subnets with CIDRs 10.0.3.0/24 and 10.0.4.0/24, with an RDS MySQL instance (micro) in one of the subnets. Each private subnet should be in a different AZ. Note that you don’t want your RDS instance to be too big, or it will take an extra long time to deploy.
  3. A load balancer that will direct traffic to the public subnets.
  4. One EC2 t2.micro instance in each public subnet.

For this example, I will have all of my code in a single main.tf file. Although storing all your code in a single main.tf file, known as a monolith, can provide the appearance of simplicity and clarity, it is worth exercising caution before committing to this setup. Monoliths risk losing reuse and readability while making development harder: as your infrastructure grows in complexity, you end up writing large amounts of hardcoded data instead of utilizing reusable modules and variables. In addition, without an organizational strategy, it can be difficult to determine in what order Terraform should run these configurations. While a monolith may be acceptable for smaller projects or those not requiring long-term maintenance, breaking configurations into separate files and externalizing data usually makes for better long-term management.

Prerequisites

  • Signed in to AWS with IAM user access (avoid using the root user, for better account security).
  • Terraform installed on your IDE (I will be using the AWS Cloud9 IDE).

Step 1 — Create a Working Directory and Provider Configurations

The code to build infrastructure in this sample project was constructed with reference to the Terraform Registry, where further support for provider and resource syntax can be found.

This sample project is broken down into gists, which makes it easier to comprehend the bigger picture of Terraform without being overwhelmed by the details or complexity of the entire project.

In the IDE, create a new working directory and change into that directory with this syntax:

mkdir <directory name>
cd <directory name>

In the new working directory, create a main.tf file to begin writing our cloud infrastructure:

touch main.tf

The main.tf file is a basic configuration file used in Terraform which defines most of the infrastructure elements and settings required to create a cloud infrastructure. This file contains base configurations such as providers, networking settings, storage requirements, security rules, and other important parameters associated with the resources that need to be created. It makes the deployment process much more efficient, reliable, and secure in comparison to manually setting up every single element from scratch.

Selecting the main.tf file, we can insert HCL to deploy our infrastructure. Configuring a provider in this file is the first step for accurately building and managing the two-tier architecture on AWS. This is accomplished with the following code:
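A minimal sketch of the provider block (the provider version and region shown here are illustrative; adjust them to your environment):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS provider; credentials are supplied by the
# Cloud9 environment or your AWS CLI configuration
provider "aws" {
  region = "us-east-1"
}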

Providers are an essential part of the Terraform configuration syntax. These custom plugins enable users to connect and communicate with back-end infrastructure such as public clouds, private datacenters, or software-as-a-service solutions. In essence, providers are responsible for understanding API interactions and managing resources that comprise cloud environments without manually logging into cloud interfaces. Furthermore, Terraform users can also develop their own custom providers to install in the Terraform system, which gives them greater control over the resources they may access and use within their chosen user environment.

Step 2 — Declare VPC and Internet Gateway Resources

Resource addressing with HCL syntax is designed to be easily read and written by humans. We will follow a similar structural pattern for our resource code throughout the main.tf file as shown below:
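In general terms, every resource block follows this pattern (placeholders shown in angle brackets):

resource "<provider>_<resource type>" "<resource name>" {
  <argument> = <value>
}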

With the provider configured in the previous step, let’s declare the VPC and attached Internet Gateway for the infrastructure:
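A minimal sketch of these two resources (resource names and tags are illustrative):

resource "aws_vpc" "two_tier_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "two-tier-vpc"
  }
}

# Attach an Internet Gateway to the VPC for public connectivity
resource "aws_internet_gateway" "two_tier_igw" {
  vpc_id = aws_vpc.two_tier_vpc.id

  tags = {
    Name = "two-tier-igw"
  }
}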

As a quick refresher, a VPC on AWS is a virtual network dedicated to your account that helps you improve the security of your data traffic. It can span multiple Availability Zones, allowing you to place resources such as EC2 instances in separate subnets for network isolation. An Internet Gateway is a VPC component that allows resources inside the VPC to exchange traffic with external networks such as the internet. Subnets in a VPC can always communicate with each other through local routing, but without an attached Internet Gateway they have no route to addresses outside the VPC.

Step 3 — Enable Public and Private Subnets

Let’s declare both the public and private subnets in the infrastructure with the following syntax:
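A sketch of the four subnets (the Availability Zones shown are illustrative; any two distinct AZs in your region will do):

# Public subnets, one per AZ, with public IPs assigned on launch
resource "aws_subnet" "public_subnet_1" {
  vpc_id                  = aws_vpc.two_tier_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_subnet_2" {
  vpc_id                  = aws_vpc.two_tier_vpc.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true
}

# Private subnets for the data tier
resource "aws_subnet" "private_subnet_1" {
  vpc_id            = aws_vpc.two_tier_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_subnet_2" {
  vpc_id            = aws_vpc.two_tier_vpc.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1b"
}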

Utilizing public and private subnets in AWS can be an extremely powerful way to improve the security of your system. By setting up separate subnets for each type of traffic, you can isolate and control access to specific resources. Public subnets are typically used for publicly accessible resources such as web servers, while private subnets are typically used for databases and other sensitive information that shouldn’t be exposed directly to the internet. Firewall rules, in the form of security groups and network ACLs, can also be set up between the two, ensuring that only certain types of traffic can reach certain types of resources.

Step 4 — Set Route Table and Subnet Associations

Route tables are an essential tool for controlling the flow of traffic between different Availability Zones, networks, and subnets in the cloud. A route table contains a set of rules, called routes, that determine where network traffic from a subnet is directed. By giving your internal management networks and public services their own subnets and route tables, you control exactly which parts of your AWS infrastructure are reachable from the internet, keeping the environment secure.

The following syntax will declare the route table and its subnet associations for the infrastructure:
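A sketch of a public route table and its associations (resource names are illustrative):

# Route all non-local traffic from the public subnets to the Internet Gateway
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.two_tier_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.two_tier_igw.id
  }
}

resource "aws_route_table_association" "public_assoc_1" {
  subnet_id      = aws_subnet.public_subnet_1.id
  route_table_id = aws_route_table.public_rt.id
}

resource "aws_route_table_association" "public_assoc_2" {
  subnet_id      = aws_subnet.public_subnet_2.id
  route_table_id = aws_route_table.public_rt.id
}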

Step 5 — Configure Security Groups

Security groups provide an extra layer of protection by acting as a virtual firewall for your resources. This gives you more control over who has access to the infrastructure, making it possible to lock malicious users out of certain areas of your account. With EC2 security groups, you can effortlessly add and remove rules that control inbound and outbound traffic. They provide an easy way to restrict access to applications or systems while simultaneously providing a secure infrastructure on which to host them. In short, they are an essential tool for keeping your data safe and secure when using AWS services.

The following gist provides the HCL syntax to properly declare these critical pieces of the infrastructure:
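A sketch of the three security groups (names and CIDR ranges are illustrative; in production you would restrict SSH to your own IP rather than 0.0.0.0/0):

# Web tier: allow HTTP and SSH in, all traffic out
resource "aws_security_group" "public_sg" {
  name   = "public-sg"
  vpc_id = aws_vpc.two_tier_vpc.id

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Data tier: allow MySQL from the web tier and SSH from inside the VPC
resource "aws_security_group" "private_sg" {
  name   = "private-sg"
  vpc_id = aws_vpc.two_tier_vpc.id

  ingress {
    description     = "MySQL from the web tier"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.public_sg.id]
  }

  ingress {
    description = "SSH from within the VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Load balancer: allow HTTP from the internet
resource "aws_security_group" "alb_sg" {
  name   = "alb-sg"
  vpc_id = aws_vpc.two_tier_vpc.id

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}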

The first block of code defines the public security group rules, which allow resources in the public subnets to be reached over HTTP and SSH. The second block defines the private security group rules, which allow resources in the private subnets to be reached from the first tier, over SSH, and on the default MySQL port. Finally, the third block defines the security group for the Application Load Balancer (which will be created in the next step).

Step 6 — Configure the Application Load Balancer

The following syntax will address the resources and arguments needed to properly build the Application Load Balancer in the architecture:
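A sketch of the load balancer, target group, listener, and instance attachments (names are illustrative; the EC2 instances referenced here are declared in Step 7, and Terraform resolves that ordering automatically):

resource "aws_lb" "two_tier_alb" {
  name               = "two-tier-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets            = [aws_subnet.public_subnet_1.id, aws_subnet.public_subnet_2.id]
}

resource "aws_lb_target_group" "two_tier_tg" {
  name     = "two-tier-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.two_tier_vpc.id
}

# Forward incoming HTTP traffic to the target group
resource "aws_lb_listener" "two_tier_listener" {
  load_balancer_arn = aws_lb.two_tier_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.two_tier_tg.arn
  }
}

# Register both web instances with the target group
resource "aws_lb_target_group_attachment" "web_1" {
  target_group_arn = aws_lb_target_group.two_tier_tg.arn
  target_id        = aws_instance.web_1.id
  port             = 80
}

resource "aws_lb_target_group_attachment" "web_2" {
  target_group_arn = aws_lb_target_group.two_tier_tg.arn
  target_id        = aws_instance.web_2.id
  port             = 80
}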

The Application Load Balancer makes it easy to ensure your web applications are up and running and available at optimal speeds. This powerful tool automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions, in order to improve the performance and availability of your applications. It also supports advanced routing rules that you can use to route requests to different targets based on conditions like the path in the URL and HTTP headers.

Step 7 — Set-Up EC2 Instances With Bootstrapping

Bootstrapping is an automated process that allows users to quickly launch EC2 instances with preconfigured packages. This enables users to save time by not having to manually configure each instance after they have been launched. Instead, bootstrapping will automatically install and configure the necessary packages and applications on each instance as soon as it is launched. This makes it easier for users to get up and running faster with fewer manual steps involved.

To enable bootstrapping, a script is included in the user-data field of each EC2 instance to automate the installation of an Apache server that displays custom text when the instances start. This is addressed with the HCL syntax below:
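A sketch of the two instances (the AMI lookup and the user-data script are illustrative):

# Look up the most recent Amazon Linux 2 AMI
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "web_1" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public_subnet_1.id
  vpc_security_group_ids = [aws_security_group.public_sg.id]

  # Bootstrap Apache and write a custom landing page
  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<h1>Web server 1 is up</h1>" > /var/www/html/index.html
  EOF
}

resource "aws_instance" "web_2" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public_subnet_2.id
  vpc_security_group_ids = [aws_security_group.public_sg.id]

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<h1>Web server 2 is up</h1>" > /var/www/html/index.html
  EOF
}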

With the EC2 instances deployed into both of the public subnets, our client/web tier configuration is concluded.

Step 8 — Configure RDS in Private Subnets

We will finalize our data tier with the following syntax:
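A sketch of the database subnet group and RDS instance (names are illustrative; the credentials shown are placeholders, and in practice you should pull them from variables or a secrets manager rather than hardcoding them):

# Place the database in the two private subnets
resource "aws_db_subnet_group" "two_tier_db_subnets" {
  name       = "two-tier-db-subnets"
  subnet_ids = [aws_subnet.private_subnet_1.id, aws_subnet.private_subnet_2.id]
}

resource "aws_db_instance" "two_tier_db" {
  identifier             = "two-tier-db"
  engine                 = "mysql"
  instance_class         = "db.t2.micro"
  allocated_storage      = 10
  db_name                = "twotierdb"
  username               = "admin"
  password               = "change-me-placeholder" # placeholder; never commit real credentials
  db_subnet_group_name   = aws_db_subnet_group.two_tier_db_subnets.name
  vpc_security_group_ids = [aws_security_group.private_sg.id]
  skip_final_snapshot    = true
}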

Amazon Relational Database Service (RDS) is one of the most powerful and widely used database services in the cloud. RDS provides a managed environment for your databases with automated backups, patching, and other maintenance tasks. It reduces the time spent on managing database functionality, so you can focus on developing applications. RDS supports a range of relational database engines, including MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. (NoSQL workloads are handled by separate AWS services such as DynamoDB, not by RDS.) The service allows for scalability and backups that are secure and easy to manage, making it an essential component of any AWS-backed project.

Step 9 — Create the outputs.tf File

The outputs.tf file is an important part of the infrastructure-as-code development process. It declares the values Terraform should display after a run, such as IP addresses and other relevant information. By having a central location for exported values, developers can easily reference them from other configurations. This simplifies retrieval, saves time, and reduces manual operations. Including an outputs.tf file gives teams a consistent way of managing outputs across modules and data sources that is easier to work with and manage than traditional methods.

Initiate the creation of the outputs.tf file with the following syntax:

touch outputs.tf

The following syntax returns the EC2 public IPv4 addresses, the RDS instance address, and the DNS name of the load balancer:
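A sketch of the output declarations (output names are illustrative and reference the resources sketched above):

output "web_1_public_ip" {
  value = aws_instance.web_1.public_ip
}

output "web_2_public_ip" {
  value = aws_instance.web_2.public_ip
}

output "rds_address" {
  value = aws_db_instance.two_tier_db.address
}

output "alb_dns_name" {
  value = aws_lb.two_tier_alb.dns_name
}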

Step 10 — Initialize Terraform

With our .tf files populated with HCL to manage our infrastructure and outputs, we need to initialize Terraform with the following command:

terraform init

A successful initialization will return output similar to the following:
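(Abridged; the provider version installed will vary.)

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.0"...
- Installing hashicorp/aws v4.x.x...

Terraform has been successfully initialized!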

We can further organize and clean our Terraform source code files with the following command:

terraform fmt

This command rewrites Terraform source files into a canonical format, fixing indentation and spacing inconsistencies that may have crept into the configuration. Note that terraform fmt addresses style only; it does not catch logical errors, so you will still rely on validation and planning to verify correctness before code enters production environments.

Step 11 — Plan and Apply

With our code formatted properly, we can execute the plan and apply phases of the workflow:

terraform plan

The ‘terraform plan’ command is an incredibly important tool for ensuring a successful infrastructure setup in the cloud. It provides a comprehensive overview of the resources that will be deployed and highlights specific errors within our code.

With proper deployment plans verified, we will insert the following command to apply and build out the infrastructure:

terraform apply

Terraform returns another overview of the resources to be deployed, along with a final prompt to confirm the construction of the infrastructure:
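The confirmation prompt looks like this:

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: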

After entering ‘yes’, the resources will begin to deploy. After a successful deployment, Terraform confirms the result, including the values specified in the outputs.tf file:
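The confirmation takes roughly this shape (the resource count and output values are placeholders):

Apply complete! Resources: <count> added, 0 changed, 0 destroyed.

Outputs:

alb_dns_name = "<load balancer DNS name>"
rds_address = "<RDS endpoint address>"
web_1_public_ip = "<instance 1 public IPv4>"
web_2_public_ip = "<instance 2 public IPv4>"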

We can additionally verify the creation of our resources by viewing them in the AWS console:

Step 12 — Destroy Infrastructure

Terraform also allows the user to completely delete an entire infrastructure created with Terraform, including all of its managed services and components. It’s a powerful way to quickly revert changes made during a build process, eliminating lingering configurations and freeing up valuable resources. This command must be used carefully, however, as it cannot be reversed and has permanent effects on the environment being destroyed. That caution aside, the ability to remove an entire infrastructure in a matter of seconds with a single command makes this one of the most important functions available in Terraform.

To initiate the deletion of the two-tier architecture, insert the following command:

terraform destroy

Upon successful deletion of the resources, Terraform returns output like the following:
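The final line reads (the count is a placeholder matching what was deployed):

Destroy complete! Resources: <count> destroyed.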

As demonstrated in this example, using Terraform for infrastructure deployment offers a number of advantages over manual methods. These include improved scalability and visibility into changes over time. With its simple syntax and powerful automation capabilities, Terraform provides teams with an efficient way to quickly spin up new environments without needing extensive manual intervention or scripting knowledge. If you’re looking for an automated solution for deploying cloud infrastructure quickly and securely, then look no further than Terraform!

Tanner Clegg
Nerd For Tech

Current DevOps engineer student with Level Up In Tech. Join me as I learn more about the cloud and share some of my latest research and projects.