Host a Dynamic Web App on AWS with Terraform, Docker, Amazon ECR, and ECS

Eugene Miguel
77 min read · Jul 28, 2023


Part 2 of Deploying a Dynamic Website on AWS with Terraform

What is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through code instead of through manual processes. With IaC, you create configuration files that contain your infrastructure specifications, which makes configurations easier to edit and distribute.

Photo by HashiCorp on Google

Benefits

  1. Cost reduction
  2. Increase in speed of deployments
  3. Reduce errors
  4. Improve infrastructure consistency

What is Container Deployment?

Containers are a method of building, packaging and deploying software. A container includes all the code, runtime, libraries and everything else the containerized workload needs to run.

Container deployment is the act of pushing (or deploying) containers to their target environment, such as a cloud or on-premises server. While a container might hold an entire application, in reality most container deployments are really multi-container deployments, meaning you are pushing multiple containers to the target environment. For more dynamic, large-scale systems, you might deploy hundreds or even thousands of containers a day.

Photo by AVI Networks on Google

Benefits

  1. Speed
  2. Agility and flexibility
  3. Resource utilization and optimization
  4. Run anywhere

Introduction

Hello and welcome back! In this project, we will be diving into the exciting world of AWS deployment using Terraform, Docker, Amazon ECR, and Amazon ECS. By the end of this project, we will have the skills and knowledge to write code that deploys applications on AWS, incorporate Git and GitHub into our projects, and store Terraform state in S3 while locking it with DynamoDB.

We will also learn how to write Terraform code for core AWS services such as:

  1. VPC
  2. NAT Gateway
  3. Security Groups
  4. Elastic Container Service (ECS)
  5. Relational Database Services (RDS)
  6. Application Load Balancer, Auto Scaling Group, S3, Certificate Manager, Route 53, and more.

Aside from building your technical skills, this project will also enhance your resume and prepare you for the AWS Certified Cloud Practitioner and AWS Certified Solutions Architect exams.

Requirement

Before you start this project, please make sure that you have completed my previous project Host a Dynamic Web App on AWS with Docker, Amazon ECR, and Amazon ECS. All of my files can be found in my P2-AWS-Terraform-Docker repository in GitHub.

Objectives

  1. Course Introduction
  2. Requirements
  3. How to Install Terraform
  4. Free GitHub Account Sign Up
  5. Get Started with Git — Installing Git on your Computer
  6. Generate Key Pairs for Secure Connections
  7. Add Your Public SSH Key to GitHub
  8. Visual Studio Code Installation for Effective Terraform Workflow
  9. Maximize Terraform Efficiency with these Extension Installs
  10. AWS CLI Setup — Installing Command Line
  11. Creating IAM User in AWS
  12. Get Started with IAM User — Creating Access Key
  13. AWS Configuration — Running the AWS Configure Command
  14. Storing Terraform State with S3 Bucket
  15. Locking Terraform State with DynamoDB Table
  16. Creating AWS Resources with Terraform Syntax
  17. Git Repository Setup for Terraform Code Storage
  18. Cloning Git Repository to Your Local Machine
  19. Terraform Variables — An Introduction
  20. Assigning Values to Terraform Variables with TFVars
  21. Establishing Secure Connection between Terraform and AWS
  22. S3 Bucket and DynamoDB for Storing and Locking Terraform State
  23. Creating a 3-Tier VPC with Terraform
  24. NAT Gateway Creation with Terraform
  25. Securing AWS with Terraform — Creating Security Groups
  26. Creating RDS Instance with Terraform
  27. AWS SSL Certificate Request with Terraform
  28. Application Load Balancer Creation with Terraform
  29. Creating S3 Bucket with Terraform
  30. ECS Task Execution Role Creation with Terraform
  31. ECS Service Creation with Terraform
  32. Auto Scaling Group for ECS Service Creation with Terraform
  33. Create Record Set in Route-53 and Terraform Outputs
  34. Terraform Clean Up — Running Terraform Destroy

Let’s bring it on!

Photo by Jakayla Toney on Unsplash

3. How to Install Terraform

We will learn how to install Terraform on a Windows computer. For details on how to install Terraform, visit my previous tutorial here where I explained it thoroughly.

4. Free GitHub Account Sign Up

We will create a GitHub account to store all the code for our project. If you already have an account, you can use it for this project. For details on how to sign up for a free GitHub account, visit my previous tutorial here where I explained it thoroughly.

5. Get Started with Git — Installing Git on your Computer

To clone a GitHub repository, you must install Git on your computer. For details on how to install Git, visit my previous tutorial here where I explained it thoroughly.

6. Generate Key Pairs for Secure Connections

We will create a keypair on our computer so we can use it to clone our private GitHub repository. For details on how to create a keypair, visit my previous tutorial here where I explained it thoroughly.

7. Add Your Public SSH Key to GitHub

We will upload our key pair to GitHub. Previously, we created a key pair; the next thing we need to do is upload the public key of that key pair to GitHub. Afterwards, we'll be able to clone our GitHub repository.

For details on how to add your public SSH key to GitHub, visit my previous tutorial here where I explained it thoroughly.

8. Visual Studio Code Installation for Effective Terraform Workflow

We are going to install Visual Studio Code on our computer. It is a text editor we will use for our project. For details on how to install Visual Studio Code, visit my previous tutorial here where I explained it thoroughly.

9. Maximize Terraform Efficiency with these Extension Installs

We are going to install some Terraform extensions. For details on how to install the extensions, visit my previous tutorial here where I explained it thoroughly.

10. AWS CLI Setup — Installing Command Line

We will install the AWS CLI on our computer to manage our AWS services. For details on how to install the command line interface, visit my previous tutorial here where I explained it thoroughly.

Keep on moving!

Photo by taichi nakamura on Unsplash

11. Creating IAM User in AWS

We are going to create an IAM user with programmatic access. Terraform will use this IAM user’s credentials to create resources in our AWS environment.

For details on how to create an IAM user, visit my previous tutorial here where I explained it thoroughly.

12. Get Started with IAM User — Creating Access Key

Once you've created your IAM user, you have to generate an access key and secret access key so that the user has programmatic access.

For details on how to create an access key, visit my previous tutorial here where I explained it thoroughly.

13. AWS Configuration — Running the AWS Configure Command

We will create a named profile for the IAM user we created previously. This will allow Terraform to use the user's credentials to authenticate with our AWS environment.

For details on how to create a named profile, visit my previous tutorial here where I explained it thoroughly.

14. Storing Terraform State with S3 Bucket

When you use Terraform to create resources in AWS, it records information about the resources it creates in a Terraform state file. The next time you update those resources, Terraform uses the state file to find them and update them accordingly.

The state file is crucial to our Terraform workflow. In this part, we will create an S3 bucket to store our state file. Most companies store their state files in an S3 bucket, and this may come up as an interview question.

For details on how to store Terraform state with S3 bucket, visit my previous tutorial here where I explained it thoroughly.

15. Locking Terraform State with DynamoDB Table

We will create a DynamoDB table to lock the Terraform state. Locking the state with DynamoDB prevents multiple users from making changes to the state at the same time.

Go to DynamoDB through your AWS console > Tables > Create table. Use the following information and type it the same way I did. For the options that weren’t mentioned leave them to their default settings.

Table name (type it this way): terraform-state-lock

Partition key: LockID

Click Create table.

We’ve successfully created the terraform-state-lock. This is the table that we will use for the Terraform state lock.
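The tutorial creates this table in the console, but the same table can be expressed in Terraform itself. A minimal sketch of an equivalent resource (the billing mode is my assumption; on-demand avoids capacity planning for an infrequently written lock table):

```hcl
# sketch: the same terraform-state-lock table as a Terraform resource
resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST" # assumption; the console default differs
  hash_key     = "LockID"

  # Terraform's S3 backend locks on a string attribute named LockID
  attribute {
    name = "LockID"
    type = "S"
  }
}
```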

16. Creating AWS Resources with Terraform Syntax

Before we start creating the resources for this project, let me show you the process I used to create a resource in AWS using terraform. In this example, I will show you how to write a simple terraform syntax to create a VPC. You can use this technique to create any resource in AWS using terraform.

For details on how to create AWS resources with Terraform syntax, visit my previous tutorial here where I explained it thoroughly.

17. Git Repository Setup for Terraform Code Storage

We will create a Git repository that we will use to store the Terraform code.

Log in to your GitHub account and go to the homepage. Click the green New button. Enter the information below.

Repository name (Type it the way I did or you can give any name you prefer): rentzone-terraform-ecs-project

Description: A repository to store the Terraform codes for the RentZone ECS project

Private: select the radio button

Add a README file: check the box

Add .gitignore: Terraform

Then click Create repository

We have successfully created the repository that we will use to store our Terraform code for this project. In your repository, ensure that you have added a .gitignore file for Terraform and a README file.

18. Cloning Git Repository to Your Local Machine

We will clone the repository we created previously to our computer. For details and guide on how to clone Git repository to your machine, visit my previous tutorial here where I explained it thoroughly.

Please make sure that you are cloning the rentzone-terraform-ecs-project (your repository name may be different).

19. Terraform Variables — An Introduction

We will create Terraform variables. Think of a variable as a placeholder that stores a value you can reference later on in your Terraform project.

For details and guide on how to create Terraform variables, visit my previous tutorial here where I explained it thoroughly.

For example, we will create three variables in this project to store our project name, the environment, and the region we want to deploy the project in. The best way to learn how variables work is to create one.

Open your project folder in Visual Studio Code. Let me explain these variables.

Variable that we will use to store the value for the region that we will deploy this application in.

Blue — To create the variables for the project name, region, and environment, we first create a Terraform file in our project folder.

Yellow — A comment or note. When Terraform runs your code, it ignores this line.

Red — This is the first variable that we are going to create. We will use it to specify the region we want to deploy this application in. Start by typing var; you will see an auto-suggestion that says block. Select it or press Enter, and it will create a variable block for you. Inside the double quotes, we're calling this variable region.

Green — Type your description inside the double quotes.

Orange — Type is our next argument, and it is going to be string. In programming, a string is a piece of text. We are restricting the value of this variable to a string, which means it cannot be a number or a boolean (true or false).

It’s best practice to align the equal signs to make it neat.

This is how you create a variable in Terraform. We will use this to store the value of the region where we will deploy our application in.

Pink — Before we create the next variable: if you want to give your variable a value, you add the default option. We will create this project in the us-east-1 region, so type the value inside the double quotes. Any time we reference the variable region in our Terraform project, the value will be us-east-1.

For this project, we won't enter default values for our variables in the variables file. In the next part, I will show you how to use a terraform.tfvars file to pass the values for your variables. So let's remove this default value for now.

So far, in your variables file, the variables should look like this
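Since the original screenshot is not reproduced here, this is roughly what the region variable should look like at this point (the comment and description wording are my assumptions based on the colors described above):

```hcl
# environment variables
variable "region" {
  description = "region to create resources"
  type        = string
}
```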

Variable that we will use to store the value for our project name.

Copy the variable above and paste it below. Let me explain how we are modifying this variable.

Blue — We will call this variable project_name. If there is a space between two words, use a hyphen or an underscore.

Yellow — The description is going to be Project Name.

Red — The type is going to be a string.

Variable that we will use to store the value for the environment we will deploy this project in.

Copy the variable above and paste it below. To explain further

Blue — For the variable name, we will call it environment.

Red — The description of this variable is environment. In Terraform, we are using this description to describe what this variable would do. You can add as much details to your description as you want.

Yellow — The type is also going to be a string.

These are the 3 variables we are creating for now. We will update this variables file as we build other resources in our project.

Remember that we are not going to enter the values for our variables in the variables file; instead, I will show you how to use the terraform.tfvars file to enter the values of your variables.

Go ahead and save your work.
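Putting the three variables together, variables.tf should look roughly like this (the descriptions are my assumptions based on the walkthrough; the reference file in the P2-AWS-Terraform-Docker repository is authoritative):

```hcl
# environment variables
variable "region" {
  description = "region to create resources"
  type        = string
}

variable "project_name" {
  description = "project name"
  type        = string
}

variable "environment" {
  description = "environment"
  type        = string
}
```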

20. Assigning Values to Terraform Variables with TFVars

We will use terraform.tfvars to assign values to the variables we created previously. The first thing we need to do is create the terraform.tfvars file. Similar to what we did in the prior steps, go ahead and create the file in your project folder.

You can split your screen like this: click and hold terraform.tfvars, drag it to the right, and release, putting variables.tf on your left and terraform.tfvars on your right. Close the explorer to give yourself more room.

Entering the values in the .tfvars file

It's going to look like this once we've entered all the values.

Let me break this down for you

We are copying from the .tf file and pasting into the .tfvars file.

Yellow — copy the comment/note here

Blue — copy the variable name region and paste it here. We will deploy this application in us-east-1.

Red — This variable is called project_name. This project is called rentzone.

Pink — This variable is called environment and we’re calling it dev.

This is how you use terraform.tfvars to assign the values for your variables. Go ahead and save your work.
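Based on the values called out above (us-east-1, rentzone, dev), the completed terraform.tfvars should look roughly like this:

```hcl
# environment variables
region       = "us-east-1"
project_name = "rentzone"
environment  = "dev"
```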

We are going to smash this! Keep going!

Photo by Todd Quackenbush on Unsplash

21. Establishing Secure Connection between Terraform and AWS

We will configure an AWS provider to establish a secure connection between Terraform and AWS. First, create a new file named providers.tf in Visual Studio Code. The provider reference files, including the resource types and arguments for this project, can be found in my P2-AWS-Terraform-Docker repository on GitHub. We will use this to configure an AWS provider to authenticate Terraform with AWS. Once you have the syntax, go and paste it in your Visual Studio Code.

Let's break these arguments down.

Red — This syntax is the provider block. The cloud provider that we want to authenticate to is AWS.

Pink — This is the region where we want to deploy our project in.

Blue — Earlier, we created a variable for the region. We are going to reference that variable for the region argument (Pink).

Pink — Notice that as soon as you type var., the auto-suggestion shows the variable we created for the region in the variables.tf file. All you have to do is select it or type var.region.

That is how we reference the variable region.

The next argument is profile. This is the named profile that we configured on our computer earlier. It contains our user’s access key ID and secret access key.

The named profile is stored in the .aws folder in your home directory.

To get this, go to your file explorer. Locate and open your .aws folder.

Open the credentials file with Notepad.

Yellow — This is the named profile that we configured in the previous steps. Copy terraform-user.

Blue — paste it here.

This is the named profile that I have configured on my computer that I will use to authenticate Terraform with my AWS environment. So far, our providers.tf file looks like this

The next argument is default_tags. These are the tags that we want to add to every resource we create in this project.

Blue — the first tag we want to add is automation. The value is terraform.

Red — This is going to be the name of the project. Copy the variable name project_name from the variables.tf file and reference it here.

Yellow — The last tag that we will add to our resource is the environment. Copy the variable name environment from the variables.tf file and reference it here.

This is how we reference the variable environment. For every resource we create in this project, Terraform will add these tags to those resources.

Let us recap

  1. The first tag we are adding is automation, with the value terraform, to identify that the resource was created with Terraform.
  2. We are adding the project name and
  3. The environment

This is all we need to do to configure an AWS provider to authenticate Terraform with AWS. Go ahead and save your work.
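Pulling those arguments together, providers.tf likely looks like this (the exact tag keys are my assumptions based on the description; the reference file in the P2-AWS-Terraform-Docker repository is authoritative, and your named profile may differ):

```hcl
# configure aws provider to establish a secure connection between terraform and aws
provider "aws" {
  region  = var.region
  profile = "terraform-user" # your named profile may be different

  # tags added to every resource created in this project
  default_tags {
    tags = {
      "Automation"  = "terraform"
      "Project"     = var.project_name
      "Environment" = var.environment
    }
  }
}
```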

22. S3 Bucket and DynamoDB for Storing and Locking Terraform State

We will write the Terraform syntax to store the Terraform state in S3 and lock it with DynamoDB.

When using Terraform, it keeps a record of your infrastructure called the “state file.” This file is essential because it helps Terraform understand what resources it manages and how they relate to your configuration.

Storing the state file in Amazon S3, a cloud storage service, offers several advantages:

1. Safety: S3 keeps your state file secure and prevents data loss by storing multiple copies.
2. Sharing: When working with a team, keeping the state file in S3 lets everyone access it easily.
3. History: S3 can track changes to the state file, allowing you to undo mistakes if needed.
4. Control: You can manage who has access to the state file using AWS tools.

DynamoDB, a database service, is used to “lock” the state file, meaning it ensures only one person can make changes at a time. This helps prevent errors that could occur if multiple team members try to update the state file simultaneously.

In simpler terms, storing the Terraform state file in S3 and using DynamoDB for locking keeps your infrastructure records safe, shareable, and consistent when working alone or with a team.

Go to your project folder in Visual Studio Code and create another file named backend.tf. The backend reference files including the resource types and arguments for this project can be found in my P2-AWS-Terraform-Docker repository on GitHub. Once you have the syntax, go and paste it in your Visual Studio Code.

Let me roll my sleeves and break this down for you.

Blue — This is the backend syntax that stores your Terraform state file in S3 and locks it with DynamoDB. We start with a terraform block; underneath it, we specify s3 as the backend we want to use.

Red — bucket is the first option for our backend. This is the name of the S3 bucket where we want to store the state file. This is the S3 bucket that we created in my previous tutorial, so copy your unique bucket name and paste it here.

Yellow — The key is the name that we want to give our state file in the S3 bucket. We are creating a folder in the S3 bucket named rentzone-ecs/, followed by the name of the Terraform state file, terraform.tfstate.

To explain further, we will store our state file in this S3 bucket; inside that bucket, Terraform will create a folder called rentzone-ecs and store the state file in that folder. The name of the state file is terraform.tfstate.

Green — This is the region that we are using for this project, "us-east-1".

Pink — This is the named profile that we configured previously to authenticate Terraform with AWS. If I go back to my providers.tf file, my profile is called terraform-user (your profile name may be different from mine). Copy it and paste it here.

Orange — This is the name of the DynamoDB table that we created previously to lock our state file. Go to DynamoDB on AWS to get the name of your DynamoDB table and paste it here.

This is all the syntax we need to store the Terraform state in S3 and lock it with DynamoDB. Go ahead and save your work.
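Assembled from the options described above, backend.tf likely looks like this (the bucket name is a placeholder; use your own unique bucket, and note that backend blocks cannot reference variables, so the values are hardcoded):

```hcl
# store the terraform state file in s3 and lock it with dynamodb
terraform {
  backend "s3" {
    bucket         = "your-unique-bucket-name" # replace with the bucket you created
    key            = "rentzone-ecs/terraform.tfstate"
    region         = "us-east-1"
    profile        = "terraform-user" # your named profile may be different
    dynamodb_table = "terraform-state-lock"
  }
}
```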

Let's run terraform init to initialize our Terraform project with our AWS environment. Open an integrated terminal from any of your files and run terraform init.

We have successfully initialized Terraform with AWS.

Go ahead and push your files to your GitHub repository. Afterwards, go to GitHub to verify if those files are there.

In my repository, you can see all the files we just created.

23. Creating a 3-Tier VPC with Terraform

Roll up your sleeves because we will create a VPC with public and private subnets in 2 different availability zones. The VPC reference file including the resource types and arguments that we will use can be found in my P2-AWS-Terraform-Docker repository on GitHub.

Go to your project folder in Visual Studio Code. Create another file named vpc.tf and paste the syntax in.

The first resource block will create the VPC. To create it, we start with a resource block. Let's break this down into bits and pieces.

Blue — this is the resource type

Red — this is the reference name I've given it (your reference name may be different)

We discussed in my previous projects that the resource type is provided by Terraform while reference name is the name you’ve created for this resource in your Terraform project.

Yellow — CIDR block is the first argument. This is the IPv4 CIDR block that we want to assign to this VPC. Instead of hardcoding the IPv4 CIDR block, we’re going to create a variable for it in the variables.tf file — Turquoise. I added a note # vpc variables.

Let's modify this variable. The variable name is "vpc_cidr", the description is going to be "vpc cidr block", and the type is going to be string. We are done creating this variable. Copy the variable name, go back to the vpc.tf file, and reference the variable by typing var. followed by the variable name.

Green — This argument is instance tenancy, and it's going to be default

Orange — We want DNS hostnames enabled, so enable_dns_hostnames should be true

Pink — This argument is tags. For the tag name, we will tag all the resources that we will create in this project by the project name and environment. We will tag this VPC "${var.project_name}-${var.environment}-vpc"

To discuss further.

{var.project_name} — we created a variable for project name in variables.tf so we referenced it here.

{var.environment} — we created a variable for our environment in variables.tf so we referenced it here.

This is the format that we will use to tag all the resources that we will create in this project.

Our vpc.tf so far should look like this
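The screenshot is not reproduced here, but based on the arguments just described, the VPC block likely looks like this (the reference name vpc is my assumption; the reference file in the P2-AWS-Terraform-Docker repository is authoritative):

```hcl
# create vpc
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  instance_tenancy     = "default"
  enable_dns_hostnames = true

  tags = {
    Name = "${var.project_name}-${var.environment}-vpc"
  }
}
```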

The next resource will create an Internet Gateway and attach it to the VPC.

Blue — This is the resource type to create an Internet Gateway.

Red — This is the reference name I’ve given it.

Yellow — The first argument is VPC ID. This is where we want to attach the Internet Gateway. Remember that we just created our VPC resource above, so all you have to do is copy it and reference it here.

Green — This argument is called Tags. We’re going to follow the same format we used for the VPC. Reference it here and just change the -vpc to -igw

This means that when we tag this Internet Gateway, we will tag it by our project name-environment-igw.

Our vpc.tf so far should look like this
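As a sketch of the Internet Gateway block just described (the reference name internet_gateway is my assumption; check the reference file in the P2-AWS-Terraform-Docker repository):

```hcl
# create internet gateway and attach it to the vpc
resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "${var.project_name}-${var.environment}-igw"
  }
}
```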

The next syntax on our template will use a data source to get a list of all availability zones in the region we are using. For example, in this project we will deploy our application in the us-east-1 region, and this syntax will get a list of all the availability zones in the us-east-1 region.
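That data source likely looks like this (the reference name available_zones is my assumption):

```hcl
# use data source to get a list of all availability zones in the region
data "aws_availability_zones" "available_zones" {}
```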

Up next, we will create a public subnet in the first availability zone.

Blue — This is the resource type to create a subnet and the reference name I’ve given it.

Red — This is the ID of the VPC where we want to create this subnet in. If you scroll to the top we created the VPC and we also referenced it so just copy it and paste here.

Yellow — This is the IPv4 CIDR block we want to assign to this subnet. Similar to what we did for our VPC, we will create a variable for it — Turquoise. You can copy the VPC variable, press Enter twice, and paste it here. Modify the variable name to "public_subnet_az1_cidr", the description to "public subnet az1 cidr block", and the type to string.

Copy the variable name. Go back to vpc.tf file. On line 27, let’s reference it here var.public_subnet_az1_cidr

Pink — This argument is availability zone. Remember that we used a data source to get a list of all the availability zones in our region. Just copy it and paste it here, then modify it this way: the [0] at the end means that when we use the data source to get a list of all the AZs in our region, [0] selects the first AZ in that region. In programming, this is called indexing.

Green — The value of this argument is true. Since this is a public subnet, we want any resource that we launch in it, such as an EC2 instance, to have a public IPv4 address.

Orange — For the tag name we will use the same format we used for the VPC. You can copy the tag name above and paste it here. Modify the -igw to -public-az1

Our vpc.tf so far should look like this.
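Based on the arguments above, the public subnet AZ1 block likely looks like this (reference names are my assumptions; the reference file in the P2-AWS-Terraform-Docker repository is authoritative):

```hcl
# create public subnet az1
resource "aws_subnet" "public_subnet_az1" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.public_subnet_az1_cidr
  availability_zone       = data.aws_availability_zones.available_zones.names[0]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.project_name}-${var.environment}-public-az1"
  }
}
```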

The next resource will create a public subnet in the second availability zone. We will follow the same steps we did for creating the public subnet in the first availability zone.

I want to emphasize the following:

Yellow — This is the IPv4 CIDR block we want to assign to this subnet. Similar to what we did for our VPC, we will create a variable for it — Turquoise. You can copy the VPC variable, press Enter twice, and paste it here. Modify the variable name to "public_subnet_az2_cidr", the description to "public subnet az2 cidr block", and the type to string.

Copy the variable name. Go back to vpc.tf file. On line 39, let’s reference it here var.public_subnet_az2_cidr

It is crucial to update az1 to az2

Pink — To create this public subnet in the second availability zone, copy the value of your availability zone for the public subnet AZ1. Change [0] to [1] because when we get a list of all the AZs in our region, we will select the second availability zone.

Orange — For the tag name we will use the same format we used for the VPC. You can copy the tag name above and paste it here. Modify the public-az1 to public-az2

Here's how our two public subnets should look

Our variables for the public subnets should look like this
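Following the descriptions given above, the two public subnet variables should look roughly like this:

```hcl
variable "public_subnet_az1_cidr" {
  description = "public subnet az1 cidr block"
  type        = string
}

variable "public_subnet_az2_cidr" {
  description = "public subnet az2 cidr block"
  type        = string
}
```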

The next resource will create a route table and add public route to it.

Blue — This is the resource type to create a route table and the reference name I've given it.

Red — This is the ID of the VPC where we want to create this route table. If you scroll to the top, we created the VPC and referenced it, so just copy it and paste it here.

Yellow — The next argument is route — we have CIDR block and gateway ID. We are adding a public route to the route table so the CIDR block should be anywhere on the internet, hence the value must be 0.0.0.0/0

Pink — This is the ID of our internet gateway. Go back up and locate the resource that we created for the internet gateway. Copy the resource type and reference name. On line 54, reference it here.

Green — For the tag name we will follow the same format we have been using to tag our resources. You can copy the tag name above and paste it here. Modify the public-az2 to public-rt

Now that we have created a route table with a public route, the next thing we will do is associate it with the public subnet AZ1 and public subnet AZ2 resources. This is what makes those subnets public.

Remember in order to make a subnet public, we have to associate it with a route table that is routing traffic to the internet.

The next resource will associate public subnet AZ1 to the public route table we just created.

Blue — This is the resource type to associate a subnet to a route table and the reference name I’ve given it.

Red — The first argument is subnet ID. This is the ID of the subnet we want to associate the route table with. For this resource, we want to associate the public subnet AZ1 to the route table. Go ahead and copy the resource type and reference name for our public subnet AZ1 (line 25). Then reference it here (line 64).

Yellow — This argument is route table. This is the ID of the route table you want to associate the public subnet AZ1 to. Go back where we created our route table, copy the resource type and reference name. Reference it here.

The next resource will associate public subnet AZ2 to the route table.

Blue — This is the resource type to associate a subnet to a route table and the reference name I’ve given it.

Red — The first argument is subnet ID. This is the ID of the subnet we want to associate the route table with. For this resource, we want to associate the public subnet AZ2 to the route table. Go ahead and copy the resource type and reference name for our public subnet AZ2 (line 37). Then reference it here (line 70).

Yellow — This argument is route table. This is the ID of the route table you want to associate the public subnet AZ2 with. Go back where we created our route table (line 49), copy the resource type and reference name. Reference it here.

So far, our vpc.tf should look like this
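As a sketch of the route table and the two associations just described (reference names are my assumptions; the reference file in the P2-AWS-Terraform-Docker repository is authoritative):

```hcl
# create route table and add public route
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0" # anywhere on the internet
    gateway_id = aws_internet_gateway.internet_gateway.id
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-public-rt"
  }
}

# associate public subnet az1 to the public route table
resource "aws_route_table_association" "public_subnet_az1_rt_association" {
  subnet_id      = aws_subnet.public_subnet_az1.id
  route_table_id = aws_route_table.public_route_table.id
}

# associate public subnet az2 to the public route table
resource "aws_route_table_association" "public_subnet_az2_rt_association" {
  subnet_id      = aws_subnet.public_subnet_az2.id
  route_table_id = aws_route_table.public_route_table.id
}
```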

The next resource will create the private app subnet in the first availability zone.

Let's break this down. We will use a similar format for the rest of our private subnets.

Blue — This is the resource type to create subnet and reference name I’ve given it.

Red — This is the ID of the VPC where we want to create this subnet in. If you scroll to the top we created the VPC and we also referenced it. Just copy it and paste here.

Yellow — This is the IPv4 CIDR block we want to assign to this subnet. Similar to what we did to other subnets we will create a variable for it — Turquoise. You can copy the variable above, press enter twice and paste it here.

Modify the variable name to "private_app_subnet_az1_cidr", the description to "private app subnet az1 cidr block", and the type to string.

Copy the variable name. Go back to vpc.tf file. On line 77, let’s reference it here var.private_app_subnet_az1_cidr

Pink — The next argument is availability zone. This is the AZ in the region where we want to create this subnet. We are creating this in the first AZ. Similar to what we did for public subnet AZ1, on line 28 copy the value then paste it here (line 78).

The [0] means that when we get a list of all the AZs in our region, we will select the first availability zone.

Green — The next argument is map_public_ip_on_launch. Since this is going to be a private subnet, we don't want any resource we launch in this subnet to have a public IPv4 address, hence the value is false.

Orange — For the tag name we will follow the same format we have been using to tag our resources. You can copy the tag name above and paste it here. Modify the public-rt to private-app-az1
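Based on the arguments above, the private app subnet AZ1 block likely looks like this (reference names are my assumptions; the reference file in the P2-AWS-Terraform-Docker repository is authoritative):

```hcl
# create private app subnet az1
resource "aws_subnet" "private_app_subnet_az1" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.private_app_subnet_az1_cidr
  availability_zone       = data.aws_availability_zones.available_zones.names[0]
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.project_name}-${var.environment}-private-app-az1"
  }
}
```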

The next resource will create another subnet in the second availability zone — # create private app subnet az2.

Blue — This is the resource type to create subnet and reference name I’ve given it.

Red — This is the ID of the VPC where we want to create this subnet in. If you scroll to the top we created the VPC and we also referenced it. Just copy it and paste here.

Yellow — This is the IPv4 CIDR block we want to assign to this subnet. Similar to what we did to other subnets we will create a variable for it — Turquoise. You can copy the variable above, press enter twice and paste it here.

Modify the variable name to private_app_subnet_az2_cidr, the description to "private app subnet az2 cidr block", and the type to string.

Pink — The next argument is availability zone. This is the AZ in the region where we want to create this subnet. We are creating this in the second AZ. Similar to what we did for private subnet AZ1, on line 78 copy the value then paste it here (line 90). Ensure that you modify [0] to [1]

The [1] means that when we get a list of all the AZs in our region, we will select the second availability zone. In programming, index 0 selects the first item in a list and index 1 selects the second; this is called indexing.

Green — The next argument is map_public_ip_on_launch. Since this is going to be a private subnet, we don’t want any resource we’ll launch in this subnet to have a public IPv4 address, hence the value is false.

Orange — For the tag name we will follow the same format we have been using to tag our resources (project name > environment > name of the resource).

You can copy the tag name above and paste it here. Modify the private-app-az1 to private-app-az2

The next resource will create another subnet in the first availability zone — # create private data subnet az1

Let’s break down each argument.

Blue — This is the resource type to create subnet and reference name I’ve given it.

Red — This is the ID of the VPC where we want to create this subnet in. If you scroll to the top, we created the VPC and we also referenced it. Just copy it and paste here.

Yellow — This is the IPv4 CIDR block that we want to assign to this subnet. Similar to what we did to other subnets we will create a variable for it — Turquoise. You can copy the variable above, press enter twice and paste it here.

Modify the variable name to private_data_subnet_az1_cidr, the description to "private data subnet az1 cidr block", and the type to string.

Pink — The next argument is availability zone. This is the AZ in the region where we want to create this subnet. We are creating this in the first AZ. Similar to what we did for private subnet AZ1, on line 78 copy the value then paste it here (line 102).

The [0] means that when we get a list of all the AZs in our region, we will select the first availability zone (index 0, as explained earlier).

Green — The next argument is map_public_ip_on_launch. Since this is going to be a private subnet, we don’t want any resource we’ll launch in this subnet to have a public IPv4 address, hence the value is false.

Orange — For the tag name we will follow the same format we have been using to tag our resources (project name > environment > name of the resource).

You can copy the tag name above and paste it here. Modify the private-app-az2 to private-data-az1

The next resource will create another subnet in the second availability zone — # create private data subnet az2

Let’s discuss each argument.

Blue — This is the resource type to create subnet and reference name I’ve given it.

Red — This is the ID of the VPC where we want to create this subnet in. If you scroll to the top, we created the VPC and we also referenced it. Just copy it and paste here.

Yellow — This is the IPv4 CIDR block that we want to assign to this subnet. Similar to what we did to other subnets we will create a variable for it — Turquoise. You can copy the variable above, press enter twice and paste it here.

Modify the variable name to private_data_subnet_az2_cidr, the description to "private data subnet az2 cidr block", and the type to string.

Pink — The next argument is availability zone. This is the AZ in the region where we want to create this subnet. We are creating this in the second AZ. Similar to what we did for private subnet AZ1, on line 90 copy the value then paste it here (line 114).

The [1] means that when we get a list of all the AZs in our region, we will select the second availability zone (index 1).

Green — The next argument is map_public_ip_on_launch. Since this is going to be a private subnet, we don’t want any resource we’ll launch in this subnet to have a public IPv4 address, hence the value is false.

Orange — For the tag name we will follow the same format we have been using to tag our resources (project name > environment > name of the resource).

You can copy the tag name above and paste it here. Modify the private-data-az1 to private-data-az2

Our Private Subnets in the vpc.tf file should look like this

These are all the resources that we need to create our VPC with public and private subnets in multiple availability zones. Let’s go and save our work (File > Save All)

One last thing before creating our VPC: for the CIDR block variables in our variables.tf file, we need to add their values to the terraform.tfvars file.

Let’s split our screen like this. Put variables.tf at your left and terraform.tfvars to your right.

In the variables.tf file, copy the note #vpc variables and paste in the terraform.tfvars file.

Red — We will copy each variable name here and

Blue — We will paste it here. Include the actual CIDR block values.

To recap, this is how the terraform.tfvars should look like

This is all we need to do to create our VPC. Type everything carefully, including the CIDR blocks, so they match the CIDR blocks in your variables. You can use any CIDR blocks that you prefer.
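Since the tfvars screenshot isn't reproduced here, a sketch of the file follows. The 10.0.0.0/16 VPC CIDR matches what we verify later; the individual subnet CIDRs are example values, so substitute your own:

```hcl
# vpc variables
vpc_cidr                     = "10.0.0.0/16"
public_subnet_az1_cidr       = "10.0.0.0/24"
public_subnet_az2_cidr       = "10.0.1.0/24"
private_app_subnet_az1_cidr  = "10.0.2.0/24"
private_app_subnet_az2_cidr  = "10.0.3.0/24"
private_data_subnet_az1_cidr = "10.0.4.0/24"
private_data_subnet_az2_cidr = "10.0.5.0/24"
```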

Let’s save our work and open any file in integrated terminal.

We now have our terminal open. To create our VPC, run terraform apply.

Blue — Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the VPC and other resources in our AWS account. Let’s go to our AWS account and verify if these resources are there.

rentzone-dev-vpc with CIDR block 10.0.0.0/16

2 public and 4 private subnets: 3 in us-east-1a and 3 in us-east-1b Availability Zones.

rentzone-dev-public-rt Route Table

It is routing traffic to the internet through the Internet Gateway.

2 public subnets are associated with our Public Route Table

We have 1 rentzone-dev-igw Internet Gateway and it’s attached to our rentzone-dev-vpc.

Ladies and Gents, this is all we need to do. We’ve verified that the resources we specified in our Terraform project have been created in our AWS account. We will not delete these resources yet because we will need them for the next steps. For now, let’s push our work to our GitHub repository.

This is how you use Terraform to create a VPC with public and private subnets in multiple availability zones.

24. NAT Gateway Creation with Terraform

We will create 2 NAT Gateways, one in public subnet AZ1 and one in public subnet AZ2, so that the resources in our private subnets can have access to the internet.

The NAT Gateway reference file including the resource types and arguments that we will use can be found in my P2-AWS-Terraform-Docker repository on GitHub.

Go to your project folder in Visual Studio Code. Create a new file named nat-gateway.tf and paste the syntax in the file.

The first and second resource blocks will allocate elastic IP addresses. These will be used for the NAT Gateways in the public subnet AZ1 and public subnet AZ2.

To break these down.

Blue — Resource type to allocate the elastic IP and the reference name I’ve provided.

Red — The value is true

Yellow — This is the tag name, we will follow the same format we have been using to tag our resources (project name > environment > resource’s name).

Go to your vpc.tf file. In line 8, copy the Name under tags and paste it here. Ensure that the names end in eip1 and eip2 for the two resources.
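A sketch of those two resources, assuming the reference names `eip1`/`eip2` and the tagging variables used elsewhere in the project:

```hcl
# allocate elastic ip address for nat gateway in public subnet az1
resource "aws_eip" "eip1" {
  vpc = true   # note: on AWS provider v5+, this argument is domain = "vpc"

  tags = {
    Name = "${var.project_name}-${var.environment}-eip1"
  }
}

# allocate elastic ip address for nat gateway in public subnet az2
resource "aws_eip" "eip2" {
  vpc = true

  tags = {
    Name = "${var.project_name}-${var.environment}-eip2"
  }
}
```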

We successfully allocated both elastic IP addresses. The third and fourth resources will create NAT Gateways in public subnet AZ1 and public subnet AZ2.

Blue — Resource types to create NAT Gateways and the reference names I’ve provided.

Red — These arguments are the allocation IDs. This is the ID of the elastic IP (EIP) we want to associate with these NAT Gateways. We created the EIPs above; just copy the resource type and reference name then paste them here. Ensure you remove the quotes, replace them with periods, and add .id at the end.

Yellow — This argument is subnet ID. This is the ID of the subnet where we want to create this NAT Gateway.

We want to create these NAT Gateways in public subnet AZ1 and public subnet AZ2, the subnets we created earlier in the vpc.tf file.

Go to the vpc.tf file. Copy the resource type and reference name. Head back to the nat-gateway.tf file and paste it here. Ensure you remove the quotes, replace them with periods, and add .id at the end.

Pink — This is the tag name, we will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value in line 15 and paste it here. Update the resource name.

Orange — This argument is depends_on. According to AWS documentation, the NAT Gateway needs to depend on the Internet Gateway. Put [] here.

Go to vpc.tf file. Locate the resource where we created the Internet Gateway. Copy the resource type and reference name.

Come back to nat-gateway.tf and paste it between the []. Remove the quotes.

In Terraform, having the depends_on argument means Terraform will create the Internet Gateway before it creates the NAT Gateway.
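Putting the arguments together, the first NAT Gateway resource looks roughly like this; the reference names (`nat_gateway_az1`, `public_subnet_az1`, `internet_gateway`) are assumptions based on the walkthrough:

```hcl
# create nat gateway in public subnet az1
resource "aws_nat_gateway" "nat_gateway_az1" {
  allocation_id = aws_eip.eip1.id
  subnet_id     = aws_subnet.public_subnet_az1.id

  tags = {
    Name = "${var.project_name}-${var.environment}-nat-gateway-az1"
  }

  # per the AWS documentation, the internet gateway must exist
  # before the nat gateway is created
  depends_on = [aws_internet_gateway.internet_gateway]
}
```

The AZ2 resource is identical except it uses `eip2`, `public_subnet_az2`, and an az2 tag name.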

We have successfully created the NAT Gateways in public subnet AZ1 and public subnet AZ2. Up next, we will create private route tables that route traffic through those NAT Gateways.

# create private route table az1 and add route through nat gateway az1

The first route table that we will create will route traffic through the NAT Gateway in the public subnet AZ1.

Blue — The resource type to create a route table and the reference name that I’ve provided.

Red — This argument is the VPC ID. This is where we want to create our route table in. Go to vpc.tf file where we created our VPC. Locate a line where we referenced it and copy it (in my case I went to line 14). Go back to nat-gateway.tf and paste it here.

Yellow — The next argument is route, which contains a CIDR block and a NAT Gateway ID.

CIDR block is the destination on the internet that we will be routing traffic to. For this resource it is anywhere on the internet, hence 0.0.0.0/0.

Pink — This is the ID of the NAT Gateway that we will be using to route traffic to the internet.

This route table will route traffic through the NAT Gateway in the public subnet AZ1. So scroll up and locate your NAT Gateway in the public subnet AZ1. Copy the resource type and reference name then reference it here.

Orange — The tag name, we will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value above (in my case line 39) and paste it here. Update the resource name.
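A sketch of that route table; the tag value matches the rentzone-dev-private-rt-az1 name we verify later, while the reference names are assumptions:

```hcl
# create private route table az1 and add route through nat gateway az1
resource "aws_route_table" "private_route_table_az1" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block     = "0.0.0.0/0"   # anywhere on the internet
    nat_gateway_id = aws_nat_gateway.nat_gateway_az1.id
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-private-rt-az1"
  }
}
```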

# associate private app subnet az1 with private route table az1

Let’s associate private app subnet az1 with private route table az1.

Blue — The resource type to associate a subnet with the route table and the reference name that I’ve provided.

Green — The ID of the private app subnet az1 that we want to associate with this route table. Copy the resource type and reference name from the vpc.tf file and reference it here.

Green — The ID of private route table az1 that we want to associate this subnet with. Copy the resource type and reference name above (line 48) and reference it here.

# associate private data subnet az1 with private route table az1

Let’s associate private data subnet az1 with private route table az1.

Blue — The resource type to associate a subnet with the route table and the reference name that I’ve provided.

Green — The ID of the private data subnet az1 that we want to associate with this route table. Copy the resource type and reference name from the vpc.tf file (in my case line 99) and reference it here.

Green — The ID of the route table az1 that we want to associate this subnet with. Copy the resource type and reference name above (line 64) and reference it here.

The next resources will

  1. Create private route table AZ2 and add route through NAT Gateway AZ2
  2. Associate private app subnet az2 with private route table az2
  3. Associate private data subnet az2 with private route table az2

The first resource will create another route table that we will use to route traffic through the NAT Gateway in the second availability zone.

Blue — The resource type to create a route table and the reference name that I’ve provided.

Red — This argument is the VPC ID. This is where we want to create our route table in. Go to vpc.tf file where we created our VPC. Locate a line where we referenced it and copy it (in my case I went to line 14). Go back to nat-gateway.tf and paste it here.

Yellow — The next argument is route, which contains a CIDR block and a NAT Gateway ID.

CIDR block is the destination on the internet that we will be routing traffic to. For this resource it will be anywhere from the internet, hence 0.0.0.0/0.

Pink — This is the ID of the NAT Gateway that we will be using to route traffic to the internet.

This route table will route traffic through the NAT Gateway in the public subnet AZ2. So scroll up and locate your NAT Gateway in the public subnet AZ2. Copy the resource type and reference name then reference the value here.

Orange — The tag name, we will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value above (in my case line 57) and paste it here. Update the resource name.

We’ve created the 2nd route table. Let’s associate it with private app and data subnet in the second availability zone.

Blue — The resource type to associate the private subnets with the route table and the reference name that I’ve provided.

Red — The resource types and reference names of private subnets that we created in the vpc.tf file. Reference the value here.

Yellow — The resource types and reference names of the private route table we created in the second AZ. This is also the route table that we want to associate our private subnets with. Reference the value here.

These are all the Terraform resources that we need to create the NAT Gateway. Go ahead and save your work.

Open the terminal and run terraform apply. Review the plan and, if you are happy with it, confirm the actions by typing yes and pressing Enter.

Terraform has successfully created the NAT Gateways in my AWS account. Let’s head to our AWS account and check if all of these resources are there.

We have 2 Elastic IP addresses.

In our custom VPC we have 2 NAT Gateways in public subnets.

Lastly, we have our Route Tables

rentzone-dev-private-rt-az1 has a route which is routing traffic to the internet through the NAT Gateway.

It also has 2 explicit subnet associations

rentzone-dev-private-rt-az2 has a route which is routing traffic to the internet through the NAT Gateway.

It also has 2 explicit subnet associations

We can now push the changes to our GitHub repository.

25. Securing AWS with Terraform — Creating Security Groups

We will create the security groups for the application load balancer, bastion host, app server, and database. The security groups reference file including the resource types and arguments that we will use can be found in my P2-AWS-Terraform-Docker repository on GitHub.

Go to your project folder in Visual Studio Code. Create a Terraform file for the security groups. I named it security-group.tf. Paste the syntax in.

A. # create security group for the application load balancer.

It’s crucial to type the cidr_blocks this way, with straight quotes: ["0.0.0.0/0"]

Blue — The resource type that will create the security group and reference name I’ve provided.

Red — The tag name. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.

Yellow — I’ve added the description that enables http/https access on ports 80/443.

Pink — This is the VPC ID where we want to create our security group in. You can copy the value of the VPC ID either from vpc.tf or other files we created previously. Reference it here.

Green — The ingress or inbound rule that we want to add to the security group

Line 7. Ingress

  • the description is http access
  • from and to ports are 80
  • protocol is tcp
  • cidr blocks — we are allowing http access from anywhere in the world, hence 0.0.0.0/0

Line 15. Ingress

  • the description is https access
  • from and to ports are 443
  • protocol is tcp
  • cidr blocks — we are allowing https access from anywhere in the world, hence 0.0.0.0/0

This is how you add an inbound rule to your security group that opens port 80 and 443.

Green — The egress or outbound rule that we want to add to the security group.

It’s crucial to type the cidr_blocks this way, with straight quotes: ["0.0.0.0/0"]

Line 23. Egress

Security groups are stateful: the response traffic for any rule you allow in is automatically allowed out.

  • from and to ports are 0
  • protocol is -1, which means all protocols
  • cidr blocks — we are allowing all outbound traffic to anywhere on the internet, hence ["0.0.0.0/0"]

Orange — The tag name. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.
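Assembled from the bullets above, the ALB security group looks roughly like this; the reference name `alb_security_group` and the tag variables are assumptions:

```hcl
# create security group for the application load balancer
resource "aws_security_group" "alb_security_group" {
  name        = "alb security group"
  description = "enable http/https access on port 80/443"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description = "http access"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "https access"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"            # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-alb-sg"
  }
}
```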

B. # create security group for the bastion host or jump box.

Blue — The resource type that will create the security group and the reference name I’ve provided.

Red — The tag names. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.

Yellow — I’ve added the description that enables SSH access on port 22.

Pink — This is the VPC ID where we want to create our security group in. You can copy the value of the VPC ID either from vpc.tf or other files we created previously. Reference it here.

Green — The ingress or inbound rule that we want to add to the security group

Line 41. Ingress

  • the description is ssh access
  • from and to ports are 22
  • protocol is tcp
  • cidr blocks — this is the IP that is allowed to SSH into your servers. Similar to our previous projects, we don’t open our SSH access to anywhere on the internet. Instead, we’re going to create its variable in the variables.tf file — see screenshot above. Copy the variable name, head back to security-group.tf file and reference it here (line 46). Add the var. and brackets.

We will use this variable to limit the IP that can SSH into our EC2 instance to our IP address.

Line 49. Egress

  • from and to ports are 0
  • protocol is -1, which means all protocols
  • cidr blocks — we are allowing all outbound traffic to anywhere on the internet, hence ["0.0.0.0/0"]

Red — The tag name. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.
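The bastion security group follows the same shape, with the key difference that SSH is restricted to the variable we just created rather than the whole internet. A sketch, with the reference name as an assumption:

```hcl
# create security group for the bastion host or jump box
resource "aws_security_group" "bastion_security_group" {
  name        = "bastion security group"
  description = "enable ssh access on port 22"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description = "ssh access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.ssh_location]   # only your ip, never 0.0.0.0/0
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-bastion-sg"
  }
}
```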

C. # create security group for the app server.

Blue — The resource type that will create the security group and the reference name I’ve provided.

Red — The tag names. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.

Yellow — I’ve added the description that enables http/https on port 80/443 only if that traffic is coming from the ALB security group.

Pink — This is the VPC ID where we want to create our security group in. You can copy the value of the VPC ID either from vpc.tf or other files we created previously. Reference it here.

Green — The ingress or inbound rule that we want to add to the security group

Line 67. Ingress

  • from and to ports are 80
  • protocol is tcp
  • security_groups — this is the ID of the ALB security group where we will allow traffic from port 80. Copy the resource type and reference name of our ALB security group above and reference it here (line 72).

Line 75. Ingress

  • this ingress rule that we will add to the security group will be on https on port 443.
  • from and to ports are 443
  • protocol is tcp
  • security_groups — this is the ID of the ALB security group where we will allow traffic from port 443. Copy the resource type and reference name of our ALB security group above and reference it here (line 80).

This is how you add an inbound rule to your security group that opens port 80 and 443.

Line 83. Egress

  • from and to ports are 0
  • protocol is -1, which means all protocols
  • cidr blocks — we are allowing all outbound traffic to anywhere on the internet, hence ["0.0.0.0/0"]

Red — The tag name. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.

D. # create security group for the database.

This is the last security group that we will create. We will add this to the RDS database.

Blue — The resource type that will create the security group and the reference name I’ve provided.

Red — The tag names. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name.

Yellow — I’ve added the description that enables mysql access on port 3306.

Pink — This is the VPC ID where we want to create our security group in. You can copy the value of the VPC ID either from vpc.tf or other files we created previously. Reference it here.

Green — The ingress or inbound rule that we want to add to the security group

Line 101. Ingress

  • from and to ports are 3306
  • protocol is tcp
  • security_groups — This is the ID of the security group that our database will allow traffic on port 3306 from, hence app server security group. Locate it above, copy its resource type and reference name and reference it here (line 106).

Line 109. Ingress

  • another inbound rule that we will add to the security group. We will only allow traffic on port 3306 if the traffic is coming from the bastion host security group.

We are adding this rule to the RDS security group for data migration to RDS database.

  • from and to ports are 3306
  • protocol is tcp
  • security_groups — locate our bastion host security group. Copy its resource type and reference name. Reference it here (line 114).

This is how you add ingress rules to your security group that opens port 3306.

Line 117. Egress

  • from and to ports are 0
  • protocol is -1, which means all protocols
  • cidr blocks — we are allowing all outbound traffic to anywhere on the internet, hence ["0.0.0.0/0"]

Red — The tag name. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources that we created then reference it here. Update the resource name. Go ahead and save your work.
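This last group uses `security_groups` instead of `cidr_blocks` for its ingress rules, so MySQL traffic is only accepted from the app server and bastion security groups. A sketch, with the reference names as assumptions:

```hcl
# create security group for the database
resource "aws_security_group" "database_security_group" {
  name        = "database security group"
  description = "enable mysql access on port 3306"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description     = "mysql access"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app_server_security_group.id]
  }

  ingress {
    description     = "mysql access"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion_security_group.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-database-sg"
  }
}
```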

Up next, let’s add the value for the SSH location variable that we created in the terraform.tfvars file.

Keep the variables.tf file at your left and the terraform.tfvars file to your right splitting the screen in half. Go to the security-group variables. Copy the variable name. Go to terraform.tfvars file and reference it here (line 16).

To get your IP address, you can go to your AWS console. Go to Security Groups under VPC, go to an inbound rule, and select My IP under Source. We won’t create an additional security group; we just want to get our IP address. Copy your IP address and paste it here (line 16).

Our terraform.tfvars file should look like this.
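The line we just added to terraform.tfvars looks roughly like this; the IP shown is a placeholder to be replaced with your own:

```hcl
# security group variables
ssh_location = "<your-public-ip>/32"   # replace with your own ip address
```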

This is all we need to do to create the security groups for this project. Go ahead and save your work. Open your terminal and run terraform apply

Review the plan. If you are happy with it go ahead and confirm the action by typing yes after the confirmation message.

Terraform has successfully created the security groups in our AWS account. Let’s go to the management console and verify if these resources are there.

The 4 security groups we created under our custom VPC.

The bastion security group with inbound and outbound rules.

The database security group with inbound and outbound rules.

The ALB security group with inbound and outbound rules.

The APP server security group with inbound and outbound rules.

This is all we need to do to create the security groups for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account. You can go to your project folder and push the changes to your GitHub repository.

26. Creating RDS Instance with Terraform

To create the RDS instance, we will restore it from the snapshot we created in the previous project.

We completed that project previously and created a snapshot in our AWS account before deleting the RDS instance.

The RDS’ reference file including the resource types and arguments that we will use can be found in my P2-AWS-Terraform-Docker repository on GitHub. Create a new file in Visual Studio Code and paste the syntax in.

The first resource that we will create is the database subnet group. It is used to specify the subnets in our VPC that we want to reserve for the RDS instance.

Blue — The resource type to create the subnet group and the reference name I’ve provided.

Red — The tag names. We will follow the same format we have been using to tag our resources (project name > environment > resource’s name). You can copy the tag value from the previous resources file that we created such as vpc.tf then reference it here. Update the resource name.

Yellow — These are the subnets that we want to reserve for the RDS instance. Previously, we created our private database subnets (private data subnet AZ1 and private data subnet AZ2) in the vpc.tf file. So go to that file and copy their resource types and reference names then reference them here.
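A sketch of the subnet group, assuming the private data subnet reference names from the vpc.tf walkthrough:

```hcl
# create database subnet group
resource "aws_db_subnet_group" "database_subnet_group" {
  name = "database-subnets"

  subnet_ids = [
    aws_subnet.private_data_subnet_az1.id,
    aws_subnet.private_data_subnet_az2.id,
  ]

  tags = {
    Name = "${var.project_name}-${var.environment}-database-subnets"
  }
}
```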

The next resource will get information about the database snapshot that we saved in our previous project.

Line 16. Enter "manual" with the quotes, since this attribute takes a string value.

Blue — The data source type used to look up a snapshot in your AWS environment, and the reference name I’ve provided.

Red — The name of the snapshot in our AWS account. To enter the value, I created a variable for it (right). Copy the variable name and reference it here (line 14).

Yellow — This is how we specify the latest version of the snapshot. The value should be true

Pink — We created a snapshot in the previous project, hence the value should be manual
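Combined, the data source looks roughly like this; the reference name `latest_db_snapshot` is an assumption:

```hcl
# get information about the database snapshot saved in the previous project
data "aws_db_snapshot" "latest_db_snapshot" {
  db_snapshot_identifier = var.database_snapshot_identifier
  most_recent            = true
  snapshot_type          = "manual"
}
```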

Now that we have used data source to get information about our snapshot, we’re going to use that snapshot to launch our RDS instance.

When we created the RDS instance, some information, such as our database username and password, was already captured in the snapshot, which is why we don’t need to enter it again during restoration. That is also why we only need the arguments above.

The variable contains the variable name, description, and type. We will use this to reference some values in the rds.tf file.

Blue — The resource type to launch an RDS instance from a database snapshot and the reference name I’ve provided.

Red — This argument is the instance class, which is similar to selecting your instance type/class prior to launching an EC2 instance in your AWS console. To enter this value, I created a variable and referenced it here.

skip_final_snapshot — You can create a final snapshot when you delete your RDS instance. Since we will be applying and destroying this Terraform code several times in our AWS environment, I’m skipping it to avoid accumulating snapshots.

availability_zone — We are launching our RDS instance in the second availability zone. You can choose any AZ.

In the vpc.tf file we referenced our availability zones. We are using data source to get a list of all AZs in our region, afterwards we are using indexing to select either of the AZs. Hence, I am choosing the second AZ — Blue. Copy the value. Come back to rds.tf file and reference it here.

Yellow — This is the name that we want to give to our RDS instance. To enter this value, I created a variable and referenced it here.

snapshot_identifier — We used data source above to get information about our snapshot here — Blue. Copy the data, resource type, and the reference name and paste them here.

db_subnet_group_name — We created our subnet group resource above. Just copy the resource type and reference name and paste them here. Ensure adding .name at the end.

Pink — We won’t create a standby database in this project. To enter this value I created a variable and referenced it here. The type bool means the value that we can enter for this variable is either true or false.

vpc_security_group_ids — This is the ID of the security group that we want to attach to the RDS instance. Go to your security-group.tf file where we created our database security group (RDS). Copy the resource type and reference name. Reference it here.
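Assembled from the arguments above, the instance resource looks roughly like this; the reference names are assumptions based on the walkthrough:

```hcl
# launch the rds instance, restoring it from the database snapshot
resource "aws_db_instance" "database_instance" {
  instance_class         = var.database_instance_class
  skip_final_snapshot    = true
  availability_zone      = data.aws_availability_zones.available_zones.names[1]
  identifier             = var.database_instance_identifier
  snapshot_identifier    = data.aws_db_snapshot.latest_db_snapshot.id
  db_subnet_group_name   = aws_db_subnet_group.database_subnet_group.name
  multi_az               = var.multi_az_deployment
  vpc_security_group_ids = [aws_security_group.database_security_group.id]
}
```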

This is all we need to do to create an RDS instance from a snapshot. Go ahead and save your work. You can close your rds.tf and vpc.tf files but leave the variables.tf file.

We are going to add its values in the terraform.tfvars file so open that file alongside variables.tf file splitting your screen in 2.

# rds variables — We added our notes.

database_snapshot_identifier — Copy the variable name from the variables.tf file, then enter the name of our database snapshot (from the snapshots page in your RDS dashboard). This is the snapshot that we created in my previous project.

database_instance_class — Copy the variable name from the variables.tf file, then enter db.t2.micro.

database_instance_identifier — Copy the variable name from the variables.tf file, then enter the instance identifier name.

It’s crucial to know that in order to keep the same endpoint for the RDS instance that the Docker image you built connects to, you must use the same instance identifier name as the RDS instance you created in the previous project.

multi_az_deployment — type false.
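A sketch of the resulting tfvars section; the snapshot and identifier names are placeholders, so use your own values from the previous project:

```hcl
# rds variables
database_snapshot_identifier = "<your-snapshot-name>"    # the manual snapshot from the previous project
database_instance_class      = "db.t2.micro"
database_instance_identifier = "<your-db-identifier>"    # must match the previous project's identifier
multi_az_deployment          = false
```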

This is the Terraform syntax that we need to create the RDS instance. Let’s save all our work. Open an integrated terminal and run terraform apply.

Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the RDS and other resources in our AWS account.

Let’s go to our AWS account and verify if these resources are there.

RDS Database Instance
Inside the RDS database instance shows its endpoint & port, our custom VPC, subnets, and security groups.
In the configuration tab, we have the DB instance ID, DB name, master username, and password.

This is how we use terraform to create an RDS instance. In this tutorial, we restored our RDS instance from the snapshot we created in my previous project. If we used terraform to create a brand-new RDS instance, we would have to migrate our data into it just as we did in the previous project, and when you run terraform destroy (because you don’t want to keep the instance running), you would lose the data you migrated. Then, when you rerun terraform apply, you would have to migrate the data into the RDS instance all over again.

Therefore, we use a snapshot to minimize the effort of migrating your data into the RDS instance every time you run terraform apply. In your actual role as a Cloud DevOps Engineer, this is the common process that your company will use.

This is all we need to do to create an RDS instance for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

27. AWS SSL Certificate Request with Terraform

We will use terraform to request an SSL certificate.

The acm-reference.tf file reference that we will use can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file, create a new file in visual studio code and paste the raw file.

Open your variables.tf alongside acm.tf dividing your screen in two.

The first resource we’re going to create will use our domain name to request a public certificate from AWS Certificate Manager. You can use the same domain name that you registered in the previous project to complete this tutorial.

domain_name This is the domain name that we want to request an SSL certificate for. We need to create a variable for this then reference the variable name here. Refer to the screenshots above (line 80 to 84).

subject_alternative_names This is the sub domain name of our domain name. We will also create a variable for this then reference the variable name here. Refer to the screenshots above (line 86 to 90).

validation_method By using the DNS method, this is how we’re going to validate that this domain name belongs to us.

create_before_destroy Whenever we update our terraform code for any reason, we want terraform to create the new resource before it destroys the old one. Hence the value is true.

When we requested our certificate, we chose DNS as our validation method (line 5). So this next resource will use a data source to get information about our route 53 hosted zone so we can create a record in it to verify that this domain name belongs to us.

name The domain name we want to get information from. We already created a variable for our domain name on line 3, just copy it and paste here.

private_zone This will be false

The next resource will create a record set in the route 53 hosted zone to validate that the domain name belongs to us.

When you copy it from the terraform documentation you can leave everything here to their default values (line 18 to line 34) except for the zone_id

zone_id This is the zone ID of our route 53 hosted zone. This is the only value that we need to modify. We need to reference our route 53 hosted zone and if you recall line 13, we used data source to get the information about our route 53 hosted zone here. Just copy the data, resource type, and reference name then paste them here.

This is all we need to do to create a record set in our route 53 hosted zone to validate that the domain name belongs to us.

The last resource will validate the ACM certificates.

certificate_arn The ARN of the certificate we want to validate. Going back to line 2, this is where we requested our certificate. Copy the resource type and reference name and paste them here.

validation_record_fqdns Please note that if you changed the reference name on the resource you created on line 19, ensure that you also update the reference name here. If you used the same reference name as mine, you can leave the default value here.
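Assembled from the four resources above, acm.tf comes out roughly as follows. This is a sketch based on the standard DNS-validation pattern from the Terraform registry documentation; the reference names are illustrative.

```hcl
# acm.tf -- sketch: request a certificate and validate it via DNS
resource "aws_acm_certificate" "acm_certificate" {
  domain_name               = var.domain_name
  subject_alternative_names = [var.alternative_names]
  validation_method         = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

# look up the existing public hosted zone for the domain
data "aws_route53_zone" "route53_zone" {
  name         = var.domain_name
  private_zone = false
}

# create the DNS validation records in the hosted zone
resource "aws_route53_record" "route53_record" {
  for_each = {
    for dvo in aws_acm_certificate.acm_certificate.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.route53_zone.zone_id
}

# wait for the certificate to be validated
resource "aws_acm_certificate_validation" "acm_certificate_validation" {
  certificate_arn         = aws_acm_certificate.acm_certificate.arn
  validation_record_fqdns = [for record in aws_route53_record.route53_record : record.fqdn]
}
```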

This is the terraform syntax that we need to request an SSL certificate. Let’s save all our work. You can close the acm.tf file for the meantime. Up next, let’s enter the values of our variables in the terraform.tfvars file.

# acm variables In your variables.tf file, copy the # acm variables note and paste it on line 24 of the terraform.tfvars file.

domain_name Copy the variable domain_name and paste it here (line 25). Enter your domain name here (you can get this from your hosted zone).

alternative_names Copy the variable alternative_names and paste it here (line 26). Add *. then enter your domain name. This is how you request an SSL certificate for your sub-domain name.

This is all that we need to do in this step. Let’s save all our files. You can close your variables.tf file for now.

Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the SSL certificate in our AWS account. Let’s go to our AWS account and verify if these resources are there.

Issued certificate ID from Amazon for my domain name.
The 2 domains that I requested the certificate for shows success status.

This is all we need to do to request an AWS SSL certificate for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

28. Application Load Balancer Creation with Terraform

We will create the application load balancer. The alb-reference.tf file that we will use can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file, create a new file in your project folder and paste the raw file in.

The first resource that we will create is the application load balancer.

resource "aws_lb" "application_load_balancer" The resource type to create an ALB and the reference name I have given it.

name We are using the tagging format for this project. In your vpc.tf file, you can copy the tag’s value on line 8 and paste it here. Update the resource name from vpc to alb. When we create this resource, it will be named project_name-environment-alb.

internal This ALB is internet facing, hence false

load_balancer_type This is going to be an application load balancer type, hence type application

security_groups This is the ID of the security group we want to attach to this load balancer. In the security-group.tf file on line 2, this is where we created the ALB security group. Copy the resource type and reference name then reference it here.

subnets These are the subnets we want the ALB to reach. An ALB must always have a route to your public subnets, hence we will use public subnet AZ1 and public subnet AZ2. We created our public subnets in the vpc.tf file on line 25 and 37; copy their resource types and reference names and reference them here.

enable_deletion_protection Type false

Name For the tag name we will use the name as the ALB name, hence copy the value on line 3 and reference it here.
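The ALB resource described above can be sketched like this. It assumes the security group and subnet reference names used earlier in this series (alb_security_group, public_subnet_az1/az2); adjust them to match your own files.

```hcl
# alb.tf -- sketch: internet-facing application load balancer
resource "aws_lb" "application_load_balancer" {
  name               = "${var.project_name}-${var.environment}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_security_group.id]

  subnets = [
    aws_subnet.public_subnet_az1.id,
    aws_subnet.public_subnet_az2.id,
  ]

  enable_deletion_protection = false

  tags = {
    Name = "${var.project_name}-${var.environment}-alb"
  }
}
```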

The next resource we’ll create is the target group.

name We‘ll use our tagging format. You can paste/reference the last tagging format we copied. Modify the resource name to tg

target_type It’s going to be ip

port it is going to be port 80

protocol The protocol is going to be HTTP

vpc_id This is the ID of the VPC we want to create the target group in. In the vpc.tf file on line 38, copy it and reference it here.

health_check These are the default health check settings that we will use for this project. For the matcher I added 301 and 302 , these are the codes when we redirect our HTTP traffic to HTTPS.

The next resource will create a listener on port 80 with redirect action.

resource "aws_lb_listener" "alb_http_listener" This is the resource type to create a listener and the reference name I’ve provided.

load_balancer_arn This argument is the ARN of the ALB. We created our ALB up here (line 2), so copy it and reference it here (line 37) add .arn at the end.

port The port is 80

protocol The value is HTTP

default_action For the type this listener is going to be redirecting HTTP traffic to HTTPS, hence enter redirect

port Enter 443 for redirect action. We will leave protocol and status_code to their default settings.

The next resource will create a listener on port 443 with forward action.

"aws_lb_listener" "alb_https_listener" This is the resource type to create a listener and the reference name I’ve provided.

load_balancer_arn This argument is the ARN of the ALB. We created our ALB up here (line 2), so copy it and reference it here (line 58) add .arn at the end.

port The port is going to be 443

protocol Enter HTTPS

ssl_policy This is the default policy we will use

certificate_arn This is the ARN of our SSL certificate. If you recall, we created our SSL certificate in the acm.tf file on line 2. So go there, copy the resource type and reference name. Reference it here and add .arn at the end.

default_action For the type, this listener is going to forward traffic to our target group, hence enter "forward". The last argument is target_group_arn. This is the ARN of the target group that this listener will forward traffic to. If you scroll up to line 16, we created our target group there. Copy the resource type and reference name, scroll down again to line 62, and reference it here. Add .arn at the end.
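Both listeners together look roughly like this. It is a sketch assuming the reference names used above; the ssl_policy shown is a common AWS default and the redirect status_code is HTTP_301, which may differ from the reference file.

```hcl
# listeners -- sketch: HTTP :80 redirects to HTTPS :443, which forwards
# to the target group
resource "aws_lb_listener" "alb_http_listener" {
  load_balancer_arn = aws_lb.application_load_balancer.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_lb_listener" "alb_https_listener" {
  load_balancer_arn = aws_lb.application_load_balancer.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08" # common default policy
  certificate_arn   = aws_acm_certificate.acm_certificate.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.alb_target_group.arn
  }
}
```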

This completes the steps to create the resource for the application load balancer. Go ahead and save all your work.

Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the application load balancer and other resources in our AWS account.

Let’s go to our AWS account and verify if these resources are there.

The rentzone-dev-alb load balancer.
The listeners and rules tab shows the HTTP and HTTPS listeners including their default actions.

This is all we need to do to create the application load balancer for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

29. Creating S3 Bucket with Terraform

We will create an S3 bucket and we will also upload the environment file we created in the previous project in that S3 bucket.

To start, open your project folder from the previous project. I am referring to the project folder you created when you deployed this application in the AWS console. The rentzone.env is the file we need. The location of your project folder on your computer may be different from mine.

We are going to add this file to our terraform project folder so that when we use terraform to create the S3 bucket, terraform can pick the file up from there and upload it into the S3 bucket.

Open your terraform project in visual studio code, select the rentzone.env file and drag it into your terraform project folder. This is the easiest way to add your environment file into your terraform project.

I explained previously that if a file contains sensitive information, we shouldn’t commit it to our GitHub repository. That is why we need to add our rentzone.env file to our .gitignore so Git will stop tracking it. Afterwards, save all your work.

Notice the rentzone.env file: it turned grey from green, meaning it’s no longer being tracked by Git and it will not be committed to our GitHub repository. You can close both files.

The s3-reference.tf file reference that we will use to complete this project can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file, create a new file in visual studio code (I named mine s3.tf) and paste the raw file here.

You can split your screen in two by dragging the variables.tf file to your right.

The first resource will create an S3 bucket. We also need to create a variable to complete it.

# create an s3 bucket

"aws_s3_bucket" “env_file_bucket” This is the resource type to create an S3 bucket and the reference name I’ve provided.

How your s3.tf file should look like after entering the values.

bucket This argument (line 3) is the name we want to give the S3 bucket. Using the same tagging format we’ve been using, reference the project name ${var.project_name} and the variable name we created for this S3 bucket, ${var.env_file_bucket_name} (line 92).

The next resource will upload our environment file into that S3 bucket.

# upload the environment file from local computer into the s3 bucket

"aws_s3_object" "upload_env_file" This is the resource type to upload an object into an S3 bucket and the reference name I’ve given it.

bucket This is the name of the bucket we want to upload the environment file into. We just created the bucket above (line 2) so copy the resource type and reference name and reference them here (line 8).

key This is the file name we want to upload to the S3 bucket. The file name is rentzone.env; however, instead of typing the file name, let’s create a variable for it. Enter the values shown below. Copy the variable name, go back to the s3.tf file, and reference it here.

source This is the last argument. It’s the path on our computer to the file that we want to upload into the S3 bucket. Remember that the file we want to upload is rentzone.env and it is in the same directory as our terraform files. To specify the path to this file, we will enter the value this way: "./${var.env_file_name}"
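The two resources above can be sketched as follows, assuming the variable names created in this step:

```hcl
# s3.tf -- sketch: create the bucket and upload the environment file
resource "aws_s3_bucket" "env_file_bucket" {
  bucket = "${var.project_name}-${var.env_file_bucket_name}"
}

resource "aws_s3_object" "upload_env_file" {
  bucket = aws_s3_bucket.env_file_bucket.id
  key    = var.env_file_name
  source = "./${var.env_file_name}" # file sits next to the .tf files
}
```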

This completes our steps to create the S3 bucket and upload the environment file; go ahead and save all your work. Close the s3.tf file for the meantime, then let’s add the actual values for our variables in the terraform.tfvars file.

For better visibility you can split your screen in two by dragging the terraform.tfvars file to your right.

To explain the values I entered in my terraform.tfvars file:

Line 29 env_file_bucket_name is the variable name we copied from the s3.tf file (line 92). "pinkastra-ecs-env-bucket" is my unique S3 bucket name.

Line 30 The next variable I added is env_file_name. You can copy the variable name on line 97 from the s3.tf file and paste it here, followed by the name of our environment file "rentzone.env"

Once you’ve entered the values for your variables, we are ready to create the S3 bucket. Save all your work.

Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the S3 bucket and other resources in our AWS account.

Let’s go to our AWS account and verify if these resources are there.

Successfully created the S3 bucket and we uploaded the rentzone.env file.

This is all we need to do to create the S3 bucket for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

30. ECS Task Execution Role Creation with Terraform

We’re going to create the task execution role for the Elastic Container Service (ECS).

The ecs-role-reference.tf file that we will use to complete this tutorial can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file, create a new file in your terraform project folder in visual studio code (I named mine ecs-role.tf) and paste the raw file here.

# create iam policy document. This policy allows the ecs service to assume a role

The first resource we will create is an IAM policy document and this will allow the ECS service to assume a role. We are not going to modify anything in this resource block as this came straight from the terraform documentation.

actions We are using sts:AssumeRole, and the service that is going to assume the role is the ECS service.

This next block will create another policy document. Under statement, we’ve added the permissions that we want the ECS service to have.

These are the default permissions that our ECS service needs and we want to allow it to be able to perform these actions on all resources.

The next permission we want to assign to our ECS service is the ability to get our environment file from the S3 bucket.

The permission "s3:GetObject" will allow our ECS service to get our environment file from the S3 bucket.

For resources on line 33, list the ARN of the S3 bucket that we want the ECS service to have access to. Just change the S3 bucket name by referencing the variable for our S3 bucket name (s3.tf file line 3). This will help us form the ARN of our S3 bucket, hence we are allowing the ECS service to get any object from the S3 bucket.

This statement will allow the ECS service to perform "s3:GetBucketLocation". We are also going to enter/paste our S3 bucket name here.

The next resource will # create an iam policy and we will attach this policy document to the IAM policy.

name Using the same tagging format for our project, reference the value from vpc.tf file on line 8.

policy This is the policy we want to attach to this resource. Reference the policy that we created on line 14.

The next resource will # create an iam role

name For the value, we will use our tagging format. Reference the value on line 51 and the values should look like this after modifying it. You can customize the name according to your preference.

assume_role_policy We want to assume the policy we created on line 2, so reference this here and it should look like this after updating the values.

The last resource will # attach the ecs task execution policy to the iam role we just created.

role The name of the IAM role we want to attach the policy to, hence on line 56 reference the resource type and the reference name here.

policy_arn The ARN of the IAM policy that we want to attach to this role, therefore reference here the IAM policy we created on line 50.
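End to end, ecs-role.tf follows this shape. The policy statements are abbreviated (only the S3 permissions discussed above are shown, not the full set of default ECS permissions), and the names are illustrative.

```hcl
# ecs-role.tf -- sketch: task execution role for ECS

# trust policy: let the ECS tasks service assume the role
data "aws_iam_policy_document" "ecs_task_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

# permissions policy: read the environment file from the S3 bucket
data "aws_iam_policy_document" "ecs_task_execution_policy_document" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::${var.project_name}-${var.env_file_bucket_name}/*"]
  }

  statement {
    actions   = ["s3:GetBucketLocation"]
    resources = ["arn:aws:s3:::${var.project_name}-${var.env_file_bucket_name}"]
  }
}

resource "aws_iam_policy" "ecs_task_execution_policy" {
  name   = "${var.project_name}-${var.environment}-ecs-task-execution-policy"
  policy = data.aws_iam_policy_document.ecs_task_execution_policy_document.json
}

resource "aws_iam_role" "ecs_task_execution_role" {
  name               = "${var.project_name}-${var.environment}-ecs-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_assume_role_policy.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_attachment" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = aws_iam_policy.ecs_task_execution_policy.arn
}
```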

This is all we need to do to create the task execution role for the ECS service go ahead and save all your work.

Now, let’s create the ECS task execution role. Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the ECS task execution role and other resources in our AWS account.

Let’s go to our AWS account and verify if these resources are there.

In IAM roles, we have the rentzone-dev-ecs-task-execution-role-policy.
Click the + to see the policy values.
Under the trust relationships tab, it shows the trusted entities.
Default tags that terraform created as shown under value row.

This is all we need to do to create the IAM task execution role for Amazon ECS for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

We are almost there!

Photo by Pietro Mattia on Unsplash

31. ECS Service Creation with Terraform

We will write the resource to launch the ECS service.

The ecs-reference.tf file that we will use to complete this tutorial can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file and paste it in the new file you created (I named mine ecs.tf)

The first resource will # create ecs cluster

name Using the tagging format for our project, reference the value from line 8 of the vpc.tf file here and modify the resource name.

For the setting , this is how we can enable container insights and we are disabling it for this project.

The next resource will # create cloudwatch log group

name Refer to the screenshot of the values below; you must type it this way. Modify the resource name: td stands for task definition. This means that when we name this log group, we’ll be able to identify it by its task definition.

lifecycle This argument means that when we update our terraform resource, we want terraform to create the new resource before it destroys the old one, hence we will type true

The next resource will #create task definition

family This is the name of the task definition. Reference line 13 here and it should look this way.

execution_role_arn The ARN of our ECS task execution role. We created this in the ecs-role.tf file on line 56. Reference the resource type and reference name here. It should look like this after modifying the value.

Line 31 cpu_architecture we need to create a variable then reference its variable name here.

For the rest of the values, refer to the screenshot below.

The next block will # create container definition

name Using the same tagging format, copy the values on line 22 and reference it here. Update the resource name. This means that we’re going to name our container by the project name, environment, and container.

image We will create a variable for the container image, afterwards copy the variable name and reference it this way "${var.container_image}"

essential This is how we make this container essential, hence enter true

portMappings Enter 80 for the containerPort and hostPort

At the moment, our ecs.tf file should look like this

environmentFiles This is where we reference the ARN of our S3 bucket that contains the environment file. We need to update the <s3-bucket-name> and <env-file-name> . So copy the S3 bucket’s name from s3.tf file line 3. Next, copy the value from line 10. Reference both of them here. Lastly, enter "s3" for type.

logConfiguration

logDriver enter "awslogs",

options

"awslogs-group" We’ll reference the aws log group we created on line 12. Note the , at the end.

"awslogs-region" We’ll reference our region variable. Remember that we created the variable for our region in the beginning of this project, so enter "${var.region}",

"awslogs-stream-prefix" Enter "ecs"

Finally, our ecs.tf file for this resource should look like this

The next resource will #create ecs service

name Using the same tagging format, copy the value from line 37 and reference it here. Update the resource name to service This means that when we name our ECS service, we will name it by our project name, environment — service.

launch_type Enter "FARGATE"

cluster We created the ECS cluster on line 2 so copy the resource type and reference name then reference it here.

task_definition This is the ARN of our task definition. We created this on line 21 so copy its resource type and reference name then reference them here.

platform_version Enter "LATEST"

desired_count We want 2 containers, hence type 2

deployment_minimum_healthy_percent Enter 100

deployment_maximum_percent Type 200

At the moment # create ecs service resource in our ecs.tf file should look like this

For the next resource # task tagging configuration refer to the screenshot below

# vpc and security groups under network_configuration

subnets This is where we enter the ID of the subnets we want to launch our container in. We are launching our container in the private app subnet AZ1 and private app subnet AZ2. To get the value, go to the resource where you created these private subnets, hence vpc.tf file. On line 75 and 87, copy the resource types and reference names then reference them in your ecs.tf file.

security_groups This is the ID of the security group that we want to attach to the container. To get its ID, head over to security-group.tf file and look for the resource where we created the app server security group (line 62). Copy the resource type and reference name and reference them in your ecs.tf file.

assign_public_ip Please remember that we are launching our container in the private subnet and we don’t want them to have a public IP, hence enter false

The # vpc and security groups resource in our ecs.tf file should look like this

#load balancing

The next block is load_balancer and this is how we connect the application load balancer to the ECS service.

target_group_arn To get the ARN of our target group, go to the resource where we created our target group which is line 16. Copy the resource type and reference name then reference them in the ecs.tf file.

container_name We created our container name in the task definition above. On line 37, copy the value, reference it here and update the resource name.

container_port Enter 80

The # load balancing block in our ecs.tf file should look like this
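Pulling the service arguments together, the # create ecs service block looks roughly like this. It is a sketch assuming the cluster, task definition, subnet, security group, and target group reference names from earlier files.

```hcl
# ecs.tf -- sketch: Fargate service wired to the ALB target group
resource "aws_ecs_service" "ecs_service" {
  name                               = "${var.project_name}-${var.environment}-service"
  launch_type                        = "FARGATE"
  cluster                            = aws_ecs_cluster.ecs_cluster.id
  task_definition                    = aws_ecs_task_definition.ecs_task_definition.arn
  platform_version                   = "LATEST"
  desired_count                      = 2
  deployment_minimum_healthy_percent = 100
  deployment_maximum_percent         = 200

  # launch containers in the private app subnets, no public IP
  network_configuration {
    subnets          = [aws_subnet.private_app_subnet_az1.id, aws_subnet.private_app_subnet_az2.id]
    security_groups  = [aws_security_group.app_server_security_group.id]
    assign_public_ip = false
  }

  # connect the ALB target group to the container
  load_balancer {
    target_group_arn = aws_lb_target_group.alb_target_group.arn
    container_name   = "${var.project_name}-${var.environment}-container"
    container_port   = 80
  }
}
```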

This is all we need to do to create the terraform resource for the ECS service; go ahead and save all your work. You can leave the variables.tf file open and close the rest.

Up next, let’s add the values for our variables in the terraform.tfvars file. This is how it should look like afterwards.

How our terraform.tfvars file should look like after referencing the values from our variables.tf file
Multi-view. You are copying the variable from variable.tf and referencing it to terraform.tfvars

To explain the values of our terraform.tfvars further.

architecture Based on the terraform documentation on creating a task definition, this is the value for cpu_architecture. Use X86_64 if you are building your docker image on a Windows computer. Use ARM64 if you’re building the image on a Mac (Apple Silicon) computer.

container_image In our rentzone repository on Amazon Elastic Container Registry (ECR), this is the URI of our docker image.

Go ahead and save all your work. Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the ECS service and other resources in our AWS account.

Let’s go to our AWS account and verify if these resources are there.

The rentzone-dev-cluster with 2 tasks running.
Under rentzone-dev-cluster, we have an active rentzone-dev-service.
The task definition that terraform created.
The health status of our rentzone-dev-service including the ARN, target group name, and 2 tasks running.
Click the rentzone-dev-tg target group to show 2 healthy running targets.

This is all we need to do to create the elastic container service for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

32. Auto Scaling Group for ECS Service Creation with Terraform

We will create the auto scaling group and connect it to the ECS service we just created.

The asg-reference.tf file that we will use to complete this tutorial can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file and paste it in the new file you created in your project folder.

# create an auto scaling group for the ecs service

resource "aws_appautoscaling_target" "ecs_asg" This is the resource type to create an ASG for the ECS service and the reference name.

max_capacity This means that we want our auto scaling to scale up to a maximum of 4 containers.

min_capacity On the other hand we want our auto scaling to scale down to a minimum of 1 container.

resource_id All we have to do is update our ECS cluster name and service name, refer to your ecs.tf file line 3 and 69. "service/${var.project_name}-${var.environment}-cluster/${var.project_name}-${var.environment}-service"

scalable_dimension We will use "ecs:service:DesiredCount"

service_namespace Enter "ecs"

depends_on We want this resource to be dependent on the ECS service. We want terraform to create the ECS service first before creating the auto scaling group. So in your ecs.tf file, copy the resource type and reference name on line 68 and reference it here.
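A sketch of the scaling target, assuming the ecs_service reference name from ecs.tf:

```hcl
# asg.tf -- sketch: auto scaling target for the ECS service
resource "aws_appautoscaling_target" "ecs_asg" {
  max_capacity       = 4 # scale up to at most 4 containers
  min_capacity       = 1 # scale down to at least 1 container
  resource_id        = "service/${var.project_name}-${var.environment}-cluster/${var.project_name}-${var.environment}-service"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"

  # create the ECS service before the scaling target
  depends_on = [aws_ecs_service.ecs_service]
}
```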

The # create an auto scaling group for the ecs service resource/block in our asg.tf file should look like this

# create scaling policy for the auto scaling group

resource "aws_appautoscaling_policy" "ecs_policy" is the resource type to create scaling policy and the reference name I’ve provided.

name We will use the same tagging format for the scaling policy’s name. On line 5, we need to copy some values then reference it this way

policy_type We are using target tracking scaling.

resource_id We are using the same value we entered on line 5.

scalable_dimension We are using the ECS service desired count.

service_namespace Type "ecs"

At the moment, the # create an auto scaling policy for the auto scaling group resource in our asg.tf file should look like this

The last block is target tracking scaling policy configuration, refer to the settings below.

depends_on This is the last argument. We want this policy to be dependent on the auto scaling group we created on line 2. Copy the resource type and reference name then reference them here.
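A sketch of the scaling policy. The CPU-based target tracking configuration and the 75% target value are illustrative choices, not necessarily the ones in the reference file.

```hcl
# asg.tf -- sketch: target tracking scaling policy
resource "aws_appautoscaling_policy" "ecs_policy" {
  name               = "${var.project_name}-${var.environment}-scaling-policy"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs_asg.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_asg.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs_asg.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    target_value = 75 # illustrative: scale to hold average CPU near 75%
  }

  depends_on = [aws_appautoscaling_target.ecs_asg]
}
```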

This completes our steps in creating an auto scaling group and its scaling policy. Go ahead and save all your work.

Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

There you have it! Terraform has successfully created the auto scaling group and other resources in our AWS account.

Let’s go to our AWS account and verify if these resources are there.

In the configuration tab of our rentzone-dev-service, we have the auto scaling with 2 desired tasks, 1 minimum task, and 4 maximum tasks.

This is all we need to do to create the auto scaling group for ECS for this project. We have verified that the resources we specified on our Terraform template were properly applied in our AWS account.

33. Create Record Set in Route-53 and Terraform Outputs

We will create a record set in our route 53 hosted zone so that we can access our application.

The route-53-reference.tf file that we will use to complete this tutorial can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file and paste it in a new file in your project folder.

# get hosted zone details

data "aws_route53_zone" "hosted_zone" — this is the resource type to get the hosted zone details and the reference name I’ve provided.

name We will reference our domain name, hence go to variables.tf file where we created its variable earlier. On line 81, copy the variable name and reference it here.

This resource should look like this

# create a record set in route 53

resource "aws_route53_record" "site_domain" — this is the resource type to create a record set in route 53 and the reference name I’ve provided.

zone_id Copy the data, resource type, and reference name for the route 53 hosted zone on line 2 then reference it here this way

name We’re going to create a variable for the value and after modifying the variable name, description, and type it should look like this

Copy the variable name and reference it here.

type It’s going to be an "A" record.

alias This block is how we connect our record set to our application load balancer.

name This is the DNS name of the ALB, which we created in our alb.tf file. Copy the resource type and reference name from there and reference them here; the attribute we want is the DNS name.

zone_id This is the zone ID of the ALB. For the value, paste the resource type and reference name of our ALB here; the attribute we want is the zone ID.

evaluate_target_health This is the last argument. We want this record set to evaluate the health of our targets, so set it to true.

Finally, our route-53.tf should look like this
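Assembled from the steps above, a sketch of the full file. The ALB reference (`aws_lb.application_load_balancer`) is an assumption; use the actual resource type and reference name from your alb.tf file:

```hcl
# get hosted zone details
data "aws_route53_zone" "hosted_zone" {
  name = var.domain_name
}

# create a record set in route 53
resource "aws_route53_record" "site_domain" {
  zone_id = data.aws_route53_zone.hosted_zone.zone_id
  name    = var.record_name
  type    = "A"

  alias {
    # "application_load_balancer" is an assumed reference name; match your alb.tf
    name                   = aws_lb.application_load_balancer.dns_name
    zone_id                = aws_lb.application_load_balancer.zone_id
    evaluate_target_health = true
  }
}
```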

This is all we need to do to create a record set in the Route 53 hosted zone. Go ahead and save all your work. You can now close both the route-53.tf and alb.tf files.

Up next, we will add the value for our variable record name in the terraform.tfvars file and it should look like this.

Multi-view: you are copying the variable name from variables.tf and assigning it a value in terraform.tfvars.
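The new entry in terraform.tfvars can be sketched like this; the value shown is hypothetical, so substitute your own record name (a common choice is the `www` subdomain):

```hcl
# route 53 variables (hypothetical value; use your own record name)
record_name = "www"
```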

Go ahead and save all your files. Before we run terraform apply to create the record set, we need to create an output that will print our domain name. So create a new file in your project folder named outputs.tf.

The outputs-reference.tf file that we will use to complete this tutorial can be found in my P2-AWS-Terraform-Docker repository on GitHub. Copy the raw file and paste it in your outputs.tf file.

You can use an output to print any attribute of your resources. This output joins "https://" with our record name, a dot, and our domain name. In my next project, when we deploy this application using Terraform modules, we'll learn more about outputs.

"website_url" You can provide your own output name.

var.record_name Add your record name here. Go to your variables.tf file where you created the variable for record name. On line 114, copy the variable name and reference it here.

var.domain_name Add your domain name here. Go to your variables.tf file where you created the variable for the domain name. On line 81, copy the variable name and reference it here.

Your outputs.tf file should look like this.
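A sketch of the output block described above, using Terraform's built-in `join` function to concatenate the pieces of the URL:

```hcl
# print the website url
output "website_url" {
  value = join("", ["https://", var.record_name, ".", var.domain_name])
}
```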

Save all your files. Open an integrated terminal and run terraform apply. Terraform will show you the plan. This contains what Terraform will do in your AWS account. This plan is based on the resources we created in our Terraform project. Review the plan and when you are happy with it, type yes after the confirm message and press Enter.

The record set is going to be created.

There you have it! Terraform has successfully created the record set in route 53 in our AWS account and terraform outputs.

The website_url output has printed our full domain name. To access the website, hover your mouse over the URL and click it.

We can now access our application, fully deployed with Docker, Amazon ECS, and Terraform. End users can now reach our application using our domain name.

This is all we need to do to create the record set in the Route 53 hosted zone for this project. We have verified that the resources we specified in our Terraform template were properly applied in our AWS account.

34. Terraform Clean Up — Running Terraform Destroy

We have completed our project and successfully deployed this application using Terraform, Docker, Amazon ECR, ECS, and various AWS core services. This project will be a valuable addition to your resume.

Finally, to wrap up this project we will push our code changes to GitHub and we’ll run the terraform destroy command to clean up our environment so we don’t incur further cost.

Let's go ahead and push the code changes to our GitHub repository. Select source control, type a commit message, and commit/sync changes.

We successfully verified that the code is in our GitHub repository.

Lastly, in your project folder open an integrated terminal and run terraform destroy. Terraform will show you the plan. This contains the 47 resources that will be destroyed in our AWS account. Review the plan. Type yes to initiate the deletion and press Enter.

Terraform is now deleting all the resources we created in this project.

Congratulations and thank you for following along! I hope you find this valuable in your journey. Let me know if you have any questions, and I look forward to seeing you in my next project.

Photo by Al Elmes on Unsplash

Build real-world projects with me here! Show your employers that you are the right person for the job and stand out from the crowd!

Connect with me on LinkedIn
