A Resilient AWS Three-Tier Architecture Design and Deployment Project

Kwasi Twum-Ampofo (KTA)
34 min read · Jul 2, 2023


Architecture Design

Introduction

Welcome to the Cloudoers blog!

Today, we are excited to have you here as we embark on one of the #10weeksofCloudOps challenges created by a well-respected educator and mentor, Piyush Sachdeva. Together, we will learn how to configure and deploy a highly available, fault-tolerant, secure, and scalable three-tier architecture on the AWS cloud platform.

Technologies, services, and tools used for this architecture.

GitHub | AWS Cloud | EC2 | Security Groups | AWS RDS | Route Table | Internet Gateway | NAT Gateway | Nginx | Elastic Load Balancer | Auto Scaling Group

Part 0: Setup

Objectives:

  • Download Code from Github Repository
  • IAM EC2 Instance Role Creation
  • S3 Bucket Creation

We’re going to start off by running the command below to download the GitHub repository code to our local machine. We’ll then upload the code to an S3 bucket so our instances (virtual machines) can access it.

git clone https://github.com/aws-samples/aws-three-tier-web-architecture-workshop.git

After saving the GitHub code, sign in to your AWS account with your credentials to create an IAM EC2 Instance Role. If you don’t have an AWS account, follow the instructions on the AWS sign-up page to create one before you proceed.

Now, let’s navigate to the IAM dashboard in the AWS Management Console, click on Roles in the left navigation pane, and then Create role to create the EC2 Instance Role.

EC2 Instance Role Creation

Click Create role, select AWS service as the Trusted entity type and EC2 under Common use cases, and click Next.

We’re going to search for and add the following AWS-managed policies to the role’s permissions. These policies will allow our instances to download our code from S3 and to use Systems Manager Session Manager to connect securely, without SSH keys, through the AWS console.

  • AmazonSSMManagedInstanceCore
  • AmazonS3ReadOnlyAccess
Permissions Policies Added To EC2 Instance Role

Click Next after selecting the two managed policies. Under Role details, name the role, scroll down to review the permissions configuration, and then click Create role.
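For reference, the same role can be sketched with the AWS CLI. This is a hedged sketch, not part of the workshop itself: the role name matches the one used later in this post, and the trust policy file is one you would create locally.

```shell
# Trust policy letting EC2 assume the role -- save locally as ec2-trust.json:
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Effect": "Allow",
#     "Principal": { "Service": "ec2.amazonaws.com" },
#     "Action": "sts:AssumeRole"
#   }]
# }

aws iam create-role \
    --role-name ec2-three-tier-access-role \
    --assume-role-policy-document file://ec2-trust.json

# Attach the two AWS-managed policies:
aws iam attach-role-policy --role-name ec2-three-tier-access-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
aws iam attach-role-policy --role-name ec2-three-tier-access-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```

Note that the console wraps the role in an instance profile automatically; from the CLI you would also need aws iam create-instance-profile and aws iam add-role-to-instance-profile before EC2 can use it.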

Now that we have the EC2 Instance Role created, let’s head over to the AWS Console Home and navigate to the S3 dashboard to create the S3 bucket.

AWS Console Home

Click the S3 icon to go to the Amazon S3 dashboard. Select Buckets on the left-hand side and click Create bucket.

S3 Bucket Creation

The bucket name must be globally unique. Let’s name our bucket (theawsthreetierworkshop), select US East (N. Virginia) us-east-1, and accept the rest of the default settings to create the bucket.
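The same bucket can be created from the CLI. Since bucket names are globally unique, this exact name is taken by this walkthrough, so substitute your own:

```shell
aws s3 mb s3://theawsthreetierworkshop --region us-east-1
```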

S3 Bucket General Configuration Settings
S3 Bucket Created

Part 1: Networking and Security

In this section, we are going to build out the VPC networking components as well as security groups that will add a layer of protection around our EC2 instances, Aurora databases, and Elastic Load Balancers. We will create an isolated network with the following components:

  • VPC
  • Subnets
  • Route Tables
  • Internet Gateway
  • NAT gateway
  • Security Groups

VPC and Subnets

The VPC Creation — Click on the VPC icon on the AWS Management Console, select ‘Your VPCs’, and click on Create VPC to initiate the creation process.

VPC And Subnets Creation

Next, select the VPC-only option and fill out the Name tag and CIDR range with the following information:

Name: theawsthreetierworkshop

CIDR range: 10.0.0.0/16

Accept the default tenancy and create the VPC.

Subnet Creation

After creating the VPC, click Subnets on the left side of the dashboard and then click on Create Subnet to create the Subnets.

We will need 6 subnets across two availability zones. This means that three subnets will be in one availability zone and the other three in a second availability zone.

Each subnet in one availability zone will correspond to one layer of our three-tier architecture. We’ll create each of the 6 subnets by specifying the VPC we created in part 1 and then choosing a name, availability zone, and appropriate CIDR range for each of the subnets.

Using the previously created VPC, the 6 subnets with the appropriate configuration settings were successfully created as indicated below.

Subnets
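As a sketch of how the console steps map to the CLI, here is one illustrative carve-up of the 10.0.0.0/16 range into six subnets. The VPC ID is a placeholder, and the /20 CIDR sizes are one possible choice, not values prescribed by the workshop; any non-overlapping ranges inside the VPC CIDR work.

```shell
# Create the VPC:
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Three subnets per availability zone, one per tier:
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --availability-zone us-east-1a --cidr-block 10.0.0.0/20   # public-web-subnet-az-1
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --availability-zone us-east-1a --cidr-block 10.0.16.0/20  # private-app-subnet-az-1
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --availability-zone us-east-1a --cidr-block 10.0.32.0/20  # private-db-subnet-az-1
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --availability-zone us-east-1b --cidr-block 10.0.48.0/20  # public-web-subnet-az-2
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --availability-zone us-east-1b --cidr-block 10.0.64.0/20  # private-app-subnet-az-2
aws ec2 create-subnet --vpc-id vpc-EXAMPLE --availability-zone us-east-1b --cidr-block 10.0.80.0/20  # private-db-subnet-az-2
```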

Internet Connectivity

Internet Gateway

To give our public subnets in our VPC internet access, we need to have an Internet Gateway and attach it to the VPC. On the left-hand side of the VPC dashboard, select Internet Gateway and click on Create internet gateway to create it.

Internet Gateway

Let’s name the Internet Gateway (three-tier-igw), create it, and attach it to the VPC (theawsthreetierworkshop) as indicated below.

Internet Gateway

NAT Gateway

For the instances in our private app layer subnets to access the internet, their traffic needs to go through a NAT gateway. For high availability, we’ll deploy one NAT gateway in each of our public subnets.

Let’s navigate to NAT Gateways on the left side of the current dashboard and click Create NAT Gateway.

NAT Gateway

Fill in the Name, choose one of the public subnets we created earlier, allocate an Elastic IP, and then click Create NAT gateway.

NAT Gateway

Let’s repeat the same procedure for the other public subnet (public-web-subnet-az-2).

NAT Gateways
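The same two NAT gateways can be sketched with the CLI; the subnet and allocation IDs below are placeholders for your own:

```shell
# Allocate an Elastic IP and create a NAT gateway in the first public subnet:
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway \
    --subnet-id subnet-PUBLIC-WEB-AZ1 \
    --allocation-id eipalloc-EXAMPLE1

# Repeat for the second public subnet:
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway \
    --subnet-id subnet-PUBLIC-WEB-AZ2 \
    --allocation-id eipalloc-EXAMPLE2
```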

Routing Configuration

First, let’s create one route table for the web layer public subnets and name it accordingly. On the same VPC dashboard, navigate to Route Tables on the left side and click Create route table.

Route Table Creation

At the Create route table page, let’s fill in the necessary configuration settings information to create the route table.

Route Table Configuration Settings

After creating the route table, we need to edit it on the details page. Let’s scroll down and click on the Routes tab and Edit routes.

Route Table Details Page

On the details page, we’re going to add a route that directs traffic from the VPC to the internet gateway. In other words, for all traffic destined for IPs outside the VPC CIDR range, we add an entry with the internet gateway as the target, and save the changes.

Route Table Edit

We’ve successfully added the route to the internet gateway and saved the changes. Now, let’s edit the explicit subnet associations of the route table by navigating to the route table details again. Select Subnet associations and click Edit subnet associations.

Subnet Association

Here, we’ll select the two web layer public subnets we created earlier and click Save associations as indicated below.

Two Web Layer Public Subnets Selected And Saved

We’re now going to create 2 more route tables, one for each app layer private subnet in each availability zone. These route tables will route app layer traffic destined for outside the VPC to the NAT gateway in the public web subnet of the respective availability zone, so let’s add the appropriate routes for that. Ready? Let’s go! :)

Private-RT-AZ-1 Configuration Settings

Remember, we need two private route tables, one for each private app layer subnet, each routing through the NAT gateway in its own availability zone.

NAT Gateways

Once the route tables are created with routes to the NAT gateways, we need to add the appropriate subnet associations for each of the app layer private subnets.

Each of the app layer private subnets was successfully added to the appropriate Subnet Associations.

Private Subnets’ Associations
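The routing configuration above can be summarized in a CLI sketch; all IDs are placeholders:

```shell
# Public route table: default route to the internet gateway,
# associated with both public web subnets.
aws ec2 create-route-table --vpc-id vpc-EXAMPLE
aws ec2 create-route --route-table-id rtb-PUBLIC \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-EXAMPLE
aws ec2 associate-route-table --route-table-id rtb-PUBLIC --subnet-id subnet-PUBLIC-WEB-AZ1
aws ec2 associate-route-table --route-table-id rtb-PUBLIC --subnet-id subnet-PUBLIC-WEB-AZ2

# Private route table for AZ 1: default route to that AZ's NAT gateway.
aws ec2 create-route-table --vpc-id vpc-EXAMPLE
aws ec2 create-route --route-table-id rtb-PRIVATE-AZ1 \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE-AZ1
aws ec2 associate-route-table --route-table-id rtb-PRIVATE-AZ1 --subnet-id subnet-PRIVATE-APP-AZ1
# ...and the same again for AZ 2 with its own NAT gateway.
```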

Security Groups

Now that we have the private app layer subnets configured, let’s move on to security groups. Security groups tighten the rules around which traffic is allowed to reach our Elastic Load Balancers and EC2 instances.

On the left side of the VPC dashboard, navigate to Security Groups under Security.

The first security group we’ll create is for the public, internet-facing load balancer. After typing a name and description, let’s add an inbound rule allowing HTTP traffic from our IP. See below!

First Security Group
Internet-Facing Load Balancer

The second security group we’ll create is for the public instances in the web tier. After typing a name and description, we add an inbound rule that allows HTTP traffic from the internet-facing load balancer security group we created in the previous step, so traffic from our public-facing load balancer can reach our instances. After that, we’ll add an additional rule allowing HTTP traffic from our IP, so we can access the instance when we test.

Web Tier Security Group Allows Traffic From Internet-Facing Load Balancer To Instances.
Web Tier Security Group With Two Inbound Rules

The third security group is for our internal load balancer. Let’s create this new security group and add an inbound rule that allows HTTP traffic from the public instance (web tier) security group. This will allow traffic from our web tier instances to reach our internal load balancer. Ready? Let’s go! :)

Internal Load Balancer
Internal Load Balancer Accepts Traffic From The Web Tier Security Group

The fourth security group is for our private app tier instances. After typing a name and description, we’ll add an inbound rule allowing TCP traffic on port 4000 from the internal load balancer security group we created in the previous step. This is the port our app tier application runs on, and it allows our internal load balancer to forward traffic on this port to our private instances. We’ll also add another rule for port 4000 that allows our IP for testing.

Private Instance Security Group
Two Inbound Custom TCP Rules Added To The Private Instance Security Group

The fifth security group protects our private database instances. For this security group, we’ll add an inbound rule allowing traffic from the private instance security group on the MySQL/Aurora port (3306).

Database Tier Security Group
Database Tier Security Group Inbound Rule For The Databases (MySQL/Aurora)
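The chain of five security groups can be sketched with the CLI. The point of the design is that each tier only accepts traffic from the security group in front of it; group names and IDs below are placeholders:

```shell
# Public ALB: HTTP from the internet (or restrict --cidr to your own IP).
aws ec2 create-security-group --vpc-id vpc-EXAMPLE \
    --group-name internet-facing-lb-sg --description "Public ALB"
aws ec2 authorize-security-group-ingress --group-id sg-PUBLIC-LB \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Web tier: HTTP only from the public ALB's security group.
aws ec2 authorize-security-group-ingress --group-id sg-WEB-TIER \
    --protocol tcp --port 80 --source-group sg-PUBLIC-LB

# Internal ALB: HTTP only from the web tier.
aws ec2 authorize-security-group-ingress --group-id sg-INTERNAL-LB \
    --protocol tcp --port 80 --source-group sg-WEB-TIER

# App tier: port 4000 only from the internal ALB.
aws ec2 authorize-security-group-ingress --group-id sg-APP-TIER \
    --protocol tcp --port 4000 --source-group sg-INTERNAL-LB

# Database tier: MySQL/Aurora port 3306 only from the app tier.
aws ec2 authorize-security-group-ingress --group-id sg-DB-TIER \
    --protocol tcp --port 3306 --source-group sg-APP-TIER
```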

Part 2: Database Deployment

This section of the workshop will walk us through deploying the database layer of the three-tier architecture.

Objectives:

  • Deploy Database Layer
  • Subnet Groups
  • Multi-AZ Database

Excited? Let’s go!

We are going to create a database subnet group for the architecture. Search for RDS in the AWS console to open the RDS dashboard, and click Subnet groups on the left-hand side.

AWS Console RDS Dashboard

Let’s click Create DB subnet group to begin.

Subnet Groups Creation

Let’s give our subnet group a name and description, and choose the VPC we created.

Subnet Group Configuration

Now, we need to add the subnets previously created in each availability zone for the database tier (layer). To ensure consistency, let’s navigate back to the VPC dashboard to verify so that we can select the correct subnet IDs.

Now that we’ve got the subnet IDs verified, let’s select the two subnets to add them and create the subnet group as indicated below.

Subnets For The Database Tier Added To Subnet Groups
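A CLI sketch of the same subnet group, with placeholder subnet IDs and an assumed group name:

```shell
aws rds create-db-subnet-group \
    --db-subnet-group-name three-tier-db-subnet-group \
    --db-subnet-group-description "Database tier subnets" \
    --subnet-ids subnet-PRIVATE-DB-AZ1 subnet-PRIVATE-DB-AZ2
```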

Database Deployment

Let’s create and deploy the database for our three-tier architecture. We’re going to select MySQL/Aurora as our database choice. On the same RDS dashboard, navigate to Databases on the upper left-hand side and click Create database.

Database Creation For The Database Tier

We’ll now go through several configuration steps. Let’s choose Standard create for this MySQL-compatible Amazon Aurora database and leave the rest of the Engine options as default.

MySQL-Compatible Amazon Aurora Database
Database Engine Version

Under the Templates section, select Dev/Test since this isn’t being used for production at the moment. Under Settings, keep the default Database cluster identifier, database-1.

We’ll keep the username as ‘admin’, set the password to ‘welcome-db’, and write the password down, since we’ll be using password authentication to access our database.

Database Configuration

Under the Cluster storage configuration section, we’ll keep Aurora Standard and keep the default option under Instance configuration.

Cluster Storage Configuration

Next, under Availability and durability, we’ll keep the recommended option to create an Aurora replica (reader node) in a different availability zone. Under Connectivity, select our VPC.

We’ll also choose the subnet group we created earlier, and select no for public access.

Database Subnet Group And Public Access Configured

Now, let’s set the security group we created for the database layer.

We’ll make no changes under Database authentication, leaving password authentication selected. Click Create database to create the database.

MySQL Aurora Database-1

With our database provisioned, we can see a reader and a writer instance in the database subnets of each availability zone. Let’s copy the writer’s endpoint name for later use.
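If you’d rather fetch the writer endpoint from the CLI than copy it from the console, something like this works, using the cluster identifier from above:

```shell
aws rds describe-db-clusters \
    --db-cluster-identifier database-1 \
    --query 'DBClusters[0].Endpoint' --output text
```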

Part 3: App Tier Instance Deployment

In this section, we are going to create an EC2 instance for our app layer and make all the essential software configurations so that the app can run correctly. The app layer will consist of a Node.js application running on port 4000. In addition, we will configure our database with some data and tables.

Objectives:

  • Create App Tier Instance
  • Configure Software Stack
  • Configure Database Schema
  • Test DB connectivity

Excited? Let’s go!

App Instance Deployment

Using the EC2 service dashboard, click on Instances on the left-hand side and then Launch Instances to start the process.

Let’s name our app instance ‘appweb’ for the three-tier architecture and select the Amazon Linux 2 AMI for our application and operating system image.

App Instance Creation

We’ll be using the free tier eligible t2.micro instance type, so let’s select that to proceed.

t2.micro Instance Type

Even though proceeding without a key pair is not recommended, we’ll do so for this architecture, since we’ll connect through Session Manager instead of SSH.

Proceed Without A Key Pair (Not recommended)

Earlier we created a security group for our private app layer instances, so go ahead and select that, along with the VPC we created and ‘private-app-subnet-az-1’ under Network settings.

We’ll keep the default configuration settings for Configure storage, and Advanced details as-is, but review the Summary, and click Launch instance to create the ‘appLayer’ instance.

The ‘appLayer’ instance is created. Now, let’s navigate back to the EC2 Instances dashboard, select the ‘appLayer’ instance, click Actions, then Security, and Modify IAM role to update the IAM role.

Modify IAM role

Select the IAM role we created earlier from the ‘IAM role’ list and click ‘Update IAM role’ to update the role.

Update IAM role
IAM role Successfully Attached To ‘appLayer’ Instance

Connect to Instance

Let’s navigate to our list of running EC2 Instances by clicking on Instances on the left-hand side of the EC2 dashboard. When the instance state is running, connect to the instance by clicking the checkmark box to the left of the instance and clicking the connect button on the top right corner of the dashboard. Select the Session Manager tab, and click Connect. This will open a new browser tab for you.

NOTE: If you get a message saying that you cannot connect via Session Manager, check that your instances can route to your NAT gateways and verify that you granted the necessary permissions on the IAM role for the EC2 instance.

When you first connect to your instance like this, you will be logged in as ssm-user which is the default user as indicated below.

Remotely Connected To ‘appLayer’ Instance

Voila, we’re connected!

Now, let’s switch to ec2-user using the below command in the same terminal:

sudo -su ec2-user

So far, so good!

Let’s take this moment to make sure that we are able to reach the internet via our NAT gateways. If our network is configured correctly up till this point, we should be able to ping the Google DNS servers:

ping 8.8.8.8
Google DNS Server Successfully Pinged

Replies to our ping command mean that we’re successfully connected to the internet. Congratulations, give yourself a pat on the back. You did it! :)

You can stop the ping by pressing ctrl + c.

NOTE: If you can’t reach the internet then you need to double-check the route tables and subnet associations to verify if traffic is being routed to your NAT gateway!

Configure Database

Let’s start this process by installing the MySQL client with the command below.

sudo yum install mysql -y
MySQL CLI Downloaded To ‘appLayer’ Instance

Now, let’s initiate our DB connection with our Aurora RDS writer endpoint. In the following command, replace the RDS writer endpoint and the username, and then execute it in the browser terminal:

mysql -h CHANGE-TO-YOUR-RDS-ENDPOINT -u CHANGE-TO-USER-NAME -p

Let’s replace the endpoint name and username, type in the password at the ‘appLayer’ instance command prompt, and hit Enter to see if we can connect to the database.

We successfully connected to the database after replacing the database endpoint name (database-1.cluster-czkx0xyclbd4.us-east-1.rds.amazonaws.com), the username (admin), and typing in the password as indicated in the above image.

NOTE: If you cannot reach your database, check your credentials and security groups then try again.

Now that we’ve successfully connected to the MySQL server, let’s create a database called ‘webappdb’ with the following command using the MySQL CLI:

CREATE DATABASE webappdb;

Let’s verify that the database was created correctly with the following command:

SHOW DATABASES;

Now, let’s create a data table by first navigating to the database we just created with the command below:

USE webappdb;

Now that we’ve changed to the database (webappdb), let’s create the following transactions table by executing this create table command:

CREATE TABLE IF NOT EXISTS transactions(
    id INT NOT NULL AUTO_INCREMENT,
    amount DECIMAL(10,2),
    description VARCHAR(100),
    PRIMARY KEY(id)
);

Let’s verify that the table was created with the below command:

SHOW TABLES;

The table was successfully created as illustrated in the above image.

Now that we have the table created, let’s insert data into it with the command below for testing later:

INSERT INTO transactions (amount,description) VALUES ('400','groceries');

Let’s verify that our data was added by executing the following command:

SELECT * FROM transactions;

You can see from the command output above that our data was successfully added to the table.

To exit the MySQL client, just type exit and hit enter.

Configure App Instance

The first thing we will do is update our database credentials for the app tier. To do this, let’s open the application-code/app-tier/DbConfig.js file from the GitHub repo in a text editor on the local computer.

The ‘app-tier’ Directory

Now, let’s edit the DbConfig.js file by replacing the empty strings for the hostname, user, password, and database. Fill this in with the credentials we configured for our database, the writer endpoint of our database as the hostname, webappdb for the database, and save the file.

Ready? Let’s go!

The vi Text Editor
DbConfig.js File
Database Credentials Inserted And Saved

NOTE: This is NOT considered a best practice, and is done for the simplicity of this hands-on lab. Moving these credentials to a more suitable place like Secrets Manager is left as an extension for this workshop.

Next, we’re going to upload the app-tier folder to the S3 bucket that we created at the beginning of this project. Let’s navigate to the Amazon S3 dashboard to proceed.

App-tier Folder Uploaded To AWS S3 Bucket

With the ‘app-tier’ folder and contents uploaded to the S3 bucket, let’s go back to our SSM session via our EC2 ‘appLayer’ instance to install all of the necessary components to run our backend application. Start by installing NVM (node version manager) using the command below:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
source ~/.bashrc

Next, let’s install a compatible version of Node.js and make sure it’s being used with the below commands:

nvm install 16
nvm use 16
NVM Installed

PM2 is a daemon process manager that will keep our Node.js app running when we exit the instance or if it is rebooted. Let’s install it as well.

npm install -g pm2
PM2 Installed

Now, we need to download our code from our S3 buckets onto our instance. With the command below, let’s replace BUCKET_NAME with the name of the bucket we uploaded the app-tier folder to:

cd ~/
aws s3 cp s3://BUCKET_NAME/app-tier/ app-tier --recursive

Oops! We’re denied access to the S3 bucket (theawsthreetierworkshop) because our instance doesn’t yet have permission to read from the bucket.

Let’s troubleshoot this issue by navigating to the bucket via the S3 bucket dashboard. Select the bucket name (theawsthreetierworkshop) | Permissions | Edit and uncheck ‘Block all public access’ under Block public access (Bucket settings), click Save changes, and then confirm the changes.

Next, let’s create an S3 bucket policy for the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::theawsthreetierworkshop/*"
        }
    ]
}

Lastly, let’s go back to the IAM dashboard in the AWS console, select Roles, search, and access the role (ec2-three-tier-access-role) we created earlier.

Click Add permissions | Attach policies, search, and add the AmazonS3ReadOnlyAccess permission to our role (ec2-three-tier-access-role).

Now, navigate back to the ‘appLayer’ instance terminal and try to copy the app-tier folder from the S3 bucket (theawsthreetierworkshop) again. Voila!

This time it worked after granting read access to the bucket.

The ‘app-tier’ Folder Copied From The S3 Bucket

Next, navigate to the app directory, install dependencies, and start the app with pm2 with the command below:

cd ~/app-tier
npm install
pm2 start index.js

To make sure the app is running correctly, run the following:

pm2 list
Started ‘app’ With PM2

The app is running. If you see an error, then you need to do some troubleshooting. To look at the latest errors, use this command:

pm2 logs

You can check whether there are errors in the app logs by executing the ‘pm2 logs’ command.

PM2 Logs

You can also see that the AB3 backend app is listening at http://localhost:4000.

NOTE: If you’re having issues, check your configuration file for any typos, and double-check that you have followed all installation commands recommended till now.

PM2 will make sure our app stays running when we leave the SSM session. However, if the server is interrupted for some reason, we still want the app to start and keep running. This is also important for the AMI we will create so let’s keep PM2 running by executing the command below:

pm2 startup
PM2 Startup

The below message appears on the command prompt above after running the ‘pm2 startup’ command.

[PM2] To setup the Startup Script, copy/paste the following command: sudo env PATH=$PATH:/home/ec2-user/.nvm/versions/node/v16.0.0/bin /home/ec2-user/.nvm/versions/node/v16.0.0/lib/node_modules/pm2/bin/pm2 startup systemd -u ec2-user --hp /home/ec2-user

DO NOT run the above command verbatim; instead, copy and paste the command from the output in your own terminal. After you run it, save the current list of node processes with the following command:

pm2 save

Test App Tier

Now let’s run a couple of tests to see if our app is configured correctly and can retrieve data from the database.

To check our health check endpoint, copy this command into your SSM terminal. This simple endpoint tells us whether the app is running.

sudo curl http://localhost:4000/health

The command responded with the message “This is the health check”, which means our health check is passing, as indicated below.

App Health Check

Next, let’s test our database connection. We can do that by hitting the following endpoint locally:

curl http://localhost:4000/transaction

We received the following response after executing the command:

{"result":[{"id":1,"amount":400,"description":"groceries"}]}
Curl Command Response

The two above responses indicate that our networking, security, database, and app configurations are correct. Our app layer is fully configured and ready to go.

Part 4: Internal Load Balancing and Auto Scaling

In this section of the workshop, we will create an Amazon Machine Image (AMI) of the app tier instance we just created, and use that to set up autoscaling with a load balancer in order to make this tier highly available.

Objectives:

  • Create an AMI of our App Tier
  • Create a Launch Template
  • Configure Autoscaling
  • Deploy Internal Load Balancer

App Tier AMI

Let’s navigate to Instances on the left-hand side of the EC2 dashboard. Select the app tier instance we created, and under Actions select Image and templates, then click Create image.

Create AMI

Let’s give the image a name and description and then click Create image. This will take a few minutes, but if you want to monitor the status of image creation you can see it by clicking AMIs under Images on the left-hand navigation panel of the EC2 dashboard.

App Tier Image

Target Group

While the AMI is being created, let’s go ahead and create our target group to use with the load balancer. On the EC2 dashboard, navigate to Target Groups under Load Balancing on the left-hand side. Click on Create Target Group.

Target Group Creation

This target group will be used by our load balancer to balance traffic across our private app tier instances. Let’s select Instances as the target type and give it a name.

Target Group

Let’s set the protocol to HTTP and the port to 4000. Remember that this is the port our Node.js app is running on. Select the VPC we’ve been using thus far, and then change the health check path to be /health to indicate the health check endpoint of our app and click Next.

Target Group Configuration Settings

We’ll NOT register any targets for now, so let’s skip that step, click Next, and create the target group.
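The same target group can be sketched with the CLI; the name and VPC ID are placeholders:

```shell
aws elbv2 create-target-group \
    --name app-tier-target-group \
    --target-type instance \
    --protocol HTTP --port 4000 \
    --vpc-id vpc-EXAMPLE \
    --health-check-path /health
```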

Internal Load Balancer

We’re going to create an internal load balancer for our three-tier architecture. On the left-hand side of the EC2 dashboard, select Load Balancers under Load Balancing and click Create Load Balancer.

Load Balancer Creation

We’ll be using an Application Load Balancer for our HTTP traffic, so click Create under that option. Give it a name (App-Tier-Internal-lg) and select Internal under Scheme.

Application Load Balancer

Let’s select the correct network configuration for our VPC and private subnets.

Next, select the security group we created for this internal ALB. Now, this ALB will be listening for HTTP traffic on port 80. It will be forwarding the traffic to our target group that we just created. Let’s select it from the dropdown list, and create the load balancer.

Security Group For The Load Balancer
Listeners and Routing

The internal load balancer is created across two availability zones.

Internal Load Balancer Active
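A CLI sketch of the internal ALB and its listener; subnet and security group IDs are placeholders, and the ARNs are stand-ins for the real ones returned by the create calls:

```shell
# Internal ALB in the two private app subnets:
aws elbv2 create-load-balancer \
    --name App-Tier-Internal-lg \
    --type application --scheme internal \
    --subnets subnet-PRIVATE-APP-AZ1 subnet-PRIVATE-APP-AZ2 \
    --security-groups sg-INTERNAL-LB

# Listener forwarding port 80 to the app tier target group:
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/App-Tier-Internal-lg/EXAMPLE \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-tier-target-group/EXAMPLE
```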

Launch Template

Now, we need to create a Launch template with the AMI we created earlier before we configure Auto Scaling. On the left side of the EC2 dashboard, let’s navigate to Launch Template under Instances and click Create Launch Template.

Launch Template Creation

Name the Launch Template, and then under Application and OS Images include the app tier AMI we previously created.

Launch Template Creation
Template Image

Under Instance type, select t2.micro. Don’t include a Key pair or Network settings in the template; we don’t need a key pair to access our instances, and we’ll set the network information in the Auto Scaling group.

Set the correct security group for our app tier, and then under Advanced details use the same IAM instance profile we have been using for our EC2 instances.

App Tier Launch Template Created

Auto Scaling

We will now create the Auto Scaling Group for our app instances. On the left-hand side of the EC2 dashboard, navigate to Auto Scaling Groups under Auto Scaling and click Create Auto Scaling Group.

Auto Scaling Creation

Let’s give our Auto Scaling group a name, and then select the Launch Template we just created and click Next.

Auto Scaling Group Configuration Settings

Next, on the Choose instance launch options page, select our VPC and the private app tier subnets, and continue.

For this next step, we’ll attach this Auto Scaling Group to the Load Balancer we just created by selecting the existing load balancer’s target group from the dropdown. Then, click next.

We’re going to set the desired, minimum, and maximum capacity of our Auto Scaling Group size and Scaling policies. Review and then Create Auto Scaling Group.

Auto Scaling Group Size And Scaling Policies

Let’s find out if our internal load balancer and Auto Scaling group are configured correctly. The Auto Scaling group will spin up 2 new app tier instances. We can test that this is working by manually terminating one of the new instances and waiting to see whether a replacement instance boots up.

AppTierASG3 Manually Terminated. A New One Initializing

Our internal load balancer and autoscaling group are working correctly

NOTE: The original app tier instance is excluded from the ASG, so you will see 3 instances in the EC2 dashboard. You could delete the original instance you used to generate the app tier AMI, but it’s recommended to keep it around for troubleshooting purposes.

Click Next through the remaining steps, review the Auto Scaling Group details, and then create it.

AppTierASG Auto Scaling Group Created
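The Auto Scaling group configuration above maps to a CLI call like the following; the launch template name, subnet IDs, and target group ARN are placeholders:

```shell
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name AppTierASG \
    --launch-template LaunchTemplateName=app-tier-launch-template,Version='$Latest' \
    --min-size 2 --desired-capacity 2 --max-size 2 \
    --vpc-zone-identifier "subnet-PRIVATE-APP-AZ1,subnet-PRIVATE-APP-AZ2" \
    --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-tier-target-group/EXAMPLE
```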

Part 5: Web Tier Instance Deployment

In this section, we will deploy an EC2 instance for the web tier and make all necessary software configurations for the NGINX web server and React.js website.

Objectives

  • Update NGINX Configuration Files
  • Create a Web Tier Instance
  • Configure Software Stack

Update Config File

Before we create and configure the web tier instances, let’s modify the application-code/nginx.conf file from the repo we previously downloaded. First, navigate to your internal load balancer’s details page, copy the DNS name, and save it in a notepad.

Internal Load Balancer DNS Name

Next, open the folder where the repo was downloaded on your command prompt terminal to update the application-code/nginx.conf file. Edit the file using this command:

vi nginx.conf

Scroll down to line 58 and replace [INTERNAL-LOADBALANCER-DNS] with your internal load balancer’s DNS name. Save the file by pressing Esc, then typing :wq and hitting Enter.

The ‘nginx.conf’ File
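If you prefer not to edit in vi, the same substitution can be done with sed. The DNS value below is a placeholder, and for the sake of a self-contained demo the commands operate on a one-line stand-in file in /tmp; in practice you would run the sed command against application-code/nginx.conf with your own load balancer’s DNS name.

```shell
# Placeholder DNS name -- substitute your internal load balancer's DNS entry.
INTERNAL_LB_DNS="internal-app-tier-lb-123456789.us-east-1.elb.amazonaws.com"

# One-line stand-in for the real nginx.conf, for demonstration only:
printf 'proxy_pass http://[INTERNAL-LOADBALANCER-DNS]:80;\n' > /tmp/nginx.conf

# Replace the placeholder in place and show the result:
sed -i "s/\[INTERNAL-LOADBALANCER-DNS\]/${INTERNAL_LB_DNS}/g" /tmp/nginx.conf
cat /tmp/nginx.conf
```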

Now, let’s upload the ‘nginx.conf’ file and the application-code/web-tier folder to the S3 bucket we created for this lab. Navigate back to the Amazon S3 dashboard, click Buckets on the left-hand side, select our bucket, and click Upload.

2 Folders And 1 File In S3 Bucket

Web Instance Deployment

Follow the same instance creation instructions we used for the app tier instance in Part 3: App Tier Instance Deployment, with the exception of the subnet: provision this instance in one of our public subnets. Make sure you select the correct network components, security group, and IAM role. This time, enable auto-assign public IP under Network settings, and give the instance a recognizable name.

Back on the EC2 dashboard, select Instances on the left-hand side under Instances, and click on Launch instances.

Web Tier Instance Creation

We’ll proceed without a key pair for this instance.

Let’s head back to the EC2 dashboard now that the web tier instance creation is in process. Select the instance and click Actions | Security | Modify IAM role. Select the role (ec2-three-tier-access-role) from the IAM role dropdown list and click Update IAM role to update the instance.

Web Tier Instance Attached To IAM Role

Connect to Instance

Let’s follow the same steps we used to connect to the first app instance and change the user to ec2-user. Test connectivity here via ping as well since this instance should have internet connectivity:

sudo -su ec2-user 
ping 8.8.8.8

Note: If you don’t see a transfer of packets then you’ll need to verify your route tables attached to the subnet that your instance is deployed in.

I was able to ping the Google server IP address 8.8.8.8 from the Web Tier instance command prompt terminal successfully.

Successfully Pinged Google Server

Configure Web Instance

We now need to install all of the necessary components to run our front-end application. Let’s start by installing NVM and Node.js on the instance:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
source ~/.bashrc
nvm install 16
nvm use 16

Now we need to download our web tier code from our s3 bucket:

cd ~/
aws s3 cp s3://BUCKET_NAME/web-tier/ web-tier --recursive

Replace BUCKET_NAME with your bucket name as indicated below.

Web Tier Files Copied From The S3 Bucket

Navigate to the web-tier folder and create the build folder for the React app so we can serve our code, using the commands below:

cd ~/web-tier
npm install
npm run build

NGINX can be used for different use cases like load balancing, content caching, etc., but we will be using it as a web server configured to serve our application on port 80, as well as to direct our API calls to the internal load balancer. Let’s run the command below to proceed:
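For context, the relevant pieces of the workshop’s nginx.conf look roughly like this simplified sketch (the paths and health check body here are illustrative, not the exact file contents):

```nginx
server {
    listen 80;

    # Serve the React build output we generated with 'npm run build'.
    root /home/ec2-user/web-tier/build;
    index index.html;

    # Health check path (used later by the load balancer's target group).
    location /health {
        return 200 "health check ok";
    }

    # Forward API calls to the internal load balancer.
    location /api/ {
        proxy_pass http://[INTERNAL-LOADBALANCER-DNS]:80/api/;
    }
}
```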

sudo amazon-linux-extras install nginx1 -y

We will now have to configure NGINX. Navigate to the NGINX configuration directory with the following commands and list the files in it:

cd /etc/nginx
ls

Let’s replace the default ‘nginx.conf’ file with the one we uploaded to the S3 bucket. We’ll remove the default file and copy ours down, replacing BUCKET_NAME below with our bucket name:

sudo rm nginx.conf
sudo aws s3 cp s3://BUCKET_NAME/nginx.conf .

The bucket name is replaced

Let’s restart Nginx with the following command:

sudo service nginx restart

Let’s make sure Nginx has permission to access our files by executing the command:

chmod -R 755 /home/ec2-user

And to ensure the service starts on boot, run this command:

sudo chkconfig nginx on
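Note that service and chkconfig are legacy wrappers; on Amazon Linux 2, which runs systemd, the equivalent commands would be:

```shell
sudo systemctl restart nginx   # same effect as 'sudo service nginx restart'
sudo systemctl enable nginx    # same effect as 'sudo chkconfig nginx on'
```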

Now let’s copy the public IP of our web tier instance and plug it into a browser to see our website. The public IP can be found on the web tier instance’s details page on the EC2 dashboard. Voila, the website is working correctly.

Let’s do the same for our database tier and confirm it’s connected and working correctly.

Part 6: External Load Balancer and Auto Scaling

In this section of the workshop, we will create an Amazon Machine Image (AMI) of the web tier instance we just created, and use that to set up autoscaling with an external facing load balancer in order to make this tier highly available.

Objectives:

  • Create an AMI of our Web Tier
  • Create a Launch Template
  • Configure Auto Scaling
  • Deploy External Load Balancer

Web Tier AMI

Let’s go to Instances on the left-hand side of the EC2 dashboard. Select the web tier instance we created and under Actions select Image and Templates. Click Create Image.

We’ll give the image a name and description and then click Create image. This will take a few minutes, but if you want to monitor the status of image creation you can see it by clicking AMIs under Images on the left-hand navigation panel of the EC2 dashboard.
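The same image creation can also be scripted with the AWS CLI. This is a sketch — the instance ID and AMI name below are hypothetical placeholders for your own values:

```shell
# Create an AMI from the web tier instance (hypothetical instance ID).
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "web-tier-ami" \
  --description "Web tier image for the three-tier workshop"

# Check the AMI's state; wait until it reports 'available'.
aws ec2 describe-images --owners self \
  --filters "Name=name,Values=web-tier-ami" \
  --query 'Images[0].State'
```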

Target Group

On the EC2 dashboard let’s navigate to Target Groups under Load Balancing on the left-hand side. Click on Create Target Group.

This target group is what our external load balancer will use to balance traffic across our public web tier instances. Select Instances as the target type and give it a name.

Let’s set the protocol to HTTP and the port to 80. Remember this is the port NGINX is listening on. Select the VPC we’ve been using thus far, and then change the health check path to be /health. Click Next.

Go ahead and create the target group. We are NOT going to register any targets for now.
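Since the target group’s health check hits /health over HTTP on port 80, you can sanity-check the endpoint yourself from the web instance before any targets are registered (the exact response body depends on your nginx.conf):

```shell
# Print just the HTTP status code; 200 means the health check will pass.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/health
```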

Internet Facing Load Balancer

On the left-hand side of the EC2 dashboard select Load Balancers under Load Balancing and click Create Load Balancer.

Select Application Load Balancer for our HTTP traffic. Click the Create button for that option.

After giving the load balancer a name, be sure to select internet-facing, since this load balancer will be public facing, routing traffic from the internet to our web tier.

Select the two public subnets to proceed.

Select the security group we created for this internet-facing ALB. This ALB will be listening for HTTP traffic on port 80 and forwarding it to the target group we just created, so select that target group from the dropdown and create the load balancer.

Web Tier External Load Balancer

Launch Template

Before we configure Auto Scaling, we need to create a Launch template with the AMI we created earlier. On the left-hand side of the EC2 dashboard navigate to Launch Template under Instances and click Create Launch Template.

Let’s name our Launch Template, and then under Application and OS Images include the web tier AMI we created.

Launch Template Creation

Let’s set the correct security group for our web tier, and then under Advanced details use the same IAM instance profile we have been using for our EC2 instances.

Accept the rest of the default configuration settings and click Create launch template.

Web Tier Launch Template Created

Auto Scaling

With the Launch Template created, let’s go ahead and create the Auto Scaling Group for our web instances. On the left side of the EC2 dashboard navigate to Auto Scaling Groups under Auto Scaling and click Create Auto Scaling group.

Let’s give our Auto Scaling group a name, and then select the Launch Template we just created and click next.

On the Choose instance launch options page, let’s set our VPC, and the public subnets for the web tier and proceed to the next step.

For this next step, we’re going to attach the Auto Scaling Group to the Load Balancer we just created by selecting the existing web-tier load balancer’s target group from the dropdown. Then, click next.

For Configure group size and scaling policies, set desired, minimum and maximum capacity to 2. Click Next (3x) to review and then Create Auto Scaling Group.

Web Tier ASG Created

Let’s test both our external load balancer and Auto Scaling group to see if they are configured correctly. The Auto Scaling group is spinning up 2 new web tier instances. Let’s head over to the EC2 instances dashboard, terminate one manually, and wait to see if a new instance boots up to replace it. Voila, it worked as intended!
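This self-healing test can also be run from the CLI. The instance ID and Auto Scaling group name below are hypothetical — substitute your own:

```shell
# Terminate one of the web tier instances (hypothetical instance ID).
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Watch the Auto Scaling group launch a replacement instance.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names web-tier-asg \
  --query 'AutoScalingGroups[0].Instances[].[InstanceId,LifecycleState]'
```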

Let’s test if our entire architecture is working by plugging our external-facing load balancer’s DNS name into the browser. Voila, it is working!

The external load balancer is working correctly. Congratulations, we did it!

Congrats! You’ve Implemented a 3 Tier Web Architecture!

Thank you for taking the time to read this long post. I appreciate your support by liking, sharing, and leaving constructive feedback.

I would like to give a big shout-out to Piyush Sachdeva for his leadership and mentorship in helping me complete this wonderful real-world, hands-on project.

#10weeksofCloudops #awscloud #github #securitygroups #nginx #aws3tier #elasticloadbalancer


Kwasi Twum-Ampofo (KTA)

MultiCloud Architect @ KTA Mobile Communications, Inc. | DevSecOps, Systems Analyst