IaC 3-Tier Architecture with AWS CloudFormation: Node.js, Lambda, API Gateway & RDS

A comprehensive guide to deploying a 3-tier architecture with AWS CloudFormation

Chinelo Osuji
Cloud Native Daily
11 min read · Aug 8, 2023


What Is A 3-Tier Architecture?

In the context of web applications, a 3-tier architecture is a design that separates the application into 3 layers: the Presentation (Front-End), Application (Logic), and Database (Back-End) layers.

Let’s say we have a small e-commerce company that wants a scalable and reliable e-commerce website. The company decides to deploy a 3-tier architecture for their website using AWS as advised by their engineers.

  1. The Presentation layer would be responsible for serving the static website to end-users and handling incoming orders.
  2. The Application layer would handle the business logic of the platform. This layer would be responsible for processing transactions, handling payments, and interacting with the database.
  3. And the Database layer would store all of the website’s data, including customer information, product data, and transaction records.

Now let’s get started.

First, before we begin creating the architecture, let’s create the HTML file for our website.

Copy and paste the code below into your Notepad and save the file with the extension “.html”.

Then, create an S3 Bucket to host the static web page.

If you need assistance with this, please refer to my previous article on creating S3 Buckets.

This web page allows users to submit orders, and the data is sent to the Database (Back-End) layer via API Gateway.

AWS API Gateway allows developers to create, deploy, and manage APIs for their applications. APIs (Application Programming Interfaces) are sets of rules that allow different software applications to communicate with each other.

Once we create our stacks in CloudFormation, we’ll get the API Gateway URL, add it to the code, and update the file in the S3 Bucket.

We will insert the API Gateway URL in Line 264 in place of ‘YOUR API URL GOES HERE’.

I’ll show you where to get the API Gateway URL later on.
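The HTML file itself is not reproduced here, but to show the shape of the wiring, below is a hypothetical sketch of a submit handler that posts an order to that URL. The `/order` path, field names, and helper names are illustrative assumptions; only the placeholder string comes from the article.

```javascript
// Sketch of the web page's order-submit logic (illustrative, not the
// article's exact HTML/JS). The placeholder below is the string that
// gets swapped for the API Gateway Invoke URL once the stacks exist.
const API_URL = 'YOUR API URL GOES HERE';

// Build the JSON payload sent to the order endpoint (pure, testable).
function buildOrderPayload(customerId, productId, quantity) {
  return JSON.stringify({
    CustomerID: customerId,
    ProductID: productId,
    Quantity: Number(quantity),
  });
}

// Browser-side submit: POST the order to API Gateway and return the
// parsed JSON response.
async function submitOrder(customerId, productId, quantity) {
  const res = await fetch(`${API_URL}/order`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildOrderPayload(customerId, productId, quantity),
  });
  return res.json();
}
```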

Copy and paste the Bucket website endpoint URL from the S3 Bucket in your browser.

Below is an example of how the page looks.

Now let’s create 2 Custom Resources with logic for 2 Lambda functions that we will use in our Application layer. Custom resources let you manage resources or features that CloudFormation does not support directly. We will work around those limitations with custom resources whose code we will add to the S3 Bucket and reference in the template.

AWS Lambda is a serverless computing service that allows you to run code without managing servers. Your code executes in response to triggers from AWS services or custom resources, such as file uploads, database modifications, or HTTP requests.

The 1st custom resource below is a Lambda function, written in Node.js, that allows CloudFormation to run custom logic during the stack creation or update process. When CloudFormation creates, updates, or deletes a custom resource, this Lambda function is triggered, and its response tells CloudFormation the status of the custom resource. Reporting success or failure back to CloudFormation this way allows the stack deployment to proceed smoothly and provides better feedback in case of errors.

Copy and paste the code below in your Notepad and save the file with the extension “.js”.

Pack the .js file into a .zip file and upload it to the same S3 bucket that’s hosting the static webpage.

Now let’s create the 2nd custom resource for our 2nd Lambda function.

The 2nd custom resource below is a Lambda function, written in Node.js, that serves as an HTTP API endpoint.

In this case, it will manage HTTP requests for creating orders, validate the inputs, and send them to the Database layer.

Copy and paste the code below in your Notepad and save the file with the extension “.js”.

In the code above, the require('mysql2/promise') statement imports the library and assigns it to the variable mysql. This allows you to utilize the functions provided by the library for interacting with the MySQL database.

But to interact with the MySQL database, you need to include the mysql2/promise library as a dependency.

Dependencies are managed via NPM (Node Package Manager), which is a tool used for managing and distributing JavaScript packages. These packages contain reusable code and are designed to provide certain functionalities.

To include the dependency needed, first go to your computer’s terminal.

Use the cd command to navigate to the same folder containing the .js file of the lambda function.

Then run npm init -y to create a package.json file with the default settings without needing to customize the metadata.

Now that we have a package.json file, let’s run npm install mysql2 to install the mysql2 package and its dependencies into the project’s node_modules directory.

The node_modules directory is automatically created in your Node.js project directory when you install external packages using NPM. This directory contains all the dependencies that your project requires to run, including the packages you've specified in your project's package.json file.

NPM will also automatically update the package.json file to include the mysql2 package as a dependency.
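For reference, after running the commands above your package.json should look something like this (the package name comes from your folder, and the version number here is illustrative):

```json
{
  "name": "order-function",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "mysql2": "^3.6.0"
  }
}
```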

In addition to the node_modules directory, a package-lock.json file is created automatically when you run npm install for the first time or whenever you add, update, or remove a package.

The purpose of the package-lock.json file is to lock down the specific versions of dependencies that were installed. It contains detailed information about each package, its version, sub-dependencies, and the dependency tree structure.

Pack the .js file with the node_modules folder, package.json and package-lock.json files into a .zip file and upload it to the same S3 bucket that’s hosting the static webpage.

Below is an example of how your S3 Bucket Objects list should look at this point.

Now let’s start creating the CloudFormation templates for the 3-tier architecture.

Below is the code that we will use to create the Presentation layer.

This code creates a VPC, an Internet Gateway with its VPC attachment, 2 Public Subnets with an associated Public Route Table, and 4 Private Subnets with an associated Private Route Table.

It also creates a Bastion Host Instance, Launch Templates that serve the static webpage, Security Groups for each, Auto Scaling Groups for the Public and Private Subnets along with Scaling Policies for both, and an Application Load Balancer with a Target Group for health checks.

Copy and paste the code below in your Notepad and save the file with the extension “.yaml”.

Also, remember to replace <YOUR IP GOES HERE> with your IP address for the Bastion Host’s Security Group in Line 214.

And to replace <YOUR S3 BUCKET NAME GOES HERE> with the name of your S3 Bucket in Lines 295 and 348.
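The full template is too long to walk through line by line, but an abbreviated sketch of its opening resources looks something like this (the CIDR blocks and logical names here are illustrative, not the template’s exact values):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      MapPublicIpOnLaunch: true
      AvailabilityZone: !Select [0, !GetAZs '']
  # ...plus the 2nd Public Subnet, 4 Private Subnets, Route Tables,
  # Bastion Host, Launch Templates, Auto Scaling Groups, and the
  # Application Load Balancer with its Target Group.
```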

Now let’s upload the template to CloudFormation.
Go to AWS CloudFormation and click Create stack.

On Step 1 page select Template is ready and Upload a template file.
Once you’ve selected the file, click Next.

On Step 2 page enter a Stack name, select the Key Pair you’re using in the Parameters section and click Next.

On Step 3 page keep all default stack options and click Next.

On Step 4 page scroll down, check the box next to I acknowledge that AWS CloudFormation might create IAM resources and click Submit.

On the next page, we will see the stack creation in progress. Wait a few minutes for completion.

Next up is the Database layer. Below is the code that we will use to create the Database layer.

This code creates a NAT Gateway in the VPC, and an RDS DB Instance with Multi-AZ enabled, a Subnet Group associated with Private Subnets 3 & 4, and a Security Group that allows inbound MySQL traffic.

Copy and paste the code below in your Notepad and save the file with the extension “.yaml”.
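As a rough sketch of what the database resources in such a template look like (the instance class, credential parameters, and logical names are illustrative assumptions, not the article’s exact values):

```yaml
Resources:
  DBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Subnets for the RDS instance
      SubnetIds:
        - !Ref PrivateSubnet3
        - !Ref PrivateSubnet4
  DBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound MySQL from within the VPC
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          CidrIp: 10.0.0.0/16
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Delete
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MultiAZ: true
      MasterUsername: !Ref DBUsername
      MasterUserPassword: !Ref DBPassword
      DBSubnetGroupName: !Ref DBSubnetGroup
      VPCSecurityGroups:
        - !GetAtt DBSecurityGroup.GroupId
```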

Let’s go create another stack with this code.

On Step 1 page select Template is ready and Upload a template file.
Once you’ve selected the file, click Next.

On Step 2 page enter a Stack name. In the Parameters section select the VPC that was created in the Presentation layer.

Now we must enter the Private Route Table ID and Public Network ACL ID in the Parameters section.

Open a Duplicate tab in your web browser, and go to VPC in your AWS Console.

On the left, under Virtual Private Cloud click Route Tables.
The Private Route Table has 4 Private Subnets associated with it, so we can easily determine which ID we need to use.

Copy the ID, go back to the other tab for CloudFormation and paste the ID in the Private Route Table field replacing “xxxxxx”.

Go back to the tab for VPC and under Security click Network ACLs.

By looking at the VPC ID, we can easily determine the Network ACL ID we need to use.

Copy the ID, go back to the other tab for CloudFormation and paste the ID in the Public Network ACL field replacing “xxxxxx”.

Now scroll down and select the Private Subnets and Public Subnet used in the template and click Next.

On Step 3 page keep all default stack options and click Next.

On Step 4 page scroll down and click Submit.

On the next page, we will see the stack creation in progress. Wait a few minutes for completion.

Now let’s create the 3rd stack for the Application layer. Below is the code that we will use to create the Application layer.

This code creates the 2 Lambda functions that we created the custom resources for, along with permissions for Lambda to carry out these functions.

And an API Gateway with 2 endpoints: a User Input Endpoint to accept and process the information users provide, and an Order Endpoint to handle and process user orders. Each endpoint has 2 main methods set up: an OPTIONS method to describe the communication options for the target resource, and a POST method to allow users to send data to the endpoint.

Copy and paste the code below in your Notepad and save the file with the extension “.yaml”.

Keep in mind to replace <YOUR S3 BUCKET NAME GOES HERE> with the name of your S3 Bucket in Lines 50, 63, and 117.

And to replace <FILENAME OF 1ST LAMBDA FUNCTION FILE STORED IN S3 BUCKET> and <FILENAME OF 2ND LAMBDA FUNCTION FILE STORED IN S3 BUCKET> in Lines 51 and 64 with the filenames of the Lambda functions in your S3 Bucket.
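To illustrate how those replacements plug in, here is a hedged sketch of what one Lambda resource declaration in such a template looks like (the runtime, role, and parameter names are illustrative; the two placeholders are the ones the article tells you to replace):

```yaml
  OrderFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs18.x
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Code:
        S3Bucket: <YOUR S3 BUCKET NAME GOES HERE>
        S3Key: <FILENAME OF 2ND LAMBDA FUNCTION FILE STORED IN S3 BUCKET>
      VpcConfig:
        SubnetIds: !Ref PrivateSubnetIds
        SecurityGroupIds:
          - !Ref LambdaSecurityGroup
```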

Let’s create the stack now.

On Step 1 page select Template is ready and Upload a template file.
Once you’ve selected the file, click Next.

On Step 2 page enter a Stack name, select the Private Subnets and VPC being used in the Parameters section and click Next.

On Step 3 page keep all default stack options and click Next.

On Step 4 page scroll down, check the box next to I acknowledge that AWS CloudFormation might create IAM resources and click Submit.

On the next page, we will see the stack creation in progress. Wait a few minutes for completion.

Now that we have our 3-tier architecture created, let’s get the API Gateway URL and update the HTML file in our S3 Bucket.

Go to API Gateway and click on the name of the API that was created from the stack.

On the left, click Stages and to the right under Stages click prod.

The Invoke URL is the API Gateway URL. This URL is essential in communicating and sending data from the web page to the data layer.

Copy the Invoke URL and paste it in Line 244 of the HTML code in place of ‘YOUR API URL GOES HERE’.

Save it and upload it again to the same S3 Bucket to update the file.

Now let’s test the Lambda code with test events to simulate different scenarios and trigger the Lambda functions.

Go to Lambda in the AWS Console.

Click the Lambda function you want to test.

Copy the code below.

This code is used to simulate an order creation request that would typically be sent to a Lambda function via API Gateway.

The Lambda function would then process this request, extract the data from the “body” field, and execute the necessary logic to create the order in the database.

{
  "body": "{\"CustomerID\":\"123\",\"ProductID\":\"456\",\"Quantity\":3}",
  "path": "/prod/order",
  "httpMethod": "POST"
}

Scroll down and on the left side, select Test.

Paste the code in the Event JSON section.

On the right side, click Test.

Above the Test event section, you will see an Executing function: succeeded message appear.

You can click Details to expand the execution log information.

As you can see, the Lambda function test was successful. The status code is set to 200, which indicates that the request was successfully processed by the server.

We can also run another test on the Lambda function.

Copy the code below.

This code is used to simulate a request to create a new customer.

When the server receives the request, it will parse (dissect) the “body” field to extract the customer’s name, email, and address. This data can then be used to create a new customer in the database.

{
  "body": "{\"customerName\":\"John Doe\",\"customerEmail\":\"john@example.com\",\"customerAddress\":\"123 Main St\"}",
  "path": "/prod/userinput",
  "httpMethod": "POST"
}

Scroll down and paste the code in the Event JSON section.

On the right side, click Test.

Above the Test event section, you will see an Executing function: succeeded message appear.

You can click Details to expand the execution log information.

As you can see, the Lambda function test was successful. The status code is set to 200, which indicates that the request was successfully processed by the server.

And that’s it for this one.

Make sure to delete the stacks so that you’re not charged for resources you don’t need.

Since the DeletionPolicy was set to Delete for all resources, deleting the stack automatically deletes each resource; we do not have to remove them individually.

You have the option to set the DeletionPolicy to Retain for any resources you do not want to have automatically deleted.
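For example, to keep the database after the stack is deleted, its declaration would change like this (sketch):

```yaml
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain   # kept when the stack is deleted
    Properties:
      # ...unchanged...
```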

Please feel free to share your thoughts! Thank you!
