Host & Run a Multi-Tier Web Application on AWS (Lift & Shift)

Can Yalcin
21 min read · Mar 27, 2024


Our current system involves app services running on physical or virtual machines, along with numerous servers in our local data center. This setup is challenging to manage, especially when it comes to scaling resources up or down to meet changing demands. Additionally, the upfront cost of procuring and maintaining this infrastructure is significant. Furthermore, many manual processes are difficult to automate, leading to wasted time and potential human error.

To address these issues, we’ve decided to migrate our applications to Amazon Web Services (AWS). Since AWS offers Infrastructure as a Service (IaaS), we no longer need to pay a large upfront cost for hardware. Instead, we benefit from a pay-as-you-go model, offering greater flexibility. This flexibility allows us to easily scale our resources up or down as needed. Moreover, AWS enables us to automate many tasks, minimizing human error and saving us valuable time.

Pre-requisites

Before you get started, you’ll need to have a few things in place:

  • Java Development Kit (JDK) 11 or later: You’ll need JDK 11 or a later version installed on your system.
  • Maven 3 or later: Maven is a build automation tool for Java projects. Ensure you have Maven 3 or a later version installed.
  • MySQL 5.6 or later: This project requires a MySQL database server version 5.6 or later.
  • AWS account and familiarity with AWS EC2: An AWS account and a basic understanding of Amazon EC2 (Elastic Compute Cloud) are necessary to deploy the application on AWS.

Let’s Start

Now, let’s explore the architectural design of our AWS setup, which utilizes Amazon Elastic Compute Cloud (EC2) instances, Elastic Load Balancing (ELB), Auto Scaling, Amazon Simple Storage Service (S3), AWS Certificate Manager, and Route 53. We’ll also take a look at the accompanying architectural diagram.

  • Users enter a URL in their browser. This URL points to an endpoint, which is defined in GoDaddy’s DNS settings.
  • The user’s browser then connects to our application load balancer over HTTPS for secure communication. The TLS certificate for HTTPS is managed by AWS Certificate Manager.
  • Users ultimately reach the application load balancer endpoint. This load balancer resides in a security group that only allows HTTPS traffic.
  • The load balancer then routes user requests to Tomcat instances. Tomcat is a web application server running on a set of Amazon EC2 instances managed by an auto scaling group. This group automatically scales the number of instances up or down based on traffic demand.
  • The EC2 instances running Tomcat belong to a separate security group that only allows traffic on port 8080, and only from the load balancer.
  • Our application also relies on backend servers, including MySQL, Memcached, and RabbitMQ. The information for these backend services, such as their private IP addresses, is stored in a private DNS zone hosted by Amazon Route 53.
  • Tomcat instances access these backend servers by name, with the private IP addresses being resolved through Route 53’s private DNS zone. Finally, the backend EC2 instances running these services (MySQL, RabbitMQ, and Memcached) are placed in a separate security group for additional security.

Great! Now that we have a solid understanding of our AWS stack, let’s roll up our sleeves and get started!

Security Group & Keypairs

Let’s start by creating a security group for our load balancer.

  1. Go to the Amazon EC2 console. You can access it by logging into your AWS account and navigating to the EC2 service.
  2. Click on “Security Groups” in the navigation pane.
  3. Click the “Create Security Group” button.
  4. Give your security group a descriptive name and a brief description. This will help you identify its purpose later.
  5. Under “Inbound rules,” click “Add Rule.” This is where you define what kind of traffic can reach your load balancer.
  6. For “Type”, select “HTTP”. The “Port Range” field fills in automatically with “80”, the standard HTTP port. In the “Source” field, choose “Anywhere (0.0.0.0/0).” This allows traffic from any IP address to reach your load balancer for now (we can tighten this later for better security).
  7. Click “Add Rule” again and repeat the previous step, this time selecting “HTTPS” as the “Type” (port 443).
  8. Finally, click “Create Security Group” to create your new security group.
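If you prefer the command line, the same security group can be sketched with the AWS CLI. The group name and VPC ID below are placeholders — substitute your own:

```shell
# Create the load balancer security group (name and VPC ID are placeholders)
LB_SG_ID=$(aws ec2 create-security-group \
  --group-name vprofile-elb-sg \
  --description "Allow HTTP/HTTPS from anywhere to the load balancer" \
  --vpc-id vpc-0abc1234 \
  --query GroupId --output text)

# Allow HTTP (80) and HTTPS (443) from any IPv4 address
aws ec2 authorize-security-group-ingress --group-id "$LB_SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$LB_SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

The `--query GroupId --output text` trick captures the new group’s ID in a variable so later rules can reference it.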

Next we will create a security group for our Tomcat instance.

  1. Give your security group a descriptive name and a brief description.
  2. Under “Inbound rules,” click “Add Rule.”
  3. For “Type”, select “Custom TCP.”
  4. For our Tomcat service running on port 8080, enter “8080” in the “Port Range” field. To restrict access, we’ll only allow traffic coming from our load balancer’s security group. Please provide a brief description for this rule and then save it.

The final security group will be for our backend services. After giving it a name and description, let’s add some rules. Our MySQL service will be running here, so choose “MySQL” as the “Type”. We want to restrict access to only traffic coming from our application security group.

We also have two other backend services: Memcache and RabbitMQ. Memcache will use port 11211 and RabbitMQ will use port 5672. Let’s add separate rules to allow traffic for each service on their respective ports.

There’s one more adjustment we need to make to our backend security group. This security group currently houses three services that will also need to communicate with each other. To enable this communication, we’ll add a security group rule allowing “all traffic” to originate from the backend security group itself.
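The backend rules can be expressed the same way from the CLI. Note how `--source-group` restricts traffic to another security group rather than an IP range, and how the final rule references the group itself to allow the backend services to talk to each other (IDs are placeholders; `$APP_SG_ID` is the application/Tomcat security group):

```shell
# Create the backend security group (name and VPC ID are placeholders)
BACKEND_SG_ID=$(aws ec2 create-security-group \
  --group-name vprofile-backend-sg \
  --description "MySQL, Memcached, and RabbitMQ" \
  --vpc-id vpc-0abc1234 \
  --query GroupId --output text)

# Allow MySQL (3306), Memcached (11211), RabbitMQ (5672),
# but only from the application security group
for PORT in 3306 11211 5672; do
  aws ec2 authorize-security-group-ingress --group-id "$BACKEND_SG_ID" \
    --protocol tcp --port "$PORT" --source-group "$APP_SG_ID"
done

# Allow all traffic between the backend services themselves
aws ec2 authorize-security-group-ingress --group-id "$BACKEND_SG_ID" \
  --protocol all --source-group "$BACKEND_SG_ID"
```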

With these rules in place, our application will work. If you also want to connect to the instances over SSH from your computer, we’ll need one more rule: open port 22 in each of the security groups we created, with your IP address as the source. This allows your computer to connect to the instances.

* “Port Range 8080” is optional. Please see below.

*If you’re troubleshooting and need to access the app server directly from your web browser (instead of going through the load balancer), you can simply add a rule to the inbound rules of the application’s security group. The rule should specify two things:

  1. Port Range: 8080 (this is the port the app server is listening on)
  2. Source: Your IP address (this limits access to only you)

Once this rule is in place, you can access the app server directly by entering its public IP address followed by :8080 in your browser address bar. This should bring you to the app’s webpage.

We need to create a login key pair to access our EC2 instances. This key pair acts like a digital key and lock. Head over to the “Key Pairs” section and click on “Create key pair.” Give your key pair a name. When choosing the file format, select “.pem”. If you plan to use PuTTY specifically, choose “.ppk” instead.
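For reference, a key pair can also be created from the terminal; the private key material is returned exactly once and must be saved immediately (the key name is a placeholder):

```shell
# Create the key pair and save the private key with restrictive permissions
aws ec2 create-key-pair --key-name vprofile-key \
  --query KeyMaterial --output text > vprofile-key.pem
chmod 400 vprofile-key.pem   # SSH refuses keys that are world-readable
```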

EC2 Instances

Make sure your EC2 instances will have outbound internet access before you create them. We’ll be using user data scripts to download packages from the internet, and this won’t work if the outbound rules of the security groups we created aren’t set correctly. By default, AWS security groups allow all outbound traffic, so unless that default rule has been removed, no changes are needed.

Now we can clone the source code from the GitHub repository (linked in the Resources section at the end of this article).
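Using the repository URL from the Resources section at the end of this article, the clone looks like this:

```shell
git clone https://github.com/devopshydclub/vprofile-project.git
cd vprofile-project
git checkout aws-LiftAndShift   # switch to the lift-and-shift branch
```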

Let’s launch our EC2 instances, starting with the database instance.

  1. Head to EC2: Navigate to the EC2 service and select the “Instances” section.
  2. Launch a New Instance: Click on the “Launch Instance” button.
  3. Add Instance Tags: Assign two tags to your instance: Name: Give your instance a descriptive name for easy identification. Project: Specify the project this instance belongs to.
  4. Choose AMI: Under “Application and OS Images”, click on “Browse more AMIs”. We’ll be using AlmaLinux 9. Locate and select “Almalinux OS 9”.
  5. Select Instance Type: Choose the “t2.micro” instance type for your database.
  6. Assign Key Pair: Under “Key Pair”, select the key pair you created earlier for secure access.
  7. Configure Network Settings: Select “Existing security groups” under network settings. Remember, database instances typically reside in a backend security group. Choose your appropriate backend security group from the dropdown menu.
  8. Add User Data Script: Scroll down to the “User data” section. Locate the mysql.sh script within the cloned code repository (ensure you're in the aws-LiftAndShift branch). Copy the contents of the mysql.sh script and paste them directly into the user data section.

You can create other instances (Instance for MemCache and RabbitMQ) by following these same steps. However, be sure to use the appropriate script for the specific service you’re launching. For instance, if you’re creating an instance for Memcached, copy and paste the memcache.sh script into the user data field.
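The same launch can be sketched with a single CLI call. The AMI ID is region-specific and the path to mysql.sh depends on where it sits in your clone of the repository, so treat both as placeholders:

```shell
# Launch the database instance; AMI ID and script path are placeholders.
# $BACKEND_SG_ID is the backend security group created earlier.
aws ec2 run-instances \
  --image-id ami-0abc1234 \
  --instance-type t2.micro \
  --key-name vprofile-key \
  --security-group-ids "$BACKEND_SG_ID" \
  --user-data file://mysql.sh \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key=Name,Value=db01},{Key=Project,Value=vprofile}]'
```

For the Memcached and RabbitMQ instances, only the `--user-data` script and the Name tag change.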

Once we’ve successfully launched three separate instances for our application, including MySQL, MemCache, and RabbitMQ, we’ll move on to launching the final instance specifically for the application itself. This last instance will be different, though. Instead of using AlmaLinux 9, we’ll be using Ubuntu OS. (Make sure that you copy and paste the tomcat_ubuntu.sh script into the user data field).

*Before proceeding to the next step, you can take a moment to verify the system status. This will help confirm if the user data scripts functioned as expected.

Route 53

Amazon Route 53 is a popular Domain Name System (DNS) service known for its ease of use. With Route 53, we can create a zone that acts like a container for our domain name. Inside this zone, we can define different hosts. These hosts will have records that point to either an IP address or a different domain name (CNAME).

It’s important to note that we’ll be creating a private hosted zone. This means it won’t be accessible from the public internet and will only be used within our Virtual Private Cloud (VPC) in the chosen region where our instance resides.

Once the zone is created, we can add a record. We’ll use a simple routing method, where a name will directly resolve to an IP address.

We’re creating records specifically for backend services like the database, Memcached, and RabbitMQ, not for the application server itself. The application server will connect to these services using information stored in the application.properties file, which defines the connection details for the database, Memcached, and RabbitMQ.
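A record in the private zone can also be created from the CLI. The hosted zone ID, record name, and private IP below are placeholders — the record name must match the host name your application.properties file expects:

```shell
# Create an A record in the private hosted zone (all values are placeholders)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "db01.vprofile.in",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "172.31.10.20"}]
      }
    }]
  }'
```

Repeat with the private IPs of the Memcached and RabbitMQ instances under their own record names.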

Build and Deploy Artifacts

There’s one step left before we can build the artifact: we need to update the application properties file. In the previous step, we created DNS records for our backend services. Now, make sure the host names defined in the application.properties file match those record names, so the application can resolve each backend service by name.

Now it’s time to build our artifact, upload it to an S3 bucket, and fetch it from there onto the Tomcat EC2 instance.

In the terminal, we will use mvn install to build the artifact (make sure you are in the aws-LiftAndShift folder).

Once the build finishes successfully, you should be able to see a new folder called “target” when you list the directory contents with the ls command. This folder contains the built artifact, which is the final output of your build process. Now, we can upload this artifact to an S3 bucket for storage.

On AWS, we need to create an S3 bucket, an IAM role and an IAM user.

To create access keys for S3, let’s follow these steps:

  1. Navigate to the IAM service: In the AWS Management Console, find the Identity and Access Management (IAM) service.
  2. Create a new user: Go to the “Users” section and click “Add user”. Console access is not required for this purpose, but we do need to create access keys.
  3. Attach S3 access policy: On the next step, choose “Attach existing policies directly”. Select the “AmazonS3FullAccess” policy to grant full access to S3 buckets.
  4. Create the user: Review the details and click “Create user”.
  5. Download access keys: Find the newly created user and go to “Security credentials”. Scroll down and click “Create access key”. Choose “Command line interface” as the access key type and click “Next”. Finally, click “Create access key” and download the CSV file containing the access key ID and secret access key. Be sure to keep this file confidential.
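Once you have the CSV, configure the CLI on your machine with those credentials. The prompts look like this (the region is an example — use your own):

```shell
aws configure
# AWS Access Key ID [None]:     <access key ID from the CSV>
# AWS Secret Access Key [None]: <secret access key from the CSV>
# Default region name [None]:   us-east-1
# Default output format [None]: json
```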

Creating an S3 Bucket and Uploading an Artifact

We can create an S3 bucket and upload an artifact using the AWS Command Line Interface (CLI). Here’s how:

Create an S3 Bucket:

  • Open your terminal and run the following command, replacing yourbucketname with a unique name for your bucket:

aws s3 mb s3://yourbucketname

This command creates a new S3 bucket named yourbucketname.

Copy the Artifact:

Next, let’s copy our artifact to the S3 bucket. Replace target/vprofile-v2.war with the actual path to your artifact:

aws s3 cp target/vprofile-v2.war s3://yourbucketname/

This command uploads the file vprofile-v2.war from your local directory (target/) to your S3 bucket (s3://yourbucketname/).

Verification:

The command will indicate when the upload is complete. You can also verify that the artifact has been uploaded by logging into the AWS Management Console and navigating to your S3 bucket.

The last step involves downloading the artifact onto our Tomcat server running on the EC2 instance. We’ll achieve this by connecting to the instance using SSH, installing the AWS Command Line Interface (AWS CLI), and then retrieving the artifact from our S3 bucket.

For authentication, while we could set an access key again, a more secure option is to use IAM roles. These roles function similarly to users, but you can attach them to your EC2 instance.

To create a role, navigate to the IAM service in your AWS console and click “Create role.” Under “Entity Type,” choose “AWS service” and then under “Use case” select “EC2.” Click “Next” and find the policy named “AmazonS3FullAccess.” Select it and click “Next” again. Finally, give your role a descriptive name and create the role.

Once the role is created, go back to your EC2 instances (be sure to refresh the list). Select your application instance, click on “Actions,” navigate to the “Security” dropdown menu, and choose “Modify IAM role.” Select your newly created role from the dropdown menu and click “Update IAM role” to apply the changes.

After we update the IAM role and apply the changes, let’s SSH back into our application instance as the root user. We’ll need to update the system and install the AWS CLI. If a message appears about a new kernel being available, just select OK to dismiss it. Once the AWS CLI is installed, you can verify it by typing aws s3 ls. If you see a list of your S3 buckets, it means the IAM role attached to this instance has the necessary S3 permissions.
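With the role attached and the CLI working, pulling the artifact onto the instance is a single copy in the other direction (bucket name as in the earlier upload step):

```shell
# Run this on the Tomcat instance; the IAM role supplies the credentials
aws s3 cp s3://yourbucketname/vprofile-v2.war /tmp/vprofile-v2.war
```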

Now, it’s time to deploy the application:

  1. Stop Tomcat 9: Use the command systemctl stop tomcat9.
  2. Remove the default application: We’ll replace the default application with our own, so delete the contents of the ROOT directory with rm -rf /var/lib/tomcat9/webapps/ROOT.
  3. Copy the deployment file: Copy the vprofile-v2.war file from the temporary directory to the Tomcat web apps directory: cp /tmp/vprofile-v2.war /var/lib/tomcat9/webapps/ROOT.war.
  4. Start Tomcat 9: Restart the Tomcat service with systemctl start tomcat9.
  5. Verify deployment (optional): You can check the contents of the web apps directory to confirm the deployment with ls /var/lib/tomcat9/webapps/

Load Balancer & DNS

Great news! All the servers are back online. Now, let’s set up a load balancer to distribute traffic efficiently.

Here’s how:

  1. Head over to EC2 and scroll down to the Load Balancing section. Click on Target Groups, then choose Create Target Group.
  2. Think of a target group as a team of servers. You’ll tell the load balancer to distribute traffic amongst this team. Give your target group a clear name.
  3. Our servers run Tomcat, which listens on port 8080 by default. So, under Port, enter 8080.
  4. Scroll down to Health Check. This is how the load balancer checks if your servers are healthy (operational). Since your application listens on the /login path, enter /login in the health check path.
  5. Click on Advanced health settings. Because we’re using a custom port (8080 instead of the default 80), make sure to Override the port and enter 8080 here.
  6. Healthy Threshold defines how many times the health check needs to succeed before a server is considered healthy. Enter 2 here. Likewise, set Unhealthy Threshold to 2. This means the server will be marked unhealthy after two failed health checks. Click Next.
  7. Now, select the application instances you want to add to the target group. Double-check that the Port is 8080 for each instance. Click the checkbox next to Include as pending below. You’ll see your instances listed as pending. Finally, click Create Target Group.
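The whole target group, including the health check overrides, can also be created in one CLI call (the group name and VPC ID are placeholders):

```shell
# Target group for Tomcat on 8080, health-checked at /login on the same port
aws elbv2 create-target-group \
  --name vprofile-tg \
  --protocol HTTP --port 8080 \
  --vpc-id vpc-0abc1234 \
  --health-check-path /login \
  --health-check-port 8080 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 2
```

The instances are then attached with `aws elbv2 register-targets`.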

Since our target group is ready, let’s create a load balancer to route traffic to it. Here’s how:

  1. Create an Application Load Balancer: Navigate to the Load Balancers section and click “Create load balancer.” Choose “Application Load Balancer” and give it a name.
  2. Network Mapping: In the network mapping section, select all the Availability Zones where you want the load balancer to distribute traffic.
  3. Security Group: Scroll down and select the security group you previously created for the load balancer.
  4. Listeners and Routing:

HTTP: Here, configure a listener for HTTP traffic. Leave the port at the default 80 and choose your target group to forward traffic to. This tells the load balancer to listen for requests on port 80 and send them to your target group.

HTTPS (Optional): If you have an SSL certificate for your domain (mentioned in the prerequisites), click “Add listener” to configure HTTPS. Choose HTTPS as the protocol (default port 443) and route traffic to the same target group.

Note: If you don’t have a certificate or domain yet, you can skip the HTTPS configuration and remove port 443 from the listeners.

5. Secure Listener Settings (Optional):

If you have an SSL certificate, scroll down to “Secure listener settings.” Choose “From ACM” for the SSL/TLS certificate and select your domain’s certificate.

6. Create Load Balancer: Once everything is set, click “Create load balancer” to complete the setup.
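For reference, the load balancer and its listeners map to three CLI calls. The subnet IDs, the certificate ARN, and `$TG_ARN` (the target group ARN from the previous step) are placeholders:

```shell
# Create the ALB in at least two subnets, with the LB security group
ALB_ARN=$(aws elbv2 create-load-balancer \
  --name vprofile-alb \
  --subnets subnet-0aaa1111 subnet-0bbb2222 \
  --security-groups "$LB_SG_ID" \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# HTTP listener on port 80, forwarding to the target group
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"

# HTTPS listener on 443 (only if you have an ACM certificate)
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn="$CERT_ARN" \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"
```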

Now, let’s connect your domain to your website. First, find the load balancer you created in AWS and click on it. In the details section, you’ll see a DNS name listed as an “A Record”. Copy that name.

Next, head over to your domain provider’s website and log in. Look for a section called “Manage DNS” or something similar. This is where you’ll add a new record to point your domain name to your website.

We want people to be able to access your website using a regular web address, so let’s create a CNAME record. This record simply maps one name (your domain) to another (the load balancer’s DNS name).

When adding the new record, choose “CNAME” as the type. Give your website a name in the “Name” field (for example, “www”). In the “Value” field, paste the load balancer DNS name you copied earlier. Finally, save the record.

It’s important to note that it can take up to 15–30 minutes for this change to take effect globally. Once the propagation is complete, your website should be accessible using your chosen domain name!

After publishing your URL, wait a while and then open your browser. Try accessing your site. If you see the login page, it means your certificate is valid and allows access.

Now, let’s log in. The username and password are both “admin_vp”. If you can log in successfully, it confirms that the application can connect to the database.

To check RabbitMQ, click on it. If you see the RabbitMQ initialization page, then it’s set up correctly.

Next, click on “All Users.” This will access the list of users stored in the database. Click on a specific user ID. You should see a message that says: “Data is from DB and data is inserted in Cache.” Go back and click on the same user ID again. This time, you should see a different message: “Data is From Cache.”

Autoscaling

The final step is to create an autoscaling group for our Tomcat instance. This will allow our application to automatically scale its resources (add or remove servers) based on the amount of traffic it receives.

To set up the autoscaling group, we’ll need three things:

  1. An AMI (Amazon Machine Image) containing the application and Tomcat server pre-configured.
  2. A launch template defining the specific details of how new instances should be launched (e.g., instance type, security groups).
  3. The autoscaling group itself, which will be configured with scaling policies to determine when to add or remove instances based on defined metrics (e.g., CPU utilization).

To create the AMI, select your application instance, then from the top right click Actions > Image and templates > Create image. Give it a name and description, then click “Create image.”

After your AMI is created and shows a status of “available,” you can create a launch template. Go to the Launch Templates section in the AWS console and click “Create Launch Template.” This template will define the configuration for your instances, including the AMI ID, security group, and key pair. An Auto Scaling group uses this launch template to automatically create or delete instances, or simply maintain the desired number of instances based on your specifications.

To create a launch template:

  1. Click on “Create Launch Template.”
  2. Provide a name and description for the template.
  3. Under “Application and OS Images,” navigate to “My AMIs” and select “Owned by me.” Choose your desired AMI from the list.
  4. Select the instance type “t2.micro” and the key pair you normally use for launching instances.
  5. In the “Security Group” section, choose an existing security group that matches the one used by your application instance.
  6. Scroll down to “Resource Tags.” Add a tag with a name of your choice and select “Instances,” “Volumes,” and “Network Interfaces” as the resource type. This ensures the volumes and network interfaces associated with the launched instance will also inherit this tag.
  7. Optionally, add another tag named “Project” with the same resource type selection as before.
  8. Under “Advanced Details,” locate “IAM Instance Profile” and choose the IAM role you created earlier for your application instance (the one with S3 access).
  9. Finally, click on “Create Launch Template.”

Your launch template is now created. The next step will involve specifying it within your autoscaling group.

To attach your Auto Scaling group to a Load Balancer:

  1. Go to the Auto Scaling Groups service and click on “Create Auto Scaling group.”
  2. Give your group a descriptive name and select the launch template you just created.
  3. Click “Next” to proceed to the VPC settings.
  4. You can usually keep the default VPC. However, for optimal redundancy, select all available zones for your region.
  5. Click “Next” again.
  6. On the next page, choose “Attach to an existing load balancer.” Then, from the “Load balancer target groups” section, select the target group you created earlier.
  7. Scroll down to the “Health check” section and enable “Turn on Elastic Load Balancing health checks.” This allows the Auto Scaling group to monitor the health of your instances based on these checks.
  8. Click “Next” to continue configuring your Auto Scaling group.

Let’s configure the capacity for our Auto Scaling Group. We’ll start with just 1 instance, but to test things out, we’ll set a minimum of 1 and a maximum of 4 instances. Since our instance won’t have any real load, 4 is a good upper limit for now.

Next, scroll down to the Scaling Policies section and choose “Target Tracking Scaling Policy.” This policy automatically adds or removes instances based on a chosen metric. For web applications, CPU utilization and network traffic are common metrics to track. We’ll pick CPU utilization and set a target value of 50%. This means if the overall CPU utilization of the group goes above 50%, the Auto Scaling Group will launch an additional instance to handle the increased load.

Click “Next” to proceed to the Notifications page. If you have a pre-existing SNS topic for receiving notifications, you can add it here. Otherwise, you can skip this step. Click “Next” again and you’ll be able to add tags to your Auto Scaling Group for better organization (optional). Click “Next” one last time to reach the Review page. Double-check all your settings to make sure everything looks good. If so, click “Create Auto Scaling Group”.
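The group and its target tracking policy can be sketched in two CLI calls (the template name, subnet IDs, and `$TG_ARN` are placeholders):

```shell
# Create the Auto Scaling group: 1–4 instances, ELB health checks
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name vprofile-asg \
  --launch-template LaunchTemplateName=vprofile-lt \
  --min-size 1 --max-size 4 --desired-capacity 1 \
  --target-group-arns "$TG_ARN" \
  --health-check-type ELB --health-check-grace-period 300 \
  --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222"

# Target tracking policy: keep average CPU utilization around 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name vprofile-asg \
  --policy-name cpu50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 50.0
  }'
```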

There’s one additional setting you need to enable before accessing the application through your browser. This setting, called “stickiness,” is specific to this application (although it’s generally optional).

Here’s how to enable it:

  1. Navigate to your target group.
  2. Select the target group you want to modify.
  3. Scroll down and click on “Attributes.”
  4. Click on “Edit.”
  5. Scroll down and find “Target Selection Configuration.”
  6. Locate the “Stickiness” setting and enable it.
  7. Click “Save Changes.”
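The same attribute can be flipped from the CLI (`$TG_ARN` is your target group’s ARN):

```shell
# Enable load-balancer-generated cookie stickiness on the target group
aws elbv2 modify-target-group-attributes \
  --target-group-arn "$TG_ARN" \
  --attributes Key=stickiness.enabled,Value=true \
               Key=stickiness.type,Value=lb_cookie
```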

Once you’ve completed these steps, you can retrieve the load balancer URL and access the application from your browser.

Summary

Let’s take a step back and review what we’ve covered so far. Refer back to the architectural diagram if needed.

  • User Access: Users reach our application through a URL that points to the load balancer’s endpoint. This connection uses HTTPS for secure communication, with the certificate stored in AWS Certificate Manager.
  • Load Balancer: The application load balancer resides within a security group that only allows incoming traffic on port 443 (HTTPS). It then forwards requests to Tomcat instances running on EC2 on port 8080. These Tomcat instances belong to a separate security group.
  • Backend Services: Backend services are accessed by name using a private DNS zone for internal resolution. All our backend services are grouped within a single security group.
  • Manual Deployment (for Learning Purposes): Currently, we can upload a new artifact to an S3 bucket and then manually download it to our Tomcat EC2 instances.

While this allows for basic deployment, it’s not an ideal solution. A more efficient approach involves automating the entire process using CI/CD (Continuous Integration and Continuous Delivery). However, understanding manual deployment is a crucial step before diving into automation.

Thanks for reading! In the next project, we’ll be taking things a step further by refactoring this stack. We’ll be leveraging AWS’s extensive suite of SaaS (Software as a Service) and PaaS (Platform as a Service) offerings to streamline our development process.

Resources Used:

GitHub Link: https://github.com/devopshydclub/vprofile-project/tree/aws-LiftAndShift

