Implementation of a Scalable Web Application using AWS Elastic Beanstalk, DynamoDB, and CloudFront Edge Locations
Project description:
In this project, based on a real-world scenario, I was responsible for implementing an application that had to support a large number of users accessing it simultaneously. The application was used at a large conference with more than 10,000 participants, in person and online, from all over the world.
The event was broadcast both online and in person, and 10 vouchers for 3 Cloud certifications were raffled off. At that moment, more than 10,000 people in the audience registered their e-mail addresses to guarantee their participation in the raffle.
On AWS, we used Elastic Beanstalk to deploy the web application, DynamoDB to store the e-mail addresses, and CloudFront to cache static and dynamic content at edge locations close to the users.
We will accomplish this project in four steps:
- Create a table in DynamoDB
- Configure and run Elastic Beanstalk environment
- Create a CloudFront distribution
- Stress test of the application
Step-1: Create a table in DynamoDB
Initially, we created a table named “users” in Amazon DynamoDB to store email addresses inserted by the users. DynamoDB offers single-digit millisecond response times, making it ideal for high-performance applications. Additionally, it can automatically scale up and down based on traffic, ensuring consistent performance and cost-efficiency.
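The table creation can be sketched with boto3 (the AWS SDK for Python). The partition key name ("email") and the on-demand billing mode are assumptions; the write-up only tells us the table is named "users" and stores e-mail addresses:

```python
# Sketch of the "users" table from Step 1. The key name "email" and
# PAY_PER_REQUEST billing are assumptions; on-demand billing is one way
# to get the automatic scaling described above.
TABLE_SPEC = {
    "TableName": "users",
    "KeySchema": [{"AttributeName": "email", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "email", "AttributeType": "S"}],
    "BillingMode": "PAY_PER_REQUEST",
}

def create_users_table(region="us-east-1"):
    """Create the table. Requires AWS credentials; boto3 is imported
    lazily so TABLE_SPEC can be inspected without the SDK installed."""
    import boto3
    client = boto3.client("dynamodb", region_name=region)
    return client.create_table(**TABLE_SPEC)
```

In practice the same table can of course be created in the DynamoDB console, as was done in this project.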
Step-2: Configure and run the Elastic Beanstalk environment
In the next step, we created a web server environment in Elastic Beanstalk and named the application “tcb-conference”.
We chose a public S3 URL as the source of our application code. The web application is written in Python, so we selected the matching Python version as the platform. To give the application proper access to AWS services, we created two roles:
1. Role name: aws-elasticbeanstalk-service-role
Use case: Elastic Beanstalk
Permissions:
- AWSElasticBeanstalkEnhancedHealth
- AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy
2. Role name: aws-elasticbeanstalk-ec2-role (EC2 instance profile)
Use case: EC2
Permissions:
- AWSElasticBeanstalkWebTier
- AWSElasticBeanstalkWorkerTier
- AWSElasticBeanstalkMulticontainerDocker
We chose the default VPC, selected all availability zones, and activated the public IP address option.
We then configured instance traffic and scaling as follows:
Instances
Root volume type: General Purpose (SSD)
Size: 10 GB
Capacity
Auto scaling group
Load balanced
Min: 2
Max: 4
Fleet composition: On-Demand instances
Instance types: t2.micro
For the scaling triggers, we chose:
Metric: CPUUtilization
Unit: Percent
Period: 1 minute
Breach duration: 1 minute
Upper threshold: 50
Scale up increment: 1
Lower threshold: 40
Scale down increment: -1
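The behavior of this trigger can be sketched with a hypothetical helper (this is an illustration of the rule, not Elastic Beanstalk code):

```python
def scaling_decision(cpu_percent, upper=50, lower=40):
    """Instance-count delta after one breach of the CPUUtilization trigger."""
    if cpu_percent > upper:
        return 1    # scale up by 1 instance
    if cpu_percent < lower:
        return -1   # scale down by 1 instance
    return 0        # within the 40-50% band: no change

def apply_delta(count, delta, min_size=2, max_size=4):
    """Keep the fleet within the Auto Scaling group's Min/Max bounds."""
    return max(min_size, min(max_size, count + delta))
```

With these settings, a sustained 100% CPU load (as in the stress test later) breaches the upper threshold and adds an instance, up to the maximum of 4; once CPU drops below 40%, instances are removed down to the minimum of 2.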
For the load balancer network settings, visibility should be Public.
Lastly, we added an environment property:
- Name: AWS_REGION
- Value: us-east-1
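The console settings above can also be captured in an `.ebextensions` configuration file so the environment is reproducible. The file name is an assumption; the namespaces are standard Elastic Beanstalk option namespaces, and gp2 is assumed for "General Purpose (SSD)":

```yaml
# .ebextensions/scaling.config (hypothetical file name)
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 4
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
    RootVolumeType: gp2      # General Purpose (SSD)
    RootVolumeSize: 10
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Unit: Percent
    UpperThreshold: 50
    UpperBreachScaleIncrement: 1
    LowerThreshold: 40
    LowerBreachScaleIncrement: -1
  aws:elasticbeanstalk:application:environment:
    AWS_REGION: us-east-1
```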
While attempting to add a new e-mail address, we encountered an error: the application running on the EC2 instances did not have access to DynamoDB. To resolve this, we granted full DynamoDB access permissions to the EC2 instance role. After doing so, we successfully added our first e-mail address, “abc@abc.com”, which can be verified by checking the users table in DynamoDB.
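The write that failed (and then succeeded once permissions were fixed) is a DynamoDB PutItem call. A sketch of what the application plausibly does per sign-up; the helper names, item shape, and the condition expression are assumptions about the app's code:

```python
def build_put_item_request(email, table="users"):
    """Build the PutItem request for one registered e-mail address."""
    return {
        "TableName": table,
        "Item": {"email": {"S": email}},
        # Reject duplicate registrations for the same address:
        "ConditionExpression": "attribute_not_exists(email)",
    }

def register_email(email, region="us-east-1"):
    """Send the request. Fails with AccessDeniedException unless the
    EC2 instance role has DynamoDB write permissions."""
    import boto3
    client = boto3.client("dynamodb", region_name=region)
    return client.put_item(**build_put_item_request(email))
```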
Step-3: Create a CloudFront distribution
Given that our users are distributed across various locations, we need to create a CloudFront distribution to cache the application at CloudFront edge locations. This will reduce latency and provide quick and efficient responses.
We applied the following configuration:
- Origin: the Elastic Load Balancer created by Elastic Beanstalk
- Protocol: HTTP and HTTPS
- Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
- Cache key and origin requests: Cache policy and origin request policy (recommended)
- Cache policy: CachingOptimized
- Web Application Firewall (WAF): enable security protections
After creating the CloudFront distribution, we registered another e-mail address, this time opening the web application through the Distribution Domain Name.
Step-4: Stress test the application
We now need to verify if our application is auto-scaling properly under heavy traffic. To do this, we conducted a stress test. First, we connected to one of the EC2 instances using its public IP address. Then, we installed the “stress” program and ran it to ensure 100% CPU utilization, thereby overloading the instance. In response, our auto-scaling group should add another instance to counter this issue and provide a smooth user experience.
ssh -i new-key-1.pem ec2-user@ec2-public-ip   # connect to the instance
sudo yum install stress -y                    # install the stress tool
stress -c 4                                   # spawn 4 CPU-bound workers
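For reference, what `stress -c 4` does (spinning CPU-bound workers) can be sketched in Python; this is an illustrative equivalent, not the tool itself:

```python
import multiprocessing
import time

def burn(seconds):
    """Busy-loop for `seconds`, pinning one core near 100% CPU."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass  # pure spin: no sleep, so the core stays busy

def stress(workers=4, seconds=60):
    """Rough Python equivalent of `stress -c 4`: one spinner per worker."""
    procs = [multiprocessing.Process(target=burn, args=(seconds,))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Four such workers saturate a t2.micro's CPU, which is exactly the condition the CPUUtilization trigger is watching for.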
Now we can open a new SSH connection and check the processes running on the EC2 instance:
ps aux                  # list all running processes
ps aux --sort=-pcpu     # sort by CPU usage, highest first
top                     # live view of CPU and memory usage
Afterward, I checked the Elastic Beanstalk status, which showed a ‘Warning’: one of the instances was unhealthy because the stress program drove CPU utilization above 50 percent.
The Auto Scaling group was triggered and added one EC2 instance.
After the overload was removed, the instance returned to an “Ok” state, and the extra EC2 instance was terminated as it was no longer required.
We then registered another user e-mail address successfully, opening the application through the Distribution Domain Name.
Conclusion:
In this project, we successfully implemented an application capable of supporting high demand from a large number of simultaneous users by utilizing various AWS services. I hope you find this project useful for learning purposes. Thank you!