Achieve Scalability and Agility: Refactoring My Web App on AWS (Part 1)

Can Yalcin
5 min read · Mar 30, 2024


In my last project, I built a scalable multi-tier web application and utilized AWS for hosting and infrastructure.

I already have an app stack deployed on AWS and am looking to improve agility and business continuity. In this post, we’ll explore how to leverage AWS’s managed PaaS (Platform as a Service) and SaaS (Software as a Service) offerings to achieve these goals.

From Manual Management to Managed Services

I am currently using an EC2 instance to host my application frontend. Managing this instance manually is time-consuming and error-prone. Here’s how AWS Elastic Beanstalk can simplify my life:

  • Automated Instance Management: Beanstalk provisions and manages the EC2 instance for you. You can focus on your application code, not server configuration.
  • Built-in Load Balancing and Autoscaling: Beanstalk automatically distributes traffic across healthy instances and scales resources up or down based on demand. This ensures high availability and optimal performance.
  • Storage Flexibility: Beanstalk integrates seamlessly with S3 buckets, allowing you to store application assets and leverage S3’s scalability and durability. You can even use an existing S3 bucket with your Beanstalk environment.
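
To make this concrete, here is a minimal sketch of what provisioning a load-balanced, autoscaled Tomcat environment looks like with boto3. The application name, environment name, and solution stack string are placeholders I chose for illustration, not values from this project; look up a current stack name with list_available_solution_stacks before running anything like this.

```python
# Minimal sketch: creating an Elastic Beanstalk application and a load-balanced,
# autoscaled Tomcat environment with boto3. Names and the solution stack string
# are illustrative placeholders -- check eb.list_available_solution_stacks() first.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.create_application(ApplicationName="my-web-app")

eb.create_environment(
    ApplicationName="my-web-app",
    EnvironmentName="my-web-app-prod",
    # Pick a current Tomcat stack from eb.list_available_solution_stacks()
    SolutionStackName="64bit Amazon Linux 2 v4.7.1 running Tomcat 9 Corretto 11",
    OptionSettings=[
        # Load-balanced environment, autoscaling between 2 and 4 instances
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "4"},
    ],
)
```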

Modernizing Backend Infrastructure

The backend is equally important. Here’s how I can modernize it using AWS managed services:

  • Database with Built-in Management: Replace your current database instance with Amazon RDS (Relational Database Service). RDS offers automatic patching, backups, and scaling, freeing you from administrative tasks.
  • Managed Caching with ElastiCache: Replace your self-managed Memcached setup with ElastiCache, a managed caching service that supports engines such as Memcached and Redis. ElastiCache handles provisioning, failover, and scaling for you, giving you a robust caching layer.
  • Simplified Messaging with Amazon MQ: Move your self-managed RabbitMQ broker to Amazon MQ, a fully managed message broker service that supports open-source engines such as RabbitMQ and ActiveMQ and takes care of message queuing and pub/sub functionality for you.
  • Global DNS with Route 53: Consolidate your DNS management with Amazon Route 53, a highly available and scalable Domain Name System (DNS) service. Route 53 ensures your users can always find your application.
  • Enhanced Content Delivery with CloudFront: Integrate Amazon CloudFront, a Content Delivery Network (CDN), to deliver your application’s static content with high performance and low latency. CloudFront caches content at strategically placed edge locations around the globe, bringing content closer to your users.
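
To give one concrete example of what “managed” buys you here, this is roughly what launching an RDS MySQL instance looks like with boto3. Every identifier and credential below is a placeholder for illustration; Part 2 walks through the actual setup.

```python
# Minimal sketch: launching a managed MySQL instance on Amazon RDS with boto3.
# All identifiers, credentials, and the security group ID are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="my-app-db",               # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                            # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",          # use Secrets Manager in practice
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],   # the backend security group
    MultiAZ=False,                                  # enable for production failover
    BackupRetentionPeriod=7,                        # keep automated backups for 7 days
)
```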

Transitioning from manual infrastructure management to AWS’s managed PaaS and SaaS offerings can significantly improve my application’s agility and business continuity. This approach allows me to focus on my core business while AWS handles the heavy lifting of infrastructure management.

Architecture of AWS Services for This Project

Here’s how users will access my website:

  1. Domain Name: Users will enter my web address (URL) in their browser.
  2. Route 53: This service acts like a phonebook, directing users to the correct location for my website.
  3. CloudFront: This global content delivery network (CDN) caches static content (like images, videos, and scripts) closer to users around the world. This improves website loading speeds.
  4. Application Load Balancer: This service distributes incoming traffic evenly across multiple EC2 instances in my Elastic Beanstalk environment.
  5. Auto Scaling Group: This automatically scales my EC2 instances (adds or removes) based on user traffic. CloudWatch monitors my application and triggers scaling events.
  6. EC2 Instances: These are virtual servers running my Tomcat application.
  7. S3 Bucket: This secure storage service holds my application code and artifacts. I can easily deploy new versions by uploading them to the S3 bucket.
  8. Elastic Beanstalk: This service manages my entire frontend infrastructure, including EC2 instances, Tomcat, and deployments.
  9. Amazon MQ: Instead of managing my own message broker, I’m using Amazon MQ. This managed service simplifies the setup and operation of open-source message brokers like RabbitMQ on Amazon Web Services (AWS).
  10. ElastiCache: For caching, I’ve switched from a self-managed Memcached setup to ElastiCache, an AWS service that provides a managed in-memory data store built on popular caching engines such as Memcached and Redis.
  11. Amazon RDS: Finally, I’ve moved away from running my database on an EC2 instance and opted for Amazon RDS. Amazon RDS is a managed relational database service that allows you to easily set up, operate, and scale a variety of database engines.
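
Points 7 and 8 above are what make deployments straightforward: a new build goes into the S3 bucket, gets registered as an application version, and Beanstalk rolls the environment onto it. Here is a rough boto3 sketch of that flow, with placeholder bucket, key, and names:

```python
# Minimal sketch of the deploy flow from points 7 and 8: upload the build
# artifact to S3, register it as a new application version, then point the
# Beanstalk environment at it. Bucket, key, and names are placeholders.
import boto3

s3 = boto3.client("s3")
eb = boto3.client("elasticbeanstalk")

bucket, key = "my-artifact-bucket", "releases/my-web-app-v2.war"
s3.upload_file("target/my-web-app.war", bucket, key)

eb.create_application_version(
    ApplicationName="my-web-app",
    VersionLabel="v2",
    SourceBundle={"S3Bucket": bucket, "S3Key": key},
)

eb.update_environment(EnvironmentName="my-web-app-prod", VersionLabel="v2")
```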

Now that we’ve covered the key points, it’s time to put them into practice!

A. Security Group and Key Pairs

A.1. Securely Connecting to Your EC2 Instance: Creating a Key Pair

The first step to securely accessing your Amazon EC2 instance is creating a key pair. Think of a key pair like a digital lock and key for your instance. The key pair provides secure access through SSH (Secure Shell) and ensures only authorized users can connect.

Here’s how to create a key pair:

Navigate to the EC2 console: Log in to the AWS Management Console and open the Amazon Elastic Compute Cloud (EC2) service.

Access Key Pairs: In the EC2 console’s navigation pane, look for the “Network & Security” section. Underneath that, you’ll find “Key Pairs.” Click on “Key Pairs” to access the key pair management area.

Create a New Key Pair: Click the “Create key pair” button located in the top right corner.

Name Your Key Pair: Choose a descriptive and memorable name for your key pair. This will help you easily identify it later.

Select the Key Format (Optional): By default, the key pair is created in the PEM format, which works with OpenSSH and most other SSH clients. If you plan to use PuTTY for SSH access, select the PPK format instead; otherwise, leave the selection as PEM.

Download the Key Pair: Once you’ve named your key pair and chosen the format (if applicable), click the “Create key pair” button. AWS will generate a key pair consisting of a public key and a private key. The private key file is crucial for connecting to your instance, so be sure to download it securely.

Important Note: Keep your private key file confidential and secure.
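
The same key pair can also be created programmatically. Here is a small boto3 sketch with a placeholder key name; note that AWS returns the private key material only once, at creation time, so it has to be saved immediately.

```python
# Minimal sketch: creating an EC2 key pair with boto3 and saving the private
# key locally. The key name is a placeholder; AWS returns the private key
# material only once, when the key pair is created.
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_key_pair(KeyName="my-app-keypair")

key_path = "my-app-keypair.pem"
with open(key_path, "w") as f:
    f.write(response["KeyMaterial"])

# SSH clients refuse keys with loose permissions, so lock the file down
os.chmod(key_path, 0o400)
```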

A.2. Creating a Security Group for Backend Services

Securing your backend services is crucial. Let’s create a security group to control incoming traffic:

  1. Navigate to Security Groups and click “Create Security Group.”
  2. Give your security group a descriptive name (e.g., “Backend Services”) and a brief description of its purpose.
  3. Inbound Traffic Rules: Here’s where the magic happens!
  • Start with a temporary rule: allow SSH (port 22) access from your own IP address. Your backend services live in a private network, so this rule is only for initial setup; you likely won’t need ongoing remote access.
  • The key rule: once the group has been created, edit it and add a rule that allows all traffic originating from the security group itself. This might seem strange at first, but it’ll make sense in a moment.

Why the Temporary Rule?

You can’t reference a security group as a traffic source until that group actually exists, so the self-referencing rule has to be added after the group is created. The temporary SSH rule gives you initial access in the meantime; once the group is saved, edit its inbound rules and add the main rule so that your backend services can communicate freely with each other.

Don’t forget to click “Save Rules” to apply your configuration.
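
For reference, here is roughly what the same two-step setup looks like in boto3: create the group, add the temporary SSH rule, then add the self-referencing rule once the group ID exists. The VPC ID and source IP below are placeholders.

```python
# Minimal sketch of the security-group setup described above: create the group,
# add the temporary SSH rule, then add the self-referencing "all traffic" rule.
# The VPC ID and source IP are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="backend-services",
    Description="Internal traffic for backend services",
    VpcId="vpc-0123456789abcdef0",
)
sg_id = sg["GroupId"]

# Temporary rule: SSH from my own IP, for initial setup only
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "temporary SSH"}],
    }],
)

# Main rule: allow all traffic from members of this same security group
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",
        "UserIdGroupPairs": [{"GroupId": sg_id, "Description": "intra-SG traffic"}],
    }],
)
```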

Coming Up Next:

We’ll revisit this security group later to add another rule after creating your backend service instance (e.g., Beanstalk environment). For now, this initial configuration lays the foundation for secure communication within your private network.

In the next section (Part 2), we’ll dive into the step-by-step process of setting up the backend services, including RDS, ElastiCache, and Amazon MQ.
