Migrating Corporate Workloads to AWS with EC2 and RDS (Lifting and Shifting)

Syed Waqas Ahsan
8 min read · Jun 10, 2024

--

In a real-world project, I acted as the Cloud Specialist responsible for migrating a workload from a corporate data center to AWS. The application and database were migrated to AWS using the Lift & Shift (re-host) model. I followed these migration steps:

  1. Planning: I determined the sizing and prerequisites for the migration, and systematically named the AWS resources.
  2. Execution: I provisioned the necessary AWS resources following best practices.
  3. Go-Live: I conducted a dry run to validate the application’s functionality in the AWS environment, and then performed the final migration (cutover) after successful validation.
  4. Post Go-Live: I ensured the application and user access functioned seamlessly in the AWS environment.

Technologies Used:

  • AWS EC2 (Virtual Machines)
  • AWS RDS (Relational Database Service)
  • AWS S3 (Object Storage)
  • AWS CLI (Command Line Interface)
  • MySQL (Relational Database Management System)

During this project, I learned a lot about how the core services of AWS work together to create and implement real-world scenarios like migration.

Here is the Solution Architecture:

Let’s start with the Planning phase. I used a spreadsheet to lay out the on-premises resources and their counterpart AWS resources that would be used in this project.

For the Prerequisites, I identified and installed the necessary packages and libraries for the application to run successfully in AWS.

In the Execution phase, I created a VPC and the following AWS instances:

  • EC2 Instance:
      • Instance Name: awsuse1app01
      • Service: EC2
      • Instance Size: t2.micro
      • Region: us-east-1
      • Availability Zone: us-east-1a
      • VPC: vpc-production-1
      • Subnet: vpc-production-1-pu-1 (Public)
  • RDS Instance:
      • Instance Name: awsuse1db01
      • Service: RDS (MySQL)
      • Instance Size: db.t2.micro
      • Region: us-east-1
      • Availability Zone: us-east-1a
      • VPC: vpc-production-1
      • Subnet: vpc-production-1-pv-1 (Private)

Now I created a VPC in AWS:

  • VPC Name: vpc-production-1
  • CIDR Block: 10.0.0.0/16

Note: The CIDR block should NOT overlap with on-premises networks or with ranges used by other cloud providers.
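One quick way to sanity-check this before creating the VPC is with Python's ipaddress module. A minimal sketch (the 192.168.0.0/16 on-premises range here is just an assumed example, substitute your own):

```shell
# Check that the planned VPC CIDR does not overlap an on-premises range
python3 - <<'EOF'
import ipaddress
vpc = ipaddress.ip_network("10.0.0.0/16")
onprem = ipaddress.ip_network("192.168.0.0/16")  # assumed on-prem range
print("overlap" if vpc.overlaps(onprem) else "no overlap")
EOF
```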

Click on Create VPC
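For reference, the same VPC can also be created from the AWS CLI v2 (this is a sketch, assuming configured credentials; the tag value mirrors the name above):

```shell
# Create the VPC with the chosen CIDR block and a Name tag
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=vpc-production-1}]'
```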

I then created one public subnet and two private subnets inside the newly created VPC; the second private subnet is needed because RDS requires a DB subnet group spanning at least two Availability Zones.

  • Public subnet details:
      • Name: vpc-production-1-pu-1 (Public)
      • CIDR Block: 10.0.0.0/24
      • Availability Zone: us-east-1a
  • Private subnet details:
      • Name: vpc-production-1-pv-1 (Private)
      • CIDR Block: 10.0.1.0/24
      • Availability Zone: us-east-1a
      • Name: vpc-production-1-pv-2 (Private)
      • CIDR Block: 10.0.2.0/24
      • Availability Zone: us-east-1b
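The three subnets above can be created from the AWS CLI v2 as well. A sketch, assuming `$VPC_ID` holds the VpcId returned when the VPC was created:

```shell
# Create the public subnet and the two private subnets
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.0.0/24 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=vpc-production-1-pu-1}]'
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=vpc-production-1-pv-1}]'
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1b \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=vpc-production-1-pv-2}]'
```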

Next, I created an Internet Gateway and attached it to the VPC. I then updated the route table with a route to the gateway, and added a new security group rule so that users could reach the application over the internet.

VPC | Internet Gateway: igw-mod3 | Action: Attach to VPC (vpc-production-1)

Next I updated the route table.

Click on Edit routes, add a route with destination 0.0.0.0/0, select Internet Gateway as the target, choose igw-mod3, and then save changes.
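The same gateway and route setup looks roughly like this from the AWS CLI v2 (a sketch; `$VPC_ID`, `$IGW_ID`, and `$RTB_ID` stand in for the IDs AWS returns for the VPC, the gateway, and the VPC's route table):

```shell
# Create the Internet Gateway, attach it to the VPC, and add a default
# route so instances in the public subnet can reach the internet
aws ec2 create-internet-gateway \
  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-mod3}]'
aws ec2 attach-internet-gateway --internet-gateway-id $IGW_ID --vpc-id $VPC_ID
aws ec2 create-route --route-table-id $RTB_ID \
  --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID
```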

I then provisioned a new EC2 instance that would serve our application, and installed the libraries and packages onto the EC2 instance as mentioned in the prerequisites.

In the AWS Console, open EC2 and launch a new instance.

EC2 instance details:

  • Amazon Machine Image: Ubuntu 22.04 (Free tier)
  • Instance type: t2.micro
  • Instance details:
      • Network: vpc-production-1
      • Subnet: vpc-production-1-pu-1 (Public)
      • Auto-assign public IP: Enable
  • Tags:
      • hostname: awsuse1app01
      • environment: bootcamp
  • Configure security group:
      • Port: 22, Source: 0.0.0.0/0
      • Port: 8080, Source: 0.0.0.0/0
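An equivalent launch from the AWS CLI v2 would look roughly like this (a sketch; `$AMI_ID` is the Ubuntu 22.04 AMI ID for us-east-1, `$SUBNET_ID` and `$SG_ID` are the IDs of vpc-production-1-pu-1 and the app security group, and the key pair name is illustrative):

```shell
# Launch the application instance into the public subnet with a public IP
aws ec2 run-instances \
  --image-id $AMI_ID \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --subnet-id $SUBNET_ID \
  --security-group-ids $SG_ID \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=hostname,Value=awsuse1app01},{Key=environment,Value=bootcamp}]'
```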

Then I created a new key pair.

Next, I moved to the Network settings section.

I created a new security group. Name: app-sg01.

Note: For my purposes, I left the inbound security group rules at their defaults. For production projects, access should be restricted to known and permitted IP addresses only.

Next, I added a new security group rule to allow the application to be accessed by users over the internet.
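Those ingress rules can also be added from the AWS CLI v2. A sketch, with `$SG_ID` standing in for the ID of app-sg01:

```shell
# Allow inbound SSH (22) and application traffic (8080) on app-sg01
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0
```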

Next, I created a new RDS instance.

Instance details:

  • Engine: MySQL
  • Version: MySQL 5.7.44
  • Templates: Free tier
  • Settings:
      • DB instance identifier: awsuse1db01
      • Master username: admin
      • Master password: admin123456
  • DB instance class: db.t2.micro
  • Storage: 20 GB
  • Connectivity:
      • VPC: vpc-production-1
      • Public access: No
      • Create new VPC security group
      • New VPC security group name: sec-group-db-01
      • Availability Zone: us-east-1a
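From the AWS CLI v2, the equivalent would look roughly like the sketch below (the DB subnet group name is illustrative; `$PV1_SUBNET_ID`, `$PV2_SUBNET_ID`, and `$DB_SG_ID` stand in for the IDs of the two private subnets and sec-group-db-01):

```shell
# Create the DB subnet group (RDS requires subnets in two AZs), then the instance
aws rds create-db-subnet-group \
  --db-subnet-group-name db-subnet-group-01 \
  --db-subnet-group-description "Private subnets for awsuse1db01" \
  --subnet-ids $PV1_SUBNET_ID $PV2_SUBNET_ID
aws rds create-db-instance \
  --db-instance-identifier awsuse1db01 \
  --engine mysql --engine-version 5.7.44 \
  --db-instance-class db.t2.micro \
  --allocated-storage 20 \
  --master-username admin --master-user-password admin123456 \
  --db-subnet-group-name db-subnet-group-01 \
  --vpc-security-group-ids $DB_SG_ID \
  --availability-zone us-east-1a \
  --no-publicly-accessible
```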

Next, I installed the libraries and packages onto the EC2 instance, as listed in the prerequisites.

sudo apt-get update
sudo apt-get install python3-dev -y
sudo apt-get install libmysqlclient-dev -y
sudo apt-get install unzip -y
sudo apt-get install libpq-dev libxml2-dev libxslt1-dev libldap2-dev -y
sudo apt-get install libsasl2-dev libffi-dev -y

curl -O https://bootstrap.pypa.io/get-pip.py ; python3 get-pip.py --user

export PATH=$PATH:/home/ubuntu/.local/bin/

pip3 install flask
pip3 install wtforms
pip3 install flask_mysqldb
pip3 install passlib

sudo apt install awscli -y
sudo apt-get install mysql-client -y

Note: make sure all of the commands above complete successfully.

Next, connect to the EC2 instance using SSH.

Note: When connecting to the EC2 instance over SSH, if you encounter a permissions issue, use the command below to restrict the permissions on the SSH key.

sudo chmod 600 Desktop/AWS/ssh-key.pem
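With the key permissions fixed, the connection itself looks like this (the key path matches the one above; replace the placeholder with your instance's public IP):

```shell
# Connect to the instance as the default Ubuntu user
ssh -i Desktop/AWS/ssh-key.pem ubuntu@<public-ip>
```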

In the Go-Live phase, I validated the application with a dry run while keeping the on-premises resources up and running. Once the dry run was validated, I scheduled a downtime window and ran the final migration (cutover). I took a data dump from the on-premises database and exported the application from the on-premises environment into an AWS S3 bucket.
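The cutover steps can be sketched as follows (all hostnames, the bucket name, the archive name, and the database name are placeholders, not the project's actual values):

```shell
# On-premises: dump the database and stage the dump and app archive in S3
mysqldump -h <onprem-db-host> -u <user> -p --databases <database-name> > dump.sql
aws s3 cp dump.sql s3://<migration-bucket>/
aws s3 cp <app-archive>.zip s3://<migration-bucket>/

# On the EC2 instance: pull the artifacts and load the dump into RDS
aws s3 cp s3://<migration-bucket>/dump.sql .
aws s3 cp s3://<migration-bucket>/<app-archive>.zip . && unzip <app-archive>.zip
mysql -h <rds-endpoint> -u admin -p < dump.sql
```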

Finally I ran the application.

python3 wiki.py

Navigated to AWS Console | EC2.

Copied the public IP address.

Opened a new browser window and entered the public IP address followed by :8080.

The application was now up and running.

Login:

Added new article:

I was able to see incoming traffic inside the terminal.

Post Go-Live phase

After the migration, the application and database underwent stability testing and received continuous support. The cloud team ensured that all users could access the application.

The strategic integration of Amazon EC2, VPC, RDS, MySQL, and an Internet Gateway can transform a business's cloud infrastructure, yielding significant cost savings, scalability, and enhanced security. By automating infrastructure provisioning with tools like AWS CloudFormation, businesses can take this a step further, streamlining operations, reducing errors, and simplifying repetitive tasks. As the digital landscape continues to evolve, embracing these technologies can be key to staying ahead of the competition and delivering exceptional customer experiences.

It was a pleasure to guide you through this innovative endeavor, and I hope that the insights and knowledge shared have been informative and valuable to you.

Happy Learning !
