Olusegun Omotunde
7 min read · May 10, 2023

Migration of a Workload running in a Corporate Data Center to AWS using the Amazon EC2 and RDS services

In this project, based on a real-world scenario, I acted as the Cloud Specialist responsible for migrating a workload running in a corporate data center to AWS.
The application and its database were migrated to AWS using the Lift & Shift (rehost) model, moving both the application files and the database data.

I followed these migration steps:

  1. Planning (sizing, prerequisites, resource naming)
  2. Implementation/Execution (resource provisioning, best practices)
  3. Go-live (validation test: dry-run; final migration: cutover)
  4. Post Go-live (ensure the operation of the application and user access)

The planning phase keeps things organized and easy to follow throughout the migration process. It is the most important step of the migration: proper planning provides structure and makes everything easier ahead of the execution phase. This first step involves creating a spreadsheet detailing the on-premises resources to be migrated to AWS, the sizing requirements of resources such as the database and application server, and all the prerequisite libraries and operating system packages that need to be installed on AWS before the migration can take place.

In the Implementation phase, I provision the infrastructure defined in the solution architecture on AWS. I provision an EC2 instance to host the application server, an RDS instance to store the database, and a VPC (Virtual Private Cloud) with its subnets, plus an Internet Gateway to allow the VPC to connect to the internet. Before creating a relational database instance with RDS, the VPC, subnets, Internet Gateway, and route tables must be in place. Below is a step-by-step walkthrough of the implementation phase.

  • The first step in the Implementation phase entails:

VPC provisioning, subnets (two private and one public), an Internet Gateway, a route table, EC2 provisioning, and RDS provisioning.

# VPC Provisioning

I first created the VPC. The IPv4 CIDR block determines the starting IP address and the size of your VPC in CIDR notation. CIDR block: 10.0.0.0/16.

Never reuse an IPv4 CIDR range that is already in use on-premises or at another cloud provider; overlapping ranges cause routing conflicts once the networks are connected, so always choose a non-overlapping block.
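For reference, the same step could be done with the AWS CLI along these lines (a minimal sketch, assuming the AWS CLI is installed and configured; the Name tag mirrors the naming used in the console):

```
# Create the VPC with the 10.0.0.0/16 CIDR block and a Name tag
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=vpc-production-1}]'
```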

# Subnets

With the VPC successfully created, I went ahead and created the subnets: three in total, two private and one public, each with its own CIDR block. The second private subnet was created in a different Availability Zone, since an RDS subnet group requires subnets in at least two Availability Zones.

- Public subnet details:

vpc-production-1-pu-1 (Public)
CIDR Block: 10.0.0.0/24

- Private subnet details:

vpc-production-1-pv-1 (Private)
CIDR Block: 10.0.1.0/24

vpc-production-1-pv-2 (Private)
CIDR Block: 10.0.2.0/24
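The same subnets could be created from the CLI roughly as follows (a sketch; the VPC ID is a placeholder and the us-east-1a/us-east-1b Availability Zones are assumptions):

```
# Public subnet
aws ec2 create-subnet --vpc-id <VPC_ID> --cidr-block 10.0.0.0/24 --availability-zone us-east-1a

# Private subnets in two different Availability Zones
# (an RDS DB subnet group requires subnets in at least two AZs)
aws ec2 create-subnet --vpc-id <VPC_ID> --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id <VPC_ID> --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
```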

# Internet Gateway

Now that all the subnets have been created successfully, the Internet Gateway is next. The Internet Gateway, once attached to the VPC, allows the VPC to connect to the internet, so the EC2 instance hosting the application can communicate with the outside world.

- Creating an Internet Gateway and attaching it to the VPC

VPC | Internet Gateway: igw-mod3 | Action: Attach to VPC (vpc-production-1)
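The equivalent AWS CLI calls would look roughly like this (IDs are placeholders):

```
# Create the Internet Gateway with a Name tag, then attach it to the VPC
aws ec2 create-internet-gateway \
  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-mod3}]'
aws ec2 attach-internet-gateway --internet-gateway-id <IGW_ID> --vpc-id <VPC_ID>
```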

# Route Table

With the Internet Gateway successfully created and attached to the VPC, I then proceeded to create the route table, add a default route pointing to the Internet Gateway, and associate the table with the public subnet.

Creating a Route table
VPC | Route Table | Routes | Edit routes | Add route: 0.0.0.0/0 — Target: Internet Gateway (igw-mod3)
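A CLI sketch of the same step (route table, default route, subnet association; IDs are placeholders):

```
# Create a route table in the VPC
aws ec2 create-route-table --vpc-id <VPC_ID>

# Default route: send all non-local traffic to the Internet Gateway
aws ec2 create-route --route-table-id <RTB_ID> --destination-cidr-block 0.0.0.0/0 --gateway-id <IGW_ID>

# Associate the route table with the public subnet
aws ec2 associate-route-table --route-table-id <RTB_ID> --subnet-id <PUBLIC_SUBNET_ID>
```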

# EC2 Provisioning

The next thing to do is create a new EC2 instance. See the details of the EC2 instance creation below:

  • Amazon Machine Image: Ubuntu 18.04
  • Instance type: t2.micro
  • Instance details:
    - Network: vpc-production-1
    - Subnet: vpc-production-1-pu-1 (Public)
    - Assign public IP: Enable
  • Tags:
    - hostname: awsuse1app01
    - environment: bootcamp
  • Security group rules:
    - Port 22 (SSH), source: 0.0.0.0/0
    - Port 8080 (application), source: 0.0.0.0/0

NOTE (AWS warning): in a production environment, do not open security group ports to source 0.0.0.0/0; restrict access to known IP ranges.

The EC2 instance was created successfully.
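Launching an equivalent instance from the CLI would look something like this (a sketch; the AMI ID, key pair name, and security group ID are placeholders you would substitute):

```
aws ec2 run-instances \
  --image-id <UBUNTU_18_04_AMI_ID> \
  --instance-type t2.micro \
  --subnet-id <PUBLIC_SUBNET_ID> \
  --associate-public-ip-address \
  --key-name <KEY_PAIR_NAME> \
  --security-group-ids <APP_SG_ID> \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=hostname,Value=awsuse1app01},{Key=environment,Value=bootcamp}]'
```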

# RDS Provisioning

Now that the EC2 instance has been created successfully, I create an RDS instance. See the details of the RDS instance creation below:

- Engine: MySQL 5.7.30
- Template: Free tier
- Settings:
  - DB instance identifier: awsuse1db01
  - Master username: admin
  - Master password: admin123456
- DB instance class: db.t2.micro
- Storage: 20 GB
- Connectivity:
  - VPC: vpc-production-1
  - Public access: No
  - VPC security group: create new (sec-group-db-01)
  - Availability Zone: us-east-1a

For the RDS creation, I selected the MySQL engine and the db.t2.micro instance class, set the connectivity to our VPC, and created a new VPC security group. In the settings, I created the master username 'admin' with the password 'admin123456' (a weak credential used only for this project).
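The console steps above map roughly to this AWS CLI call (a sketch; the security group ID and DB subnet group name are placeholders, and a DB subnet group spanning the two private subnets is assumed to exist):

```
aws rds create-db-instance \
  --db-instance-identifier awsuse1db01 \
  --engine mysql \
  --engine-version 5.7.30 \
  --db-instance-class db.t2.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password admin123456 \
  --no-publicly-accessible \
  --availability-zone us-east-1a \
  --vpc-security-group-ids <DB_SG_ID> \
  --db-subnet-group-name <DB_SUBNET_GROUP>
```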

The first step in the implementation phase is now complete.

  • The second step in the Implementation phase involves connecting to the EC2 instance from the local machine's CLI (in my case, the Terminal on my MacBook; Git Bash also works on Windows) using the instance's public IPv4 address and the ssh-key .pem file, then running all the prerequisite commands from the planning stage to install the necessary libraries and operating system packages before the migration takes place.

# Pre-requisites steps to run in the terminal after connecting to the EC2 instance (Application Server)

To connect, open your terminal in the folder containing your key file (my local "aws" folder) and run the command below:

$ ssh ubuntu@<public-IPv4> -i <private-key>

The connection was made successfully. I then proceeded to run the following commands:

# Update the package index and install OS-level build dependencies
$ sudo apt-get update
$ sudo apt-get install python3-dev -y
$ sudo apt-get install libmysqlclient-dev -y
$ sudo apt-get install unzip -y
$ sudo apt-get install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev -y
$ sudo apt-get install libsasl2-dev libffi-dev -y

# Install pip for Python 3.6 (the Python 3 shipped with Ubuntu 18.04)
$ curl -O https://bootstrap.pypa.io/pip/3.6/get-pip.py ; python3 get-pip.py --user

# Put the user-level pip scripts on the PATH
$ export PATH=$PATH:/home/ubuntu/.local/bin/

# Install the Python packages the application depends on
$ pip3 install flask
$ pip3 install wtforms
$ pip3 install flask_mysqldb
$ pip3 install passlib

# Install the AWS CLI and the MySQL client
$ sudo apt install awscli -y
$ sudo apt-get install mysql-client -y
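A quick sanity check (my own addition, not part of the original runbook) confirms the installs succeeded:

```
# Import each Python package and print the MySQL client version
$ python3 -c "import flask, wtforms, flask_mysqldb, passlib; print('dependencies OK')"
$ mysql --version
```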

The next step in the migration process is the Go-live phase.

Steps to run during the validation (dry-run) and the final migration (cutover):

- Open the port in the RDS security group:

  - Type: MySQL/Aurora
  - Source: 0.0.0.0/0
  (only for this project's purposes; in a production environment, you should open the port to the application server IPs only)

- Connect to the EC2 instance

$ ssh ubuntu@<PUBLIC_IP> -i <ssh_private_key>

- Download the dump file and app file

wget https://tcb-bootcamps.s3.amazonaws.com/bootcamp-aws/en/wikiapp-en.zip
wget https://tcb-bootcamps.s3.amazonaws.com/bootcamp-aws/en/dump-en.sql

- Connect to MySQL running on AWS RDS

mysql -h <RDS_ENDPOINT> -P 3306 -u admin -p

Replace <RDS_ENDPOINT> with your own RDS instance endpoint.

Password: admin123456

- Create the wiki DB and import data

create database wikidb;
use wikidb;
source dump-en.sql;

- Create the user wiki in the wikidb

CREATE USER wiki@'%' IDENTIFIED BY 'wiki123456';

GRANT ALL PRIVILEGES ON wikidb.* TO wiki@'%';

FLUSH PRIVILEGES;

exit;
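As an optional sanity check (not part of the original steps), you can reconnect as the new user from the EC2 instance and confirm the imported tables are visible:

```
# Connect as the wiki user and list the tables imported into wikidb
mysql -h <RDS_ENDPOINT> -P 3306 -u wiki -p -e "SHOW TABLES IN wikidb;"
```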

- Unzip the app files

unzip wikiapp-en.zip
cd wikiapp

- Edit the wiki.py file: change MYSQL_HOST to point to the RDS endpoint, replacing the old host value.

vi wiki.py
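If you prefer to make the change from the shell instead of vi, a hypothetical sketch looks like this (it assumes wiki.py sets the host on a single MYSQL_HOST line; <OLD_DB_HOST> is a placeholder for the current value):

```
# Check the current host setting, then swap it for the RDS endpoint
grep -n "MYSQL_HOST" wiki.py
sed -i "s/<OLD_DB_HOST>/<RDS_ENDPOINT>/" wiki.py
```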

- Bring up the application

python3 wiki.py

Steps to validate the migration:

- Open the AWS console, copy the public IP and test the application

Using the public IP address of our EC2 instance, open a new browser tab and add :8080 at the end of the address:

<EC2_PUBLIC_IP>:8080
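Before opening a browser, you can also probe the port from the terminal (the app listens on 8080, per the security group rule set earlier):

```
# Expect HTTP response headers if the application is up
curl -I http://<EC2_PUBLIC_IP>:8080
```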

If everything was done correctly, you should see the wiki application's login page.

- Log in to the application (user: admin / password: admin)

- Create a new article

- Capture the evidence

The final step in the migration process is the Post Go-live phase. The cloud team makes sure everything is working fine, checks that there are no performance or integration issues, and provides stability and ongoing support.

This is the end of the project. I have completed the process of migrating the application and its database from an on-premises environment to the AWS cloud, making it accessible to users via the public internet.
