Migrating an Oracle instance to AWS RDS Aurora Postgresql using Terraform

Paul Zhao
Published in Paul Zhao Projects
33 min read · Apr 3, 2021
Project infrastructure

As shown in our infrastructure diagram, what we intend to accomplish in this project is a data transfer from a source server (Red Hat 7.6) with an Oracle database installed to a target server (RDS Aurora PostgreSQL) using Terraform. We use a Windows Server sitting in an EC2 instance as the medium for this data transfer, via the AWS Schema Conversion Tool

This whole project runs in the cloud on AWS; you may also use your own Windows system to achieve the data transfer from an on-premise server to Aurora PostgreSQL

Why use Aurora PostgreSQL over Oracle Database?

Amazon Aurora is a relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The PostgreSQL-compatible edition of Aurora delivers up to 3X the throughput of standard PostgreSQL running on the same hardware, enabling existing PostgreSQL applications and tools to run without requiring modification. The combination of PostgreSQL compatibility with Aurora enterprise database capabilities provides an ideal target for commercial database migrations

Why use Terraform?

Throughout this project, you will find out why Terraform is a leading IaC tool in the cloud. With it, we can provision our infrastructure with ease and manage our code in a systematic manner. While working on the project, we can update our infrastructure easily. (You will find one such example in this project, since we had to overcome an error and re-deploy part of our infrastructure.) Best of all, Terraform lets us clean up the infrastructure at the end of the day, since you don't want to leave unused resources in AWS and be charged for them

Prerequisites:

  • An AWS account — with a non-root user (take security into consideration)
  • In terms of system, we will be using RHEL 8.3 in Oracle VirtualBox on Windows 10, connecting with PuTTY
  • AWS CLI installed
  • Terraform installed

Let us work on them one by one.

Creating a non-root user

Based on AWS best practice, the root user is not recommended for everyday tasks, even administrative ones. Rather, the root user is used to create your first IAM user, groups, and roles. Then you should securely lock away the root user credentials and use them to perform only a few account and service management tasks.

Notes: If you would like to learn more about why we should not use root user for operations and more about AWS account, please find more here.

Log in as the root user
Create a user under the IAM service
Choose programmatic access
Create the user without tags
Keep the credentials (Access key ID and Secret access key)
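If you already have the AWS CLI configured with sufficient privileges, the same steps can be scripted instead of clicked through. This is only a sketch; the user name and attached policy below are placeholders you should adapt to your own security requirements:

$ aws iam create-user --user-name data-migration-user
$ aws iam attach-user-policy --user-name data-migration-user \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name data-migration-user

The last command prints the Access key ID and Secret access key, which you keep exactly as in the console flow above.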

Set up RHEL 8.3 in Oracle VirtualBox on Windows 10 using PuTTY

First, we will download Oracle VirtualBox for Windows 10; please click Windows hosts

Second, we will also download the RHEL ISO

Let us make it work now!

Open the Oracle VirtualBox application and follow the instructions here; you will install RHEL 8.3 as shown below

Oracle VM VirtualBox

Notes: In case you are unable to install RHEL 8.3 successfully, please find solutions here. Also, after you create your developer account with Red Hat, you have to wait for some time before registering it; otherwise, you may receive errors as well.

Now it’s time for us to connect to RHEL 8.3 from Windows 10 using VirtualBox.

Login RHEL 8.3

Click activities and open terminal

Open terminal

Notes: In order to be able to connect to RHEL 8.3 from Windows 10 using PuTTY later, we must enable the setting shown below.

Bridged Adapter selected

Now we will get the IP that we will use to connect to RHEL 8.3 from Windows 10 using PuTTY (the highlighted IP address for enp0s3 is the right one to use)

IP address

Then we will install Putty.

ssh-keygen with a password

Creating a password-protected key looks something like this:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pzhao/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pzhao/.ssh/id_rsa.
Your public key has been saved in /home/pzhao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RXPnUZg/fGgRGTOxEfbo3VOMo/Yp4Gi80has/iR4m/A pzhao@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| o . %X.|
| . o +=@ |
| . B++|
| . oo==|
| .S . o...=|
| . .oo o . ..|
| o oo=.. . o |
| +o*o. . |
| .E+o |
+----[SHA256]-----+

To view the private key

$ cat .ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAwoavXHvZCYPO/sbMD0ibtkvF+9/NmSm2m/Z8wRy7O2A012YS98ap
8aq18PXfKPyyAMNF3hdG3xi1KMD7DSIb/C1gunjTREEJRfYjydOjFBFtZWY78Mj4eQkrPJ
.
.
.
-----END OPENSSH PRIVATE KEY-----

Notes: You may take advantage of the RHEL GUI to send the private key to yourself as an email, then open the email and copy the private key from it

Open Notepad in Windows 10 and save the private key as an ansiblekey.pem file

Ansiblekey.pem

Then open PuTTY Key Generator and load the private key ansiblekey.pem

Load private key in putty key generator

Then save it as a private key file, ansible.ppk

We now open PuTTY and input the IP address we saved previously as the Host Name (or IP address): 192.168.0.18

Load private key in putty

We then move on to Session and input IP address

IP address saved

For convenience, we may save it as a predefined session as shown below

Saved session

You should see the pop-up below if you are logging in for the very first time

First time log in

Then input your username and password to log in. You will see the image below after logging in.

Login successfully

Installing AWS CLI

To install the AWS CLI after logging into Red Hat 8

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

To verify the installation

$ aws --version
aws-cli/2.0.46 Python/3.7.4 Darwin/19.6.0 exe/x86_64

To use the AWS CLI, we need to configure it with the AWS access key, AWS secret access key, AWS region, and AWS output format

$ aws configure
AWS Access Key ID [****************46P7]:
AWS Secret Access Key [****************SoXF]:
Default region name [us-east-1]:
Default output format [json]:

Installing Terraform

To install terraform, simply use the following command:

Install yum-config-manager to manage your repositories.

$ sudo yum install -y yum-utils

Use yum-config-manager to add the official HashiCorp Linux repository.

$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

Install terraform

$ sudo yum -y install terraform

Notes: In case of a wrong symbolic link setup, please check out this link. Also, you may need to log in again after changing the symbolic link.

To verify the Terraform installation

$ terraform version
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/aws v3.21.0

Let us kick off our project now

First, on our Red Hat 8 system, we need to create a directory for our project, then change into it

$ mkdir data-migration
$ cd data-migration/

Then we'll create our providers.tf file to set the desired AWS region, us-east-1

vim providers.tf

// set the provider to AWS and the AWS region to us-east-1
provider "aws" {
  profile = "data-migration"
  region  = "us-east-1"
}

Notes: To set up data-migration as a profile for AWS, we need to add the access key and secret access key to the following file

vim ~/.aws/credentials

[default]
aws_access_key_id = AKIAWYH7TZXXXXXX
aws_secret_access_key = TtAaF20YsPIrXXXXXXXMiSSOJLsZH2m
[data-migration]
aws_access_key_id = AKIAWYHXXXXXX
aws_secret_access_key = TtAaF20YsPIrOBXXXXXXXX

Then, there are two variables that are required. The first one defines my local IP address, which is used in the security group definitions below, so that connections via SSH and RDP are possible from my current location

Notes: To follow Terraform best practice (not hard-coding our variables, for security reasons), we use Terraform's chomp() function to remove any trailing whitespace or newline that comes with the response body. Along with it, an http data source is used to look up our IPv4 address automatically upon provisioning. We store this data source in a data.tf file for future reference. For more about how to query your IP address, please refer to this post

The second one defines the User data that will be passed to the EC2 instance that will host the Oracle source database. Basically it installs the Oracle Database Express Edition (XE) Release 18.4.0.0.0 (18c) and the Oracle sample schemas

Here are 2 files we need to create

vim locals.tf

locals {
  my_ip4address = ["${chomp(data.http.myip4address.body)}/32"]
  instance-userdata = <<EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y wget perl
wget https://download.oracle.com/otn-pub/otn_software/db-express/oracle-database-xe-18c-1.0-1.x86_64.rpm -O /home/ec2-user/oracle-database-xe-18c-1.0-1.x86_64.rpm
wget https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm -O /home/ec2-user/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/compat-libstdc++-33-3.2.3-72.el7.i686.rpm -O /home/ec2-user/compat-libstdc++-33-3.2.3-72.el7.i686.rpm
sudo yum localinstall -y /home/ec2-user/compat-libstdc++-33-3.2.3-72.el7.i686.rpm
sudo yum localinstall -y /home/ec2-user/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
sudo yum localinstall -y /home/ec2-user/oracle-database-xe-18c-1.0-1.x86_64.rpm
(echo "manager"; echo "manager";) | /etc/init.d/oracle-xe-18c configure
sudo echo ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE/ >> /home/oracle/.bash_profile
sudo echo PATH=\$PATH:\$ORACLE_HOME/bin >> /home/oracle/.bash_profile
sudo echo ORACLE_SID=xe >> /home/oracle/.bash_profile
sudo echo export ORACLE_HOME PATH ORACLE_SID >> /home/oracle/.bash_profile
wget https://github.com/oracle/db-sample-schemas/archive/v19.2.tar.gz -O /home/oracle/v19.2.tar.gz
sudo su - oracle -c "tar -axf v19.2.tar.gz"
sudo su - oracle -c "cd db-sample-schemas-19.2; perl -p -i.bak -e 's#__SUB__CWD__#/home/oracle/db-sample-schemas-19.2#g' *.sql */*.sql */*.dat"
sudo su - oracle -c "cd db-sample-schemas-19.2; sqlplus system/manager@localhost/XEPDB1 @mksample manager manager manager manager manager manager manager manager users temp /tmp/ localhost/XEPDB1"
chkconfig --add oracle-xe-18c
EOF
}

vim data.tf

data "http" "myip4address" {
url = "http://ipv4.icanhazip.com"
}

Next, we will create our VPC in AWS using a vpc.tf file

vim vpc.tf

// create the virtual private network
resource "aws_vpc" "dms-vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "dms-vpc"
  }
}

// create the internet gateway
resource "aws_internet_gateway" "dms-igw" {
  vpc_id = "${aws_vpc.dms-vpc.id}"
  tags = {
    Name = "dms-igw"
  }
}

// create a dedicated subnet
resource "aws_subnet" "dms-subnet" {
  vpc_id            = "${aws_vpc.dms-vpc.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
  tags = {
    Name = "dms-subnet"
  }
}

// create a second dedicated subnet, this is required for RDS
resource "aws_subnet" "dms-subnet-2" {
  vpc_id            = "${aws_vpc.dms-vpc.id}"
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
  tags = {
    Name = "dms-subnet-2"
  }
}

// create routing table which points to the internet gateway
resource "aws_route_table" "dms-route" {
  vpc_id = "${aws_vpc.dms-vpc.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.dms-igw.id}"
  }
  tags = {
    Name = "dms-igw"
  }
}

// associate the routing table with the subnet
resource "aws_route_table_association" "subnet-association" {
  subnet_id      = "${aws_subnet.dms-subnet.id}"
  route_table_id = "${aws_route_table.dms-route.id}"
}

// create a security group for ssh access to the linux systems
resource "aws_security_group" "dms-sg-ssh" {
  name        = "dms-sg-ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "${aws_vpc.dms-vpc.id}"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = local.my_ip4address
  }
  // allow access to the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "dms-sg-ssh"
  }
}

// create a security group for rdp access to the windows systems
resource "aws_security_group" "dms-sg-rdp" {
  name        = "dms-sg-rdp"
  description = "Allow RDP inbound traffic"
  vpc_id      = "${aws_vpc.dms-vpc.id}"
  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = local.my_ip4address
  }
  // allow access to the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "dms-sg-rdp"
  }
}

Once the network is ready we’ll deploy the EC2 instance that will run the Oracle database (Redhat Enterprise Linux 7 in this case):

vim ec2.tf

// setup a red hat 7 system for the oracle source
resource "tls_private_key" "this" {
  algorithm = "RSA"
}

module "key_pair" {
  source     = "terraform-aws-modules/key-pair/aws"
  key_name   = "dms-key-pair"
  public_key = tls_private_key.this.public_key_openssh
}

resource "aws_instance" "dms-oracle-source" {
  ami                         = "ami-000db10762d0c4c05" ### use the AMI of your choice in your desired region
  instance_type               = "t2.medium"
  key_name                    = "dms-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dms-sg-ssh.id}"]
  subnet_id                   = "${aws_subnet.dms-subnet.id}"
  associate_public_ip_address = "true"
  user_data                   = "${base64encode(local.instance-userdata)}"
  root_block_device {
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  tags = {
    Name = "dms-oracle-source"
  }
}

Thereafter, since the target for the migration will be an Aurora cluster, we will create the following resources as well

vim rds.tf

// create the subnet group for the RDS instance
resource "aws_db_subnet_group" "dms-rds-subnet-group" {
  name       = "dms-rds-subnet-group"
  subnet_ids = ["${aws_subnet.dms-subnet.id}", "${aws_subnet.dms-subnet-2.id}"]
}

// create the parameter group (### there is a bug here that would not allow us to
// reference a db_cluster_parameter_group_name without creating a parameter group in the first place)
resource "aws_rds_cluster_parameter_group" "default" {
  name        = "aurora-postgresql10"
  family      = "aurora-postgresql12"
  description = "RDS default cluster parameter group"
}

// create the RDS cluster
resource "aws_rds_cluster" "aws_rds_cluster_dms" {
  backup_retention_period         = "7"
  cluster_identifier              = "aurora-dms"
  db_cluster_parameter_group_name = "default.aurora-postgresql10"
  db_subnet_group_name            = "${aws_db_subnet_group.dms-rds-subnet-group.id}"
  deletion_protection             = "false"
  engine                          = "aurora-postgresql"
  engine_mode                     = "provisioned"
  engine_version                  = "12.4"
  database_name                   = "postgres" ### this database_name is required when connecting, so make sure a database with this name is created
  master_password                 = "${var.db_password}"
  master_username                 = "${var.db_username}"
  port                            = "5432"
  skip_final_snapshot             = true
}

// create the RDS instance
resource "aws_rds_cluster_instance" "aws_db_instance_dms" {
  auto_minor_version_upgrade = "true"
  publicly_accessible        = "false"
  monitoring_interval        = "0"
  instance_class             = "db.r5.large"
  cluster_identifier         = "${aws_rds_cluster.aws_rds_cluster_dms.id}"
  identifier                 = "aurora-1-instance-1"
  db_subnet_group_name       = "${aws_db_subnet_group.dms-rds-subnet-group.id}"
  engine                     = "aurora-postgresql"
  engine_version             = "12.4"
}

Notes: For best security practice in Terraform, we will store the variables in a variables.tf file and mark them as sensitive so they are masked during terraform plan or terraform apply (a minimal variables.tf sketch follows the secret.tfvars example below). Lastly, we store the username and password values in a secret.tfvars file

vim secret.tfvars (Provide your own username and password)

db_username = "XXX" 
db_password = "XXXX"

Lastly, for running the AWS Schema Conversion Tool, we'll set up a Windows instance so we can connect via RDP and install the AWS Schema Conversion Tool; we name the file windows.tf

vim windows.tf

// create a windows instance for the AWS SCT
resource "aws_instance" "dms-oracle-sct" {
  ami                         = "ami-07817f5d0e3866d32"
  instance_type               = "t2.micro"
  key_name                    = "dms-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dms-sg-rdp.id}"]
  subnet_id                   = "${aws_subnet.dms-subnet.id}"
  associate_public_ip_address = "true"
  root_block_device {
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  tags = {
    Name = "dms-oracle-sct"
  }
}

Now, let’s build up our infrastructure using Terraform

Initialize the Terraform working directory

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Finding latest version of hashicorp/http...
- Installing hashicorp/http v2.1.0...
- Installed hashicorp/http v2.1.0 (signed by HashiCorp)
- Installing hashicorp/aws v3.34.0...
- Installed hashicorp/aws v3.34.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Warning: Interpolation-only expressions are deprecated

on ec2.tf line 6, in resource "aws_instance" "dms-oracle-source":
6: vpc_security_group_ids = ["${aws_security_group.dms-sg-ssh.id}"]
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 13 more similar warnings elsewhere)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Validate our Terraform configuration

$ terraform validate

Warning: Interpolation-only expressions are deprecated

on ec2.tf line 6, in resource "aws_instance" "dms-oracle-source":
6: vpc_security_group_ids = ["${aws_security_group.dms-sg-ssh.id}"]
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 20 more similar warnings elsewhere)

Success! The configuration is valid, but there were some validation warnings as shown above.

To plan it out, use the -var-file=secret.tfvars option

$ terraform plan -var-file="secret.tfvars"

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

# aws_db_subnet_group.dms-rds-subnet-group will be created
+ resource "aws_db_subnet_group" "dms-rds-subnet-group" {
+ arn = (known after apply)
+ description = "Managed by Terraform"
+ id = (known after apply)
+ name = "dms-rds-subnet-group"
+ name_prefix = (known after apply)
+ subnet_ids = (known after apply)
}
# aws_instance.dms-oracle-sct will be created
+ resource "aws_instance" "dms-oracle-sct" {
+ ami = "ami-07817f5d0e3866d32"
+ arn = (known after apply)
+ associate_public_ip_address = true
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ id = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "dms-key-pair"
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "dms-oracle-sct"
}
+ tenancy = (known after apply)
+ vpc_security_group_ids = (known after apply)
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ enclave_options {
+ enabled = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ root_block_device {
+ delete_on_termination = true
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = 30
+ volume_type = "standard"
}
}
# aws_instance.dms-oracle-source will be created
+ resource "aws_instance" "dms-oracle-source" {
+ ami = "ami-096fda3c22c1c990a"
+ arn = (known after apply)
+ associate_public_ip_address = true
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ id = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.medium"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = "dms-key-pair"
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "dms-oracle-source"
}
+ tenancy = (known after apply)
+ user_data = "d802892e49d3ac5f731448c704cda93789299a09"
+ vpc_security_group_ids = (known after apply)
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ enclave_options {
+ enabled = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ root_block_device {
+ delete_on_termination = true
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = 30
+ volume_type = "standard"
}
}
# aws_internet_gateway.dms-igw will be created
+ resource "aws_internet_gateway" "dms-igw" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "dms-igw"
}
+ vpc_id = (known after apply)
}
# aws_rds_cluster.aws_rds_cluster_dms will be created
+ resource "aws_rds_cluster" "aws_rds_cluster_dms" {
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ backup_retention_period = 7
+ cluster_identifier = "aurora-dms"
+ cluster_identifier_prefix = (known after apply)
+ cluster_members = (known after apply)
+ cluster_resource_id = (known after apply)
+ copy_tags_to_snapshot = false
+ database_name = (known after apply)
+ db_cluster_parameter_group_name = "default.aurora-postgresql10"
+ db_subnet_group_name = (known after apply)
+ deletion_protection = false
+ enable_http_endpoint = false
+ endpoint = (known after apply)
+ engine = "aurora-postgresql"
+ engine_mode = "provisioned"
+ engine_version = "12.4"
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ kms_key_id = (known after apply)
+ master_password = (sensitive value)
+ master_username = (sensitive)
+ port = 5432
+ preferred_backup_window = (known after apply)
+ preferred_maintenance_window = (known after apply)
+ reader_endpoint = (known after apply)
+ skip_final_snapshot = true
+ storage_encrypted = (known after apply)
+ vpc_security_group_ids = (known after apply)
}
# aws_rds_cluster_instance.aws_db_instance_dms will be created
+ resource "aws_rds_cluster_instance" "aws_db_instance_dms" {
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ auto_minor_version_upgrade = true
+ availability_zone = (known after apply)
+ ca_cert_identifier = (known after apply)
+ cluster_identifier = (known after apply)
+ copy_tags_to_snapshot = false
+ db_parameter_group_name = (known after apply)
+ db_subnet_group_name = (known after apply)
+ dbi_resource_id = (known after apply)
+ endpoint = (known after apply)
+ engine = "aurora-postgresql"
+ engine_version = "12.4"
+ id = (known after apply)
+ identifier = "aurora-1-instance-1"
+ identifier_prefix = (known after apply)
+ instance_class = "db.r5.large"
+ kms_key_id = (known after apply)
+ monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
+ performance_insights_enabled = (known after apply)
+ performance_insights_kms_key_id = (known after apply)
+ port = (known after apply)
+ preferred_backup_window = (known after apply)
+ preferred_maintenance_window = (known after apply)
+ promotion_tier = 0
+ publicly_accessible = false
+ storage_encrypted = (known after apply)
+ writer = (known after apply)
}
# aws_route_table.dms-route will be created
+ resource "aws_route_table" "dms-route" {
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ egress_only_gateway_id = ""
+ gateway_id = (known after apply)
+ instance_id = ""
+ ipv6_cidr_block = ""
+ local_gateway_id = ""
+ nat_gateway_id = ""
+ network_interface_id = ""
+ transit_gateway_id = ""
+ vpc_endpoint_id = ""
+ vpc_peering_connection_id = ""
},
]
+ tags = {
+ "Name" = "dms-igw"
}
+ vpc_id = (known after apply)
}
# aws_route_table_association.subnet-association will be created
+ resource "aws_route_table_association" "subnet-association" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_security_group.dms-sg-rdp will be created
+ resource "aws_security_group" "dms-sg-rdp" {
+ arn = (known after apply)
+ description = "Allow RDP inbound traffic"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "72.137.76.221/32",
]
+ description = ""
+ from_port = 3389
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 3389
},
]
+ name = "dms-sg-rdp"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags = {
+ "Name" = "dms-sg-rdp"
}
+ vpc_id = (known after apply)
}
# aws_security_group.dms-sg-ssh will be created
+ resource "aws_security_group" "dms-sg-ssh" {
+ arn = (known after apply)
+ description = "Allow SSH inbound traffic"
+ egress = [
+ {
+ cidr_blocks = [
+ "0.0.0.0/0",
]
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ "72.137.76.221/32",
]
+ description = ""
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 22
},
]
+ name = "dms-sg-ssh"
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags = {
+ "Name" = "dms-sg-ssh"
}
+ vpc_id = (known after apply)
}
# aws_subnet.dms-subnet will be created
+ resource "aws_subnet" "dms-subnet" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.1.0/24"
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "dms-subnet"
}
+ tags_all = {
+ "Name" = "dms-subnet"
}
+ vpc_id = (known after apply)
}
# aws_subnet.dms-subnet-2 will be created
+ resource "aws_subnet" "dms-subnet-2" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.2.0/24"
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "dms-subnet-2"
}
+ tags_all = {
+ "Name" = "dms-subnet-2"
}
+ vpc_id = (known after apply)
}
# aws_vpc.dms-vpc will be created
+ resource "aws_vpc" "dms-vpc" {
+ arn = (known after apply)
+ assign_generated_ipv6_cidr_block = false
+ cidr_block = "10.0.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = true
+ enable_dns_support = true
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "dms-vpc"
}
+ tags_all = {
+ "Name" = "dms-vpc"
}
}
Plan: 13 to add, 0 to change, 0 to destroy.

Warning: Interpolation-only expressions are deprecated

on ec2.tf line 6, in resource "aws_instance" "dms-oracle-source":
6: vpc_security_group_ids = ["${aws_security_group.dms-sg-ssh.id}"]
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 20 more similar warnings elsewhere)

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Now we will apply our Terraform configuration

$ terraform apply -var-file=secret.tfvars
tls_private_key.this: Refreshing state... [id=325f59e5d39e1864111ef61067e2e2bba9677c68]
aws_rds_cluster_parameter_group.default: Refreshing state... [id=aurora-postgresql10]
module.key_pair.aws_key_pair.this[0]: Refreshing state... [id=dms-key-pair]
aws_vpc.dms-vpc: Refreshing state... [id=vpc-059a7214f5205bfdf]
aws_subnet.dms-subnet-2: Refreshing state... [id=subnet-0a944ba71ca0f5686]
aws_internet_gateway.dms-igw: Refreshing state... [id=igw-0d70af02f32a20372]
aws_subnet.dms-subnet: Refreshing state... [id=subnet-0c8dbde9b0ef7332f]
aws_security_group.dms-sg-ssh: Refreshing state... [id=sg-01c0d94722827a86c]
aws_security_group.dms-sg-rdp: Refreshing state... [id=sg-0fa97e6537b716e97]
aws_instance.dms-oracle-source: Refreshing state... [id=i-0c2ac5e6125aadc18]
aws_route_table.dms-route: Refreshing state... [id=rtb-0dcbb8f5c679a6748]
aws_db_subnet_group.dms-rds-subnet-group: Refreshing state... [id=dms-rds-subnet-group]
aws_instance.dms-oracle-sct: Refreshing state... [id=i-0930a8e7c3f0c7ece]
aws_route_table_association.subnet-association: Refreshing state... [id=rtbassoc-0c6ba7f92d7f715f4]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
Terraform will perform the following actions:

# aws_rds_cluster.aws_rds_cluster_dms will be created
+ resource "aws_rds_cluster" "aws_rds_cluster_dms" {
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ backup_retention_period = 7
+ cluster_identifier = "aurora-dms"
+ cluster_identifier_prefix = (known after apply)
+ cluster_members = (known after apply)
+ cluster_resource_id = (known after apply)
+ copy_tags_to_snapshot = false
+ database_name = (known after apply)
+ db_cluster_parameter_group_name = "aurora-postgresql10"
+ db_subnet_group_name = "dms-rds-subnet-group"
+ deletion_protection = false
+ enable_http_endpoint = false
+ endpoint = (known after apply)
+ engine = "aurora-postgresql"
+ engine_mode = "provisioned"
+ engine_version = "12.4"
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ kms_key_id = (known after apply)
+ master_password = (sensitive value)
+ master_username = (sensitive)
+ port = 5432
+ preferred_backup_window = (known after apply)
+ preferred_maintenance_window = (known after apply)
+ reader_endpoint = (known after apply)
+ skip_final_snapshot = true
+ storage_encrypted = (known after apply)
+ vpc_security_group_ids = (known after apply)
}
# aws_rds_cluster_instance.aws_db_instance_dms will be created
+ resource "aws_rds_cluster_instance" "aws_db_instance_dms" {
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ auto_minor_version_upgrade = true
+ availability_zone = (known after apply)
+ ca_cert_identifier = (known after apply)
+ cluster_identifier = (known after apply)
+ copy_tags_to_snapshot = false
+ db_parameter_group_name = (known after apply)
+ db_subnet_group_name = "dms-rds-subnet-group"
+ dbi_resource_id = (known after apply)
+ endpoint = (known after apply)
+ engine = "aurora-postgresql"
+ engine_version = "12.4"
+ id = (known after apply)
+ identifier = "aurora-1-instance-1"
+ identifier_prefix = (known after apply)
+ instance_class = "db.r5.large"
+ kms_key_id = (known after apply)
+ monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
+ performance_insights_enabled = (known after apply)
+ performance_insights_kms_key_id = (known after apply)
+ port = (known after apply)
+ preferred_backup_window = (known after apply)
+ preferred_maintenance_window = (known after apply)
+ promotion_tier = 0
+ publicly_accessible = false
+ storage_encrypted = (known after apply)
+ writer = (known after apply)
}
# aws_rds_cluster_parameter_group.default must be replaced
-/+ resource "aws_rds_cluster_parameter_group" "default" {
~ arn = "arn:aws:rds:us-east-1:464392538707:cluster-pg:aurora-postgresql10" -> (known after apply)
~ family = "aurora5.6" -> "aurora-postgresql12" # forces replacement
~ id = "aurora-postgresql10" -> (known after apply)
name = "aurora-postgresql10"
+ name_prefix = (known after apply)
- tags = {} -> null
# (1 unchanged attribute hidden)
}
Plan: 3 to add, 0 to change, 1 to destroy.

Warning: Interpolation-only expressions are deprecated

on ec2.tf line 18, in resource "aws_instance" "dms-oracle-source":
18: vpc_security_group_ids = ["${aws_security_group.dms-sg-ssh.id}"]
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 21 more similar warnings elsewhere)

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

aws_rds_cluster_parameter_group.default: Destroying... [id=aurora-postgresql10]
aws_rds_cluster_parameter_group.default: Destruction complete after 1s
aws_rds_cluster_parameter_group.default: Creating...
aws_rds_cluster_parameter_group.default: Creation complete after 1s [id=aurora-postgresql10]
aws_rds_cluster.aws_rds_cluster_dms: Creating...
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [10s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [20s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [30s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [40s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [50s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [1m0s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Still creating... [1m10s elapsed]
aws_rds_cluster.aws_rds_cluster_dms: Creation complete after 1m15s [id=aurora-dms]
aws_rds_cluster_instance.aws_db_instance_dms: Creating...
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [1m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [1m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [1m20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [1m30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [1m40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [1m50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [2m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [2m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [2m20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [2m30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [2m40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [2m50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [3m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [3m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [3m20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [3m30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [3m40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [3m50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [4m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [4m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [4m20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [4m30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [4m40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [4m50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [5m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [5m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [5m20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [5m30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [5m40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [5m50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [6m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [6m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [6m20s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [6m30s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [6m40s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [6m50s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [7m0s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Still creating... [7m10s elapsed]
aws_rds_cluster_instance.aws_db_instance_dms: Creation complete after 7m15s [id=aurora-1-instance-1]
Apply complete! Resources: 3 added, 0 changed, 1 destroyed.

Upon creation of our resources, we head to the connect page for an RDP connection to the Windows Server. This part is a bit tricky.

Firstly, you need to locate your private key in your local environment as shown below. You may grep for private_key in the terraform.tfstate file (Notes: this is why you should be very vigilant about passwords and credentials in this file)

$ cat terraform.tfstate | grep -i private_key
"type": "tls_private_key",
"private_key_pem": "-----BEGIN RSA PRIVATE KEY-----\nMIIEogIBAAKCAQEA4BVzSja4wqvcfcy5kcuDwmqLdqtYuIF0TWLtlvUiaN1AZ8ID\nOJYx(deleted parts of it for security )p9T8f8h2M3he/AWgpJH\nzrvWkfS8Un3lawQJr/7rlRu6Fo0rBz8DrNTX0c77cBbL85xNh477z8tFImMbjT43\n5EKfJwZH4pWd0HHdcBNwy5ywYvcFFCA+VXfuLucU8thqCxkXPwn0j7vnQiLjk19G\ndGX4B5WTp6f5LZm0Ynmwp5lEg3Veeecvt8qOpliBOauDOLsaQD5U6AC9fem9/1uJ\nE9wXcJkCgYEA/i7PB5nQT2H9vyP9MtkKK1KOPVvbusRDtWgdswyJAB2jVJTyY6GS\nGKSWOUFfJI0rm4Bvftxt0VD2MNZhqwhys35DgUKMd3IrhHU9cGv23NjJQ7cy1ecq\ngR59fb/kmcjT7tBDFtxUT1WzSoDJfSDmzkSLzMIKscTNiVS2R/zFng8CgYEA4a+O\nV6rOVhBmp/Wc3LijXyr0JhZplw05CSk4TeE2yzMY9PlMWzy7J9jorimjXoPAg8AT\n2TJ0wmrMpNdexquKb60xJPvNmJ9d/rIgKeLTfl/Vd4EUQl++Huc9SkZ9ZMXoeAqe\n6G+peVWMF2f4HYXlrOkQYQCEPsurczoCMxjHqX0CgYBwrVdhSzIovou5u752V/hG\nFCax1JKnTHGnbSwdPyVMQ9cvm4eH2wvkmLFvWCdRELOQD3NdjWGxNG6uX5qUMv6F\nyycpmdKi8J2R7lb6CyI37HHr7r4+TGdvLZD7uaEg+wHYD8Jt0+Yb9SWxlT28lmU6\ncvB2KF6NR2zFwCO97bO8yQKBgBRYUyisCTXQ/LAfgCiVrISjxqa4VoR7eKzOvnim\n2N2wmYtb/form2OYNkGdF1Ep52z5H9Dwr33nStOBZtXaGPzATDHdUUd09nBDdorQ\nG+jEkuXXCRCCuQzoI6pSeHNhM/e+XVzu1ARQJfTmNoPS0kWoLQXRmhpfGfGlRRV+\nImGxAoGAFNDXYQqtKG+I8w7uOZzxyioTEoeVGCR6OEth7X5USJD8xWVOmjhI9QxP\nTvyjtIju4dF/5tXlJHSjoq/GEdxYwV5eSIcBX5ncx0Xc4yHeTMY3EMyh2kczmOWS\nWwvnOYA0ZD87/wPfUknrJo+QYLhVDE4TXnYtz17oQBo0th1Ylzc=\n-----END RSA PRIVATE KEY-----\n",
"tls_private_key.this"

After this, you need to save it in a text editor as a .pem file. (Notes: each literal \n needs to be deleted and replaced with an actual line break, as shown below; see the sketch after the key for a way to avoid this manual step)

-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA4BVzSja4wqvcfcy5kcuDwmqLdqtYuIF0TWLtlvUiaN1AZ8ID
OJYxpXIPgeWVVhgrIbMqhCq0WfVqZgAqa9GlWCk7luOslOROZaTwEl5r5mPCgnkO
Xf4Voegxwvur9RAxDiU3VdXWvxYzXN6GGpts5bHzYjLopG2RVjhsi+dzclI5Edwg
AHPhWyCE3AkTsh5dt9ih2K+Q0oibqIdXBkMWv+1vTmEqwX4OnEQl+3S3q+igosa7
7/ESiQ8HeD6SnIBBKNVq8299EHMvcRcY5OfNTdXGS+xxsYtaYhn7K0SyzxvVbsvf
BZNehJGxW+0PZMcFIzWtbQx/3hHfWyZxJ38UUwIDAQABAoIBAAd6T8WbO+ErMQl1
(Deleted a few lines for security)
GKSWOUFfJI0rm4Bvftxt0VD2MNZhqwhys35DgUKMd3IrhHU9cGv23NjJQ7cy1ecq
gR59fb/kmcjT7tBDFtxUT1WzSoDJfSDmzkSLzMIKscTNiVS2R/zFng8CgYEA4a+O
V6rOVhBmp/Wc3LijXyr0JhZplw05CSk4TeE2yzMY9PlMWzy7J9jorimjXoPAg8AT
2TJ0wmrMpNdexquKb60xJPvNmJ9d/rIgKeLTfl/Vd4EUQl++Huc9SkZ9ZMXoeAqe
6G+peVWMF2f4HYXlrOkQYQCEPsurczoCMxjHqX0CgYBwrVdhSzIovou5u752V/hG
FCax1JKnTHGnbSwdPyVMQ9cvm4eH2wvkmLFvWCdRELOQD3NdjWGxNG6uX5qUMv6F
yycpmdKi8J2R7lb6CyI37HHr7r4+TGdvLZD7uaEg+wHYD8Jt0+Yb9SWxlT28lmU6
cvB2KF6NR2zFwCO97bO8yQKBgBRYUyisCTXQ/LAfgCiVrISjxqa4VoR7eKzOvnim
2N2wmYtb/form2OYNkGdF1Ep52z5H9Dwr33nStOBZtXaGPzATDHdUUd09nBDdorQ
G+jEkuXXCRCCuQzoI6pSeHNhM/e+XVzu1ARQJfTmNoPS0kWoLQXRmhpfGfGlRRV+
ImGxAoGAFNDXYQqtKG+I8w7uOZzxyioTEoeVGCR6OEth7X5USJD8xWVOmjhI9QxP
TvyjtIju4dF/5tXlJHSjoq/GEdxYwV5eSIcBX5ncx0Xc4yHeTMY3EMyh2kczmOWS
WwvnOYA0ZD87/wPfUknrJo+QYLhVDE4TXnYtz17oQBo0th1Ylzc=
-----END RSA PRIVATE KEY-----
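If you would rather skip cleaning up the \n sequences by hand, a hypothetical alternative (not part of the original project files) is to expose the key as a sensitive Terraform output:

// outputs.tf -- hypothetical helper: exposes the generated private key so it
// can be written straight to a .pem file instead of being copied out of the state
output "oracle_private_key_pem" {
  value     = tls_private_key.this.private_key_pem
  sensitive = true
}

After another terraform apply, something like terraform output -raw oracle_private_key_pem > dms-key-pair.pem (the -raw flag exists since Terraform 0.14) should write a properly formatted key file.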

Click Get password on the following page and paste this private key into the box; a password will then be decrypted for you

Password decrypted

Download the remote desktop file from the page above and input the decrypted password

Login console
Confirmation page
Successfully login

You may encounter issues while downloading in the IE browser on the EC2 Windows Server. Please navigate to and click Internet options as shown below

Internet options located

Under Internet icon, click Custom level

Custom level clicked

Then we need to enable two items: File download and Active scripting, respectively

Enable file download
Enable active scripting

Notes: If the EC2 Windows Server does not have enough memory, you will encounter the error shown below

So we have no alternative but to update our infrastructure and choose a Windows Server instance type with more memory

Going back to our windows.tf file, we update the instance_type as shown below

// create a windows instance for the AWS SCT
resource "aws_instance" "dms-oracle-sct" {
  ami                         = "ami-07817f5d0e3866d32"
  instance_type               = "t2.medium"
  key_name                    = "dms-key-pair"
  vpc_security_group_ids      = ["${aws_security_group.dms-sg-rdp.id}"]
  subnet_id                   = "${aws_subnet.dms-subnet.id}"
  associate_public_ip_address = "true"
  root_block_device {
    volume_size           = "30"
    volume_type           = "standard"
    delete_on_termination = "true"
  }
  tags = {
    Name = "dms-oracle-sct"
  }
}

Now, we run terraform plan to check what will be updated

$ terraform plan -var-file=secret.tfvars
tls_private_key.this: Refreshing state... [id=325f59e5d39e1864111ef61067e2e2bba9677c68]
aws_vpc.dms-vpc: Refreshing state... [id=vpc-059a7214f5205bfdf]
aws_rds_cluster_parameter_group.default: Refreshing state... [id=aurora-postgresql10]
module.key_pair.aws_key_pair.this[0]: Refreshing state... [id=dms-key-pair]
aws_subnet.dms-subnet-2: Refreshing state... [id=subnet-0a944ba71ca0f5686]
aws_security_group.dms-sg-ssh: Refreshing state... [id=sg-01c0d94722827a86c]
aws_internet_gateway.dms-igw: Refreshing state... [id=igw-0d70af02f32a20372]
aws_subnet.dms-subnet: Refreshing state... [id=subnet-0c8dbde9b0ef7332f]
aws_security_group.dms-sg-rdp: Refreshing state... [id=sg-0fa97e6537b716e97]
aws_route_table.dms-route: Refreshing state... [id=rtb-0dcbb8f5c679a6748]
aws_instance.dms-oracle-source: Refreshing state... [id=i-0c2ac5e6125aadc18]
aws_instance.dms-oracle-sct: Refreshing state... [id=i-0930a8e7c3f0c7ece]
aws_db_subnet_group.dms-rds-subnet-group: Refreshing state... [id=dms-rds-subnet-group]
aws_route_table_association.subnet-association: Refreshing state... [id=rtbassoc-0c6ba7f92d7f715f4]
aws_rds_cluster.aws_rds_cluster_dms: Refreshing state... [id=aurora-dms]
aws_rds_cluster_instance.aws_db_instance_dms: Refreshing state... [id=aurora-1-instance-1]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:

# aws_instance.dms-oracle-sct will be updated in-place
~ resource "aws_instance" "dms-oracle-sct" {
id = "i-0930a8e7c3f0c7ece"
~ instance_type = "t2.micro" -> "t2.medium"
tags = {
"Name" = "dms-oracle-sct"
}
# (26 unchanged attributes hidden)
# (4 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.

Warning: Interpolation-only expressions are deprecated

on ec2.tf line 18, in resource "aws_instance" "dms-oracle-source":
18: vpc_security_group_ids = ["${aws_security_group.dms-sg-ssh.id}"]
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 21 more similar warnings elsewhere)

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Then, we are ready to apply it using terraform apply

$ terraform apply -var-file=secret.tfvars
tls_private_key.this: Refreshing state... [id=325f59e5d39e1864111ef61067e2e2bba9677c68]
aws_rds_cluster_parameter_group.default: Refreshing state... [id=aurora-postgresql10]
module.key_pair.aws_key_pair.this[0]: Refreshing state... [id=dms-key-pair]
aws_vpc.dms-vpc: Refreshing state... [id=vpc-059a7214f5205bfdf]
aws_security_group.dms-sg-ssh: Refreshing state... [id=sg-01c0d94722827a86c]
aws_security_group.dms-sg-rdp: Refreshing state... [id=sg-0fa97e6537b716e97]
aws_internet_gateway.dms-igw: Refreshing state... [id=igw-0d70af02f32a20372]
aws_subnet.dms-subnet-2: Refreshing state... [id=subnet-0a944ba71ca0f5686]
aws_subnet.dms-subnet: Refreshing state... [id=subnet-0c8dbde9b0ef7332f]
aws_route_table.dms-route: Refreshing state... [id=rtb-0dcbb8f5c679a6748]
aws_db_subnet_group.dms-rds-subnet-group: Refreshing state... [id=dms-rds-subnet-group]
aws_instance.dms-oracle-sct: Refreshing state... [id=i-0930a8e7c3f0c7ece]
aws_instance.dms-oracle-source: Refreshing state... [id=i-0c2ac5e6125aadc18]
aws_route_table_association.subnet-association: Refreshing state... [id=rtbassoc-0c6ba7f92d7f715f4]
aws_rds_cluster.aws_rds_cluster_dms: Refreshing state... [id=aurora-dms]
aws_rds_cluster_instance.aws_db_instance_dms: Refreshing state... [id=aurora-1-instance-1]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:

# aws_instance.dms-oracle-sct will be updated in-place
~ resource "aws_instance" "dms-oracle-sct" {
id = "i-0930a8e7c3f0c7ece"
~ instance_type = "t2.micro" -> "t2.medium"
tags = {
"Name" = "dms-oracle-sct"
}
# (26 unchanged attributes hidden)
# (4 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.

Warning: Interpolation-only expressions are deprecated

on ec2.tf line 18, in resource "aws_instance" "dms-oracle-source":
18: vpc_security_group_ids = ["${aws_security_group.dms-sg-ssh.id}"]
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 21 more similar warnings elsewhere)

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

aws_instance.dms-oracle-sct: Modifying... [id=i-0930a8e7c3f0c7ece]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 10s elapsed]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 20s elapsed]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 30s elapsed]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 40s elapsed]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 50s elapsed]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 1m0s elapsed]
aws_instance.dms-oracle-sct: Still modifying... [id=i-0930a8e7c3f0c7ece, 1m10s elapsed]
aws_instance.dms-oracle-sct: Modifications complete after 1m13s [id=i-0930a8e7c3f0c7ece]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Now we are able to resolve the issue and open our tool! (The link to download this AWS Schema Conversion Tool is found here)

Upon downloading, we open the tool

Open the tool

You may need to accept the terms

Terms page

Fill out Choose a source page as shown below

Choose a source page

Apart from this, to gain access to our Oracle database sitting on the EC2 Red Hat 7 server, we have to edit our inbound rules as shown below (a Terraform sketch of the equivalent rule follows the screenshot). The erased area should be filled in with your own IP address

Security group of oracle database on redhat7
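If you prefer to keep this rule in code rather than clicking through the console, a hypothetical Terraform addition could look like the sketch below; it opens the Oracle listener port from your own IP on the existing SSH security group (adjust the source CIDR to wherever SCT actually runs):

// hypothetical addition mirroring the console change above: allow the Oracle
// listener port (1521) on the security group of the source instance
resource "aws_security_group_rule" "dms-oracle-listener" {
  type              = "ingress"
  from_port         = 1521
  to_port           = 1521
  protocol          = "tcp"
  cidr_blocks       = local.my_ip4address
  security_group_id = aws_security_group.dms-sg-ssh.id
}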

To connect to Oracle, we need to provide the server name (your EC2 public DNS), the port (default 1521), the Oracle SID (xe), the user name (system), and the password (manager). Just a reminder: apart from your DNS, you should have the same inputs if you are using the code provided in this project
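Rather than looking the public DNS up in the console each time, a hypothetical output (not in the original project files) can surface it after terraform apply:

// hypothetical addition: print the Oracle source host's public DNS after apply
output "oracle_source_public_dns" {
  value = aws_instance.dms-oracle-source.public_dns
}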

Here is an example of how to find the user name and password in our code

sqlplus system/manager@localhost/XEPDB1

system is the username and manager is the password

Blank page for connection

For the Oracle driver path, you may download the driver here

Ultimately, we get it connected and click Next

Successfully connected

dip was chosen here, but you are free to choose any of the schemas listed here

Choose a schema

Database migration assessment report generated

Assessment page

Another driver is required for PostgreSQL; you can download it here

Locate your driver under downloads

Fill out your target

Target page

Notes: the server name is the writer endpoint of your RDS database, the server port is 5432, and the database is postgres (make sure you do create a database with a name when using Terraform! I was stuck here for a day figuring that out!); the username and password are the ones created along with this database (a hypothetical output that surfaces the writer endpoint is sketched below)
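Similar to the public DNS output sketched earlier, a hypothetical output (not in the original project files) can surface the writer endpoint so you do not have to dig through the RDS console:

// hypothetical addition: print the Aurora writer endpoint after apply
output "aurora_writer_endpoint" {
  value = aws_rds_cluster.aws_rds_cluster_dms.endpoint
}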

Voila, we ultimately get our databases connected!

Databases connected
Convert schema

To cross-check our RDS Aurora PostgreSQL, click Modify under the writer of your database

Writer of your database

Change public access to Publicly accessible, since our default setup was Not publicly accessible

Publicly accessible

We also need to add your IP address, which is erased here for security reasons, as the source for PostgreSQL on port 5432, so we can connect to Aurora PostgreSQL from Red Hat 8 (a Terraform sketch of an equivalent security group follows the screenshot)

Add your ip address
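The console change above can also be expressed in Terraform. The sketch below is a hypothetical security group (not in the original project files) that would additionally need to be attached to the cluster via vpc_security_group_ids on aws_rds_cluster:

// hypothetical addition: allow PostgreSQL (5432) from my IP so the Aurora
// writer endpoint is reachable from outside the VPC once it is publicly accessible
resource "aws_security_group" "dms-sg-postgres" {
  name        = "dms-sg-postgres"
  description = "Allow PostgreSQL inbound traffic"
  vpc_id      = "${aws_vpc.dms-vpc.id}"
  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = local.my_ip4address
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "dms-sg-postgres"
  }
}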

To log in from the local environment

$ psql -h aurora-dms.cluster-cmafadkjqola.us-east-1.rds.amazonaws.com -U adminuser -d postgres
Password for user adminuser:
psql (10.15, server 12.4)
WARNING: psql major version 10, server major version 12.
Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=>

Notes: You need to provide -h (the writer endpoint of your Aurora PostgreSQL database), -U (the master user name), and -d (the database name); the password will be prompted for

Find the schemas that were transferred

postgres=> \dn
        List of schemas
        Name        |   Owner
--------------------+-----------
 aws_oracle_context | adminuser
 aws_oracle_data    | adminuser
 aws_oracle_ext     | adminuser
 dip                | adminuser
 public             | adminuser

In case you're keen on learning more about Postgres databases, all the command lines are here for you!

Clean up the infrastructure:

Now it’s the time to feel the power of Terraform

$ terraform destroy -var-file=secret.tfvars
.
.
.
Destroy complete! Resources: 16 destroyed.

Just to cross check in AWS console

RDS database
Ec2 terminated

Conclusion:

Project infrastructure

Let us conclude this project with the project diagram. In this project, we accomplished a data transfer from a source server (Red Hat 7.6) with an Oracle database installed to a target server (RDS Aurora PostgreSQL) using Terraform. We also used a Windows Server sitting in an EC2 instance as a medium for this data transfer with the AWS Schema Conversion Tool

This whole project is in the cloud on AWS, but you can definitely use your own Windows system to transfer data from an on-premise server to Aurora PostgreSQL

Let us discuss the meat one last time: why Terraform?

Throughout the project, we've seen how Terraform deployed our whole infrastructure seamlessly at the very beginning. There was also an error we had to fix, so we updated our .tf file and applied our infrastructure again. At the end of the day, we ran terraform destroy to clean up the whole infrastructure without any manual work

With that said, are you ready to Terraform it in Cloud?


Paul Zhao
Paul Zhao Projects

Amazon Web Services Certified Solutions Architect Professional & DevOps Engineer