Create a Read Replica of an external source/master (AWS) in GCP CloudSQL using GCP Database Migration Service (DMS)

Rohan Singh
Google Cloud - Community
7 min read · Nov 20, 2023

Lately, I had an excellent opportunity to work with a client's DB team to manage petabytes of crucial data in monstrous AWS RDS and MySQL-on-EC2 instances. For a few MySQL EC2 instances, there was a requirement to provision a read replica in GCP CloudSQL. We couldn't afford even a single minute of downtime, data access was limited, the data ran into petabytes, and since the GCP CloudSQL instance would be a read replica, we needed continuous, incremental, and smooth replication of new data. The manual way was out of the equation.

For such cases, the Google Cloud Platform offers an incredible and user-friendly offering called Database Migration Service, or DMS. Using DMS, we accomplished this hefty task so seamlessly that words barely do it justice. Kudos to GCP! 11 out of 10.

In this solution blog, I demonstrate how to configure a cross-cloud read replica of MySQL in GCP with a master running in AWS.

Source: Adobe Firefly (two machines connected with each other over clouds in glitch art effect)

MySQL GTID mode values ON_PERMISSIVE and OFF_PERMISSIVE are not supported with GCP DMS. Make sure that gtid_mode is set to either ON or OFF. GCP doesn't allow us to change the GTID mode value in the CloudSQL instance, and if the source doesn't use GTID, then follow Configure Cloud SQL and the external server for replication to create a CloudSQL read replica of an external source.
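
You can confirm the source's GTID mode up front with the mysql client (a quick check; the host and user are placeholders):

mysql -h <source-host> -u <user> -p -e "SHOW VARIABLES LIKE 'gtid_mode';"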

Assumptions

  • VPN connectivity between GCP and other cloud vendors/on-prem
  • Private Service Access in the GCP VPC (a quick verification is sketched below)
  • MySQL already running outside of GCP (I know, I know…)
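
To verify the Private Service Access assumption, you can list the service connections on your VPC from CloudShell (a sketch; the network name is a placeholder):

gcloud services vpc-peerings list --network=<your-vpc>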

GCP Database Migration Service aka DMS

GCP Database Migration Service lets you simplify the migration of databases from anywhere to GCP. As of now, GCP DMS offers two types of migration jobs — Continuous (real-time sync between source and destination) and One-time (take a full dump from the source and load it into the destination).

GCP Database Migration Service

Since our CloudSQL instance will act as a read replica of the AWS EC2 database, the Continuous migration job type is chosen.

Let’s start by creating a connection profile in GCP DMS. A connection profile is where you store source database information such as hostname, username, and password.

From GCP Console ➜ Database Migration ➜ Connection profile and Create a connection profile by entering source database information.
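
If you prefer the CLI, the same connection profile can be created with gcloud (a minimal sketch; the profile name, region, host, and user below are placeholders, not values from this setup):

gcloud database-migration connection-profiles create mysql aws-ec2-mysql-source \
    --region=us-east1 \
    --host=<aws-ec2-private-ip> \
    --port=3306 \
    --username=dms_user \
    --prompt-for-password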

GCP DMS Connection Profile

Note: Here the Database Engine is MySQL because our source DB was running MySQL on AWS EC2 instances. The Database Engine could change depending on your source DB. The private or internal IP of the AWS EC2 instance is entered in Hostname or IP Address.

The username you enter here must have the required privileges (EXECUTE, SELECT, SHOW VIEW, REPLICATION CLIENT, RELOAD) on the external source database.
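
For example, a minimal sketch of creating such a user on the source (dms_user and the password are placeholders I am assuming here):

mysql -h <source-host> -u admin -p -e "
CREATE USER 'dms_user'@'%' IDENTIFIED BY '<strong-password>';
GRANT EXECUTE, SELECT, SHOW VIEW, REPLICATION CLIENT, RELOAD ON *.* TO 'dms_user'@'%';
FLUSH PRIVILEGES;"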

After creating a connection profile, it is time to set up the migration job. The migration job configuration has five stages, which include connectivity testing.

The first step is to fill in the migration job information.

Migration Jobs Configuration

In the second step, select the connection profile created above. Nothing else needs to be done in this step.

The third step is about creating the destination CloudSQL instance, which will be the read replica. In my case, the source DB version was MySQL 5.7, hence the Enterprise SQL edition was chosen. You can choose accordingly.

Cloud SQL Editions Comparison — Enterprise vs Enterprise Plus

For the connections, uncheck Public IP and check Private IP.

CloudSQL Connections

Continue choosing Machine shapes, Storage, and DB Flags(if required).

Once you hit Create, the CloudSQL read replica instance is created with replication disabled, along with its external source representation.

GCP Cloud Instance Read Replica along with its External Source Representation

In the fourth step, the connectivity method needs to be set. Select VPC Peering as the connectivity method and choose the VPC.

The fifth and final step presents a review page with the necessary information. This is where the connectivity between the external source and the GCP destination is tested.

Prior to testing, we need to do some networking work (CLI equivalents are sketched after the list) —

  1. Enable custom routes in Private Service Access. VPC network ➜ your vpc ➜ Private Service Access ➜ Private Connections to Service and enable Export Custom Routes.
  2. Allow custom routes in VPC Peering. VPC network ➜ VPC Network Peering ➜ servicenetworking-googleapis-com. Edit and enable Exchange IPv4 custom routes.
  3. Whitelist or allow the allocated internal IP range of CloudSQL Private Service Access in the AWS Security Group Inbound Rule. VPC ➜ Security Groups ➜ your security group attached to the DB. Add CIDR range.
  4. Add or update the allocated internal IP range of CloudSQL Private Service Access in AWS VPN Connections. VPC ➜ VPN Connections ➜ your VPN. Edit Static routes.
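
For reference, the GCP side of steps 1 and 2 and the AWS side of step 3 can also be done from the CLI (a sketch; the network, range, and security group values are placeholders):

# Steps 1 and 2: export custom routes over the servicenetworking peering
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=<your-vpc> \
    --export-custom-routes

# Find the allocated Private Service Access range for steps 3 and 4
gcloud compute addresses list --filter="purpose=VPC_PEERING" \
    --format="value(name,address,prefixLength)"

# Step 3: allow that CIDR range into MySQL in the AWS security group
aws ec2 authorize-security-group-ingress \
    --group-id <your-security-group-id> \
    --protocol tcp --port 3306 \
    --cidr <allocated-range-cidr>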

Phewww!!! Quite a bit of work.

Let’s test the connectivity now.

Test and create migration job

DMS will show all errors and warnings after this.

Some of the errors that we received are (quick source-side checks are sketched after the list) —

  • UNSUPPORTED_BINLOG_FORMAT. Set BINLOG_FORMAT to ROW. Configuring the binary log to any other format, such as STATEMENT or MIXED, might cause the replication to fail.
  • UNSUPPORTED_STORAGE_ENGINE. Currently, only the InnoDB storage engine is well supported for migration; change the storage engine in the AWS source DB.
  • Definer is not supported. Update the DEFINER clause to INVOKER on your source MySQL instance prior to setting up your migration job. Read in detail here.
  • The user doesn’t exist in the replica. Create the user mentioned in the error in the read replica.
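
Before re-running the test, the first two errors can be quickly checked on the source (a sketch, assuming the replication user from earlier):

mysql -h <source-host> -u dms_user -p -e "SHOW VARIABLES LIKE 'binlog_format';"

mysql -h <source-host> -u dms_user -p -e "
SELECT table_schema, table_name, engine FROM information_schema.tables
WHERE engine <> 'InnoDB'
AND table_schema NOT IN ('mysql','information_schema','performance_schema','sys');"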

The CloudSQL MySQL Diagnose page lists all issues that DMS might throw, with a solution next to each.

Once everything is configured properly, the test will pass.

Test the migration job successful message

Now hit Create Job.

Create & Start Migration Job Pop-up

Based on our data size (approx 500 GB), we had a lock for 1–2 minutes. The amount of time the database stays locked depends on the size of the database and the amount of data that needs to be read. Nonetheless, it would be a good idea to have a DBA check this number before moving forward if you are an infra person like me; if you're a DBA, then you are great.

Once you hit Create & Start, replication is enabled, which you can easily see in the DMS window. DMS shows a monitoring graph directly in the migration job window.

DMS Migration Job Running

In our case, the migration failed after 70% of the data had replicated. Upon investigation, we found that the source database had tables with large BLOBs, which caused the error "Lost connection to MySQL server during query when dumping table"; incrementing max_allowed_packet was required. We fixed it and restarted the replication.
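
To illustrate, the value can be inspected and raised on the source without a restart (the 1 GB value is only an example; have your DBA pick the right number):

mysql -h <source-host> -u dms_user -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
mysql -h <source-host> -u admin -p -e "SET GLOBAL max_allowed_packet = 1073741824;"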

Add more read replicas under the same external source

Creating more than one read replica under the same external source in GCP can be accomplished, but not via DMS.

From GCP CloudShell, create a file named replica.json and add —

{
  "settings": {
    "tier": "db-custom-36-98304",
    "dataDiskSizeGb": "100",
    "ipConfiguration": {
      "privateNetwork": "projects/cloudorbit/global/networks/andromeda"
    },
    "availabilityType": "REGIONAL"
  },
  "masterInstanceName": "ec2-cloudorbit-read-replica-master-2-master",
  "region": "us-east1",
  "databaseVersion": "MYSQL_5_7",
  "name": "ec2-cloudorbit-read-replica-master"
}

Save it and run —

gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
    --header 'Content-Type: application/json' \
    --data @/home/rohandash1998/dms-blog-files/replica.json \
    -X POST \
    https://sqladmin.googleapis.com/sql/v1beta4/projects/cloudorbit/instances

Once you run this command, your second read replica instance will be created with replication disabled.
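
You can watch the creation from CloudShell as well (instance name as given in replica.json):

gcloud sql operations list --instance=ec2-cloudorbit-read-replica-master --limit=5
gcloud sql instances describe ec2-cloudorbit-read-replica-master --format="value(state)"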

Second read replica under creation

This will create an instance with a Public IP which you can remove after creation.
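
Removing the public IP afterwards is a single gcloud call (a sketch):

gcloud sql instances patch ec2-cloudorbit-read-replica-master --no-assign-ip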

Two read replicas under the same external source

To enable the replication, I would recommend following the Google Cloud tutorial "Use a dump file to set up replication from external databases". For monitoring the replication, you can use the monitoring dashboard available in the CloudSQL instance itself.
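
Once the dump has been imported per that tutorial, the Admin API's startReplica method can kick off replication on the new instance (a sketch, reusing the ACCESS_TOKEN from above):

curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
    -X POST \
    https://sqladmin.googleapis.com/sql/v1beta4/projects/cloudorbit/instances/ec2-cloudorbit-read-replica-master/startReplica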

Anddd… that’s how you create a read replica of the external master in GCP CloudSQL and add more replicas.

Useful references:

  • GCP Database Migration Service
  • CloudSQL MySQL Diagnose
  • Migration Job Statuses
  • Debugging Connectivity

Read my other Cloud/DevOps/Infra blogs:-

Read industry professional interviews:- Silly Sit-Downs (SSD) with Rohan

Clap if you find this solution informative and useful.

Follow and subscribe to my Medium space to stay tuned for interviews and tech blogs. Till then, have a good day and Sayonara!!!

Rohan Singh
Infrastructure @ SADA | Google Cloud Champion Innovator | Motorcyclist | rohans.dev