Cross-Project Cloud SQL Connection with Private Service Connect and Terraform
As cloud infrastructures evolve, the need for secure and efficient cross-project communication becomes increasingly vital. In this article, we’ll explore how to establish a cross-project Cloud SQL connection using Private Service Connect.
Other connection methods
Before we begin the setup, let’s look at the alternatives for cross-project/cross-VPC Cloud SQL connectivity: Cloud VPN and Cloud SQL Auth Proxy.
Cloud VPN:
- Advantages: Simple setup
- Disadvantages: Performance bottlenecks — limited bandwidth
Cloud SQL Auth Proxy:
- Advantages: Simplified connection, connection authorization with IAM
- Disadvantages: Limited scalability, harder to configure with a larger number of SQL instances
Private Service Connect:
- Advantages: Scalable, secure, and efficient cross-project communication
- Disadvantages: Initial setup may seem complex, limited SQL instance configuration through the GCP Console
You can also use all of the above methods with a Shared VPC.
Setup
Prerequisites:
- Terraform (Google provider ~> 5.7.0)
- gcloud
- Two GCP projects created beforehand
We will set up two projects, “Project A” and “Project B”, with a Cloud SQL MySQL instance in Project A, and the networking components, Private Service Connect components, and a test instance in Project B.
The repository can be found here: https://github.com/atakanttl/cloudsql-psc-terraform.git
Terraform resource descriptions
- Enable necessary APIs in both projects.
locals {
service_list = [
"compute.googleapis.com"
]
}
resource "google_project_service" "prj_a_services" {
project = var.project_a
for_each = toset(local.service_list)
service = each.key
}
resource "google_project_service" "prj_b_services" {
project = var.project_b
for_each = toset(local.service_list)
service = each.key
}
- Create a Cloud SQL (MySQL) instance in Project A.
resource "google_sql_database_instance" "prj_a_mysql_test" {
project = var.project_a
name = "prj-a-mysql-test"
database_version = "MYSQL_8_0"
region = var.region
settings {
tier = "db-g1-small"
ip_configuration {
psc_config {
psc_enabled = true
allowed_consumer_projects = [var.project_a, var.project_b]
}
ipv4_enabled = false
}
backup_configuration {
enabled = true
binary_log_enabled = true
}
availability_type = "REGIONAL"
edition = "ENTERPRISE"
disk_type = "PD_HDD"
disk_size = 10
}
deletion_protection = false
depends_on = [google_project_service.prj_a_services]
}
resource "google_sql_user" "test" {
project = var.project_a
name = "test"
instance = google_sql_database_instance.prj_a_mysql_test.name
host = "%"
password = random_password.mysql_test_password.result
}
resource "random_password" "mysql_test_password" {
length = 12
}
Pay attention to the `ip_configuration` settings: the `psc_config` block is required to set up Private Service Connect, and `allowed_consumer_projects`, as the name implies, takes the list of projects allowed to connect to the Cloud SQL instance.
We also create a MySQL user named “test” that can connect from any host (`%`), with a random password. We will retrieve the password later, after applying the configuration.
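Since the password is generated by Terraform, the repository presumably exposes it through an output (this matches the `terraform output -raw mysql_test_password` command used in the setup steps below); a minimal sketch:

```hcl
# Expose the generated password; marked sensitive so it is
# hidden from plan/apply output and only shown on explicit request.
output "mysql_test_password" {
  value     = random_password.mysql_test_password.result
  sensitive = true
}
```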
- Create networking components in Project B.
Let’s start with a VPC network and a subnet with the IP block of 10.200.0.0/24.
resource "google_compute_network" "prj_b_vpc" {
project = var.project_b
name = "prj-b-vpc"
auto_create_subnetworks = false
mtu = 1460
routing_mode = "REGIONAL"
depends_on = [google_project_service.prj_b_services]
}
resource "google_compute_subnetwork" "prj_b_subnet" {
project = var.project_b
name = "prj-b-subnet"
ip_cidr_range = "10.200.0.0/24"
region = var.region
network = google_compute_network.prj_b_vpc.id
private_ip_google_access = true
}
A Cloud NAT gateway backed by a Cloud Router is also required for Internet access, so that the test instance can install the MySQL client.
resource "google_compute_router" "prj_b_nat_router" {
project = var.project_b
name = "prj-b-nat-router"
region = var.region
network = google_compute_network.prj_b_vpc.id
}
resource "google_compute_router_nat" "prj_b_nat_gw" {
project = var.project_b
name = "prj-b-nat-gw"
router = google_compute_router.prj_b_nat_router.name
region = google_compute_router.prj_b_nat_router.region
nat_ip_allocate_option = "AUTO_ONLY"
source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
Finally, we need a firewall rule that allows SSH to the VM through IAP (Identity-Aware Proxy).
resource "google_compute_firewall" "prj_b_vpc_allow_ssh_via_iap" {
project = var.project_b
name = "prj-b-vpc-allow-ssh-via-iap"
network = google_compute_network.prj_b_vpc.id
direction = "INGRESS"
priority = "1000"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["35.235.240.0/20"]
target_tags = ["ssh"]
}
- Create Private Service Connect Components in Project B.
resource "google_compute_address" "prj_b_psc_address" {
project = var.project_b
name = "prj-b-psc-address"
subnetwork = google_compute_subnetwork.prj_b_subnet.id
address_type = "INTERNAL"
address = "10.200.0.240"
region = var.region
}
resource "google_compute_forwarding_rule" "prj_b_psc_endpoint" {
project = var.project_b
name = "prj-b-psc-endpoint"
region = var.region
load_balancing_scheme = "" # must be an empty string for PSC endpoints
target = google_sql_database_instance.prj_a_mysql_test.psc_service_attachment_link
network = google_compute_network.prj_b_vpc.id
ip_address = google_compute_address.prj_b_psc_address.id
allow_psc_global_access = true
}
The `google_compute_address` resource reserves the internal IP address that we will use to connect to the Cloud SQL instance; it resides in the subnet we created earlier.
The `google_compute_forwarding_rule` resource creates the PSC endpoint for the Cloud SQL instance. Pay attention to the `allow_psc_global_access` and `target` arguments. The target points to the Cloud SQL PSC service attachment link, which looks like “projects/45678/regions/myregion/serviceAttachments/myserviceattachment”.
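Optionally, you can make the instance reachable by its DNS name instead of the hard-coded IP by adding a private Cloud DNS zone in Project B. The sketch below is not part of the repository: the zone and record names are illustrative, it assumes `dns.googleapis.com` is added to the enabled services, and it relies on the `dns_name` attribute that the Google provider exports for PSC-enabled Cloud SQL instances.

```hcl
# Illustrative: private zone covering Cloud SQL PSC DNS names,
# visible only inside Project B's VPC.
resource "google_dns_managed_zone" "prj_b_psc_zone" {
  project    = var.project_b
  name       = "prj-b-psc-zone"
  dns_name   = "sql.goog." # a narrower suffix also works
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.prj_b_vpc.id
    }
  }
}

# Point the instance's DNS name at the PSC endpoint address.
# The record name must be fully qualified (end with a dot);
# append one if the dns_name attribute lacks it.
resource "google_dns_record_set" "prj_b_psc_record" {
  project      = var.project_b
  name         = google_sql_database_instance.prj_a_mysql_test.dns_name
  managed_zone = google_dns_managed_zone.prj_b_psc_zone.name
  type         = "A"
  ttl          = 300
  rrdatas      = [google_compute_address.prj_b_psc_address.address]
}
```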
- Create a test instance in Project B.
resource "google_compute_instance" "prj_b_test_vm" {
project = var.project_b
name = "prj-b-test-vm"
machine_type = "e2-micro"
zone = data.google_compute_zones.available.names[0]
desired_status = "RUNNING"
tags = ["ssh"]
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-2004-lts"
}
}
network_interface {
subnetwork = google_compute_subnetwork.prj_b_subnet.id
network_ip = "10.200.0.5"
}
metadata_startup_script = "apt-get update && apt-get install -y mysql-client"
service_account {
scopes = []
}
}
data "google_compute_zones" "available" {
project = var.project_b
region = var.region
status = "UP"
depends_on = [google_project_service.prj_b_services]
}
The test instance runs an Ubuntu 20.04 image, and the `metadata_startup_script` installs the MySQL client needed to test the connection.
Setting up the environment
- Clone the GitHub repository.
git clone https://github.com/atakanttl/cloudsql-psc-terraform.git
- Add a `terraform.tfvars` file and provide necessary values for variables.
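Based on the variables referenced in the resources above (`project_a`, `project_b`, `region`), a minimal `terraform.tfvars` might look like the following; the values are placeholders:

```hcl
# terraform.tfvars -- replace the placeholder values with your own.
project_a = "my-project-a-id"
project_b = "my-project-b-id"
region    = "us-central1"
```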
- Login to gcloud with your Google Cloud user.
gcloud auth login
gcloud auth application-default login
- Initialize Terraform.
terraform init
- Review Terraform plan.
terraform plan
- Apply the resources; this will take about 10 to 15 minutes.
terraform apply -auto-approve
- Retrieve MySQL test user password.
terraform output -raw mysql_test_password
Copy the password; you will need it to connect in the next step.
- Verify connection on the test instance.
SSH into the test instance in Project B using the command below, replacing ZONE and PROJECT_B_ID with your values.
gcloud compute ssh "prj-b-test-vm" \
--zone ZONE \
--project PROJECT_B_ID \
--tunnel-through-iap
Then connect to the MySQL instance using the command below.
mysql --host 10.200.0.240 --user test -p
That’s it! If you need to connect more projects and networks to the Cloud SQL instance, add their project IDs to `allowed_consumer_projects` in the Cloud SQL configuration and repeat the networking and PSC endpoint steps in each consumer network.
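For example, allowing a hypothetical third project (a `var.project_c` variable you would define yourself) only requires extending the consumer list in the instance’s `psc_config` block; the new project then creates its own reserved address and forwarding rule as shown earlier:

```hcl
# Inside the google_sql_database_instance ip_configuration block.
# project_c is illustrative, not part of the repository.
psc_config {
  psc_enabled               = true
  allowed_consumer_projects = [var.project_a, var.project_b, var.project_c]
}
```

Cloud SQL applies this change in place; no instance recreation is needed.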
You can read more about Cloud SQL with PSC from the official Google Cloud docs: