GCP Private Service Connect (PSC): Service publication

Ishaq Shaikh
Google Cloud - Community
8 min read · Sep 21, 2023
MJ Prompt: submarine fibre optic cable with little curve at the bottom of the blue ocean, clean sharp focus — v 5.2 — s 100

Introduction

In this blog post, we will explore GCP's Private Service Connect networking solution. This feature enables users to publish (produce) their managed services from a single project and consume them effortlessly and securely across different GCP projects, extending even to other GCP organisations, all while keeping the experience streamlined and prioritising both efficiency and safety.

Additionally, I’ll demonstrate how to publish a customer-managed service in a GCP project using Private Service Connect.

The next part of this blog series focuses on consuming the published service, different PSC deployment patterns, security considerations, and other aspects within the realm of PSC.

Before we dive in, here’s an overview of what this blog post will encompass:

  • Understanding Private Service Connect (PSC)
  • Why Private Service Connect?
  • Exploring the Architecture of Private Service Connect
  • Managed Service publication using Private Service Connect

Private Service Connect (PSC)

Within the realm of Google Cloud networking, Private Service Connect (PSC) is a fully managed, simplified solution that empowers GCP customers and third-party software-as-a-service (SaaS) companies to publish their services within their own dedicated VPC networks.
These published services are then accessed privately and securely from the consumer’s VPC network, eliminating the need for intricate tasks like VPC peering, routing table modifications, IP address configurations, and NAT rule implementations, ensuring a hassle-free, service-oriented experience.
Service publication also gives control and ownership to service producers within their designated VPC network. This means they’re not just delivering services; they’re overseeing their deployment, ensuring a more tailored and secure approach.
But what exactly can be published through PSC?
Let’s break it down:

  • Google services, such as GKE, Apigee, or Cloud Composer. The interesting part is that these services run in tenant projects and VPC networks that are managed by Google.
  • Third-party services, where third parties offer private access to a published service in Google Cloud.
  • Intra-organization services. Large organisations often operate with diverse teams and projects.
    Imagine one team crafting a specialized managed service and another team, in an entirely separate VPC network, needing just that service. With Private Service Connect, this inter-team service sharing becomes a reality. It’s teamwork on a whole new level, even when teams are segmented for various reasons.

Key jargon for PSC:
Service attachment:
A Private Service Connect published service uses a service attachment to target the publisher’s (producer’s) load balancer, enabling clients in a consumer VPC network to access it.
Endpoint: These are internal IP addresses within a consumer VPC network, accessible directly by clients in the network.
Endpoints are used to access Service attachments, and multiple endpoints can connect to the same Service attachment, allowing multiple consumers to access the same service instance.
Endpoints are established through forwarding rules linked to a service attachment.
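To make the relationship between endpoints and service attachments concrete, here’s a hedged consumer-side sketch in gcloud (the endpoint name, network, and reserved address below are hypothetical placeholders; the service attachment URI format is what a producer would share):

```shell
# Hypothetical consumer-side sketch: a PSC endpoint is simply a forwarding
# rule whose target is the producer's service attachment URI.
gcloud compute forwarding-rules create my-psc-endpoint \
--region=asia-south1 \
--network=consumer-vpc \
--address=psc-endpoint-address \
--target-service-attachment=projects/PRODUCER_PROJECT/regions/asia-south1/serviceAttachments/pgsql-attchmnt
```

We’ll cover the consumer side in detail in the next part of this series; this is just to show how the jargon pieces fit together.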

The illustration below demonstrates how a consumer can access the published service via PSC.

PSC Illustration

Why Private Service Connect?

Let’s now dig into the reasons why Private Service Connect stands out as the optimal choice, both for internally exposing services and for accessing third-party managed services, along with the associated benefits.

  • Effortless Performance: Beneath its surface, Private Service Connect (PSC) relies on the robust foundation of Software-Defined Networking (SDN), providing a seamless and high-performing solution.
    A single endpoint is all it takes to unlock line-rate performance, even on a large scale.
  • Enhanced Security: The beauty of PSC lies in keeping all traffic confined within Google’s robust backbone. This setup not only ensures top-tier security but also aligns with compliance requirements, creating a dual advantage.
  • Agility Amplified: With Private Service Connect, the focus remains squarely on the service itself, alleviating the network teams of undue burdens.
    This enhanced agility allows for streamlined operations and quicker response times.
  • Accelerated Service Consumption: PSC not only simplifies the process but also accelerates it. Services can be consumed more swiftly, and the added bonus is the ability to seamlessly integrate third-party vendors or partners without being bogged down by networking complexities.

In a nutshell, Private Service Connect doesn’t just offer a pathway to efficient service consumption, but a holistic approach that champions performance, security, agility, and collaboration.

Architecture of Private Service Connect

Private Service Connect is built upon Google’s Software-Defined Networking (SDN) stack, named Andromeda, which also powers Google Cloud Load Balancing. Andromeda acts as the distributed control and data plane for Google Cloud networking, supporting Virtual Private Cloud (VPC) networks.
More importantly, it processes packets directly on the physical servers that host the virtual machines (VMs), creating a fully distributed data plane without any central bottlenecks from intermediary proxies or appliances.

Let’s further demystify the traffic path for Private Service Connect between a typical consumer VPC network and a producer VPC network.

Traffic path for PSC

As seen above, the consumer endpoints and producer load balancers are connected logically.
In the actual process, however, data travels straight from the physical server (Host 1) containing the client’s virtual machine to the physical server (Host 2) that hosts the producer load balancer’s virtual machine.
Andromeda’s handling of Private Service Connect traffic involves:

  1. Client-Side Load Balancing: The source host (Host 1) decides where to send the traffic, considering factors like location and health.
  2. Encapsulation and Header Addition: The packet from VPC1 gets wrapped in an Andromeda header, specifying it’s for VPC2.
  3. Destination Host Modifications: Upon arrival at the destination host (Host 2), the packet undergoes Source Network Address Translation (SNAT) and Destination Network Address Translation (DNAT).
    This involves modifying the source IP using the NAT subnet and substituting the destination IP with the producer load balancer’s IP.

Exceptions arise when traffic encounters intermediate routing hosts, which typically occurs with inter-regional traffic or in cases of very small or sporadic traffic flows.
In most cases, though, PSC traffic is processed fully on the physical hosts, ensuring enhanced performance, unrestricted bandwidth, and minimal latency for Private Service Connect traffic.

❗❗❗ A must-read resource about Andromeda 🙂 👇❗❗❗

Managed Service publication using Private Service Connect

Let’s now walk through an example in which we publish an internal managed PostgreSQL instance in our VPC using Private Service Connect (PSC).
To set the stage, I’ve already set up a GCE Virtual Machine with PostgreSQL installed.

The key steps involved in this process include:

1. Creation of PSC NAT Subnet

Importance of PSC NAT Subnet:

  • The PSC NAT subnet enables the source NAT (SNAT) process, which replaces the consumer VPC’s source IP addresses with those from the designated PSC NAT subnet in the publisher’s VPC, effectively masking the actual source IPs of the consumer VPC.
    However, for services published behind an internal passthrough Network Load Balancer, enabling the PROXY protocol allows visibility of the consumer’s original source IP address. Make sure to review the considerations and compatibility before enabling the PROXY protocol.
  • A PSC service attachment can have multiple NAT subnets, but a NAT subnet cannot be used in more than one service attachment.
  • Also, PSC NAT subnets are exclusively for SNAT of incoming consumer connections and hence cannot be used for VM instances or forwarding rules.
  • For a PSC NAT subnet, the number of available IP addresses can be calculated as [2^(32 - PREFIX_LENGTH) - 4 (reserved IPs)].
    For instance, with a /29 prefix length (the smallest subnet size supported), we’ll have [2^(32 - 29) - 4] = 4 IP addresses available.
    One IP address is allocated from the NAT subnet for each endpoint connected to the service attachment.
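To make the subnet-size arithmetic concrete, here’s a small Python sketch (the function name is ours, not a GCP API):

```python
def psc_nat_usable_ips(prefix_length: int) -> int:
    """Usable IPs in a PSC NAT subnet: total addresses minus 4 reserved."""
    if prefix_length > 29:
        raise ValueError("/29 is the smallest PSC NAT subnet size supported")
    return 2 ** (32 - prefix_length) - 4

print(psc_nat_usable_ips(29))  # 4 usable IPs, i.e. up to 4 connected endpoints
print(psc_nat_usable_ips(28))  # 12 usable IPs (the /28 we create below)
```

Since one NAT IP is consumed per connected endpoint, the /28 subnet we create next supports up to 12 consumer endpoints.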

gcloud config set project rich-principle-394408

gcloud config set compute/region asia-south1

# PSC NAT Subnet creation
gcloud compute networks subnets create publishr-subnet-psc --network=dev-vpc \
--range=10.2.0.0/28 --region=asia-south1 \
--purpose=PRIVATE_SERVICE_CONNECT \
--project=rich-principle-394408

2. Configuring the firewall rules

# GCP LB HealthCheck firewall rule
gcloud compute firewall-rules create gcp-lb-hc-fw \
--network=dev-vpc \
--direction=ingress \
--target-tags=pgsql \
--allow=tcp:5432 \
--source-ranges=130.211.0.0/22,35.191.0.0/16

# Allow PSC NAT subnet range
gcloud compute --project=rich-principle-394408 firewall-rules create \
psc-sbnet-allow-fw \
--direction=INGRESS --priority=1000 \
--network=dev-vpc \
--action=ALLOW \
--rules=tcp:5432 \
--source-ranges=10.2.0.0/28 \
--target-tags=pgsql

The last firewall rule allows incoming requests from our PSC NAT subnet only to VMs tagged with the ‘pgsql’ network tag.

GCP Firewall rule configured

3. Setting up the Internal Load Balancer (ILB)

The following gcloud commands will set up an internal passthrough Network Load Balancer for our PostgreSQL backend workload.

# Creation of the IG for the psql-workload
gcloud compute instance-groups unmanaged create pgsql-ig-a \
--zone=asia-south1-a --project=rich-principle-394408 && \

gcloud compute instance-groups unmanaged add-instances pgsql-ig-a \
--zone=asia-south1-a \
--instances=pgsql-db \
--project=rich-principle-394408



# TCP healthcheck creation
gcloud compute health-checks create tcp pgsql-tcp-hc --region=asia-south1 \
--port=5432 \
--project=rich-principle-394408


# Backend Service creation
gcloud compute backend-services create pgsql-psc-ilb \
--load-balancing-scheme=internal \
--protocol=tcp \
--region=asia-south1 \
--health-checks=pgsql-tcp-hc \
--health-checks-region=asia-south1 && \

gcloud compute backend-services add-backend pgsql-psc-ilb \
--region=asia-south1 \
--instance-group=pgsql-ig-a \
--instance-group-zone=asia-south1-a

# Forwarding rule creation
gcloud compute forwarding-rules create pgsql-psc-ilb \
--region=asia-south1 \
--load-balancing-scheme=internal \
--network=dev-vpc \
--subnet=subnet-01 \
--address=10.1.0.5 \
--ip-protocol=TCP \
--ports=5432 \
--backend-service=pgsql-psc-ilb \
--backend-service-region=asia-south1

Internal TCP ILB for PostgreSQL Backend

4. Configuring the PSC Service Attachment

# Creation of the PSC attachment
gcloud compute service-attachments create pgsql-attchmnt \
--region=asia-south1 \
--producer-forwarding-rule=pgsql-psc-ilb \
--connection-preference=ACCEPT_AUTOMATIC \
--nat-subnets=publishr-subnet-psc

The Service attachment is set to automatic connection preference, ensuring that it always accepts connection requests from consumers without manual intervention.
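If automatic acceptance is too permissive for your use case, the attachment can instead require an explicit consumer accept list. A hedged sketch, assuming a placeholder consumer project ID:

```shell
# Alternative (consumer-project-a is a placeholder): only explicitly listed
# consumer projects may connect, each capped at a connection limit.
gcloud compute service-attachments update pgsql-attchmnt \
--region=asia-south1 \
--connection-preference=ACCEPT_MANUAL \
--consumer-accept-list=consumer-project-a=10
```

With ACCEPT_MANUAL, connection requests from unlisted projects stay in a pending state until the producer accepts them.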

Cool, we’ve covered quite a bit!

In summary, we’ve explored GCP Private Service Connect, its role in tackling networking challenges, and its architecture. We’ve also covered the steps for publishing a managed service within our network.
If you’re curious about how to consume the published services and learn more about PSC, be sure to check out the next section of the blog.
