‘Private Service Connect backend’: what, why, and how

Gauravmadan
Google Cloud - Community
7 min read · Jun 3, 2024

Private Service Connect is a Google Cloud networking service that allows:

  • Consumers to access managed services privately and securely from inside their VPC network.
  • Managed service producers to host and manage these services in their own network, completely separate from the consumer’s network, while offering a secure and private connection to their consumers.

With Private Service Connect, consumers can use their own private IP addresses to access services without leaving their VPC networks (traffic remains entirely within Google Cloud).

You can read more details on the GCP Private Service Connect offering here

Google Cloud Private Service Connect — Flavours

There are three broad categories of PSC:

  • PSC endpoint: Ingress to managed services directly through IP addresses
  • PSC backend: Ingress to managed services through load balancers that provide additional routing, security, and observability for managed service traffic
  • PSC interface: Managed service egress into a customer’s VPC

The purpose of this blog is to focus on ‘PSC backend’.

What is a PSC backend

Private Service Connect backends use a load balancer configured with Private Service Connect network endpoint group (NEG) backends. This configuration was previously referred to as a Private Service Connect endpoint with consumer HTTP(S) service controls.

  • PSC backend is a flavour of PSC that enables internet or internal HTTP(S) traffic to terminate on a load balancer and be routed to Google APIs or to managed services running in a different VPC via PSC
  • By doing so, it provides centralized control, visibility, and security via load balancer access logs, metrics, and Cloud Armor
  • It also lets customers use their own domain names and certificates to front-end managed services

Let’s look at some of the use cases where a PSC backend can be very useful.

Use case 1: A customer has the following requirements -

  • The customer’s setup is architected with a central connectivity project and multiple workload projects. Applications are hosted in the workload projects and are exposed using either an internal L4 load balancer or an internal L7 load balancer.
  • The customer does not want each workload project to attract traffic using individual external L7 load balancers (one per project). Instead, the customer wants a centralized external L7 load balancer to attract traffic coming from the internet.
  • An IP | GEO policy needs to be in place to allow/block incoming requests from the internet.

Let’s start by looking at an architecture we can design using a PSC backend -

  1. Since the customer wishes to receive all internet traffic on a common external L7 load balancer, we will deploy it in the centralized connectivity project.
  2. Requirements like IP | GEO restrictions can be enforced using Cloud Armor on the corresponding load balancer backend service.
  3. On the service producer side (workload project), publish the service using a regional internal L7 load balancer or an internal pass-through network load balancer.
  4. In the centralized connectivity project, create a Private Service Connect network endpoint group (PSC NEG) with its target set to ‘Published Service’, and point this PSC NEG to the service published in step (3) above.

Sample Architecture

Configuration on Producer side

  1. I am not listing the steps for creating the internal L7 load balancer because they are not directly related to the PSC backend. I configured an HTTPS front end and an HTTPS-based backend service.
  2. Once the internal L7 load balancer is configured, we need to create the PSC service (service attachment) as follows
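This step publishes the internal load balancer as a PSC service attachment. A rough gcloud sketch of the same flow is below; the network, subnet, forwarding-rule, region, and service names are illustrative assumptions, and the NAT subnet must be created with the PRIVATE_SERVICE_CONNECT purpose before the attachment can reference it.

# Illustrative names: producer-vpc, psc-nat-subnet, ilb-fwd-rule, my-psc-service
gcloud compute networks subnets create psc-nat-subnet \
    --network=producer-vpc --region=us-central1 \
    --range=10.10.10.0/24 --purpose=PRIVATE_SERVICE_CONNECT

# Publish the internal load balancer's forwarding rule as a PSC service attachment
gcloud compute service-attachments create my-psc-service \
    --region=us-central1 \
    --producer-forwarding-rule=ilb-fwd-rule \
    --connection-preference=ACCEPT_AUTOMATIC \
    --nat-subnets=psc-nat-subnet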

Configuration on Consumer side

a) Create a PSC network endpoint group as follows
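A rough gcloud sketch of the PSC NEG, assuming the producer’s service attachment from the previous section is projects/workload-prj/regions/us-central1/serviceAttachments/my-psc-service (an illustrative path):

# Regional PSC NEG whose target is the published service attachment
gcloud compute network-endpoint-groups create psc-neg-1 \
    --region=us-central1 \
    --network-endpoint-type=private-service-connect \
    --psc-target-service=projects/workload-prj/regions/us-central1/serviceAttachments/my-psc-service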

b) Create an external L7 load balancer and, in the backend service, select PSC NEG as the backend type. Choose the PSC NEG created in step (a).
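A hedged sketch of the backend-service piece, assuming a global external Application Load Balancer and the illustrative NEG name from step (a):

# Backend service for the external L7 load balancer
gcloud compute backend-services create xlb-backend-svc \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTPS \
    --global

# Attach the PSC NEG created in step (a)
gcloud compute backend-services add-backend xlb-backend-svc \
    --network-endpoint-group=psc-neg-1 \
    --network-endpoint-group-region=us-central1 \
    --global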

c) This creates the external L7 load balancer. You may choose HTTP or HTTPS as your front-end protocol and, in the case of HTTPS, apply your certificate to attract traffic from the internet. The external L7 load balancer gives you a public IPv4 address, which you can use to create an ‘A’ record in your DNS system. For example, the IPv4 address x.x.x.x given by the external L7 load balancer can be mapped to abcd.example.com.
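For completeness, a sketch of the front end and the DNS mapping; the URL map, certificate, zone, and host names are illustrative assumptions:

gcloud compute url-maps create xlb-url-map --default-service=xlb-backend-svc
gcloud compute target-https-proxies create xlb-https-proxy \
    --url-map=xlb-url-map --ssl-certificates=my-cert
gcloud compute forwarding-rules create xlb-fwd-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED --global \
    --target-https-proxy=xlb-https-proxy --ports=443

# Map the public IPv4 address of the forwarding rule to your domain (x.x.x.x is a placeholder)
gcloud dns record-sets create abcd.example.com. \
    --zone=example-zone --type=A --ttl=300 --rrdatas=x.x.x.x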

d) To achieve the customer’s IP | GEO restriction requirements, a Cloud Armor policy can be created and applied to the respective load balancer backend service.
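A minimal Cloud Armor sketch for the IP | GEO requirement; the policy name, region code, and CIDR range below are placeholders:

gcloud compute security-policies create edge-ip-geo-policy

# Block requests originating from a particular country (replace 'XX' with a region code)
gcloud compute security-policies rules create 1000 \
    --security-policy=edge-ip-geo-policy \
    --expression="origin.region_code == 'XX'" \
    --action=deny-403

# Block specific source IP ranges
gcloud compute security-policies rules create 1100 \
    --security-policy=edge-ip-geo-policy \
    --src-ip-ranges="203.0.113.0/24" \
    --action=deny-403

# Attach the policy to the load balancer backend service
gcloud compute backend-services update xlb-backend-svc \
    --security-policy=edge-ip-geo-policy --global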

Use case 2: A customer has the following requirements -

  • A multi-cloud customer has an alternate CSP connected to GCP using VPN / Interconnect
  • Applications are hosted in GCP and exposed using an internal L4 or L7 load balancer. Each application is in a separate GCP project, and on-prem / other-cloud consumers land on a common connectivity project before they go on to access the application
  • Rate-limit the incoming requests from the alternate CSP. Example: only 1000 requests per minute from CSP1 are allowed to hit the GCP-hosted application; anything exceeding this should see a 429 error

Again, let’s look at an architecture we can design using a PSC backend -

a) On the service producer side (workload project), publish the service using a regional internal L7 load balancer or an internal pass-through network load balancer.

b) As in use case 1, create a PSC network endpoint group in the connectivity project and point it to the service created in step (a) above. Sample snapshot -

c) On the consumer side, create an internal L7 load balancer. Use the PSC NEG created in step (b) above as the backend.
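A hedged sketch of the regional internal backend service, reusing the illustrative PSC NEG name from step (b); compared with use case 1, the load-balancing scheme becomes INTERNAL_MANAGED and everything stays regional:

gcloud compute backend-services create int-l7-backend-svc \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTPS \
    --region=us-central1

gcloud compute backend-services add-backend int-l7-backend-svc \
    --network-endpoint-group=psc-neg-1 \
    --network-endpoint-group-region=us-central1 \
    --region=us-central1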

d) Ensure that the load balancer front end IP is reachable from on-prem or another CSP.

e) Cloud Armor is now supported with the internal Application Load Balancer as well. Therefore, to meet requirements like rate-limiting, we can deploy a Cloud Armor rate-limiting / throttling policy on the internal L7 load balancer. For example, in my case, I had a regional Cloud Armor policy with a rate-limit rule as follows:
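A rough gcloud equivalent of such a rule (the policy name, region, and source ranges are illustrative; the policy is then attached to the backend service as described next):

# Regional Cloud Armor policy
gcloud compute security-policies create csp-rate-limit-policy \
    --type=CLOUD_ARMOR --region=us-central1

# Throttle: allow 1000 requests per 60 seconds, return 429 beyond that
gcloud compute security-policies rules create 1000 \
    --security-policy=csp-rate-limit-policy --region=us-central1 \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=1000 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=ALL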

I applied this policy to the internal L7 load balancer created in step (c) above. The sample architecture will look as follows -

Use case 3: A customer has the following requirements -

  • An enterprise is serving cacheable content (this can be web content or media content) via a third-party CDN
  • The origin is in Google Cloud Storage (GCS)
  • The customer wants the external L7 load balancer to be accessible only to the CDN. No one else should be able to get the content by hitting the external L7 load balancer directly.
  • The customer does NOT want to use GCP Cloud Armor’s functionality of whitelisting the CDN IP addresses

From the PSC point of view

  1. Create a PSC endpoint with its target set to ‘All Google APIs’
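A minimal sketch of such an endpoint, assuming a VPC named connectivity-vpc and an illustrative internal IP; the forwarding rule acts as the PSC endpoint:

# Reserve a global internal address for the PSC endpoint
gcloud compute addresses create psc-apis-ip \
    --global --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.100.0.2 --network=connectivity-vpc

# Create the PSC endpoint targeting the all-apis bundle
gcloud compute forwarding-rules create pscapis \
    --global --network=connectivity-vpc \
    --address=psc-apis-ip \
    --target-google-apis-bundle=all-apis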

From the external L7 load balancer point of view

a) Create backend-service-1, which points to the PSC network endpoint group

b) Create backend-service-2, which will return an error code if requests don’t come from the whitelisted CDN

c) Ask the CDN provider to send requests to the external L7 load balancer with a fixed header (example: x-cdn: cdn-provider-1)

d) Write the routing rules in the GCP external L7 load balancer as follows

defaultService: projects/test-prj-12345/global/backendServices/error-service-backend
name: path-matcher-1
routeRules:
- matchRules:
  - headerMatches:
    - exactMatch: cdn-provider-1
      headerName: x-cdn
    prefixMatch: /
  priority: 0
  service: projects/test-prj-12345/global/backendServices/serv-psc-bucket

With this in place, the external L7 load balancer routes a request to the GCS backend only when it sees the pre-determined header. If the header is missing from an incoming request, the load balancer routes it to the other backend service (which, in my case, was configured to display an error page).
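If you manage the URL map with gcloud rather than the console, one way to apply such a path matcher (the URL map name is illustrative) is to export the URL map, add the routeRules shown above, and re-import it:

gcloud compute url-maps export cdn-xlb-url-map --destination=url-map.yaml --global
# edit url-map.yaml to add the pathMatcher / routeRules shown above, then:
gcloud compute url-maps import cdn-xlb-url-map --source=url-map.yaml --global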

Disclaimer: This is to inform readers that the views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the author’s employer, organization, committee or other group or individual.
