GCP Hybrid Networking Patterns — Part 3

Jasbirs
Google Cloud - Community
9 min read · Jan 3, 2023

This blog continues GCP Hybrid Networking Patterns Part 1 (Hybrid Connectivity to a Single VPC (Shared VPC)) and Part 2 (Hybrid Connectivity to Multiple VPC Networks (or Shared VPC Networks)): https://medium.com/google-cloud/gcp-hybrid-networking-patterns-part-2-5e96a4974284. In this blog, I will cover networking design patterns for hybrid connectivity using appliances (Shared VPC networks in a hub) on Google Cloud Platform.

Hybrid Connectivity using Appliances (Shared VPC networks in Hub)

1. Interconnect to On-premises

Let’s consider a use case where the workloads are organized into separate Shared VPC networks for Prod and Non-Prod. Interconnect (or HA-VPN) from on-premises (or other clouds) terminates directly into the External VPC.

This pattern provides hybrid connectivity to

  • IaaS resources in the workload Shared VPC networks
  • Google APIs and services (e.g. storage.googleapis.com, *.run.app, etc.) in the workload projects
  • GCP managed services that use Private Services Access

Use this Pattern When…

  1. You have multiple workload Shared VPC networks that need layer 7 inspection for traffic between the VPC networks and on-premises. In this example, we show only two Shared VPC networks (Prod and Non-Prod), but you could have up to 7. The workload VPC networks communicate via network virtual appliances (NVAs) that apply the necessary network security policies to allow or deny traffic.
  2. You have multiple workload Shared VPC networks that need to share a common connectivity to on-premises or other clouds. In this example, connectivity to on-premises and other clouds goes via the NVA and through the External VPC.
  3. You require network connectivity from the workload Shared VPC networks to managed services that use Private Services Access. This pattern allows access to any such managed service from on-premises and from any workload Shared VPC.

Scaling Out (Number of Workload Shared VPC Networks)

The maximum number of workload Shared VPC networks is 7 (limited by the maximum number of network interfaces per instance). If an extra interface is required for management traffic to the NVA, then only 6 workload Shared VPC networks can be deployed.

1) Cloud Interconnect

Set up Dedicated Interconnect or Partner Interconnect to Google Cloud. Connect to two Edge Availability Domains (EADs) in the same metro to achieve the 99.99% SLA. You can connect your Cloud Interconnect connections to multiple regions in the same Shared VPC.

2) VLAN Attachment

A VLAN attachment connects your Cloud Interconnect connection at a Google point of presence (PoP) to a Cloud Router in a specified region.
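As a sketch, a VLAN attachment on a Dedicated Interconnect can be created with gcloud; the interconnect, router, and region names below are placeholders, not values from this design:

```shell
# Create a VLAN attachment on an existing Dedicated Interconnect,
# associating it with a Cloud Router in the External VPC's region.
# All resource names here are hypothetical.
gcloud compute interconnects attachments dedicated create vlan-attach-ead1 \
    --interconnect=my-dedicated-ic-ead1 \
    --router=cr-external-usc1 \
    --region=us-central1
```

Repeat for the second Edge Availability Domain's interconnect to meet the 99.99% SLA topology.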

3) Cloud Router

A Cloud Router exchanges dynamic routes between your VPC networks and on-premises routers. You can configure dynamic routing between your on-premises routers and a Cloud Router in a particular region. Each Cloud Router is implemented as two software tasks that provide two interfaces for high availability. Configure BGP routing to each of the Cloud Router's interfaces.
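A minimal sketch of the Cloud Router and one BGP session, assuming illustrative names, ASNs, and link-local peer addresses (none of these values come from the design above):

```shell
# Create a Cloud Router in the External VPC (hypothetical names/ASNs).
gcloud compute routers create cr-external-usc1 \
    --network=external-vpc \
    --asn=65001 \
    --region=us-central1

# Attach a router interface to the VLAN attachment, then peer with the
# on-premises router over it. Repeat for the second interface/attachment
# to use both of the Cloud Router's software tasks.
gcloud compute routers add-interface cr-external-usc1 \
    --interface-name=if-ead1 \
    --interconnect-attachment=vlan-attach-ead1 \
    --region=us-central1

gcloud compute routers add-bgp-peer cr-external-usc1 \
    --peer-name=onprem-peer-ead1 \
    --interface=if-ead1 \
    --peer-ip-address=169.254.10.2 \
    --peer-asn=65010 \
    --region=us-central1
```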

4) VPC Global Dynamic Routing

Configure global dynamic routing in the Shared VPC to allow exchange of dynamic routes between all regions.
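This is a one-line change per VPC network; "external-vpc" below is a placeholder name:

```shell
# Switch the VPC's dynamic routing mode from regional (the default)
# to global, so routes learned in one region propagate to all regions.
gcloud compute networks update external-vpc \
    --bgp-routing-mode=global
```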

2. HA-VPN to On-premises

1) Cloud HA-VPN

The Cloud HA-VPN gateway is used to establish IPsec tunnels to the on-premises VPN gateway over the Internet. HA-VPN offers a 99.99% SLA. You can have multiple HA-VPN tunnels into different regions in the External VPC.
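A hedged sketch of the HA VPN setup, assuming a Cloud Router already exists in the region; the gateway names, peer IP, and shared secret are placeholders:

```shell
# Create the HA VPN gateway in the External VPC.
gcloud compute vpn-gateways create ha-vpn-gw-usc1 \
    --network=external-vpc \
    --region=us-central1

# Describe the on-premises peer gateway (single public IP shown here).
gcloud compute external-vpn-gateways create onprem-gw \
    --interfaces=0=203.0.113.10

# One tunnel per HA VPN gateway interface is needed for the 99.99% SLA;
# this shows interface 0 only.
gcloud compute vpn-tunnels create tunnel-usc1-if0 \
    --vpn-gateway=ha-vpn-gw-usc1 \
    --peer-external-gateway=onprem-gw \
    --peer-external-gateway-interface=0 \
    --interface=0 \
    --ike-version=2 \
    --shared-secret=CHANGE_ME \
    --router=cr-external-usc1 \
    --region=us-central1
```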

2) Cloud Routers

Configure dynamic routing between the on-premises routers and a Cloud Router in each region. Each Cloud Router is implemented as two software tasks that provide two interfaces for high availability. Configure BGP routing to each of the Cloud Router's interfaces.

3) VPC Global Dynamic Routing

Configure global dynamic routing in the External VPC to allow the exchange of dynamic routes between all regions.

3. Cloud DNS Forwarding and Peering

Overview

In a hybrid environment, DNS resolution can be performed in GCP or on-premises. Let’s consider a use case where on-premises DNS servers are authoritative for on-premises DNS zones, and Cloud DNS is authoritative for GCP zones.

1) On-premises DNS

Configure your on-premises DNS server to be authoritative for on-premises DNS zones. Configure DNS forwarding (for GCP DNS names) by targeting the Cloud DNS inbound forwarding IP address, which is created via the Inbound Server Policy configuration in the External VPC. This allows the on-premises network to resolve GCP DNS names.

2) External VPC — DNS Egress Proxy

Advertise the Google DNS egress proxy range 35.199.192.0/19 to the on-premises network via the cloud routers. Outbound DNS requests from Google to on-premises are sourced from this IP address range.
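The advertisement can be added as a custom route on the Cloud Router; the router name below is a placeholder:

```shell
# Advertise the Cloud DNS egress proxy range to on-premises, in
# addition to the VPC's subnet routes.
gcloud compute routers update cr-external-usc1 \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=35.199.192.0/19
```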

3) External VPC — Cloud DNS

a) Configure Inbound Server Policy for inbound DNS requests from on-premises.

b) Configure Cloud DNS Forwarding Zone for on-premises DNS names (targeting on-premises DNS resolvers).
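Steps a) and b) might look like this in gcloud; the on-premises domain and resolver IPs are placeholders:

```shell
# a) Inbound server policy: gives the External VPC inbound forwarding
#    IP addresses that on-premises resolvers can target.
gcloud dns policies create inbound-from-onprem \
    --networks=external-vpc \
    --enable-inbound-forwarding \
    --description="Accept DNS queries from on-premises"

# b) Forwarding zone: sends queries for on-premises names to the
#    on-premises resolvers.
gcloud dns managed-zones create onprem-forwarding \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=external-vpc \
    --forwarding-targets=192.168.1.10,192.168.1.11 \
    --description="Forward on-prem names to on-prem resolvers"
```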

4) Hub Host Project — Cloud DNS

a) Configure DNS Peering Zone for on-premises DNS names targeting the External VPC as the peer network. This allows Non-Prod resources to resolve on-premises DNS names.

b) Configure Non-Prod DNS Private Zones in the Hub Host Project and attach Non-Prod Shared VPC, Prod Shared VPC and External VPC to the zone. This allows all hosts (on-premises and in all service projects) to resolve the Non-Prod DNS names.
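A sketch of the Non-Prod zones in the Hub Host Project (network, project, and domain names are illustrative):

```shell
# a) Peering zone: the Non-Prod Shared VPC resolves on-premises names
#    through the External VPC's forwarding zone.
gcloud dns managed-zones create onprem-peering-nonprod \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=nonprod-shared-vpc \
    --target-network=external-vpc \
    --target-project=hub-host-project \
    --description="Peer to External VPC for on-prem names"

# b) Private zone for Non-Prod names, attached to all three VPCs so
#    every host (and, via the inbound policy, on-premises) resolves them.
gcloud dns managed-zones create nonprod-private \
    --dns-name="nonprod.gcp.example.com." \
    --visibility=private \
    --networks=nonprod-shared-vpc,prod-shared-vpc,external-vpc \
    --description="Non-Prod private DNS names"
```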

5) Hub Host Project — Cloud DNS

a) Configure DNS peering zone for on-premises DNS names setting the External VPC as the peer network. This allows Prod resources to resolve on-premises DNS names.

b) Configure Prod DNS private zones in the Hub Host Project and attach Prod Shared VPC, Non-Prod Shared VPC and External VPC to the zone. This allows all hosts (on-premises and in all service projects) to resolve the Prod DNS names.

4. Private Service Connect (PSC) for Google APIs (Access to All Supported APIs and Services)

Overview

You can use Private Service Connect (PSC) to access all supported Google APIs and services from Google Compute Engine hosts and on-premises. Let’s consider PSC access to a service in Service Project 4 via the External VPC and Prod Shared VPC.

Creating PSC Endpoints

  1. Choose a PSC endpoint address (e.g. 10.0.0.1) and create a PSC endpoint in the External VPC with a target of “all-apis”, which gives access to all supported Google APIs and services.
  2. Choose a PSC endpoint address (e.g. 10.2.2.2) and create a PSC endpoint in the Prod Shared VPC with a target of “all-apis”. Service Directory automatically creates a DNS record (ending in p.googleapis.com) linked to each PSC endpoint IP address.
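The Prod Shared VPC endpoint (step 2) can be sketched as below; note that PSC endpoint names for Google APIs must be short lowercase alphanumerics, and all names here are placeholders:

```shell
# Reserve the global internal address for the endpoint.
gcloud compute addresses create pscprodaddr \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.2.2.2 \
    --network=prod-shared-vpc

# Create the PSC endpoint targeting the all-apis bundle.
gcloud compute forwarding-rules create pscprod \
    --global \
    --network=prod-shared-vpc \
    --address=pscprodaddr \
    --target-google-apis-bundle=all-apis
```

The External VPC endpoint (step 1) is identical apart from the network and address.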

Access from GCE Hosts

GCE-4 host in Service Project 4 can access all supported Google APIs via the PSC endpoint (10.2.2.2) in the Prod Shared VPC.

3. Enable Private Google Access on all subnets with compute instances that require access to Google APIs via PSC.

4. If your GCE clients can use custom DNS names (e.g. storage-xyz.p.googleapis.com), you can use the auto-created p.googleapis.com DNS name.

5. If your GCE clients cannot use custom DNS names, you can create Cloud DNS records using the default DNS names (e.g. storage.googleapis.com).
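For step 5, a private zone can override the default names inside the VPC; the zone name is a placeholder and 10.2.2.2 is the example endpoint address from above:

```shell
# Private zone shadowing googleapis.com inside the Prod Shared VPC.
gcloud dns managed-zones create googleapis-private \
    --dns-name="googleapis.com." \
    --visibility=private \
    --networks=prod-shared-vpc \
    --description="Default API names resolve to the PSC endpoint"

# A record pointing a default API name at the PSC endpoint; repeat
# (or use a wildcard) for the other APIs you need.
gcloud dns record-sets create storage.googleapis.com. \
    --zone=googleapis-private \
    --type=A \
    --ttl=300 \
    --rrdatas=10.2.2.2
```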

Access from On-premises Hosts

On-premises hosts can access all supported Google APIs via the PSC endpoint in the External VPC.

6. Advertise the PSC endpoint address to the on-premises network.

7. If your on-premises clients can use custom DNS names (e.g. storage-xyz.p.googleapis.com), you can create A records mapping the custom DNS names to the PSC endpoint address.

8. If your on-premises clients cannot use custom DNS names, you can create A records mapping the default DNS names (e.g. storage.googleapis.com) to the PSC endpoint address.
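Step 6's advertisement is again a custom route on the Cloud Router (router name is a placeholder; 10.0.0.1 is the example External VPC endpoint). Note that --set-advertisement-ranges replaces the whole custom list, so include every range you advertise, such as the DNS egress proxy range:

```shell
# Advertise the External VPC's PSC endpoint to on-premises as a /32,
# alongside the Cloud DNS egress proxy range and all subnet routes.
gcloud compute routers update cr-external-usc1 \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.0.0.1/32,35.199.192.0/19
```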

5. Private Service Connect (PSC) for Google APIs (Access to APIs and Services Supported by VPC Service Controls)

Overview

You can use Private Service Connect (PSC) to access the Google APIs and services that support VPC Service Controls from Google Compute Engine hosts and from on-premises. Let’s consider PSC access to a service in Service Project 4 via the External VPC and Prod Shared VPC.

Creating PSC Endpoints

  1. Choose a PSC endpoint address (e.g. 10.0.0.1) and create a PSC endpoint in the External VPC with a target of “vpc-sc”, which gives access to the Google APIs and services that support VPC Service Controls.
  2. Choose a PSC endpoint address (e.g. 10.2.2.2) and create a PSC endpoint in the Prod Shared VPC with a target of “vpc-sc”. Service Directory automatically creates a DNS record (ending in p.googleapis.com) linked to each PSC endpoint IP address.
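The commands mirror the all-apis pattern; only the bundle differs (names are placeholders):

```shell
# Reserve the address and create a PSC endpoint targeting the vpc-sc
# bundle, which restricts access to VPC Service Controls-supported APIs.
gcloud compute addresses create pscvscaddr \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.2.2.2 \
    --network=prod-shared-vpc

gcloud compute forwarding-rules create pscvsc \
    --global \
    --network=prod-shared-vpc \
    --address=pscvscaddr \
    --target-google-apis-bundle=vpc-sc
```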

Access from GCE Hosts

The GCE-4 host in Service Project 4 can access the VPC Service Controls-supported Google APIs via the PSC endpoint (10.2.2.2) in the Prod Shared VPC.

3. Enable Private Google Access on all subnets with compute instances that require access to Google APIs via PSC.

4. If your GCE clients can use custom DNS names (e.g. storage-xyz.p.googleapis.com), you can use the auto-created p.googleapis.com DNS name.

5. If your GCE clients cannot use custom DNS names, you can create Cloud DNS records using the default DNS names (e.g. storage.googleapis.com).

Access from On-premises Hosts

On-premises hosts can access the VPC Service Controls-supported Google APIs via the PSC endpoint in the External VPC.

6. Advertise the PSC endpoint address to the on-premises network.

7. If your on-premises clients can use custom DNS names (e.g. storage-xyz.p.googleapis.com), you can create A records mapping the custom DNS names to the PSC endpoint address.

8. If your on-premises clients cannot use custom DNS names, you can create A records mapping the default DNS names (e.g. storage.googleapis.com) to the PSC endpoint address.

6. VPC Service Controls

Overview

VPC Service Controls uses ingress and egress rules to control access to and from a perimeter. The rules specify the direction of allowed access to and from different identities and resources.

Let’s consider a specific use case where we require access to a protected service in Service Project 4 via the External VPC and Prod Shared VPC. This is a simple use case and is not exhaustive.

Below is a description of the VPC Service Controls behavior for our specific scenario.

1) Service Project Perimeter

The perimeter contains our service project (Service Project 4); and includes Google APIs and services to be protected in the service project.

2) API access from GCE Hosts

A GCE client can access secured APIs through a PSC endpoint in a Shared VPC. Let’s consider the perimeter around Service Project 4. The network interface of the GCE-4 compute instance is in the Prod Shared VPC of the Hub Host Project. API calls from the GCE-4 instance to a service (e.g. storage.googleapis.com) in Service Project 4 appear to originate from the Hub Host Project, where the instance interface and PSC endpoint are located.

3) Ingress Rule — Hub Host Project into Perimeter

Configure an ingress rule that allows Google API calls from Hub Host Project to the protected services in Service Project 4 perimeter. This rule allows API calls from GCE instances (e.g. GCE-4) into the perimeter.
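An ingress rule of this shape might be expressed as a YAML policy file applied to the perimeter; the project number, perimeter name, and access policy ID below are placeholders, and storage.googleapis.com stands in for whichever protected services you allow:

```shell
# Ingress policy: allow any identity calling from the Hub Host Project
# (by project number) into the perimeter's protected Storage API.
cat > ingress.yaml <<'EOF'
- ingressFrom:
    identityType: ANY_IDENTITY
    sources:
    - resource: projects/111111111111
  ingressTo:
    operations:
    - serviceName: storage.googleapis.com
      methodSelectors:
      - method: "*"
    resources:
    - "*"
EOF

# Apply the ingress policy to the perimeter.
gcloud access-context-manager perimeters update service-project-4-perimeter \
    --set-ingress-policies=ingress.yaml \
    --policy=123456789
```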

4) API access for on-premises hosts

On-premises hosts can access secured APIs in Service Project 4 via the PSC endpoint in the External VPC. API calls from on-premises to services in Service Project 4 appear to originate from Hub Host Project — where the Interconnect and the PSC endpoint are located.

The ingress rule (in step 3) allows API calls from on-premises to Service Project 4 perimeter via Hub Host Project.

