Understanding Google Cloud Hybrid Connectivity
Disclaimer: all my writings, opinions and thoughts are mine.
Google Cloud offers several options when it comes to connecting on-premises environments to Google Cloud:
- Direct and Carrier peering
- Cloud VPN
- Cloud Interconnect
Many organizations find it difficult to choose a hybrid connectivity product. Hybrid connectivity is key in many use cases such as migration, multi-cloud environments, or simply hybrid workloads.
We will go through each solution separately to help you understand it better, followed by a clear comparison to help you choose the best one for your use case.
Internet Network Infrastructure 101
Let us first recall how the internet infrastructure works.
No one owns the entire internet infrastructure. The Internet is a collection of networks known as Autonomous Systems, interconnected through what we call internet peerings.
Each Autonomous System is identified by a public Autonomous System Number (ASN) and each ASN is associated with a pool of public IP addresses known as prefixes or IP ranges.
The primary and best-known Google public ASN is ASN15169.
Having said that, we can now draw a very simplified high-level architecture of the internet infrastructure as follows:
Note that for the sake of diagram simplicity, we did not draw all existing ASNs but only the Google ASN alongside some other well-known ones, such as:
- ASN3356: Lumen Technologies, formerly known as Level 3
- ASN3215: Orange
- ASN1299: Arelion, formerly known as Telia
- ASN15169: Google
Peerings between ASNs mostly happen in colocation facilities (a kind of “shared private” datacenter) and sometimes in Internet Exchanges, known as IXs (a kind of “shared public” datacenter).
Peerings between ASNs rely on the Border Gateway Protocol (BGP) to exchange routes dynamically. The following diagram shows an example of internet peering between Google and another Internet service provider (ISP), where Google advertises a prefix (34.129.0.0/16) and the ISP advertises its own prefix (8.23.254.0/24):
In simple words, BGP is a routing protocol for dynamically exchanging route information such as prefixes (IP address ranges). Using a routing protocol brings much more flexibility and scalability than configuring routes statically.
Direct peering
Direct peering is used when an organization wants to accelerate and optimize connectivity to Google services like YouTube, Google Maps, Google Workspace or any publicly exposed Google API. It enhances connectivity to those services by reducing latency and the number of hops. An enterprise may choose Direct peering when it consumes those services heavily.
How do you build direct peering? Well, as its name implies, Direct Peering is a direct BGP peering between the customer-managed router and the Google public ASN15169, as follows:
You can notice that an enterprise willing to connect to Google through Direct Peering must have a public Autonomous System Number (ASN). Enterprises can request a public ASN from ARIN (for US-based companies) or RIPE (for EU-based companies).
Enterprises without an ASN, or not willing to acquire one, can still connect through Carrier Peering, explained in the next section.
Note that Direct Peering does not give access to your Google Cloud VPC unless you build a Google Cloud VPN on top of it.
Carrier peering
What if an enterprise willing to connect directly to Google to optimize access to Google services is not willing to acquire an ASN? What if the enterprise does not have the necessary internal resources, people and BGP skills?
In that case, peering can be done through the GCP Carrier Peering solution, where the enterprise contracts with a third-party network carrier responsible for providing the peering connectivity through its own ASN (network). The following diagram illustrates this:
Note that Carrier Peering does not give access to your Google Cloud VPC unless you build a Google Cloud VPN on top of it.
Cloud VPN
Building a VPN is the fastest, cheapest and easiest way to set up connectivity from your on-premises environment to your Google Cloud VPC.
Google Cloud VPN consists of a highly available VPN Gateway (HA VPN Gateway) deployed in Google Cloud and managed by Google.
The HA VPN Gateway is not a single instance or appliance but a regionally distributed gateway natively designed for a 99.99% availability SLA.
At the time of writing, the legacy Classic VPN gateway is being deprecated and is not recommended, since high availability is not natively supported and must be set up manually across several Classic VPN gateways.
Once the HA VPN Gateway is deployed in your VPC, you can set up the IPsec tunnels from your on-premises IPsec client or appliance.
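As an illustration, here is a minimal gcloud sketch of the Google Cloud side of that deployment. All names and addresses (my-vpc, europe-west1, the on-premises public IP 203.0.113.10) are hypothetical placeholders:

```
# Create the HA VPN gateway in the VPC (Google allocates two public interfaces, 0 and 1);
# names and the peer IP below are placeholders
gcloud compute vpn-gateways create ha-vpn-gw \
    --network=my-vpc \
    --region=europe-west1

# Register the on-premises IPsec appliance as an external (peer) VPN gateway
gcloud compute external-vpn-gateways create on-prem-gw \
    --interfaces=0=203.0.113.10
```

The IPsec tunnels themselves and the BGP configuration are sketched a little further below.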
Exchanging and advertising routes within the IPsec tunnels between your on-premises IPsec appliance and your Google HA VPN Gateway can be done in a static way, as follows:
It can also be done in a dynamic way, through BGP sessions, by using a Google Cloud Router, as follows:
Note that Cloud VPN relies on IPsec on top of the Internet infrastructure, but it can also be built on top of a Direct/Carrier Peering connectivity to take maximum benefit of that peering.
It is important to bear in mind that the BGP session is built with private Autonomous System Numbers (ASNs), since it is private network connectivity over a virtual private network (the VPN tunnel).
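Continuing the hypothetical sketch above, the dynamic option could look roughly like this in gcloud; the private ASNs (65001 on the Google side, 65010 on premises), the 169.254.0.0/30 link-local addressing and the shared secret are all placeholder assumptions:

```
# Cloud Router holding the Google side of the BGP session (private ASN, placeholder value)
gcloud compute routers create cloud-router \
    --network=my-vpc \
    --region=europe-west1 \
    --asn=65001

# IPsec tunnel from interface 0 of the HA VPN gateway to the on-premises appliance
gcloud compute vpn-tunnels create tunnel-0 \
    --vpn-gateway=ha-vpn-gw \
    --peer-external-gateway=on-prem-gw \
    --peer-external-gateway-interface=0 \
    --interface=0 \
    --router=cloud-router \
    --ike-version=2 \
    --shared-secret=[SHARED_SECRET] \
    --region=europe-west1

# BGP session inside the tunnel: a /30 link-local network for the two BGP speakers
gcloud compute routers add-interface cloud-router \
    --interface-name=if-tunnel-0 \
    --vpn-tunnel=tunnel-0 \
    --ip-address=169.254.0.1 \
    --mask-length=30 \
    --region=europe-west1

gcloud compute routers add-bgp-peer cloud-router \
    --peer-name=on-prem-peer-0 \
    --interface=if-tunnel-0 \
    --peer-ip-address=169.254.0.2 \
    --peer-asn=65010 \
    --region=europe-west1
```

The mirror configuration (tunnel endpoint, 169.254.0.2 and BGP towards ASN 65001) then has to be applied on the on-premises appliance.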
High availability and SLAs for Cloud VPN
Cloud VPN's underlying physical infrastructure relies on the Internet, meaning that packets are delivered in a best-effort way with no SLA. However, Google Cloud can provide a 99.99% availability SLA, but at the gateway level, as follows:
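To actually qualify for that 99.99% SLA, both interfaces of the HA VPN gateway must carry a tunnel towards the on-premises side. Continuing the same hypothetical sketch, the second tunnel would be created on interface 1:

```
# Second tunnel, on interface 1 of the same HA VPN gateway (placeholder names and secret)
gcloud compute vpn-tunnels create tunnel-1 \
    --vpn-gateway=ha-vpn-gw \
    --peer-external-gateway=on-prem-gw \
    --peer-external-gateway-interface=0 \
    --interface=1 \
    --router=cloud-router \
    --ike-version=2 \
    --shared-secret=[SHARED_SECRET] \
    --region=europe-west1

# A matching Cloud Router interface and BGP peer are then added for tunnel-1,
# exactly as done for tunnel-0 (for example 169.254.1.1/30 and peer 169.254.1.2).
```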
Cloud Interconnect — Dedicated Interconnect
Cloud Interconnect comes in two flavors, Dedicated Interconnect and Partner Interconnect.
Dedicated Interconnect is currently available for bandwidth from 10 Gbps to 100 Gbps.
In both cases, connecting to a single Google Cloud region gives you access to your workloads in any region thanks to the global reach of Google’s backbone.
Provision the physical connectivity
Dedicated Interconnect is a direct, private, physical connection to Google Cloud, giving access to your VPC within a specific Google Cloud region.
This direct physical connectivity must happen in a colocation facility where Google Cloud has a Dedicated Interconnect point of presence. The full list of supported colocation facilities is available in the official documentation.
It basically consists of a cross-connect within the colocation facility, as follows:
If the enterprise is not physically present in a colocation facility where Google Cloud Dedicated Interconnect is available, it can buy a dark fiber optic pair to reach the colocation facility where Google Cloud is present, with a maximum distance constraint of 10 km (10GBASE-LR or 100GBASE-LR4 optics).
That dark fiber pair can be bought from the colocation facility provider or from a network service provider, as per the following diagram:
Where the distance is more than 10 km, the dark fiber pair must be lit by a DWDM optical system. Instead of building its own DWDM point-to-point connection, the enterprise can rent one from a network service provider. The following diagram illustrates this:
Provision the logical connectivity
Provisioning the logical connectivity consists of “building” the layer 2 and layer 3 connectivity as per the OSI model.
Concretely, it means creating VLAN attachments to map the VPC you want to connect to your on-premises VLANs (layer 2), on top of which you will need to build a BGP session to exchange routing information (layer 3), as follows:
Note that the BGP session is built with private Autonomous System Numbers (ASNs), since it is private network connectivity over a private physical network.
In this schema, the VLANs between the two routers are different VLANs than the on-premises VLANs, even if they are drawn in similar colors.
Indeed, you build a peering between two different networks (two routers):
- You connect these routers physically with a cable (optical fiber)
- The connection creates a network between the two routers.
- This connection is a very small network, generally a /29, since we only need IP addresses on the router interfaces.
- However, when you want multiple separate logical networks on top of this physical cable, you need VLANs to logically separate that cable and avoid deploying multiple physical cables.
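As a hedged illustration of that logical provisioning, the gcloud commands could look roughly like this, assuming a hypothetical Dedicated Interconnect named my-dedicated-ic already provisioned in europe-west1, a VPC my-vpc, VLAN ID 100 and private ASNs 65001/65010; the 169.254.x.x addresses stand for the /29 reported by the attachment:

```
# Cloud Router holding the Google side of the BGP session over the interconnect
gcloud compute routers create ic-router \
    --network=my-vpc \
    --region=europe-west1 \
    --asn=65001

# VLAN attachment: one 802.1Q VLAN on the interconnect, mapped to the VPC via the router
gcloud compute interconnects attachments dedicated create vlan-attachment-100 \
    --interconnect=my-dedicated-ic \
    --router=ic-router \
    --region=europe-west1 \
    --vlan=100

# Layer 3 on top of the attachment: Cloud Router interface and BGP peer
# (use the /29 addresses allocated to the attachment; placeholder values here)
gcloud compute routers add-interface ic-router \
    --interface-name=if-vlan-100 \
    --interconnect-attachment=vlan-attachment-100 \
    --ip-address=169.254.10.1 \
    --mask-length=29 \
    --region=europe-west1

gcloud compute routers add-bgp-peer ic-router \
    --peer-name=onprem-peer \
    --interface=if-vlan-100 \
    --peer-ip-address=169.254.10.2 \
    --peer-asn=65010 \
    --region=europe-west1
```

The same VLAN ID, /29 addressing and BGP configuration then have to be mirrored on your own router in the colocation facility.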
High availability and SLAs
To reach the 99.99% availability SLA for Dedicated Interconnect, enterprises must deploy it through two different colocation facilities located in different metro areas (cities), with two links per metro, as follows:
Note that in each colocation facility the Dedicated Interconnect links must use diverse zones. Colocation facilities are always split into multiple standalone zones for resiliency purposes.
Cloud Interconnect — Layer 2 Partner Interconnect
Partner Interconnect provides private connectivity from 50 Mbps to 50 Gbps (at the time of writing) through a supported Cloud Interconnect partner.
That partner possesses existing layer 2 peerings with Google Cloud which are multi-tenant, meaning they are not dedicated to a single customer but shared across all of its customers.
In the case of Layer 2 Partner Interconnect, the service provider only provides layer 2 connectivity to the customer, on top of which the customer must build its own layer 3 connectivity to the Google Cloud Router (the BGP session).
The physical connectivity
As already mentioned, the physical multi-tenant connectivity between the service provider partner and Google Cloud is already in place, as illustrated on the left side of the diagram below.
Only the physical connectivity between the service provider partner and the customer remains to be built for each customer, as illustrated on the right side of the diagram below:
That physical connection from the customer to the service provider partner is called ‘the last mile’ or ‘access network’.
The service provider partner will connect the customer facility with optical fiber and will install a managed switch to deliver and manage the layer 2 connectivity.
This kind of service provider connectivity, from the customer premises to Google Cloud, is often called a pseudo-wire, because it is a point-to-point connection whose path does not rely on a single physical wire end to end.
The logical connectivity
It is the customer’s responsibility to:
- Map its Google Cloud VPCs to its on-premises VLANs by configuring the VLAN attachments in the Google Cloud console
- Build the layer 3 connectivity between its own on-premises router and the Google Cloud Router through a BGP session to advertise its on-premises IP address ranges.
You can notice that the service provider partner handles most of the job excluding the BGP session.
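As a rough sketch of the customer side, and assuming a hypothetical Cloud Router ic-router already created in europe-west1 as in the previous examples, the VLAN attachment for a Partner Interconnect is created without referencing any interconnect of your own; it returns a pairing key that you hand over to the service provider partner:

```
# Partner VLAN attachment: the partner provides the physical circuit (names are placeholders)
gcloud compute interconnects attachments partner create my-partner-attachment \
    --router=ic-router \
    --region=europe-west1 \
    --edge-availability-domain=availability-domain-1

# Retrieve the pairing key and give it to the service provider partner
gcloud compute interconnects attachments describe my-partner-attachment \
    --region=europe-west1 \
    --format="value(pairingKey)"
```

Once the partner has provisioned the layer 2 circuit, you complete the BGP session (your on-premises ASN and addresses) on the Cloud Router and on your own router.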
Cloud Interconnect — Layer 3 Partner Interconnect
The physical connectivity
The physical connectivity remains the same as for Layer 2 Partner Interconnect.
However, the main differences compared to Layer 2 Partner Interconnect lie in the logical connectivity, as described hereafter.
The logical connectivity
The following diagram shows how the logical connectivity is built:
Now, let’s break down each component to understand what is going on in such a configuration:
- A switch and a router are deployed on-premises and managed by the partner service provider
- The service provider partner builds a VPN over its private MPLS network for the customer and isolates customer traffic through the use of a VRF (basically, one VRF equals one VPN)
- The service provider partner is responsible for the BGP session between its managed on-premises router and its edge router on its backbone
- The customer must advertise its on-premises IP address ranges, statically or dynamically, from its own router to the service provider’s managed on-premises router
- The service provider partner builds and manages another BGP session with the customer’s Google Cloud Router. That session must be built on top of the service provider’s existing multi-tenant layer 2 peering with Google Cloud. Of course, the customer must retrieve from the GCP console all the information the service provider needs to configure that BGP session
You can notice that the service provider partner handles most of the job including the BGP sessions.
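From a tooling point of view, the customer side is then mostly limited to creating the partner attachment and sharing the pairing key, exactly as in the Layer 2 sketch above, and finally accepting the attachment once the partner has configured everything. A hedged example, with placeholder attachment name and region:

```
# Activate the partner attachment after the service provider has configured its side
gcloud compute interconnects attachments partner update my-partner-attachment \
    --region=europe-west1 \
    --admin-enabled
```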
Cloud Interconnect — High availability
High availability is a hot topic in network discussions. Many people believe that buying from two different network service providers will help them achieve high availability.
However, physical fiber optic routes often follow the same paths for all service providers: highways, railways, submarine cables and so on.
Plus, these service providers often do wholesale business, sharing and selling network capacity with each other.
Moreover, high availability implies a contractual agreement (an SLA), which is only possible between two parties: the customer and the provider.
The best practice is to contract high availability with SLAs from a single service provider.
The following diagram shows an example of last-mile HA to connect to the service provider for a Layer 3 Partner Interconnect:
Please note that service providers always peer with Google Cloud in a highly available way as a standard; that is why the previous schema only shows the access network part.
In all cases, the service provider partner will assess technical feasibility, business feasibility and SLAs on a per customer basis to provide a tailored solution.
Hybrid connectivity to multiple GCP regions
The Google Cloud network is global, so VPCs are natively global, and connecting to a single region lets you benefit from Google’s global backbone to reach other GCP regions through the Google global IP network.
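One practical detail worth checking: for a single Cloud Router (and thus a single interconnect or VPN) to exchange routes with subnets in every region, the VPC’s dynamic routing mode must be set to global, for example (the VPC name is a placeholder):

```
# Let Cloud Routers advertise and learn routes for all regions of the VPC
gcloud compute networks update my-vpc --bgp-routing-mode=global
```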
Still wondering which one to choose?
Comparison between them all
What a journey! We now understand each Google Cloud hybrid connectivity option individually, but you may still wonder what the differences are between them all, especially between Dedicated Interconnect and Partner Interconnect:
- Direct peering and Carrier peering are pretty simple, right? They provide a direct connection to Google, not Google Cloud, for direct access to Google public services like YouTube or Workspace.
- Cloud VPN needs an internet access on top of which you set up IPsec tunnels to reach Google Cloud. Given that getting internet access is pretty easy, it is the easiest and cheapest way to connect to your Google Cloud VPCs. Of course, if you are already using Direct or Carrier Peering, you can build your Cloud VPN on top of it to take maximum advantage of it, which also gives your VPN better latency and performance.
- Cloud Interconnect comes in two flavors, Dedicated Interconnect and Partner Interconnect, to connect to your GCP VPCs:
- Dedicated Interconnect when you can easily meet Google Cloud within a colocation facility, from 10 Gbps to 100 Gbps. You must build and manage the layer 3 (BGP sessions) yourself.
- Partner Interconnect if meeting Google Cloud in a colocation facility is not that easy for you, if you are used to working with a specific service provider for network connectivity, if you want smaller network speeds starting at 50 Mbps, if you do not want to manage BGP sessions (perhaps because your teams are not familiar with them), or if you need specific SLAs for the underlying network infrastructure.
Decision tree to help you decide
You can notice that the decision tree does not take bandwidth into account, despite the native limits of each product. This is because you can always work around those limits by parallelizing physical and logical circuits to meet your needs.
Takeaways and summary
We went very deep to understand all the options offered by Google Cloud to help you connect your on-premises workloads. As an enterprise, you can also get in touch with Google Cloud to discuss your needs and get help.
Google Cloud has strong partnerships with integrators and partners to help you achieve your goals and focus on what is most important to you: your core business and your digital transformation!