Hybrid Networking: Google Cloud Interconnect
It’s clear that you won’t migrate your entire on-prem environment to the cloud in one go. It makes more sense to migrate portions at a time where appropriate. As such, you’ll still need a way for your on-prem systems to communicate with your newly minted cloud resources.
Sure, you can reach your cloud resources with the appropriate VPC, firewall rules, and IPs in place, but if your hybrid architecture requires connections with higher availability and lower latency, then a direct connection to Google’s network can offer better reliability and performance.
Bursting to the Cloud Requires Speed
So, let’s look at how this plays out in reality. Black Friday is coming up, and you’re expecting a significant spike in traffic to your site — and a corresponding spike in the compute resources needed to support it. Now, if you have that compute in a GCP VPC, you could reach it directly from your on-prem systems. This would technically work: you’d get scalable systems in the cloud and secure processing on-prem.
On the flip side, however, there are a load of potential issues. First, this forces the communication between your VPC and on-prem to go over the public internet through publicly visible IPs, most likely requiring you to secure the traffic via a VPN in the middle. Second, this creates a significant performance challenge: the public internet is not the most performant path, and the extra VPN overhead on each packet can lead to less-than-desired performance on a major shopping holiday.
Instead, you want to connect your on-premises workloads to the public cloud in a secure, fast, and reliable manner. In other words, you want to extend your on-premises private network into Google Cloud over a dedicated link.
Google Cloud Interconnect
The purpose of Google Cloud Interconnect is to provide a direct, fast, and secure connection between your on-premises network and Google Cloud. This mix of security and performance is critically important for a large number of industries whose workloads span on-prem and cloud — for example, data migration, replication, disaster recovery, and other high-performance computing scenarios. In these cases, it’s invaluable to connect directly to your cloud systems while extending your private, on-prem network into the cloud. Google Cloud Interconnect provides a few options to suit your specific needs, but I want to highlight Dedicated Interconnect, which provides direct physical connections between your on-premises network and Google’s network.
It works like this:
- You set up a cross connect between your own router and Google’s network in a common colocation facility. If you can’t do this, the official docs have some options for you.
- A BGP session is configured over the interconnect between the Cloud Router and your on-premises router.
- Traffic from your on-premises network can reach your Google Cloud VPC and vice versa.
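As a sketch of the second step, here’s how you might create the Cloud Router that will run the BGP session. The network name, router name, region, and ASN below are placeholder assumptions, not values from any real setup:

```shell
# Create a Cloud Router in the VPC that will peer with your on-prem router.
# "my-vpc", "my-router", the region, and the ASN are placeholders — pick a
# private ASN (64512–65534) that doesn't clash with your on-prem ASN.
gcloud compute routers create my-router \
    --network=my-vpc \
    --region=us-central1 \
    --asn=65001
```

The Cloud Router is what makes the whole arrangement dynamic: instead of maintaining static routes on both sides, BGP advertises your VPC subnets to on-prem and learns your on-prem prefixes in return.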
With Dedicated Interconnect, the immediate benefit is an enterprise-grade connection to your Google VPC over dedicated 10-Gbps circuits that terminate directly at a Google location (you can provision multiple circuits to scale capacity). You can also save on egress costs: transferring large amounts of data from your VPC network to your on-premises network over the dedicated connection can be more cost-effective than purchasing additional bandwidth over the public internet. On top of that, you’ll experience fewer disruptions and drops with a more predictable user experience — traffic between your on-premises and VPC networks doesn’t touch the public internet, meaning fewer hops and fewer potential points of failure. And a key benefit: your VPC’s internal IP addresses can be reached directly from your on-premises network — no NAT device or VPN tunnel required.
Dedicated Interconnect Setup
Now let’s dive into how to actually get this set up. First, make sure you’ve got a VPC set up for your cloud environment — if you haven’t, see my previous article. The first step in using Cloud Interconnect is deciding whether you want a Dedicated connection or a Partner connection.
While there are a few nuances between the two (depending on your needs), the primary ones that I’m concerned with are:
- Speed — Dedicated Interconnect is ideal for situations where you need connections of 10 Gbps or more per circuit (Partner Interconnect can be used for lower-speed needs).
- Access — If your on-prem network can’t physically meet Google’s network in a supported colocation facility, you can use Partner Interconnect instead.
Now, for the sake of my demo, I’m going to use Dedicated Interconnect, because it’s a little simpler and can be cheaper in some situations. Let’s walk through how to set this up.
Now, it’s important to note that you are, in fact, ordering a dedicated connection from your on-prem to Google. Once Google has allocated the connection, it sends you a confirmation email along with LOA-CFAs (Letters of Authorization and Connecting Facility Assignment), which you forward to your vendor. The vendor then provisions the cross connects between the Google peering edge and your on-premises network.
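The order itself can be placed from the Console or with `gcloud`. A minimal sketch of the CLI route — the interconnect name, facility, and customer name below are placeholder assumptions; the `--location` must be one of the facilities returned by the `locations list` command:

```shell
# Browse the colocation facilities where Google's network is available.
gcloud compute interconnects locations list

# Order a single 10-Gbps dedicated connection in a chosen facility.
# The customer name appears on the LOA-CFA that Google generates.
gcloud compute interconnects create my-interconnect \
    --customer-name="Example Corp" \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --location=iad-zone1-1 \
    --requested-link-count=1 \
    --admin-enabled
```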
Once this is all set up, Google tests the connection before you can use the interconnect directly. Once Google verifies the cross connect, your Interconnect is ready to configure, and you can set up a VLAN attachment.
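A sketch of that configuration step, reusing the placeholder router and interconnect names from above (the attachment name, interface name, and peer ASN are also assumptions; Google allocates the link-local BGP addresses on the attachment, which you can read back with `gcloud compute interconnects attachments describe`):

```shell
# Create a VLAN attachment tying the interconnect to the Cloud Router.
gcloud compute interconnects attachments dedicated create my-attachment \
    --region=us-central1 \
    --router=my-router \
    --interconnect=my-interconnect

# Add an interface on the Cloud Router for the attachment, then configure
# the BGP peer representing your on-prem router.
gcloud compute routers add-interface my-router \
    --region=us-central1 \
    --interface-name=if-my-attachment \
    --interconnect-attachment=my-attachment

gcloud compute routers add-bgp-peer my-router \
    --region=us-central1 \
    --peer-name=onprem-peer \
    --interface=if-my-attachment \
    --peer-asn=65002 \
    --peer-ip-address=169.254.10.2  # placeholder: use the customer-side
                                    # address allocated on the attachment
```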
Depending on your availability needs, you can configure Dedicated Interconnect to support mission-critical services or applications that can tolerate some downtime. To achieve a specific level of reliability, Google offers two prescriptive topologies: one for 99.99% availability and another for 99.9% availability. For mission-critical workloads, Google recommends the 99.99% configuration. Check out the documentation for setting up a redundant interconnect.
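Whichever topology you choose, before sending production traffic you’ll want to confirm the physical connection is live. One way to check, using the placeholder interconnect name from earlier:

```shell
# Inspect the interconnect; operationalStatus should read OS_ACTIVE
# once Google has verified the cross connect.
gcloud compute interconnects describe my-interconnect \
    --format="value(operationalStatus)"
```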
Now that you have configured a dedicated connection to Google’s network, the next step is to protect your Cloud instances by configuring firewall rules, which I will cover in the next article. Stay tuned.