Enabling enterprise-grade migration across cloud providers — Achieving Network Transitivity on an enterprise scale in GCP

Rohan Singh
Google Cloud - Community
8 min read · Jun 11, 2024

Transitioning between cloud platforms can be a complex and intricate process that requires a number of mandatory configurations and setups. Enterprises often go through a significant amount of back-and-forth before reaching a tipping point where they can expect a smooth migration.

This solution blog is a collaboration of Rahul Kumar Singh (Staff Cloud Infrastructure Engineer) and Rohan Singh (Senior Cloud Infrastructure Engineer and Google Cloud Champion Innovator) at SADA Systems — An Insight Company.

When migrating from another cloud service provider to Google Cloud Platform (GCP), such as transitioning from AWS to GCP, establishing connectivity between the platforms is imperative. The process is relatively straightforward when moving from a single cloud to a single GCP project. However, within large enterprises comprising multiple organizational subunits, configuring the setup for this connectivity becomes significantly complex and can lead to a chaotic transition.

Opting for one-to-one connectivity means establishing a direct link between each individual network in AWS and its counterpart in GCP. While this ensures dedicated and potentially faster connections, it also means greater expense. For instance, you can expect to pay approximately $100-$200 per month (it may vary) for an HA VPN setup between GCP and AWS; the actual cost may be higher or lower depending on the specific configuration and usage. For a rough estimate at higher quantities, assume a fixed cost of $200 per connection: six connections already come to approximately $1200 per month (6 connections * $200 each), and every connection beyond six only adds to that figure. Please note that this estimation is rough and will vary based on actual configurations and other factors.
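
To make the arithmetic concrete, here is a tiny sketch of that linear cost model; the $200 figure is just the assumption from above, not actual pricing:

```python
# Rough, illustrative estimate of one-to-one HA VPN costs between AWS and GCP.
# The $200/month per connection figure is an assumption from the discussion
# above; real pricing depends on tunnel count, egress traffic, and region.
MONTHLY_COST_PER_CONNECTION_USD = 200

def estimate_monthly_cost(num_connections: int) -> int:
    """Linear cost model: every AWS VPC <-> GCP project pair gets its own HA VPN."""
    return num_connections * MONTHLY_COST_PER_CONNECTION_USD

for n in (2, 6, 12):
    print(f"{n} connections -> ~${estimate_monthly_cost(n)}/month")
# 6 connections -> ~$1200/month, and the bill keeps growing as more
# organizational sub-units are migrated, which motivates a transitive design.
```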

The recommended solution involves establishing transitive network characteristics between AWS and GCP for enhanced connectivity. This configuration aims to facilitate seamless communication and data transfer between the two cloud platforms. The architectural diagram below illustrates the network structure necessary to achieve this transitive connection:

Cross-Cloud Network Architecture Diagram


The diagram has three parts: the left side represents GCP, the right side represents AWS, and the middle is the conduit connecting GCP and AWS.

Let’s understand our use case

Consider a migration initiative from AWS to GCP. The enterprise currently operates multiple VPCs and accounts within AWS, each representing a distinct organizational sub-unit or department. For instance, take two departments located in New York, USA, and Bengaluru, India. Each department's resources are segregated into a separate VPC, located in a different geographical region within the same AWS account.

Their objective is to migrate both departments’ resources to GCP by organizing them into distinct folders and subsequently into projects under the same organizational umbrella in Google Cloud. This migration strategy aligns with Google Cloud’s Shared VPC model, enabling the segregation of resources into different projects within the same organization.

Let’s zoom in on GCP and see what the configurations look like

Within GCP, the infrastructure follows the Shared VPC practice, featuring host projects dedicated to the New York and Bengaluru towers, each supported by individual service projects. A pivotal component of the GCP architecture is the transit VPC, which manages the VPN connections to the New York and Bengaluru host projects and terminates a Partner Interconnect to AWS via a Partner Interconnect service provider such as PacketFabric or Equinix.
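
For illustration only, here is a minimal sketch of how the transit VPC, its interconnect-facing Cloud Router, and a Partner Interconnect attachment might be provisioned with the gcloud CLI (invoked from Python to keep all examples in one language). The project ID, region, ASN, and resource names are hypothetical, and the commands assume gcloud is installed and authenticated:

```python
import subprocess

# Hypothetical identifiers; substitute your own.
TRANSIT_PROJECT = "transit-vpc-project"
REGION = "us-east4"

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args, f"--project={TRANSIT_PROJECT}"], check=True)

# Custom-mode transit VPC that terminates the VPNs and the Partner Interconnect.
gcloud("compute", "networks", "create", "transit-vpc", "--subnet-mode=custom")

# Cloud Router (R2) dedicated to the Partner Interconnect attachment.
gcloud("compute", "routers", "create", "r2-interconnect",
       "--network=transit-vpc", f"--region={REGION}", "--asn=65010")

# Partner Interconnect attachment; the pairing key it returns is handed to the
# service provider (e.g. PacketFabric or Equinix) to complete the connection.
gcloud("compute", "interconnects", "attachments", "partner", "create",
       "aws-attachment-1", f"--region={REGION}", "--router=r2-interconnect",
       "--edge-availability-domain=availability-domain-1")
```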

Once the Partner Interconnect connection is established with a service provider, multiple tunnels are created through it to ensure high availability between AWS and GCP. With the CIDR-related configuration appropriately in place on both sides, the ranges advertised from each cloud are clearly visible to the other.

The architectural diagram below provides a detailed visualization of this network infrastructure within GCP, emphasizing the connectivity and configurations established for robust communication between AWS and GCP:

Detailed Cross-Cloud Network Architecture Diagram

Let’s zoom in on AWS and see what the configurations look like

We have one AWS Transit Gateway acting as a hub and spoke between multiple VPCs. The Transit Gateway connects to each VPC through a Transit Gateway attachment: we associate a subnet in each required Availability Zone with the attachment, which routes traffic via the Transit Gateway.
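
As a hedged boto3 sketch of that hub-and-spoke wiring (the region, VPC, and subnet values are placeholders, and it only covers spokes within a single region):

```python
import boto3

# Placeholder IDs; assumes AWS credentials with EC2/Transit Gateway permissions.
ec2 = boto3.client("ec2", region_name="us-east-1")

# One Transit Gateway acts as the hub for the spoke VPCs in this region.
tgw = ec2.create_transit_gateway(Description="Hub for spoke VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each spoke VPC by associating one subnet per required Availability Zone.
spokes = {
    "vpc-0aaa1111bbb22222c": ["subnet-0123abcd4567ef890"],   # spoke VPC A
    "vpc-0ddd3333eee44444f": ["subnet-0fedc9876ba5432100"],  # spoke VPC B
}
for vpc_id, subnet_ids in spokes.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```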

There is also AWS Direct Connect, which is roughly equivalent to GCP Interconnect: it provides the shortest path to your cloud while traffic remains private, with speeds varying between 1 and 100 Gbps. Direct Connect uses a Direct Connect Gateway to control traffic movement within AWS.
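
Similarly, a hedged boto3 sketch of creating a Direct Connect gateway and associating it with the Transit Gateway; the name, ASN, IDs, and allowed prefix are placeholders:

```python
import boto3

# Placeholder values; assumes credentials with Direct Connect permissions.
dx = boto3.client("directconnect", region_name="us-east-1")

# Direct Connect gateway: the AWS-side anchor for the private path from GCP.
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="gcp-migration-dx-gw",
    amazonSideAsn=64512,
)
dx_gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate it with the Transit Gateway so Direct Connect traffic can reach
# the spoke VPCs, and allow the prefixes to be announced towards GCP.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dx_gw_id,
    gatewayId="tgw-0123456789abcdef0",
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "172.31.0.0/16"}],
)
```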

Hub and Spoke Network Model

GCP Partner Interconnect Service Provider

Traffic flow between the clouds

Now that we understand the network configuration, let's look at the traffic flow between the clouds (GCP→AWS).

Suppose the enterprise's application, operating within a service project dedicated to the New York tower, requires access to an AWS RDS instance: the database is still hosted on AWS and has not yet been migrated. The traffic originates from this application, which runs on Google Kubernetes Engine (GKE) within GCP.

Assuming the necessary networking rules are in place within GKE to enable outbound traffic, the initial point of interaction for this outbound traffic would be the Cloud Router established within the New York host project in GCP.

Upon reaching the Cloud Router within the New York host project, the traffic follows a dynamic route established over a VPN tunnel between the host project and the transit VPC project. This dynamic route carries the traffic from the Cloud Router in the New York host project to the transit VPC project.

The transit VPC project has two Cloud Routers (R1 and R2):

  1. R1 is connected to all the VPN connections from New York and Bengaluru host project towers.
  2. R2 is connected to the interconnect which is further connected to the partner interconnect provider.

The question arises: Why opt for two routers instead of one in our networking setup?

The decision to employ two routers stems from the configuration involving two distinct VPN connections. The first VPN connection serves the purpose of linking the Transit VPC with the host projects associated with different towers. Meanwhile, the second VPN connection establishes connectivity between the Transit VPC and the partner interconnect provider.

The reason for utilizing two Cloud Routers is rooted in the limitation that a single Cloud Router can only be assigned to one VPN connection at a time. As a result, a dual-router setup becomes imperative to accommodate and manage these separate VPN connections effectively. This configuration ensures each VPN connection has its dedicated Cloud Router to facilitate smooth and independent connectivity between the various components within our network architecture.
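
A minimal sketch of the second router, R1, and one HA VPN tunnel from the New York host project terminating on it, continuing the hypothetical names and the gcloud-from-Python pattern used earlier (the peer gateway, ASNs, and link-local addresses are assumptions):

```python
import subprocess

# Hypothetical project, region, and resource names, continuing the sketch above.
TRANSIT_PROJECT = "transit-vpc-project"
REGION = "us-east4"

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args, f"--project={TRANSIT_PROJECT}"], check=True)

# Cloud Router (R1) dedicated to the HA VPN tunnels from the tower host projects.
gcloud("compute", "routers", "create", "r1-vpn",
       "--network=transit-vpc", f"--region={REGION}", "--asn=65020")

# HA VPN gateway in the transit VPC and one tunnel towards the New York host
# project's HA VPN gateway (assumed to already exist on the other side).
gcloud("compute", "vpn-gateways", "create", "transit-ha-gw",
       "--network=transit-vpc", f"--region={REGION}")
gcloud("compute", "vpn-tunnels", "create", "to-ny-host-0",
       "--vpn-gateway=transit-ha-gw", "--peer-gcp-gateway=ny-host-ha-gw",
       "--router=r1-vpn", "--interface=0",
       "--shared-secret=REPLACE_ME", f"--region={REGION}")

# BGP interface and peer so dynamic routes are exchanged with the host project.
gcloud("compute", "routers", "add-interface", "r1-vpn",
       "--interface-name=if-to-ny-0", "--vpn-tunnel=to-ny-host-0",
       "--ip-address=169.254.10.1", "--mask-length=30", f"--region={REGION}")
gcloud("compute", "routers", "add-bgp-peer", "r1-vpn",
       "--peer-name=ny-host-peer-0", "--interface=if-to-ny-0",
       "--peer-ip-address=169.254.10.2", "--peer-asn=65030", f"--region={REGION}")
```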

Continuing forward, the traffic that has now arrived at the Cloud Router (R1) within the transit VPC becomes visible to R2, given their shared presence within the same VPC. Subsequently, the traffic proceeds from R2, marking the point where it exits the GCP network, initiating the journey towards the partner interconnect provider.

At this stage, the traffic has completed half of its route, having traversed through the internal networking components within the transit VPC. The onward path from R2 signifies the transition from the GCP environment towards the partner interconnect provider, which plays a crucial role in establishing connectivity beyond the GCP network boundary.

The partner interconnect provider operates a dedicated router (distinct from GCP’s Cloud Router) responsible for managing connectivities between GCP and AWS through VPN tunnels. These tunnels establish bidirectional communication, enabling data transmission between GCP and AWS environments.

Within these VPN tunnels, the CIDR ranges originating from GCP and from AWS are permitted in the Border Gateway Protocol (BGP) sessions. This configuration enables each side to receive and advertise CIDRs, allowing the seamless exchange of network information between GCP and AWS.
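
As an example of what that advertisement might look like on the interconnect-facing Cloud Router, here is a sketch with hypothetical peer names and CIDRs, setting custom advertised routes on the BGP peer (same gcloud-from-Python pattern as before):

```python
import subprocess

# Hypothetical names and ranges; substitute your own.
TRANSIT_PROJECT = "transit-vpc-project"
REGION = "us-east4"

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args, f"--project={TRANSIT_PROJECT}"], check=True)

# Advertise the GCP-side ranges (subnets plus PSC ranges) to AWS over the BGP
# session that runs towards the partner interconnect provider.
gcloud("compute", "routers", "update-bgp-peer", "r2-interconnect",
       "--peer-name=aws-peer-0",
       "--advertisement-mode=CUSTOM",
       "--set-advertisement-groups=ALL_SUBNETS",
       "--set-advertisement-ranges=10.20.0.0/16,10.30.0.0/20",
       f"--region={REGION}")
```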

To ensure high availability and fault tolerance in our interconnected network, our setup includes four tunnels within the partner interconnect provider. This multiplicity of tunnels enhances redundancy and reliability, minimizing the risk of disruption by providing alternative paths for data transmission.

It’s important to note that while the number of tunnels can be increased for further redundancy, careful consideration must be given to GCP’s resource limits.

Considering the mention of four tunnels, it’s natural to question the bandwidth allocation for these connections.

Determining the bandwidth for these tunnels typically involves a collaborative discussion with the customer and adhering to the bandwidth limits supported by the partner interconnect provider. While it’s possible to have a minimum of two tunnels configured with high bandwidth, it’s essential to note that higher bandwidth often comes with increased costs. Therefore, choosing the appropriate bandwidth for these tunnels requires careful consideration.

One key factor to consider is optimizing the bandwidth to avoid overprovisioning. Allocating higher bandwidth than necessary might result in underutilization and unnecessary expenditure. However, if budget constraints are not an issue, the bandwidth allocation can be adjusted as needed.
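
A toy sizing check to make the overprovisioning point concrete; the capacities and traffic figure below are assumptions, not provider pricing:

```python
# Illustrative sizing check: pick the smallest per-tunnel capacity that still
# carries the expected peak with one tunnel lost (N-1 redundancy).
# The capacity tiers here are examples, not a provider's actual offering.
TUNNEL_CAPACITIES_GBPS = [0.5, 1, 2, 5, 10]

def pick_capacity(peak_gbps: float, tunnels: int) -> float:
    usable_tunnels = tunnels - 1  # survive the loss of one tunnel
    for capacity in TUNNEL_CAPACITIES_GBPS:
        if capacity * usable_tunnels >= peak_gbps:
            return capacity
    raise ValueError("expected peak exceeds the largest available capacity")

# Example: ~3 Gbps expected peak across 4 tunnels -> 1 Gbps per tunnel suffices;
# buying 5 Gbps tunnels here would be pure overprovisioning.
print(pick_capacity(peak_gbps=3.0, tunnels=4))
```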

Progressing further in the networking setup, traffic flows from the partner interconnect provider to AWS Direct Connect, using established routes to reach the Transit Gateway. From there, route tables and security groups steer the traffic to specific instances in their respective Virtual Private Clouds (VPCs) within the AWS environment.
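
On the AWS side, that navigation relies on route-table entries that point the GCP-bound CIDRs at the Transit Gateway (this also covers the return-path note later in this post). A hedged boto3 sketch with placeholder IDs and an assumed GCP-side range:

```python
import boto3

# Placeholder IDs and CIDR; assumes credentials with permission to edit routes.
ec2 = boto3.client("ec2", region_name="us-east-1")

GCP_CIDR = "10.20.0.0/16"            # GCP-side range advertised over BGP
VPC_ROUTE_TABLE = "rtb-0123456789abcdef0"
TRANSIT_GATEWAY = "tgw-0123456789abcdef0"

# Send traffic destined for the GCP range to the Transit Gateway, so replies to
# GCP-originated requests return along the same Direct Connect path.
ec2.create_route(
    RouteTableId=VPC_ROUTE_TABLE,
    DestinationCidrBlock=GCP_CIDR,
    TransitGatewayId=TRANSIT_GATEWAY,
)
```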

Now, you might be wondering about the connectivity to resources located in other VPCs within AWS but situated in different regions.

To provide this connectivity cost-effectively, a VPN connection has been set up between the VPCs in the different regions. It's important to note that this is a deliberate cost-over-performance trade-off: inter-region VPN keeps expenses down, but it comes with potential limitations on performance compared with dedicated inter-region connectivity.

Important points to note

  • GCP Subnet CIDRs and Private Service Connect (PSC) CIDRs (e.g., CloudSQL, Memorystore) requiring access to AWS resources should be added to the Security Group of each resource individually.
  • Enabling import/export Custom Routes is necessary for both Private Service Connect and GKE VPC Network Peering Connection.
  • Routes pertaining to Private Service Connect (PSC) must be advertised over BGP as custom routes since they aren’t automatically advertised upon establishing the VPC peering connection.
  • To ensure the return of traffic along the same path, mandatory route table entries are needed, specifying the Transit Gateway as the next hop.
  • GCP reserves several IPv4 addresses and ranges for internal use; therefore, it's advisable to conduct thorough checks before implementing changes to avoid conflicts (see the sketch below).
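
Below is a small helper, using Python's ipaddress module, to flag obvious conflicts before advertising a new range; the reserved list is partial and illustrative, so always verify against the official tables referenced right after it:

```python
import ipaddress

# Partial, illustrative list of ranges that GCP prohibits or reserves for its
# own use; consult the official tables referenced below before relying on it.
RESERVED = [
    ipaddress.ip_network(cidr)
    for cidr in (
        "169.254.0.0/16",     # link-local, used for BGP session addressing
        "199.36.153.4/30",    # restricted.googleapis.com
        "199.36.153.8/30",    # private.googleapis.com
        "224.0.0.0/4",        # multicast
        "255.255.255.255/32", # broadcast
    )
]

def conflicts(candidate_cidr: str) -> list[str]:
    """Return the reserved ranges that a candidate subnet or PSC range overlaps."""
    candidate = ipaddress.ip_network(candidate_cidr)
    return [str(r) for r in RESERVED if candidate.overlaps(r)]

print(conflicts("10.50.0.0/20"))      # [] -> safe to consider
print(conflicts("169.254.10.0/24"))   # ['169.254.0.0/16'] -> conflict
```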

GCP Prohibited IPv4 subnet ranges

GCP Reserved ranges for other services

Rohan Singh
Google Cloud - Community

Infrastructure @ SADA | Google Cloud Champion Innovator | Motorcyclist | rohans.dev