Simple Hybrid Hub-Spoke Network Topology on Google Cloud Platform (GCP)

Hassene BELGACEM
Google Cloud - Community
May 29, 2023

Hub-spoke network architecture is increasingly adopted for its manageability and scalability. Alongside this, hybrid environments are becoming more prevalent, primarily for their security benefits and to accommodate the gradual process of cloud migration. This article delves into Hub-Spoke Network Topology and Hybrid Connectivity, with a focus on their implementation in the GCP context.

Whether you’re a network engineer, a cloud engineer, or a cloud enthusiast, this guide will offer valuable insights into how to set up a real configuration that takes both concepts into consideration.

What is Hub-and-Spoke Network Topology?

The Hub-and-Spoke network topology is a model where all traffic flows along connections radiating out from a central node, or ‘hub’. The hub is the central point of the network to which all other nodes, or ‘spokes’, are connected. The spokes have no direct connection to each other; all communications between them must pass through the hub.

Hub-Spoke network topology

This model can simplify network configuration and management because each spoke only requires a single connection to the hub, rather than separate connections to all other nodes. This design also makes it easier to manage security and monitor traffic because all data flows through a central point. However, because all traffic passes through the hub, it can become a bottleneck if it doesn’t have sufficient capacity to handle the traffic. Furthermore, if the hub fails, all connected spokes lose connectivity.

There are several solutions for hub-spoke topologies in the context of Google Cloud Platform. The optimal choice depends on your specific use cases, requirements, and existing infrastructure. Here are a few common solutions that we will be using in our lab.

Shared VPC

As implied by its name, Shared VPC in Google Cloud Platform (GCP) provides operations teams with the capability to centrally establish subnets, route tables, and firewall rules, and then share these subnets with other projects. This fosters centralized network management and control, simplifying the enforcement of consistent policies and security measures across multiple projects. In this design, since we’re dealing with only one network, there’s no need to perform any additional steps for route propagation.

However, with these advantages come certain challenges, particularly concerning security. Maintaining a high degree of isolation can be difficult when projects of varying security levels and environments reside within one large network. For instance, a configuration error could inadvertently expose high-security applications to lower security ones, posing significant risks.

Given these potential security concerns, we recommend the use of separate Shared VPCs for different environment types and security levels. This approach strikes a balance between the benefits of centralized management and the need for adequate isolation and security.
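For illustration, here is a minimal sketch of the gcloud commands behind a Shared VPC setup; the project IDs, subnet name, region, and member below are placeholders, not resources from this lab:

# Designate the host project that owns the shared network
gcloud compute shared-vpc enable HOST_PROJECT_ID
# Attach a service project so it can use the host project's subnets
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
--host-project=HOST_PROJECT_ID
# Grant a service-project principal the right to use a shared subnet
gcloud compute networks subnets add-iam-policy-binding SHARED_SUBNET_NAME \
--project=HOST_PROJECT_ID \
--region=europe-west3 \
--member="user:developer@example.com" \
--role="roles/compute.networkUser"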

VPC Network Peering

Google Cloud’s VPC Network Peering facilitates private RFC 1918 connectivity across two Virtual Private Cloud (VPC) networks, irrespective of their project or organizational affiliation. The ease of setting up VPC Network Peering is one of its key benefits, eliminating the need for external appliances. By default, local routes are automatically shared between both sides of the connection. This means that the routes within each network are made known to the other network. In addition to this, there’s the option to export custom routes, provided that these routes are applicable to all instances in the network.

However, it’s crucial to note that VPC Network Peering does not support transitive peering; it only establishes a one-to-one connection. Therefore, any networks requiring communication must establish a direct peering connection between them.

We recommend the use of peering when there’s a need to isolate a specific workload. Typically, peering is employed to connect various Shared VPCs to a central Hub, which grants complete control over the traffic traversing different zones. This methodology allows for optimal network management, ensuring that data flow remains under your purview, thereby enhancing the security and efficiency of your network operations.

Virtual Private Network (VPN)

A Virtual Private Network (VPN) offers secure connections between networks, providing encrypted links that enable safe data transmission, even across multiple networks and over the internet. However, due to their design, VPNs might experience higher latencies, slower speeds, and bandwidth limitations (as set by Google Cloud) when compared to direct peering connections. As a result, they may not be the best choice for scenarios that require high performance or significant data transfers.

Both static and dynamic routing options are available for VPN, each with its unique characteristics and use cases. Static routing involves manually configuring the routes that the VPN connection should use. This means that administrators need to specify the networks that should be reachable over the VPN connection. While static routes offer simplicity and direct control, they can become labor-intensive and error-prone in large, complex networks where routes may frequently change.

On the other hand, dynamic routing leverages the Border Gateway Protocol (BGP) to automatically manage the routes over the VPN connection. In dynamic routing, the VPN automatically learns about the routes from your on-premises network and vice versa. This provides a more flexible and scalable solution for large networks, as it eliminates the need for manual route updates. However, it also requires more initial setup, as BGP needs to be configured correctly on both ends of the VPN connection.

Choosing between static and dynamic routing depends on your specific network configuration and requirements. While static routing may be suitable for smaller or simpler networks, dynamic routing can be a more efficient choice for larger or rapidly changing networks.
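To make the contrast concrete, here is a small sketch; the network, tunnel, and range names are illustrative, not resources from this lab. With static routing you declare each remote range yourself, while with dynamic routing you create a Cloud Router and let BGP exchange routes, as we do later in this guide:

# Static routing: manually point an on-premises range at a VPN tunnel
gcloud compute routes create route-to-onprem \
--network=my-network \
--destination-range=10.0.2.0/24 \
--next-hop-vpn-tunnel=my-tunnel \
--next-hop-vpn-tunnel-region=europe-west3
# Dynamic routing: attach a Cloud Router with its own ASN; BGP sessions
# (configured per tunnel) then learn and advertise routes automatically
gcloud compute routers create my-router \
--network=my-network \
--asn=65010 \
--region=europe-west3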

Network Design

Now, it’s time to explore a practical application where we implement a hub-spoke network topology on Google Cloud Platform (GCP), based on the solutions and recommendations discussed above.

For our setup, we will establish a two-tier hub and spoke network topology catering to different security requirements.

Two-tier hub and spoke network topology

From right to left, the first tier is designed to accommodate multiple applications with identical security levels and environments. Since these applications belong to the same security zone, our primary goal will be to streamline management. To achieve this, we will leverage a Shared VPC housed in the Security Hub project. Additionally, we will share subnets with the spoke projects, which will host user workloads. This design promotes an efficient network management process and fosters an environment where workloads can be effectively managed.

The second tier is designed to segregate different security zones, each represented by a Shared VPC. To maintain this isolation while still allowing necessary interactions, we’ll set up a classic hub network, hosted in the Hub project (“VPC Gateway” in the schema), and connect it to each spoke (or Shared VPC) using network peering.

The second tier is also responsible for managing ingress and egress traffic from/to the internet or on-premises networks.

Hybrid connections to the on-premises networks are isolated in a purpose-built VPC (“VPC Hybrid” in the schema).

The suggested approach is to limit internet access through a proxy appliance deployed in its own dedicated network, as described in the following schema. Depending on your specific requirements, you can opt for Google Cloud’s Secure Web Proxy managed service or a Squid proxy. We will not dive into this part here, as detailed guides on setting up each of these options within a hub-spoke context are available in my previous articles.

By doing so, we will take advantage of the simplicity and centralized management that Shared VPCs offer, while still ensuring isolation and control over traffic between different security zones with the use of network peering.

Furthermore, we will establish an on-premises network to simulate a traditional, non-cloud environment. To ensure secure, encrypted connectivity over the internet between this on-premises network and our central hub, we will implement a VPN connection. By doing so, we aim to replicate a hybrid network environment where the on-premises systems are able to communicate securely and reliably with the cloud-based resources on Google Cloud Platform.

Network Design Extensions

The flexibility of this network design allows for the integration of additional features:

  • Connection to Google Services: If you need to leverage Google Cloud services, we recommend using Private Service Connect. This feature provides secure, scalable, and high-speed private connectivity from Google Cloud’s Virtual Private Clouds (VPCs) to Google Cloud services.
  • Inter-spokes Connection: In scenarios where spoke networks need to interact with each other, GCP doesn’t natively provide a transit gateway solution. However, you can manually create one using a Linux VM, as sketched just after this list. A comprehensive guide on establishing this configuration is provided in another article of mine.
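As a rough idea of what this last point involves, here is a minimal sketch under assumed names (a VM called transit-vm, the hub network from our design, and an illustrative spoke range); the forwarding configuration inside the VM itself is covered in the referenced article:

# A hub VM allowed to forward packets between spokes
gcloud compute instances create transit-vm \
--zone=europe-west3-a \
--network=hub-network \
--subnet=hub-subnet \
--can-ip-forward
# Send traffic destined for a spoke range through the VM; repeat per spoke
gcloud compute routes create to-spoke-1 \
--network=hub-network \
--destination-range=10.0.1.0/24 \
--next-hop-instance=transit-vm \
--next-hop-instance-zone=europe-west3-a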

This network design thus offers a versatile framework that can be tailored to incorporate various additional functionalities as per your business requirements.

How to build this design?

In this section, we are going to dive deep into the procedure for constructing a simplified version of our design; we will not create the Shared VPCs, which would require multiple projects. We will unfold this process step by step, elaborating on each aspect in detail to provide a comprehensive understanding of how to achieve our target design.

Hybrid Hub and Spoke Network Design
  • Step 0: We will start by setting the necessary environment variables; this will simplify the installation steps.
export PROJECT_ID="your-project-id"
export REGION="your-region" # ex: europe-west3

export HUB_NETWORK_NAME="hub-network"
export HUB_SUBNET_NAME="hub-subnet"

export SPOKE_NETWORK_NAME="spoke-network"
export SPOKE_SUBNET_NAME="spoke-subnet"

export ONPREM_NETWORK_NAME="onprem-network"
export ONPREM_SUBNET_NAME="onprem-subnet"
export VPN_SHARED_SECRET="your-secret-vpn"
  • Step 1: Time to build the hub network, then create and configure an HA VPN instance that we will use later for connecting to the on-premises network.
# Create a Hub custom Network and its subnet
gcloud compute networks create $HUB_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $HUB_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--purpose="PRIVATE" \
--range=192.168.0.0/23 --region=$REGION

# Create Google Cloud Router
gcloud compute routers create hub-router \
--project=$PROJECT_ID \
--region=$REGION \
--asn=65001 \
--network $HUB_NETWORK_NAME
# Create VPN Gateways
gcloud compute vpn-gateways create hub-gateway \
--project=$PROJECT_ID \
--region=$REGION \
--network $HUB_NETWORK_NAME
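As an optional check, an HA VPN gateway exposes two interfaces, each with its own public IP address; this is what makes the dual-tunnel setup of Step 5 possible. Describing the gateway shows both interfaces and their IPs:

# Optional: inspect the new HA VPN gateway and its two interface IPs
gcloud compute vpn-gateways describe hub-gateway \
--project=$PROJECT_ID \
--region=$REGION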
  • Step 2: Now, we move on to the creation of the Spoke network. The only requirement here is that the IP range of the Spoke network must not overlap with the Hub network’s IP range to avoid any IP conflicts.
# Create a Spoke custom Network and its subnet
gcloud compute networks create $SPOKE_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $SPOKE_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--range=10.0.1.0/24 --region=$REGION
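One detail worth noting: custom-mode VPC networks start with no ingress allow rules. If you later want to test connectivity with a VM in the spoke, you will need a firewall rule along these lines (an illustrative rule, not part of the original steps; the source ranges are the hub and on-premises ranges used in this lab):

# Allow ICMP and SSH from the hub and on-premises ranges
gcloud compute firewall-rules create spoke-allow-internal \
--project=$PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--allow=icmp,tcp:22 \
--source-ranges=192.168.0.0/23,10.0.2.0/24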
  • Step 3: Next, we need to establish network peering between the Hub network and each of the Spoke networks.

Note: We added “--export-custom-routes” and “--import-custom-routes” so that both networks will share custom routes.

# Hub to spoke
gcloud compute networks peerings create hub-to-spoke \
--project=$PROJECT_ID \
--export-custom-routes \
--import-custom-routes \
--network=$HUB_NETWORK_NAME --peer-network=$SPOKE_NETWORK_NAME \
--auto-create-routes
gcloud compute networks peerings create spoke-to-hub \
--project=$PROJECT_ID \
--export-custom-routes \
--import-custom-routes \
--network=$SPOKE_NETWORK_NAME --peer-network=$HUB_NETWORK_NAME \
--auto-create-routes

The result should look something like this:

Network Peering
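If you prefer the CLI to the console, the same check can be done as follows; an ACTIVE state confirms that both sides of the peering are in place:

# List peerings on the hub network and their state
gcloud compute networks peerings list \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME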
  • Step 4: For the purpose of this exercise, we’ll simulate an on-premises site by creating a Virtual Private Cloud (VPC). We’ll create an HA VPN instance that we will then use to connect the hub and on-premises networks.
# Create the simulated on-premises network and its subnet
gcloud compute networks create $ONPREM_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $ONPREM_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$ONPREM_NETWORK_NAME \
--range=10.0.2.0/24 --region=$REGION

# Create Google Cloud Router
gcloud compute routers create onprem-router \
--project=$PROJECT_ID \
--region=$REGION \
--asn=65002 \
--network $ONPREM_NETWORK_NAME
# Create VPN Gateways
gcloud compute vpn-gateways create onprem-gateway \
--project=$PROJECT_ID \
--region=$REGION \
--network $ONPREM_NETWORK_NAME
  • Step 5: Now let’s start connecting the HA VPN instances by creating the VPN tunnels that will be used for data transfer. In our case, and for high availability purposes, we will create two tunnels on each HA VPN instance.
# Create HUB VPN Tunnels
gcloud compute vpn-tunnels create hub-tunnel-0 \
--project=$PROJECT_ID \
--region=$REGION \
--vpn-gateway hub-gateway \
--peer-gcp-gateway onprem-gateway \
--router hub-router \
--interface=0 \
--shared-secret $VPN_SHARED_SECRET
gcloud compute vpn-tunnels create hub-tunnel-1 \
--project=$PROJECT_ID \
--region=$REGION \
--vpn-gateway hub-gateway \
--peer-gcp-gateway onprem-gateway \
--router hub-router \
--interface=1 \
--shared-secret $VPN_SHARED_SECRET

# Create Onprem VPN Tunnels
gcloud compute vpn-tunnels create onprem-tunnel-0 \
--project=$PROJECT_ID \
--region=$REGION \
--vpn-gateway onprem-gateway \
--peer-gcp-gateway hub-gateway \
--router onprem-router \
--interface=0 \
--shared-secret $VPN_SHARED_SECRET
gcloud compute vpn-tunnels create onprem-tunnel-1 \
--project=$PROJECT_ID \
--region=$REGION \
--vpn-gateway onprem-gateway \
--peer-gcp-gateway hub-gateway \
--router onprem-router \
--interface=1 \
--shared-secret $VPN_SHARED_SECRET
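Once both sides exist, each tunnel should complete its IKE handshake; you can watch this from the CLI, where the detailed status should eventually report the tunnel as established:

# Optional: check the IPsec status of a hub-side tunnel
gcloud compute vpn-tunnels describe hub-tunnel-0 \
--project=$PROJECT_ID \
--region=$REGION \
--format="value(detailedStatus)"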
  • Step 6: At this point, our setup could function by manually adding routes on each side of the connection (at the router level). However, to simplify the process and improve efficiency, we’ve opted to employ the Border Gateway Protocol (BGP). BGP allows for the automatic sharing of subnet routes, reducing the need for manual intervention and configuration. Here is the configuration we decided to implement:
BGP configuration
# Router configuration, Create HUB BGP sessions
gcloud compute routers add-interface hub-router \
--project=$PROJECT_ID \
--interface-name=hub-to-onprem-0 \
--ip-address=169.254.0.1 \
--mask-length=30 \
--vpn-tunnel=hub-tunnel-0 \
--region=$REGION
gcloud compute routers add-interface hub-router \
--project=$PROJECT_ID \
--interface-name=hub-to-onprem-1 \
--ip-address=169.254.1.1 \
--mask-length=30 \
--vpn-tunnel=hub-tunnel-1 \
--region=$REGION
gcloud compute routers add-bgp-peer hub-router \
--project=$PROJECT_ID \
--peer-name=hub-peer-0 \
--interface=hub-to-onprem-0 \
--peer-ip-address=169.254.0.2 \
--peer-asn=65002 \
--region=$REGION
gcloud compute routers add-bgp-peer hub-router \
--project=$PROJECT_ID \
--peer-name=hub-peer-1 \
--interface=hub-to-onprem-1 \
--peer-ip-address=169.254.1.2 \
--peer-asn=65002 \
--region=$REGION

# Router configuration, Create Onprem BGP sessions
gcloud compute routers add-interface onprem-router \
--project=$PROJECT_ID \
--interface-name=onprem-to-hub-0 \
--ip-address=169.254.0.2 \
--mask-length=30 \
--vpn-tunnel=onprem-tunnel-0 \
--region=$REGION
gcloud compute routers add-interface onprem-router \
--project=$PROJECT_ID \
--interface-name=onprem-to-hub-1 \
--ip-address=169.254.1.2 \
--mask-length=30 \
--vpn-tunnel=onprem-tunnel-1 \
--region=$REGION
gcloud compute routers add-bgp-peer onprem-router \
--project=$PROJECT_ID \
--peer-name=onprem-peer-0 \
--interface=onprem-to-hub-0 \
--peer-ip-address=169.254.0.1 \
--peer-asn=65001 \
--region=$REGION
gcloud compute routers add-bgp-peer onprem-router \
--project=$PROJECT_ID \
--peer-name=onprem-peer-1 \
--interface=onprem-to-hub-1 \
--peer-ip-address=169.254.1.1 \
--peer-asn=65001 \
--region=$REGION
Hybrid router configuration
VPN Tunnels configuration
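To verify the BGP sessions from the CLI, Cloud Router can report its live state, including peer status and the routes learned from the other side:

# Inspect BGP peer status and learned routes on the hub side
gcloud compute routers get-status hub-router \
--project=$PROJECT_ID \
--region=$REGION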

How to test and validate this design?

A straightforward method to validate your implementation is to confirm that new entries for the spoke and on-premises networks have been automatically added to the routing tables. Here is an example of how it should look:

Spoke routing table

Note that routes to the on-premises environment have been added automatically.
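The same check can be performed from the CLI by listing the routes the spoke imports over its peering; with custom-route import enabled, the dynamic routes to the on-premises ranges should appear here:

# Routes the spoke network receives from the hub over VPC peering
gcloud compute networks peerings list-routes spoke-to-hub \
--project=$PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--region=$REGION \
--direction=INCOMING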

Conclusion

The Hub-Spoke Network Topology and Hybrid Connectivity offer invaluable tools for modern organizations navigating the complex landscape of cloud and on-premise infrastructures. Particularly within the Google Cloud Platform (GCP), these approaches offer simplicity, scalability, and a more secure way to handle network tasks, both for migration purposes and ongoing operations.

A new version of this article is now available with Network Connectivity Center here:

Originally published at https://hassene.belgacem.io.
