Building Hub-Spoke Network Topology with Network Connectivity Center (VPC spokes) on GCP

Hassene BELGACEM
Google Cloud - Community
9 min read · Nov 17, 2023

This article presents an updated version of my previous publication from a year ago, inspired by Google’s recent launch of the Network Connectivity Center. This new service appears to be a simplified counterpart to AWS Transit Gateway. Although it currently lacks certain features, such as custom routing tables and traffic control, it significantly streamlines the construction of hub-and-spoke network topologies on GCP.

The Hub-Spoke network architecture is adopted for its manageability and scalability. Alongside this, hybrid environments are becoming more prevalent, primarily for their security benefits and to accommodate the gradual process of cloud migration. This article delves into the Hub-Spoke network topology and hybrid connectivity, with a focus on their implementation in the GCP context.

What is Hub and Spoke Network Topology?

The Hub-and-Spoke network topology is a model where all traffic flows along connections radiating out from a central node, or ‘hub’. The hub is the central point of the network to which all other nodes, or ‘spokes’, are connected. The spokes have no direct connection to each other; all communications between them must pass through the hub.

Hub-Spoke network topology

This model can simplify network configuration and management because each spoke only requires a single connection to the hub, rather than separate connections to all other nodes. This design also makes it easier to manage security and monitor traffic because all data flows through a central point. However, because all traffic passes through the hub, it can become a bottleneck if it doesn’t have sufficient capacity to handle the traffic. Furthermore, if the hub fails, all connected spokes lose connectivity.

There are several hub-spoke solutions in the Google Cloud Platform context that I already detailed in the previous article, so today I will focus on only two of them:

Shared VPC

As implied by its name, Shared VPC in Google Cloud Platform (GCP) provides operations teams with the capability to centrally establish subnets, route tables, and firewall rules, and then share these subnets with other projects. This fosters centralized network management and control, simplifying the enforcement of consistent policies and security measures across multiple projects. In this design, since we’re dealing with only one network, there’s no need to perform any additional steps for route propagation.
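For illustration, here is a minimal sketch of the gcloud commands involved; the host and service project IDs below are placeholders:

# Enable Shared VPC on the host project (placeholder project IDs)
gcloud compute shared-vpc enable host-project-id

# Attach a service project so it can use the host project's subnets
gcloud compute shared-vpc associated-projects add service-project-id \
--host-project=host-project-id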

However, with these advantages come certain challenges, particularly concerning security. Maintaining a high degree of isolation can be difficult when projects of varying security levels and environments reside within one large network. For instance, a configuration error could inadvertently expose high-security applications to lower security ones, posing significant risks.

Given these potential security concerns, we recommend the use of separate Shared VPCs for different environment types and security levels. This approach strikes a balance between the benefits of centralized management and the need for adequate isolation and security.

Network Connectivity Center

The Network Connectivity Center (NCC) is a networking service designed to simplify complex hybrid and multi-cloud environments. It acts as a central hub that interconnects various network spokes, which can be individual Google VPCs, on-premises networks, or other cloud providers’ networks, allowing for unified connectivity management.

A Network Connectivity Center hub can either connect VPC networks within Google Cloud or link to external networks, but not both at once. It enables you to:

  • Connect VPC networks across the same or different Google Cloud organizations.
  • Link external networks to a Google Cloud VPC through router appliance VMs (site-to-cloud).
  • Manage VPC interconnectivity using router appliance VMs.
  • Use a Google Cloud VPC as a network to connect external sites, with options like VPN tunnels, VLAN attachments, or router appliance VMs (site-to-site).

NCC does not come with an integrated firewall by default, placing the responsibility for security measures on users. However, it offers robust integration options with third-party network appliances, including firewalls from leading vendors such as Palo Alto Networks and others.

For the purposes of this discussion, our architecture will leverage the VPC-Spokes model of NCC.

NCC VPC-Spokes Limitations

NCC is mainly about advertising network and subnet routes based on the BGP protocol, but unlike AWS Transit Gateway, the Network Connectivity Center (NCC) cannot concurrently support a mixture of hybrid and VPC spokes. Additionally, GCP lacks the capability to create custom routes within spoke networks that designate the NCC hub as a next hop. These limitations present a challenge for routing traffic from the spokes back to on-premise and peered networks, and I will present two network designs that can help you mitigate them.

Network Design

Now, it’s time to explore a practical application where we implement a hub-spoke network topology on Google Cloud Platform (GCP), based on the solutions and recommendations discussed above.

For our setup, we will establish a two-tier hub and spoke network topology catering to different security requirements.

Two-tier hub and spoke network topology with NCC and PSC/NEG

From right to left, the first tier is designed to accommodate multiple applications with identical security levels and environments. Since these applications belong to the same security zone, our primary goal will be to streamline management. To achieve this, we will leverage a Shared VPC housed in the Security Hub project. Additionally, we will share subnets with the spoke projects, which will host user workloads. This design promotes an efficient network management process and fosters an environment where workloads can be effectively managed.
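Sharing a subnet with a spoke project comes down to granting the compute.networkUser role on that subnet. A minimal sketch, assuming placeholder project, subnet, region, and service account names:

# Grant a spoke project's service account access to a shared subnet
gcloud compute networks subnets add-iam-policy-binding shared-subnet \
--project=host-project-id \
--region=your-region \
--member="serviceAccount:spoke-sa@spoke-project-id.iam.gserviceaccount.com" \
--role="roles/compute.networkUser"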

The second tier is designed for segregating different security zones, each represented by a Shared VPC. To maintain this isolation while still allowing necessary interactions, we’ll set up an NCC hub, hosted in the Global Hub project, and connect it to each Shared VPC (or first-tier hub).

To mitigate NCC limitations, we can expose GCP services within the hybrid network by utilizing load balancers in conjunction with Private Service Connect (PSC). This setup allows your service endpoints to be accessed from the on-premise network. Conversely, re-exposing on-premises services within the hybrid network requires load balancers paired with Hybrid Network Endpoint Groups (NEGs). This dual strategy ensures that both cloud-hosted and on-premises services are accessible within the hybrid network, facilitating seamless connectivity and integration across the enterprise’s network infrastructure.
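As a minimal sketch of the second half of this strategy (the NEG name, zone, and on-premise endpoint below are assumptions, and the environment variables are the ones defined in Step 0 further down), an on-premises service can be registered in a hybrid NEG, which an internal load balancer then uses as a backend:

# Create a hybrid NEG in the hybrid network (zone and name are placeholders)
gcloud compute network-endpoint-groups create onprem-service-neg \
--project=$PROJECT_ID \
--zone=${REGION}-b \
--network=$HYBRID_NETWORK_NAME \
--network-endpoint-type=NON_GCP_PRIVATE_IP_PORT

# Register the on-premises endpoint (placeholder IP and port) in the NEG
gcloud compute network-endpoint-groups update onprem-service-neg \
--project=$PROJECT_ID \
--zone=${REGION}-b \
--add-endpoint="ip=10.0.0.10,port=443"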

Alternatively, deploying a private proxy appliance within the hybrid network to perform Network Address Translation (NAT) of on-premise IP addresses to internal, routable GCP IP addresses may serve as a viable solution. This workaround can help maintain internal traffic flow until such features are potentially integrated into NCC.

Two-tier hub and spoke network topology with NCC and Http Proxy

Either way, it is recommended to have a dedicated network and appliance for managing each type of traffic independently (internet vs. on-premise). We will not dive into this part here, as detailed guides on setting up each of these options within a hub-spoke context are available in my previous articles.

How to build this design?

In this section, we are going to dive deep into the procedure for constructing a simplified version of our design: we will not create the Shared VPCs, which require multiple projects, and we have decided to go with the HTTP proxy alternative, as it is easier to demonstrate. We will unfold this process step by step, elaborating on each aspect in detail to provide a comprehensive understanding of how to achieve our target design.

Hybrid Hub and Spoke Network Design, simplified version
  • Step 0: We will start by setting the necessary environment variables; this will simplify the following installation steps.
export PROJECT_ID="your-project-id"
export REGION="your-region" # ex: europe-west2

export HYBRID_NETWORK_NAME="hybrid-network"
export HYBRID_SUBNET_NAME="hybrid-subnet"

export DMZ_NETWORK_NAME="dmz-network"
export DMZ_SUBNET_NAME="dmz-subnet"

export SPOKE_NETWORK_NAME="spoke-network"
export SPOKE_SUBNET_NAME="spoke-subnet"

export ONPREM_NETWORK_NAME="onprem-network"
export ONPREM_SUBNET_NAME="onprem-subnet"

export VPN_SHARED_SECRET="your-secret-vpn"
  • Step 1: Time to build the private networks. Egress traffic and ingress from the health check ranges must be allowed, and for a more realistic demo, the default route to the internet must be deleted. The only requirement here is that the IP ranges of the spoke networks must not overlap with each other, to avoid any IP conflicts. In case of an IP conflict, we could use Private NAT as a workaround, but this is out of the scope of this article. Finally, for the purpose of this demo, we’ll simulate an on-premise site by creating a Virtual Private Cloud (VPC).

Create a file named ‘setup-private-network.sh’ with the following content:

#!/bin/bash

set -e
set -o pipefail

PROJECT_ID=$1
REGION=$2
NETWORK_NAME=$3
SUBNETWORK_NAME=$4
SUBNETWORK_RANGE=$5

# Create a Hybrid custom Network and its subnet
gcloud compute networks create $NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $SUBNETWORK_NAME \
--project=$PROJECT_ID \
--network=$NETWORK_NAME \
--purpose="PRIVATE" \
--range=${SUBNETWORK_RANGE} --region=$REGION

# Delete default internet gateway Route
ROUTE_NAME=$(gcloud compute routes list --project=$PROJECT_ID --filter="network: ${NETWORK_NAME} AND nextHopGateway:default-internet-gateway" --format="value(name)")
gcloud compute routes delete $ROUTE_NAME --project=$PROJECT_ID --quiet

# Create Health check firewall rule
gcloud compute firewall-rules create ${NETWORK_NAME}-allow-health-checks \
--project=$PROJECT_ID \
--network=$NETWORK_NAME \
--action=ALLOW \
--direction=INGRESS \
--source-ranges=35.191.0.0/16,130.211.0.0/22 \
--target-tags=l7-web-proxy \
--rules=tcp:3128

# Allow egress traffic
gcloud compute firewall-rules create ${NETWORK_NAME}-allow-egress \
--project=$PROJECT_ID \
--network=$NETWORK_NAME \
--action=ALLOW \
--direction=EGRESS \
--rules="tcp:0-65535,udp:0-65535" \
--destination-ranges="0.0.0.0/0"

To create the networks, you just need to run the following commands:

# Create networks
bash setup-private-network.sh $PROJECT_ID $REGION $HYBRID_NETWORK_NAME $HYBRID_SUBNET_NAME "192.168.0.0/24"
bash setup-private-network.sh $PROJECT_ID $REGION $SPOKE_NETWORK_NAME $SPOKE_SUBNET_NAME "192.168.1.0/24"
bash setup-private-network.sh $PROJECT_ID $REGION $ONPREM_NETWORK_NAME $ONPREM_SUBNET_NAME "10.0.0.0/16"
  • Step 2: Now, we move on to the creation of the DMZ network. The requirements are basically the same, except that our HTTP proxy will connect to the internet, so we need to create a new internet route.
bash setup-private-network.sh $PROJECT_ID $REGION $DMZ_NETWORK_NAME $DMZ_SUBNET_NAME "192.168.2.0/24"

# Add internet route
gcloud compute routes create ${DMZ_NETWORK_NAME}-internet-route \
--project=$PROJECT_ID \
--network=$DMZ_NETWORK_NAME \
--destination-range=0.0.0.0/0 \
--next-hop-gateway=default-internet-gateway
  • Step 3: Create two HA VPN gateways, one in the hybrid network and one in the on-premise network, and connect them by creating IPsec tunnels and BGP sessions. I will be using a script for this setup; you can read my previous article if you need more details.
# Setup VPN connectivity
curl -L https://raw.githubusercontent.com/belgacem-io/blog-utilities/main/scripts/gcp-setup-vpn.sh --output gcp-setup-vpn.sh
bash gcp-setup-vpn.sh $PROJECT_ID $REGION $HYBRID_NETWORK_NAME $ONPREM_NETWORK_NAME $VPN_SHARED_SECRET

You need to check that all resources have been created by running the following commands and validating the results:

$ gcloud --project=$PROJECT_ID compute networks  list
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
dmz-network CUSTOM REGIONAL
hybrid-network CUSTOM REGIONAL
onprem-network CUSTOM REGIONAL
spoke-network CUSTOM REGIONAL

$ gcloud --project=$PROJECT_ID compute networks subnets list
NAME REGION NETWORK RANGE STACK_TYPE IPV6_ACCESS_TYPE INTERNAL_IPV6_PREFIX EXTERNAL_IPV6_PREFIX
dmz-subnet europe-west3 dmz-network 192.168.2.0/24 IPV4_ONLY
hybrid-subnet europe-west3 hybrid-network 192.168.0.0/24 IPV4_ONLY
onprem-subnet europe-west3 onprem-network 10.0.0.0/16 IPV4_ONLY
spoke-subnet europe-west3 spoke-network 192.168.1.0/24 IPV4_ONLY

$ gcloud --project=$PROJECT_ID compute routers list
NAME REGION NETWORK
hybrid-network-router europe-west3 hybrid-network
onprem-network-router europe-west3 onprem-network

$ gcloud --project=$PROJECT_ID compute vpn-gateways list
NAME INTERFACE0 INTERFACE1 INTERFACE0_IPV6 INTERFACE1_IPV6 NETWORK REGION
hybrid-network-gateway 34.157.60.67 34.157.187.120 hybrid-network europe-west3
onprem-network-gateway 34.157.59.182 34.157.178.178 onprem-network europe-west3

$ gcloud --project=$PROJECT_ID compute vpn-tunnels list
NAME REGION GATEWAY PEER_ADDRESS
hybrid-network-tunnel-0 europe-west3 hybrid-network-gateway 34.157.60.67
hybrid-network-tunnel-1 europe-west3 hybrid-network-gateway 34.157.187.120
onprem-network-tunnel-0 europe-west3 onprem-network-gateway 34.157.59.182
onprem-network-tunnel-1 europe-west3 onprem-network-gateway 34.157.178.178
  • Step 4: Now it’s time to set up the hub-and-spoke network topology, and the process is straightforward. Begin by creating a central hub, and then connect each Virtual Private Cloud (VPC) as a spoke to this hub. For every step of the configuration, a single gcloud command line is all that's necessary.
# Enable the network connectivity API in case it is not yet enabled
gcloud --project=$PROJECT_ID services enable networkconnectivity.googleapis.com

# Create an NCC hub
gcloud --project=$PROJECT_ID network-connectivity hubs create ncc-global-hub

# Configure SPOKE Network as an NCC spoke and assign it to the NCC hub that was previously created.
gcloud network-connectivity spokes linked-vpc-network create "$SPOKE_NETWORK_NAME" \
--project=$PROJECT_ID \
--hub=ncc-global-hub \
--vpc-network=$SPOKE_NETWORK_NAME \
--global

# Configure DMZ Network as an NCC spoke and assign it to the NCC hub that was previously created.
gcloud network-connectivity spokes linked-vpc-network create "$DMZ_NETWORK_NAME" \
--project=$PROJECT_ID \
--hub=ncc-global-hub \
--vpc-network=$DMZ_NETWORK_NAME \
--global

# Configure Hybrid Network as an NCC spoke and assign it to the NCC hub that was previously created.
gcloud network-connectivity spokes linked-vpc-network create "$HYBRID_NETWORK_NAME" \
--project=$PROJECT_ID \
--hub=ncc-global-hub \
--vpc-network=$HYBRID_NETWORK_NAME \
--global
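
As a final check, you can describe the hub and list its spokes to confirm that all three VPCs are attached and active; the commands below are a suggested verification:

# Verify the hub and its spokes
gcloud --project=$PROJECT_ID network-connectivity hubs describe ncc-global-hub
gcloud --project=$PROJECT_ID network-connectivity hubs list-spokes ncc-global-hub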

Conclusion

In conclusion, while the Network Connectivity Center (NCC) offers significant advantages for creating scalable and manageable network topologies within Google Cloud, it currently lacks certain functionalities, such as mixing hybrid and VPC spokes and traffic control. As cloud services continue to evolve, it is hoped that NCC will expand its capabilities, further enhancing its utility for complex networking scenarios. For now, users must navigate these constraints with creative solutions to optimize their network infrastructure within Google Cloud.

Hassene BELGACEM
Google Cloud - Community

Cloud Architect | Trainer. Here, I share my thoughts and experience on topics like cloud computing and cybersecurity. https://www.linkedin.com/in/hassene-belgacem