Building Hub-Spoke Network Topology with Linux based Transit Gateway on Google Cloud Platform (GCP)

Hassene BELGACEM
Google Cloud - Community
Apr 13, 2023

Connecting and managing multiple networks can be a daunting task, and it is important to have a solution that simplifies the process. This is where the concept of a Transit Gateway comes in.

What is a Transit Gateway?

A Transit Gateway is a virtual hub that connects multiple networks together, allowing efficient and secure communication between them. It acts as a centralized point for managing network traffic and provides a scalable way to interconnect many networks. Google Cloud Platform (GCP) does not currently offer a native product that covers this function; however, you can still build a Transit Gateway on GCP using a combination of virtual machines and networking services.

In this article, we will guide you through building a Linux based Transit Gateway on Google Cloud using Compute Engine.

Design

Hub and Spoke Network Topology with Linux based Transit Gateway on Google Cloud Platform (GCP)

The diagram illustrates a network architecture consisting of a hub VPC network and multiple spoke VPC networks. These networks are interconnected using VPC Network Peering. The peering configuration allows for automatic export of all custom routes without filters from the hub network to the spokes.

In the hub VPC network, there is a “Linux Based virtual appliance” hosted within a managed instance group, positioned behind an internal TCP/UDP load balancer. To ensure proper routing, static routes are configured with the internal TCP/UDP load balancer as the next hop. These static routes are exported over the VPC Network Peering by utilizing custom routes.

Because the automatically generated peering routes are more specific and take precedence over the static routes, the hub network itself ignores them. The spoke networks, on the other hand, are not peered with one another, so a spoke has no direct route to the other spoke and the routes exported from the hub apply.

Consequently, each spoke network ends up with a route whose destination covers the other spoke and whose next hop is the internal TCP/UDP load balancer. This configuration ensures that communication between the spokes goes through the central load balancer, enabling efficient and reliable routing within the network.

For the “Linux Based virtual appliance”, we will use a standard Linux distribution. The key idea is to enable IP forwarding for routing, configure firewall rules with iptables for secure traffic management, and allow only specific traffic types, such as ICMP and SSH, to reach the instance itself. The setup focuses on routing and securing traffic between different network segments within the cloud environment.

How to build this design?

In the following sections, we will explore the process of building a transit gateway.

  • Step 0: To begin, set the necessary environment variables; this will simplify the following installation steps.
export PROJECT_ID="your-project-id"
export REGION="your-region" # ex: europe-west3
export HUB_NETWORK_NAME="hub-network"
export HUB_SUBNET_NAME="hub-subnet"
export SPOKE1_NETWORK_NAME="spoke1-network"
export SPOKE1_SUBNET_NAME="spoke1-subnet"
export SPOKE2_NETWORK_NAME="spoke2-network"
export SPOKE2_SUBNET_NAME="spoke2-subnet"
export TEMPLATE_NAME="tgw-tmpl"
export MIG_NAME="tgw-mig"
export LOAD_BALANCER_NAME="tgw-lb"
  • Step 1: Before we start, it is necessary to establish the prescribed network architecture, consisting of a central Hub network and two connected Spoke networks.
# Create a Hub custom Network and its subnet
gcloud compute networks create $HUB_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $HUB_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--range=192.168.0.0/24 --region=$REGION

# Allow ingress traffic from spokes
gcloud compute firewall-rules create hub-allow-ingress-spokes \
--network=$HUB_NETWORK_NAME \
--action=ALLOW \
--direction=INGRESS \
--source-ranges=10.0.0.0/8 \
--target-tags=secure-web-proxy \
--rules=ALL
# Allow egress traffic
gcloud compute firewall-rules create hub-allow-egress \
--network=$HUB_NETWORK_NAME \
--action=allow \
--direction=EGRESS \
--rules="tcp:0-65535,udp:0-65535" \
--destination-ranges="0.0.0.0/0"

# Create a Spoke 1 custom Network and its subnet
gcloud compute networks create $SPOKE1_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $SPOKE1_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$SPOKE1_NETWORK_NAME \
--range=10.0.1.0/24 --region=$REGION

# Allow egress traffic
gcloud compute firewall-rules create spoke1-allow-egress \
--network=$SPOKE1_NETWORK_NAME \
--action=allow \
--direction=EGRESS \
--rules="tcp:0-65535,udp:0-65535" \
--destination-ranges="0.0.0.0/0"

# Create a Spoke 2 custom Network and its subnet
gcloud compute networks create $SPOKE2_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $SPOKE2_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$SPOKE2_NETWORK_NAME \
--range=10.0.2.0/24 --region=$REGION

# Allow egress traffic
gcloud compute firewall-rules create spoke2-allow-egress \
--network=$SPOKE2_NETWORK_NAME \
--action=allow \
--direction=EGRESS \
--rules="tcp:0-65535,udp:0-65535" \
--destination-ranges="0.0.0.0/0"
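
At this point you can optionally confirm that the three networks and their subnets were created, for example by listing them (a quick sanity check, not required by the rest of the setup):

# List the networks and subnets created so far
gcloud compute networks list --project=$PROJECT_ID
gcloud compute networks subnets list --project=$PROJECT_ID --filter="region:$REGION"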
  • Step 2: Next, we need to establish network peering between the Hub network and each of the Spoke networks. In the Hub network, make sure to enable the exporting of custom routes. Conversely, in each of the Spoke networks, enable the importing of custom routes. This will facilitate the exchange of custom route information between the networks.
# Hub to spoke 1
gcloud compute networks peerings create hub-to-spoke1 \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME --peer-network=$SPOKE1_NETWORK_NAME \
--auto-create-routes --export-custom-routes --import-custom-routes
gcloud compute networks peerings create spoke1-to-hub \
--project=$PROJECT_ID \
--network=$SPOKE1_NETWORK_NAME --peer-network=$HUB_NETWORK_NAME \
--auto-create-routes --export-custom-routes --import-custom-routes

# Hub to spoke 2
gcloud compute networks peerings create hub-to-spoke2 \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME --peer-network=$SPOKE2_NETWORK_NAME \
--auto-create-routes --export-custom-routes --import-custom-routes
gcloud compute networks peerings create spoke2-to-hub \
--project=$PROJECT_ID \
--network=$SPOKE2_NETWORK_NAME --peer-network=$HUB_NETWORK_NAME \
--auto-create-routes --export-custom-routes --import-custom-routes

To validate that the peering is working properly, you can check the console; the status must be Active for all four connections:

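The same check can be done from the command line; the STATE column should report ACTIVE for each peering:

# Verify the state of the peering connections
gcloud compute networks peerings list --project=$PROJECT_ID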
  • Step 3: Write the kernel and network configuration as a cloud-init file named gateway.yml. Below is an example of the file’s content:
#cloud-config
write_files:
- path: /etc/sysctl.conf
  permissions: "0644"
  owner: root
  content: |
    net.ipv4.ip_forward = 1
    net.ipv6.conf.all.forwarding = 1
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv4.conf.all.send_redirects = 0
    net.ipv4.conf.all.proxy_arp = 1
runcmd:
- sysctl -p
# Accept all ICMP (troubleshooting)
- iptables -A INPUT -p icmp -j ACCEPT
# Accept SSH traffic to the instance itself (used by the load balancer health checks)
- iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Drop everything else
#- iptables -A INPUT -j DROP
# Accept all return transit traffic for established flows
- iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept all transit traffic from internal ranges
# Replace with multiple source/destination/proto/port rules for fine-grained ACLs.
- iptables -A FORWARD -s 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 -d 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 -j ACCEPT
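
Once an instance boots with this configuration, you can SSH into it and confirm that forwarding and the iptables rules were applied; a quick check could look like this:

# On the gateway instance: confirm IP forwarding and the firewall rules
sysctl net.ipv4.ip_forward
sudo iptables -L FORWARD -n -v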
  • Step 4: Within the Hub network, create a virtual machine template with the “IP forwarding” feature enabled. Additionally, make sure to assign a network tag to this template to ensure proper association and identification. Then use the template as the base for a new managed instance group.
# Create the instance template
gcloud compute instance-templates create $TEMPLATE_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--machine-type=e2-medium \
--network=$HUB_NETWORK_NAME \
--subnet=$HUB_SUBNET_NAME \
--image-project=ubuntu-os-cloud \
--image-family=ubuntu-minimal-2004-lts \
--can-ip-forward \
--no-address \
--tags secure-web-proxy \
--metadata-from-file=user-data=gateway.yml --metadata=enable-oslogin=TRUE

#Create a managed instance group using the template
gcloud compute instance-groups managed create $MIG_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--base-instance-name=$MIG_NAME \
--template=$TEMPLATE_NAME \
--size=1
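
Before moving on, you may want to confirm that the managed instance group has created its instance and that it is RUNNING:

# List the instances managed by the group
gcloud compute instance-groups managed list-instances $MIG_NAME \
--project=$PROJECT_ID \
--region=$REGION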

  • Step 5: Create an internal TCP/UDP network load balancer in front of the managed instance group created in the previous step.

# Create Health check
gcloud compute health-checks create tcp $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--port 22

# Create Health check firewall rule
gcloud compute firewall-rules create hub-allow-health-checks \
--network=$HUB_NETWORK_NAME \
--action=ALLOW \
--direction=INGRESS \
--source-ranges=35.191.0.0/16,130.211.0.0/22 \
--target-tags=secure-web-proxy \
--rules=tcp:22

# Create load balancer
gcloud compute backend-services create $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--protocol=TCP \
--region=$REGION \
--load-balancing-scheme=INTERNAL \
--health-checks-region=$REGION \
--health-checks=$LOAD_BALANCER_NAME

gcloud compute backend-services add-backend $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--instance-group=$MIG_NAME \
--instance-group-region=$REGION

gcloud compute forwarding-rules create $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--load-balancing-scheme=internal \
--subnet=$HUB_SUBNET_NAME \
--backend-service=$LOAD_BALANCER_NAME \
--ip-protocol=TCP \
--ports=ALL
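
Once the gateway instance is up and the health-check firewall rule is in place, the backend should report as healthy; you can verify this with:

# Check the health of the load balancer backends
gcloud compute backend-services get-health $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION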
  • Step 6: Finally, configure the custom static route that will be exported to the Spoke networks. This route sends traffic destined for the internal 10.0.0.0/8 range to the internal load balancer, which enables connectivity between the spokes via the Hub network.
gcloud compute routes create hub-to-spoke \
--project=$PROJECT_ID \
--network $HUB_NETWORK_NAME \
--destination-range "10.0.0.0/8" \
--next-hop-ilb-region $REGION \
--next-hop-ilb $LOAD_BALANCER_NAME \
--priority 1000
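
To confirm that the route is actually exchanged over peering, you can list the routes a spoke imports from the hub; with the peering names used above, for example:

# Routes imported by spoke 1 from the hub over peering
gcloud compute networks peerings list-routes spoke1-to-hub \
--project=$PROJECT_ID \
--network=$SPOKE1_NETWORK_NAME \
--region=$REGION \
--direction=INCOMING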

How to test and validate this design?

Testing and validating a network design are critical steps. To accomplish this, a virtual machine (VM) is deployed in each spoke network: the first acts as a representative client of the network’s endpoints, and on the second we install an HTTP server (e.g. haproxy, nginx…) so it can act as the server. Using the curl command, we can then reach the HTTP server and evaluate the network’s ability to establish connections and route traffic between the spokes.

  • Step 1: Create a client VM and a server VM, one in each spoke network
# Create client VM
gcloud compute instances create client-vm \
--project=$PROJECT_ID \
--zone=${REGION}-a \
--machine-type=e2-medium \
--network=$SPOKE1_NETWORK_NAME \
--subnet=$SPOKE1_SUBNET_NAME \
--no-address \
--tags client-vm --metadata enable-oslogin=TRUE

# Allow ssh ingress traffic
gcloud compute firewall-rules create client-allow-ssh-ingress \
--project=$PROJECT_ID \
--network=$SPOKE1_NETWORK_NAME \
--action=allow \
--direction=INGRESS \
--rules=tcp:22 \
--target-tags=client-vm

# Create server VM
gcloud compute instances create server-vm \
--project=$PROJECT_ID \
--zone=${REGION}-a \
--machine-type=e2-medium \
--network=$SPOKE2_NETWORK_NAME \
--subnet=$SPOKE2_SUBNET_NAME \
--tags server-vm --metadata enable-oslogin=TRUE

# Allow ssh ingress traffic
gcloud compute firewall-rules create server-allow-ssh-ingress \
--project=$PROJECT_ID \
--network=$SPOKE2_NETWORK_NAME \
--action=allow \
--direction=INGRESS \
--rules=tcp:22 \
--target-tags=server-vm
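
Note that the spoke 2 firewall rules above only open SSH. For the curl test in Step 3 to reach the HTTP server, spoke 2 also needs an ingress rule for port 80 from spoke 1’s range; a rule along these lines (the rule name is illustrative, the ranges are the ones defined earlier) covers it:

# Allow HTTP from spoke 1 to the server VM (needed for the curl test)
gcloud compute firewall-rules create server-allow-http-ingress \
--project=$PROJECT_ID \
--network=$SPOKE2_NETWORK_NAME \
--action=allow \
--direction=INGRESS \
--rules=tcp:80 \
--source-ranges=10.0.1.0/24 \
--target-tags=server-vm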
  • Step 2: Connect to the server VM (this can be done by simply using the “SSH” button in the console) and install an HTTP server, nginx in this case:
sudo apt update -y
sudo apt install -y nginx
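
You can quickly confirm that the server is responding locally before testing across the network:

# On the server VM: check that nginx answers on port 80
curl -I http://localhost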
  • Step 3: For this final stage, connect to the client VM. To verify connectivity between the two VMs through the transit gateway you have established, retrieve the internal IP address of the server VM and use it in the following command:
curl http://IP_SERVER_VM
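
The internal IP of the server VM can be retrieved with gcloud (the format expression below simply extracts the first internal address), for example:

# Get the server VM's internal IP to use as the curl target
gcloud compute instances describe server-vm \
--project=$PROJECT_ID \
--zone=${REGION}-a \
--format='get(networkInterfaces[0].networkIP)'

If routing through the gateway works as intended, the curl command returns the default nginx welcome page.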

Conclusion

In conclusion, building a transitive network on Google Cloud can be a complex but highly beneficial task for organizations that need to connect multiple networks with different routing policies and security requirements. We have explored the process of building such a network using VPC Network Peering and a Linux based transit gateway running on Compute Engine. The steps above provide a fully working example of how to create all the required components, including a transit (hub) VPC, spoke VPCs, firewall rules, and virtual machines.

Originally published at https://hassene.belgacem.io.

Hassene BELGACEM
Google Cloud - Community

Cloud Architect | Trainer. Here, I share my thoughts and experience on topics like cloud computing and cybersecurity. https://www.linkedin.com/in/hassene-belgacem