Google Cloud Networking: Hybrid Architecture with Hub and Spoke Topology

Ken Tandrian
15 min read · Jun 20, 2024


In today’s hybrid IT landscape, businesses need seamless connections between on-premise infrastructure and cloud resources.

One of the most popular networking architectures is the hub-and-spoke topology. This architecture centralizes network control while granting secure access to various cloud and on-premise environments. This article guides you through the implementation of this approach for hybrid connectivity within Google Cloud, highlighting its advantages for managing complex network configurations.

The architecture diagram

Lab Design

In this section, we will walk through the steps required to build the architecture. At a high level, these are the main steps:

  1. Create projects for hub, spoke, and simulated on-premise environments.
  2. Set up custom VPC networks in each project, with 1 subnetwork in each network.
  3. Set up firewall rules.
  4. Set up VPC network peering between hub and spoke networks.
  5. Set up HA VPN between on-premise and hub networks.
  6. Create VMs for testing.
  7. Set up DNS managed zones in hub and spoke networks.
  8. Set up custom DNS server in simulated on-premise environment using BIND.
  9. Set up DNS forwarding between on-premise and hub networks.
  10. Test the architecture.

Step 1: Project Set-up

Let’s start by exporting several variables that we will use throughout the lab. You can skip this step if you have your projects ready.

Note that project IDs must be globally unique, so you will need to come up with your own project IDs.

# TODO: change these Project IDs
export HUB_PROJECT_ID="dns-hub"
export SPOKE_PROJECT_ID="dns-spoke"
export ONPREM_PROJECT_ID="dns-onprem"

export REGION="asia-southeast2"

export HUB_NETWORK_NAME="hub-network"
export HUB_SUBNET_NAME="hub-subnet"

export SPOKE_NETWORK_NAME="spoke-network"
export SPOKE_SUBNET_NAME="spoke-subnet"

export ONPREM_NETWORK_NAME="onprem-network"
export ONPREM_SUBNET_NAME="onprem-subnet"
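
If the IDs above are already taken, one simple way to make them unique is to append a random suffix before creating the projects. A minimal sketch:

# Optional: append a random suffix so the project IDs are globally unique
export SUFFIX=$RANDOM
export HUB_PROJECT_ID="dns-hub-${SUFFIX}"
export SPOKE_PROJECT_ID="dns-spoke-${SUFFIX}"
export ONPREM_PROJECT_ID="dns-onprem-${SUFFIX}"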

Now, let’s create three new projects for the architecture, one each for the hub, spoke, and simulated on-premise environments.

# Create simulated on-premise project
gcloud projects create $ONPREM_PROJECT_ID \
--name="On-premise Project"

# Create hub project
gcloud projects create $HUB_PROJECT_ID \
--name="Hub Project"

# Create spoke project
gcloud projects create $SPOKE_PROJECT_ID \
--name="Spoke Project"

Next, attach the projects to your billing account. The commands below link all three projects to the same billing account.

# TODO: change to your billing account ID
export BILLING_ACCOUNT_ID="0X0X0X-0X0X0X-0X0X0X"

gcloud billing projects link $ONPREM_PROJECT_ID \
--billing-account=$BILLING_ACCOUNT_ID

gcloud billing projects link $HUB_PROJECT_ID \
--billing-account=$BILLING_ACCOUNT_ID

gcloud billing projects link $SPOKE_PROJECT_ID \
--billing-account=$BILLING_ACCOUNT_ID

Then, let’s enable the required APIs in each project.

gcloud services enable compute.googleapis.com config.googleapis.com \
--project=$ONPREM_PROJECT_ID

gcloud services enable compute.googleapis.com dns.googleapis.com \
--project=$HUB_PROJECT_ID

gcloud services enable compute.googleapis.com dns.googleapis.com \
--project=$SPOKE_PROJECT_ID
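
To verify, you can list the enabled services in each project:

gcloud services list --enabled \
--project=$HUB_PROJECT_ID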

Step 2: VPC Networks

Next, we will create three custom-mode VPC networks, one in each project, each with a single subnetwork.

# Create VPC network and subnetwork in on-premise project
gcloud compute networks create $ONPREM_NETWORK_NAME \
--project=$ONPREM_PROJECT_ID \
--subnet-mode="custom"

gcloud compute networks subnets create $ONPREM_SUBNET_NAME \
--project=$ONPREM_PROJECT_ID \
--network=$ONPREM_NETWORK_NAME \
--range=10.10.0.0/24 \
--region=$REGION

# Create VPC network and subnetwork in hub project
gcloud compute networks create $HUB_NETWORK_NAME \
--project=$HUB_PROJECT_ID \
--subnet-mode="custom"

gcloud compute networks subnets create $HUB_SUBNET_NAME \
--project=$HUB_PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--range=10.11.0.0/24 \
--region=$REGION

# Create VPC network and subnetwork in spoke project
gcloud compute networks create $SPOKE_NETWORK_NAME \
--project=$SPOKE_PROJECT_ID \
--subnet-mode="custom"

gcloud compute networks subnets create $SPOKE_SUBNET_NAME \
--project=$SPOKE_PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--range=10.12.0.0/24 \
--region=$REGION

Step 3: Firewall Rules

Now, let’s set up firewall rules to allow SSH and ICMP.

gcloud compute firewall-rules create onprem-network-allow-ssh-icmp \
--project=$ONPREM_PROJECT_ID \
--network=$ONPREM_NETWORK_NAME \
--allow=tcp:22,icmp \
--description="Allow SSH and ICMP to VMs" \
--direction=INGRESS

gcloud compute firewall-rules create hub-network-allow-ssh-icmp \
--project=$HUB_PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--allow=tcp:22,icmp \
--description="Allow SSH and ICMP to VMs" \
--direction=INGRESS

gcloud compute firewall-rules create spoke-network-allow-ssh-icmp \
--project=$SPOKE_PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--allow=tcp:22,icmp \
--description="Allow SSH and ICMP to VMs" \
--direction=INGRESS
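
These rules rely on gcloud’s default source range of 0.0.0.0/0, which is acceptable for a short-lived lab. If your organization requires tighter rules, a scoped variant along these lines (the rule name is hypothetical, and the same idea applies to the other two networks) should still permit cross-network pings and IAP-based SSH, since 35.235.240.0/20 is Google’s IAP TCP forwarding range and the test VMs we create later have no external IPs:

gcloud compute firewall-rules create hub-network-allow-ssh-icmp-scoped \
--project=$HUB_PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--allow=tcp:22,icmp \
--source-ranges=10.10.0.0/24,10.11.0.0/24,10.12.0.0/24,35.235.240.0/20 \
--direction=INGRESS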

Step 4: VPC Network Peering

To connect the hub and spoke networks, we will use VPC Network Peering. The peering connection must be created from both sides: once from the hub network and once from the spoke network. Note the route flags below: the hub exports its custom routes (including the dynamic routes learned over the VPN), and the spoke imports them, which is what allows the spoke to reach the on-premise network through the hub.

gcloud compute networks peerings create hub-to-spoke \
--project=$HUB_PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--peer-project=$SPOKE_PROJECT_ID \
--peer-network=$SPOKE_NETWORK_NAME \
--export-custom-routes

gcloud compute networks peerings create spoke-to-hub \
--project=$SPOKE_PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--peer-project=$HUB_PROJECT_ID \
--peer-network=$HUB_NETWORK_NAME \
--import-custom-routes
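
To confirm that both sides are connected, list the peerings and check that their state is ACTIVE:

gcloud compute networks peerings list \
--project=$HUB_PROJECT_ID \
--network=$HUB_NETWORK_NAME

gcloud compute networks peerings list \
--project=$SPOKE_PROJECT_ID \
--network=$SPOKE_NETWORK_NAME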

Step 5: HA VPN Connection

The hub network connects to the on-premise network using a highly available (HA) VPN connection. Alternatively, Cloud Interconnect also works if you need more bandwidth.

Step 5.1: Create VPN Gateways

We will create two VPN gateways, one in the hub network and one in the on-premise network.

gcloud compute vpn-gateways create hub-vpn-gw1 \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--network=$HUB_NETWORK_NAME

gcloud compute vpn-gateways create onprem-vpn-gw1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION \
--network=$ONPREM_NETWORK_NAME

Step 5.2: Create Cloud Routers

Before creating the Cloud Router resources, choose two ASNs (Autonomous System Numbers), one for each router. In this example, we will use 65001 for the hub router and 65002 for the on-premise router.

# Set up Google ASN for both routers
export ASN_HUB=65001
export ASN_ONPREM=65002

# Create Cloud Routers
gcloud compute routers create hub-router1 \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--network=$HUB_NETWORK_NAME \
--asn=$ASN_HUB \
--advertisement-mode=CUSTOM \
--set-advertisement-groups=ALL_SUBNETS \
--set-advertisement-ranges=10.12.0.0/24="Spoke network subnet"

gcloud compute routers create onprem-router1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION \
--network=$ONPREM_NETWORK_NAME \
--asn=$ASN_ONPREM

Note that the Cloud Router in the hub network must advertise the subnets of the spoke network. Otherwise, the on-premise and spoke networks will not be able to communicate, even though DNS queries would still resolve.
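
If you later add more spoke subnets, you can extend the advertisement without recreating the router. A sketch, assuming a hypothetical second spoke range of 10.13.0.0/24:

gcloud compute routers update hub-router1 \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--add-advertisement-ranges=10.13.0.0/24="Second spoke subnet"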

Step 5.3: Create VPN Tunnels

Let’s create two VPN tunnels on each side.

For organizations with the “Restrict VPN Peer IPs” organization policy set to “Deny All”, this step might fail with an error. To resolve it, allow the specific VPN peer IPs in the organization policy.

# TODO: Create 2 shared secrets
export SHARED_SECRET_1=[shared-secret-1]
export SHARED_SECRET_2=[shared-secret-2]

# VPN Gateways
export ONPREM_GW="projects/$ONPREM_PROJECT_ID/regions/$REGION/vpnGateways/onprem-vpn-gw1"
export HUB_GW="projects/$HUB_PROJECT_ID/regions/$REGION/vpnGateways/hub-vpn-gw1"

# Create 2 tunnels in hub network
gcloud compute vpn-tunnels create hub-tunnel0 \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--peer-gcp-gateway=$ONPREM_GW \
--ike-version=2 \
--shared-secret=$SHARED_SECRET_1 \
--router=hub-router1 \
--vpn-gateway=hub-vpn-gw1 \
--interface=0

gcloud compute vpn-tunnels create hub-tunnel1 \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--peer-gcp-gateway=$ONPREM_GW \
--ike-version=2 \
--shared-secret=$SHARED_SECRET_2 \
--router=hub-router1 \
--vpn-gateway=hub-vpn-gw1 \
--interface=1

# Create 2 tunnels in on-premise network
gcloud compute vpn-tunnels create onprem-tunnel0 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION \
--peer-gcp-gateway=$HUB_GW \
--ike-version=2 \
--shared-secret=$SHARED_SECRET_1 \
--router=onprem-router1 \
--vpn-gateway=onprem-vpn-gw1 \
--interface=0

gcloud compute vpn-tunnels create onprem-tunnel1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION \
--peer-gcp-gateway=$HUB_GW \
--ike-version=2 \
--shared-secret=$SHARED_SECRET_2 \
--router=onprem-router1 \
--vpn-gateway=onprem-vpn-gw1 \
--interface=1

Step 5.4: Create BGP Peering for Each Tunnel

We will create 4 router interfaces and attach 1 BGP peer to each of them.

# Router interface and BGP peer for tunnel0 in hub network
gcloud compute routers add-interface hub-router1 \
--interface-name if-hub-tunnel0-to-onprem \
--ip-address 169.254.0.1 \
--mask-length 30 \
--vpn-tunnel hub-tunnel0 \
--region $REGION \
--project $HUB_PROJECT_ID

gcloud compute routers add-bgp-peer hub-router1 \
--peer-name bgp-hub-tunnel0-to-onprem \
--interface if-hub-tunnel0-to-onprem \
--peer-ip-address 169.254.0.2 \
--peer-asn $ASN_ONPREM \
--region $REGION \
--project $HUB_PROJECT_ID

# Router interface and BGP peer for tunnel1 in hub network
gcloud compute routers add-interface hub-router1 \
--interface-name if-hub-tunnel1-to-onprem \
--ip-address 169.254.1.1 \
--mask-length 30 \
--vpn-tunnel hub-tunnel1 \
--region $REGION \
--project $HUB_PROJECT_ID

gcloud compute routers add-bgp-peer hub-router1 \
--peer-name bgp-hub-tunnel1-to-onprem \
--interface if-hub-tunnel1-to-onprem \
--peer-ip-address 169.254.1.2 \
--peer-asn $ASN_ONPREM \
--region $REGION \
--project $HUB_PROJECT_ID

# Router interface and BGP peer for tunnel0 in on-premise network
gcloud compute routers add-interface onprem-router1 \
--interface-name if-onprem-tunnel0-to-hub \
--ip-address 169.254.0.2 \
--mask-length 30 \
--vpn-tunnel onprem-tunnel0 \
--region $REGION \
--project $ONPREM_PROJECT_ID

gcloud compute routers add-bgp-peer onprem-router1 \
--peer-name bgp-onprem-tunnel0-to-hub \
--interface if-onprem-tunnel0-to-hub \
--peer-ip-address 169.254.0.1 \
--peer-asn $ASN_HUB \
--region $REGION \
--project $ONPREM_PROJECT_ID

# Router interface and BGP peer for tunnel1 in on-premise network
gcloud compute routers add-interface onprem-router1 \
--interface-name if-onprem-tunnel1-to-hub \
--ip-address 169.254.1.2 \
--mask-length 30 \
--vpn-tunnel onprem-tunnel1 \
--region $REGION \
--project $ONPREM_PROJECT_ID

gcloud compute routers add-bgp-peer onprem-router1 \
--peer-name bgp-onprem-tunnel1-to-hub \
--interface if-onprem-tunnel1-to-hub \
--peer-ip-address 169.254.1.1 \
--peer-asn $ASN_HUB \
--region $REGION \
--project $ONPREM_PROJECT_ID

Step 5.5: Validate Connection

Now, let’s check whether the tunnels are up. Each of the commands below should return “Tunnel is up and running.”

gcloud compute vpn-tunnels describe hub-tunnel0 \
--project $HUB_PROJECT_ID \
--region $REGION \
--format "get(detailedStatus)"

gcloud compute vpn-tunnels describe hub-tunnel1 \
--project $HUB_PROJECT_ID \
--region $REGION \
--format "get(detailedStatus)"

gcloud compute vpn-tunnels describe onprem-tunnel0 \
--project $ONPREM_PROJECT_ID \
--region $REGION \
--format "get(detailedStatus)"

gcloud compute vpn-tunnels describe onprem-tunnel1 \
--project $ONPREM_PROJECT_ID \
--region $REGION \
--format "get(detailedStatus)"

Step 6: Virtual Machines for Testing

Let’s create three VM instances, one in each project. We will use them for DNS lookup and ping tests.

For organizations with the “Shielded VMs” organization policy enforced, this step might fail with an error. To resolve it, turn off the enforcement at the project level.

# VM instance for hub network
gcloud compute instances create hub-vm \
--project=$HUB_PROJECT_ID \
--zone=${REGION}-a \
--machine-type=e2-medium \
--network=$HUB_NETWORK_NAME \
--subnet=$HUB_SUBNET_NAME \
--tags=client-vm \
--metadata enable-oslogin=TRUE \
--no-address

# VM instance for spoke network
gcloud compute instances create spoke-vm \
--project=$SPOKE_PROJECT_ID \
--zone=${REGION}-a \
--machine-type=e2-medium \
--network=$SPOKE_NETWORK_NAME \
--subnet=$SPOKE_SUBNET_NAME \
--tags=client-vm \
--metadata enable-oslogin=TRUE \
--no-address

# VM instance for on-premise network
gcloud compute instances create onprem-vm \
--project=$ONPREM_PROJECT_ID \
--zone=${REGION}-a \
--machine-type=e2-medium \
--network=$ONPREM_NETWORK_NAME \
--subnet=$ONPREM_SUBNET_NAME \
--tags=client-vm \
--metadata enable-oslogin=TRUE \
--no-address

Grab the internal IP of each VM; we will use them in the next steps.
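
One quick way to read them is with a --format projection on each instance, for example:

# Internal IP of hub-vm; repeat for spoke-vm and onprem-vm
gcloud compute instances describe hub-vm \
--project=$HUB_PROJECT_ID \
--zone=${REGION}-a \
--format="get(networkInterfaces[0].networkIP)"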

Step 7: DNS Managed Zones

Now, we will set up DNS Managed Zones in hub network and spoke network.

Step 7.1: Create private DNS zones

# Create private DNS zone "cloud.local" in hub network
gcloud dns managed-zones create cloud-local-zone \
--dns-name="cloud.local." \
--description="Private DNS zone for resources in hub network" \
--project=$HUB_PROJECT_ID \
--networks=$HUB_NETWORK_NAME \
--visibility=private

# Create private DNS zone "spoke.cloud.local" in spoke network
gcloud dns managed-zones create spoke-local-zone \
--dns-name="spoke.cloud.local." \
--description="Private DNS zone for resources in spoke network" \
--project=$SPOKE_PROJECT_ID \
--networks=$SPOKE_NETWORK_NAME \
--visibility=private

Step 7.2: Create DNS peering zones

Next, let’s configure DNS peering between the hub and spoke networks. The spoke network peers on the “local.” DNS name so that it can resolve both the “cloud.local” and “site.local” names.

# Create peering DNS zone "spoke.cloud.local." in hub network
gcloud dns managed-zones create spoke-peering-zone \
--dns-name="spoke.cloud.local." \
--description="Private DNS peering zone to spoke network" \
--project=$HUB_PROJECT_ID \
--networks=$HUB_NETWORK_NAME \
--target-project=$SPOKE_PROJECT_ID \
--target-network=$SPOKE_NETWORK_NAME \
--visibility=private

# Create peering DNS zone "local." in spoke network
gcloud dns managed-zones create hub-peering-zone \
--dns-name="local." \
--description="Private DNS peering zone to hub network" \
--project=$SPOKE_PROJECT_ID \
--networks=$SPOKE_NETWORK_NAME \
--target-project=$HUB_PROJECT_ID \
--target-network=$HUB_NETWORK_NAME \
--visibility=private

Step 7.3: Add DNS records

# Create test.cloud.local record
cat > test-cloud-record.yml <<EOF
kind: dns#resourceRecordSet
name: test.cloud.local.
rrdatas:
- [INTERNAL_IP_OF_HUB_VM]
ttl: 300
type: A
EOF

# Import the record to cloud local zone
gcloud dns record-sets import -z=cloud-local-zone \
--project=$HUB_PROJECT_ID \
--delete-all-existing test-cloud-record.yml

# Create test.spoke.cloud.local record
cat > test-spoke-record.yml <<EOF
kind: dns#resourceRecordSet
name: test.spoke.cloud.local.
rrdatas:
- [INTERNAL_IP_OF_SPOKE_VM]
ttl: 300
type: A
EOF

# Import the record to spoke local zone
gcloud dns record-sets import -z=spoke-local-zone \
--project=$SPOKE_PROJECT_ID \
--delete-all-existing test-spoke-record.yml
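
If you prefer to skip the YAML files, recent gcloud versions can also create record sets directly. A sketch for the hub record:

gcloud dns record-sets create test.cloud.local. \
--project=$HUB_PROJECT_ID \
--zone=cloud-local-zone \
--type=A \
--ttl=300 \
--rrdatas=[INTERNAL_IP_OF_HUB_VM]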

Step 8: Custom DNS Server

We will use BIND 9 as the custom DNS server; a pre-packaged image is currently available in Google Cloud Marketplace.

Step 8.1: Launch the DNS Server

Here are the steps to set up the DNS server on a Compute Engine VM:

  1. Go to the Product Page in Google Cloud Marketplace. You can also search for “DNS Server - BIND DNS Server on Ubuntu 20.04 LTS”.
  2. Click “Get Started” and agree to the “Terms and agreements”.
  3. Click “Launch” and fill in the details. Make sure that you select a zone in the region that you used for the on-premise network.
  4. Click “Deploy”.

For organizations with “Define trusted image projects” organization policy enabled, you should allow images from “projects/mpi-cloud-infra-services-publi” in the policy.

For organizations with “Define allowed external IPs for VM instances” organization policy set to “deny all”, you should allow this particular VM to use external IP in the policy.

Step 8.2: Sign in and add DNS record

After the deployment completes, go to the Compute Engine page and SSH into “dns-server-vm”.

  1. Run “sudo passwd” and set a new password for “root” user.
  2. Grab the external IP of “dns-server-vm”.
  3. Go to [EXTERNAL_IP]:10000 to access Webmin.
  4. Sign in using “root” as the user and the new password.
  5. In the left navigation bar, click “Refresh Modules” to load the BIND DNS Server module.
  6. Go to “Servers”, and click “BIND DNS Server”.
  7. Click “Create master zone”.
  8. Set the “Domain name / Network” as “site.local” and “Email address” as your own email address. Click “Create”.
  9. Click on the newly created master zone name and click “Address” to add a new A record.
  10. Set the “Name” as “test” and “Address” as the internal IP address of “onprem-vm” in the on-premise project.
  11. Click on the “Apply configuration” button on the top-right corner of the page.

Step 8.3: Make On-premise network use the new DNS server

We will set the internal IP of the new DNS server as the alternative DNS server of the on-premise network. This is a workaround, since we are simulating the on-premise environment inside a Google Cloud project.

export ONPREM_DNS_SERVER_INT_IP=[internal-ip-of-dns-server-vm] 

gcloud dns policies create forward-to-bind9 \
--description="Forward DNS queries to BIND server" \
--project=$ONPREM_PROJECT_ID \
--networks=$ONPREM_NETWORK_NAME \
--private-alternative-name-servers=$ONPREM_DNS_SERVER_INT_IP \
--enable-logging

Step 9: DNS Forwarding

Now, we need to set up DNS forwarding so that queries flow from the on-premise network to Cloud DNS in the hub network and vice versa.

Step 9.1: Hub to on-premise forwarding

First, let’s set up outbound DNS forwarding from hub network to on-premise DNS server.

export ONPREM_DNS_SERVER_EXT_IP=[external-ip-of-dns-server-vm]

# Create outbound forwarding DNS zone "site.local"
gcloud dns managed-zones create site-forwarding-zone \
--dns-name="site.local." \
--description="Private DNS zone to forward to on-premise DNS server" \
--project=$HUB_PROJECT_ID \
--networks=$HUB_NETWORK_NAME \
--forwarding-targets=$ONPREM_DNS_SERVER_EXT_IP \
--visibility=private
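
Since the BIND server will also be reachable over the HA VPN, an alternative is to forward to its internal IP using Cloud DNS private routing instead of its external IP. A sketch:

gcloud dns managed-zones create site-forwarding-zone \
--dns-name="site.local." \
--description="Private DNS zone to forward to on-premise DNS server" \
--project=$HUB_PROJECT_ID \
--networks=$HUB_NETWORK_NAME \
--private-forwarding-targets=$ONPREM_DNS_SERVER_INT_IP \
--visibility=private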

Step 9.2: On-premise to hub forwarding

Next, let’s set up inbound forwarding so that the on-premise DNS server can forward queries to Cloud DNS in the hub network.

gcloud dns policies create hub-inbound-policy \
--description="DNS inbound policy from onprem-network to hub-network" \
--project=$HUB_PROJECT_ID \
--networks=$HUB_NETWORK_NAME \
--enable-inbound-forwarding \
--enable-logging

Now, go to Cloud DNS → DNS Server Policies and select “hub-inbound-policy”. Open the “In Use By” tab and grab the “inbound query forwarding IP”. We will use this IP to set up forwarding in the BIND DNS server.
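
If you prefer the CLI, the inbound forwarding entry points are reserved as internal addresses with purpose DNS_RESOLVER, so something like this should list them:

gcloud compute addresses list \
--project=$HUB_PROJECT_ID \
--filter="purpose=DNS_RESOLVER"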

Then, go back to Webmin and follow these steps:

  1. On “BIND DNS Server” page, go to “Edit Config File”.
  2. Select “/etc/bind/named.conf.options” in the file selector.
  3. Change the config file to this:
acl good-clients {
  35.199.192.0/19;
};

options {
  directory "/var/cache/bind";
  dnssec-validation no;
  allow-recursion { good-clients; };
  listen-on-v6 { any; };
  forwarders {
    [inbound-query-forwarding-ip];
  };
};

Remember to change “[inbound-query-forwarding-ip]” to the IP from the “hub-inbound-policy” DNS server policy. The 35.199.192.0/19 range in the ACL is the source range Cloud DNS uses when forwarding queries to your name server, so recursion is allowed only for queries arriving from Cloud DNS.

Click the green “Save” button in the bottom-left corner, then click the “Apply configuration” button in the top-right corner of the page to apply the settings.

Step 10: Validation

Step 10.1: Cloud NAT

Since our test VMs don’t have external IPs, they cannot reach the internet by default. Therefore, we need to configure Cloud NAT to enable outbound connections for the package installation in the next step.

gcloud compute routers nats create hub-nat \
--router=hub-router1 \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--enable-logging

gcloud compute routers nats create onprem-nat \
--router=onprem-router1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--enable-logging

# For spoke network, we need to create Cloud Router first
gcloud compute routers create spoke-router1 \
--project=$SPOKE_PROJECT_ID \
--region=$REGION \
--network=$SPOKE_NETWORK_NAME

gcloud compute routers nats create spoke-nat \
--router=spoke-router1 \
--project=$SPOKE_PROJECT_ID \
--region=$REGION \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--enable-logging

Step 10.2: Test!

Now that everything is set up, let’s test the architecture by SSHing into each VM in the three projects (hub-vm, spoke-vm, onprem-vm) and running these commands:

# Install dnsutils package
sudo apt update && sudo apt install -y dnsutils

# Run DNS lookup
nslookup test.cloud.local
nslookup test.spoke.cloud.local
nslookup test.site.local
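
Each lookup should return the corresponding A record. The output should look roughly like this (the addresses will differ in your environment; 169.254.169.254 is the GCE metadata server, which acts as the VM’s resolver):

# $ nslookup test.site.local
# Server:   169.254.169.254
# Address:  169.254.169.254#53
#
# Name:     test.site.local
# Address:  10.10.0.2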

If the VPN connection and VPC peering are set up correctly, you should also be able to ping the other VMs by DNS name, like this:

ping test.cloud.local

Teardown

To clean up all the resources we have created, run these commands:

# Delete Cloud NAT
gcloud compute routers nats delete hub-nat \
--project=$HUB_PROJECT_ID \
--region=$REGION \
--router=hub-router1
gcloud compute routers nats delete spoke-nat \
--project=$SPOKE_PROJECT_ID \
--region=$REGION \
--router=spoke-router1
gcloud compute routers nats delete onprem-nat \
--project=$ONPREM_PROJECT_ID \
--region=$REGION \
--router=onprem-router1

# Delete Cloud Router in spoke-network
gcloud compute routers delete spoke-router1 \
--project=$SPOKE_PROJECT_ID \
--region=$REGION

# Delete DNS forwarding and peering zones
gcloud dns managed-zones delete site-forwarding-zone \
--project=$HUB_PROJECT_ID
gcloud dns managed-zones delete spoke-peering-zone \
--project=$HUB_PROJECT_ID
gcloud dns managed-zones delete hub-peering-zone \
--project=$SPOKE_PROJECT_ID

# Delete DNS private zones
gcloud dns record-sets delete test.cloud.local. \
-z=cloud-local-zone \
--project=$HUB_PROJECT_ID \
--type=A
gcloud dns managed-zones delete cloud-local-zone \
--project=$HUB_PROJECT_ID

gcloud dns record-sets delete test.spoke.cloud.local. \
-z=spoke-local-zone \
--project=$SPOKE_PROJECT_ID \
--type=A
gcloud dns managed-zones delete spoke-local-zone \
--project=$SPOKE_PROJECT_ID

# Delete DNS server policies
gcloud dns policies update hub-inbound-policy \
--networks="" \
--project=$HUB_PROJECT_ID
gcloud dns policies delete hub-inbound-policy \
--project=$HUB_PROJECT_ID

gcloud dns policies update forward-to-bind9 \
--networks="" \
--project=$ONPREM_PROJECT_ID
gcloud dns policies delete forward-to-bind9 \
--project=$ONPREM_PROJECT_ID

# Delete VM instances
gcloud compute instances delete hub-vm \
--project=$HUB_PROJECT_ID \
--zone=${REGION}-a
gcloud compute instances delete spoke-vm \
--project=$SPOKE_PROJECT_ID \
--zone=${REGION}-a
gcloud compute instances delete onprem-vm \
--project=$ONPREM_PROJECT_ID \
--zone=${REGION}-a

# Delete BGP Peering and Interfaces
gcloud compute routers remove-bgp-peer hub-router1 \
--peer-name=bgp-hub-tunnel0-to-onprem \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute routers remove-bgp-peer hub-router1 \
--peer-name=bgp-hub-tunnel1-to-onprem \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute routers remove-interface hub-router1 \
--interface-name=if-hub-tunnel0-to-onprem \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute routers remove-interface hub-router1 \
--interface-name=if-hub-tunnel1-to-onprem \
--project=$HUB_PROJECT_ID \
--region=$REGION

gcloud compute routers remove-bgp-peer onprem-router1 \
--peer-name=bgp-onprem-tunnel0-to-hub \
--project=$ONPREM_PROJECT_ID \
--region=$REGION
gcloud compute routers remove-bgp-peer onprem-router1 \
--peer-name=bgp-onprem-tunnel1-to-hub \
--project=$ONPREM_PROJECT_ID \
--region=$REGION
gcloud compute routers remove-interface onprem-router1 \
--interface-name=if-onprem-tunnel0-to-hub \
--project=$ONPREM_PROJECT_ID \
--region=$REGION
gcloud compute routers remove-interface onprem-router1 \
--interface-name=if-onprem-tunnel1-to-hub \
--project=$ONPREM_PROJECT_ID \
--region=$REGION

# Delete VPN Tunnels
gcloud compute vpn-tunnels delete hub-tunnel0 \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute vpn-tunnels delete hub-tunnel1 \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute vpn-tunnels delete onprem-tunnel0 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION
gcloud compute vpn-tunnels delete onprem-tunnel1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION

# Delete Cloud Router in on-premise and hub networks
gcloud compute routers delete hub-router1 \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute routers delete onprem-router1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION

# Delete VPN Gateways
gcloud compute vpn-gateways delete hub-vpn-gw1 \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute vpn-gateways delete onprem-vpn-gw1 \
--project=$ONPREM_PROJECT_ID \
--region=$REGION

# Delete VPC Peering
gcloud compute networks peerings delete hub-to-spoke \
--project=$HUB_PROJECT_ID \
--network=$HUB_NETWORK_NAME
gcloud compute networks peerings delete spoke-to-hub \
--project=$SPOKE_PROJECT_ID \
--network=$SPOKE_NETWORK_NAME

# Delete Firewall Rules
gcloud compute firewall-rules delete onprem-network-allow-ssh-icmp \
--project=$ONPREM_PROJECT_ID
gcloud compute firewall-rules delete hub-network-allow-ssh-icmp \
--project=$HUB_PROJECT_ID
gcloud compute firewall-rules delete spoke-network-allow-ssh-icmp \
--project=$SPOKE_PROJECT_ID

To delete the BIND DNS server, go to Solutions → Solution deployments, select the deployment, and click “Delete”. After that, you can continue deleting the VPC networks and projects:

# Delete on-premise VPC network
gcloud compute networks subnets delete $ONPREM_SUBNET_NAME \
--project=$ONPREM_PROJECT_ID \
--region=$REGION
gcloud compute networks delete $ONPREM_NETWORK_NAME \
--project=$ONPREM_PROJECT_ID

# Delete hub VPC network
gcloud compute networks subnets delete $HUB_SUBNET_NAME \
--project=$HUB_PROJECT_ID \
--region=$REGION
gcloud compute networks delete $HUB_NETWORK_NAME \
--project=$HUB_PROJECT_ID

# Delete spoke VPC network
gcloud compute networks subnets delete $SPOKE_SUBNET_NAME \
--project=$SPOKE_PROJECT_ID \
--region=$REGION
gcloud compute networks delete $SPOKE_NETWORK_NAME \
--project=$SPOKE_PROJECT_ID

# Delete projects
gcloud projects delete $ONPREM_PROJECT_ID
gcloud projects delete $HUB_PROJECT_ID
gcloud projects delete $SPOKE_PROJECT_ID

Further Reading

  • To ensure successful DNS and VPC peering connections between hub and spoke networks, check out this article: Transit Network.
  • Visual guide on how to set up BIND DNS server in Google Cloud: YouTube.
  • Infrastructure as Code: Terraform code is available in this GitHub repository.
