VPC Network Peering on Google Cloud Platform (GCP)
Overview
VPC Network Peering enables you to peer VPC networks so that workloads in different VPC networks can communicate in private RFC 1918 space. Traffic stays within Google’s network and doesn’t traverse the public internet.
VPC Network Peering is useful for:
- SaaS (Software-as-a-Service) ecosystems in GCP. You can make services available privately across different VPC networks within and across organizations.
- Organizations with several network administrative domains can peer with each other.
If you have multiple network administrative domains within your organization, VPC Network Peering allows you to make services available across VPC networks in private RFC 1918 space. If you offer services to other organizations, VPC Network Peering allows you to make those services available in private RFC 1918 space to those organizations. The ability to offer services across organizations is useful if you want to offer services to other enterprises, and it is useful within your own enterprise if you have several distinct organization nodes due to your own structure or as a result of mergers or acquisitions.
VPC Network Peering gives you several advantages over using external IP addresses or VPNs to connect networks, including:
- Network Latency: Public IP networking suffers higher latency than private networking. All peering traffic stays within Google’s network.
- Network Security: Service owners do not need to have their services exposed to the public Internet and deal with its associated risks.
- Network Cost: GCP charges egress bandwidth pricing for networks that use external IPs to communicate, even if the traffic is within the same zone. If, however, the networks are peered, they can use internal IPs to communicate and save on those egress costs. Regular network pricing still applies to all traffic.
For information about creating peering connections, see Using VPC Network Peering.
Key Properties
Peered VPC networks exhibit the following key properties:
- VPC Network Peering works with Compute Engine, GKE, and App Engine flexible environment.
- Peered VPC networks remain administratively separate. Routes, firewalls, VPNs, and other traffic management tools are administered and applied separately in each of the VPC networks.
- Each side of a peering association is set up independently. Peering will be active only when the configuration from both sides matches. Either side can choose to delete the peering association at any time.
- Peering and the option to import and export custom routes can be configured for one VPC network even before the other VPC network is created. However, route exchange occurs only after both sides of the peering have been configured.
- VPC peers always exchange all subnet routes. You can also exchange custom routes (static and dynamic routes), depending on whether the peering configurations have been configured to import or export them. For more information, see Importing and exporting custom routes.
- Subnet and static routes are global. Dynamic routes can be regional or global, depending on the VPC network’s dynamic routing mode.
- A given VPC network can peer with multiple VPC networks, but there is a limit.
- IAM permissions for creating and deleting VPC Network Peering are included as part of the project owner, project editor, and network admin roles.
- Peering traffic (traffic flowing between peered networks) has the same latency, throughput, and availability as private traffic in the same network.
- Billing policy for peering traffic is the same as the billing policy for private traffic in same network.
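As an illustration of the custom-route behavior mentioned above, an existing peering can be switched to import and export custom routes with the peerings update command. The names peer-ab and network-a below match the setup later in this document; adjust them for your own environment:

```shell
# Update one side of an existing peering so that it exports its own
# custom routes and imports the peer's custom routes.
# Subnet routes are always exchanged regardless of these flags.
gcloud compute networks peerings update peer-ab \
    --network network-a \
    --import-custom-routes \
    --export-custom-routes
```

The same flags can be passed to `peerings create`; each side of the association controls its own import/export behavior independently.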
VPC Network Peering setup
Create a custom network in projects
In this setup you have been provisioned two projects: the first is testingproject and the second is testingproject2.
To manage the two projects, start a second Cloud Shell session by clicking the + icon.
In the second Cloud Shell, set the project ID by running the following, replacing <PROJECT_ID2> with the GCP project ID of the second project:
gcloud config set project <PROJECT_ID2>
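To confirm that each Cloud Shell session is pointed at the intended project, you can print the active project from the gcloud configuration:

```shell
# Prints the project ID currently set in this shell's configuration.
gcloud config get-value project
```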
testingproject:
Go back to first cloud shell and create a custom network:
gcloud compute networks create network-a --subnet-mode custom
Create a subnet within this VPC and specify a region and IP range by running:
gcloud compute networks subnets create network-a-central --network network-a \
    --range 10.0.0.0/16 --region us-central1
Create a VM instance:
gcloud compute instances create vm-a --zone us-central1-a --network network-a --subnet network-a-central
Run the following to enable SSH and ICMP, because you'll need a secure shell to communicate with the VMs during connectivity testing:
gcloud compute firewall-rules create network-a-fw --network network-a --allow tcp:22,icmp
Next, set up testingproject2 in the same way.
testingproject2:
Switch to the second cloud shell and create a custom network:
gcloud compute networks create network-b --subnet-mode custom
Create a subnet within this VPC and specify a region and IP range by running:
gcloud compute networks subnets create network-b-central --network network-b \
    --range 10.8.0.0/16 --region us-central1
Create a VM instance:
gcloud compute instances create vm-b --zone us-central1-a --network network-b --subnet network-b-central
Run the following to enable SSH and ICMP, because you'll need a secure shell to communicate with the VMs during connectivity testing:
gcloud compute firewall-rules create network-b-fw --network network-b --allow tcp:22,icmp
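Before moving on to peering, it can be worth verifying that the subnets were created as expected. For example, in the second Cloud Shell:

```shell
# Confirm network-b-central exists with the range 10.8.0.0/16.
gcloud compute networks subnets list --network network-b
```

Run the equivalent command with network-a in the first Cloud Shell.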
Setting up a VPC Network Peering session
Consider an organization that needs VPC Network Peering to be established between network-a in testingproject and network-b in testingproject2. For VPC Network Peering to be established successfully, the administrators of network-a and network-b must configure the peering association separately.
Peer network-a with network-b:
You will need to select the correct project in the console before you apply the settings. Do that by clicking the down arrow next to the GCP project ID at the top of the screen, then selecting the project ID you need.
testingproject
Go to VPC Network Peering in the Google Cloud Platform Console by navigating to the Networking section and clicking VPC network > VPC network peering in the left menu. Once you're there:
- Click Create connection.
- Click Continue.
- Type “peer-ab” as the Name for this side of the connection.
- Under Your VPC network, select the network you want to peer (network-a).
- Set the Peered VPC network radio buttons to In another project.
- Paste in the Project ID of the second project.
- Type in the VPC network name of the other network (network-b).
- Click Create.
At this point, the peering state remains INACTIVE because there is no matching configuration in network-b in testingproject2.
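If you prefer the command line over the console, the same peering can be created from the first Cloud Shell. This is a sketch of the equivalent command; replace <PROJECT_ID2> with the second project's ID:

```shell
# Create the network-a side of the peering. It stays INACTIVE until
# the matching configuration is created in network-b.
gcloud compute networks peerings create peer-ab \
    --network network-a \
    --peer-project <PROJECT_ID2> \
    --peer-network network-b
```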
Example Output:
Peer network-b with network-a
Note: Switch to the second project in the console.
testingproject2
- Click Create connection.
- Click Continue.
- Type “peer-ba” as the Name for this side of the connection.
- Under Your VPC network, select the network you want to peer (network-b).
- Set the Peered VPC network radio buttons to In another project.
- Specify the Project ID of the first project.
- Specify VPC network name of the other network (network-a).
- Click Create.
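Once both sides are configured, you can check the peering state from either project; the STATE column should now read ACTIVE:

```shell
# List peerings configured on network-b. When the configurations on
# both sides match, the state changes from INACTIVE to ACTIVE.
gcloud compute networks peerings list --network network-b
```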
Example Output:
VPC Network Peering becomes ACTIVE and routes are exchanged. As soon as the peering moves to the ACTIVE state, the following traffic flows are set up:
- Between VM instances in the peered networks: Full mesh connectivity.
- From VM instances in one network to Internal Load Balancing endpoints in the peered network.
The routes to peered network CIDR prefixes are now visible across the VPC network peers. These are implicit routes generated for active peerings; they don’t have corresponding route resources. The following command lists the routes for all VPC networks in testingproject.
gcloud compute routes list --project <FIRST_PROJECT_ID>
Example Output:
Connectivity Test
testingproject
Navigate to VM instances console: Click Navigation Menu > Compute Engine > VM instances.
Copy the INTERNAL_IP for vm-a.
testingproject2
Click Product & services > Compute > Compute Engine > VM instances.
SSH into the vm-b instance.
In the SSH shell of vm-b, run the following command, replacing <INTERNAL_IP_OF_VM_A> with the vm-a instance's INTERNAL_IP:
ping -c 5 <INTERNAL_IP_OF_VM_A>
Example Output: