Centralize control with Shared VPC

Stephanie Wong
Nov 8 · 6 min read

As your cloud application scales, you’ll eventually face a network admin’s daily struggle: how do I maintain tight control over network resources without becoming a roadblock for teams spinning up the resources they need to do their work?

⚔️The battle for control vs. flexibility

Large organizations with multiple cloud projects value the ability to share resources, while maintaining logical separation between groups or departments. And this makes complete sense: centrally, network admins are on the hook to maintain sanity for the overall network, which makes sharing resources critically important.

The unavoidable issue is that every team has special needs. It becomes a hamster-wheel of constant churn for admins to set up new projects, access policies, billing, quotas, and so on. If your company is large enough, this can be a full-time job.

☁️Shared VPC to the rescue

Google created Shared VPC to make it easier for an organization to connect resources from multiple projects to a common VPC network. Now resources can communicate with one another securely and efficiently using internal IPs from their subnetwork. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks.

With Shared VPC, you can centrally manage the creation of routes, firewalls, subnet IP ranges, VPN connections, and more for the entire organization, and allow developers to own billing, quotas, IAM permissions, and autonomously operate their development projects.

Imagine you’re the network admin at an e-commerce company (and if you are one, hats off to you). You have a single externally facing web application server that uses services like personalization, recommendation, and analytics, all internally available but built by different development teams. You can have a Shared VPC network with a host project and three service projects for personalization, recommendation, and analytics, all on different subnets.

With Shared VPC, the network and security admin sets up overall security policies in the host project, like restricting which VMs can have public IPs and internet access by setting up an organization policy that disables external IP access for VMs. In this example, that policy would cover the personalization, recommendation, and analytics services. Meanwhile, each team can spin up VMs in its assigned service project and decide more fine-grained details like deploying workloads, service account permissions, billing, and all the other things you don’t want daily requests for.
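If you prefer the CLI over the console for that kind of policy, here’s a sketch using the legacy `gcloud resource-manager org-policies` surface. Treat the YAML shape and the `[org ID]` placeholder as assumptions to check against current docs, since `compute.vmExternalIpAccess` is a list constraint and needs a policy file rather than a simple boolean toggle:

```shell
# Write a policy file that denies external IPs for all VMs.
# (Assumption: v1 list-constraint syntax for org policies.)
cat > policy.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF

# Apply it at the organization level.
# [org ID] is a placeholder for your numeric organization ID.
gcloud resource-manager org-policies set-policy policy.yaml --organization=[org ID]
```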

⚡Throughput performance

One lesser-known point is that VMs get the same network throughput caps and VM-to-VM latency as when they’re not on a shared network. The cap is a limit that can’t be exceeded; it doesn’t indicate the actual throughput of your egress traffic. I was curious what the actual internal throughput difference might be between two VMs in different subnets on a Shared VPC versus two VMs in different subnets on a regular VPC. I used n1-standard-1 instances in us-west1 and us-central1. The results were very comparable, with a negligible improvement in throughput on the default VPC instances! That seems worth it given the centralized control you get with Shared VPC.
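If you want to reproduce a comparison like this yourself, a tool such as iperf3 works well (an assumption on my part — any VM-to-VM benchmark will do), installed on both VMs:

```shell
# On the receiving VM (e.g. a VM in one subnet), start an iperf3 server:
iperf3 -s

# On the sending VM (in the other subnet), run a 30-second throughput test
# against the receiver's internal IP (placeholder shown):
iperf3 -c [receiver internal IP address] -t 30
```

Run the test in both directions and a few times over to smooth out variance before drawing conclusions.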

📖 Shared VPC Set Up

Let’s dive into how to set up a Shared VPC:

You’re going to need to be an organization administrator of your Google Cloud environment, so make sure you’ve already set that up. You can also assign someone the role of Shared VPC Admin to complete these steps.

  1. In the Cloud Shell terminal, create a host project and two service projects (development and production). Replace [xxx] with your own unique string of numbers. Google Cloud project IDs must be globally unique, so you might need to try a few options with longer lengths.
gcloud projects create host-project-[xxx] --name="hostproject" --enable-cloud-apis
gcloud projects create dev-project-[xxx] --name="development" --enable-cloud-apis
gcloud projects create prod-project-[xxx] --name="production" --enable-cloud-apis

2. Set your gcloud configuration to the host project. Replace [hostproject ID] with your host project’s ID.

gcloud config set project [hostproject ID]

3. Create a custom VPC with 2 subnets.

gcloud compute networks create vpc1 --subnet-mode=custom
gcloud compute networks subnets create development --network=vpc1 --range=192.168.1.0/24 --region=us-central1
gcloud compute networks subnets create production --network=vpc1 --range=192.168.25.0/24 --region=us-central1

4. Create two firewall rules: one to allow SSH to all instances in the network, and one to allow ICMP (and HTTP) traffic to instances in these subnets, using instance tags.

gcloud compute firewall-rules create allow-ssh --network vpc1 --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute firewall-rules create allow-icmp --network vpc1 --allow tcp:80,icmp --target-tags development,production

Configure Shared VPC

Now let’s create the Shared VPC!

  1. Go to the left panel and select VPC network → Shared VPC.
  2. Click Set up Shared VPC.
  3. Click Save & Continue.
  4. Leave sharing mode on Individual Subnets.
  5. Select both subnets (development and production) as the subnets to be shared.
  6. Click Continue.
  7. On the next page select the 2 service projects to attach (development and production). Leave the user roles as default. Click Save.
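The console steps above can also be scripted. Here’s a sketch using the `gcloud compute shared-vpc` commands — note this attaches the service projects with host-wide sharing rather than the per-subnet sharing chosen in step 4, so treat it as an approximation:

```shell
# Enable the host project for Shared VPC (run as a Shared VPC Admin).
gcloud compute shared-vpc enable host-project-[xxx]

# Attach the two service projects to the host project.
gcloud compute shared-vpc associated-projects add dev-project-[xxx] \
    --host-project host-project-[xxx]
gcloud compute shared-vpc associated-projects add prod-project-[xxx] \
    --host-project host-project-[xxx]
```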

Test Shared VPC configuration

  1. Switch over to the development service project in Cloud Shell.

gcloud config set project [development project ID]

2. Create a VM with a development tag and specify the development subnet.

gcloud compute instances create dev-instance --zone us-central1-a --tags development --subnet projects/[hostproject ID]/regions/us-central1/subnetworks/development

3. Check the Compute Engine instances page once deployed and copy the internal IP of dev-instance.

4. Switch over to the production service project.

gcloud config set project [production project ID]

5. Create a VM with a production tag and specify the production subnet.

gcloud compute instances create prod-instance --zone us-central1-a --tags production --subnet projects/[hostproject ID]/regions/us-central1/subnetworks/production

6. SSH into the prod instance and try to ping the internal IP of the dev instance in the other subnet.

gcloud compute ssh prod-instance --zone us-central1-a
ping [dev-instance internal IP address] -c 5

You should be able to send packets to dev-instance successfully! The firewall rules you created in the host project propagated down to the service projects thanks to Shared VPC.
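To confirm what a service project can actually see of the shared network, you can list the host-project subnets usable from it (the project ID below is a placeholder):

```shell
# List subnets shared from the host project that this service project may use.
gcloud compute networks subnets list-usable --project [development project ID]
```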

✔️Conclusion

You want to make sure that when configuring subnet IP ranges in the same or different regions, you allow sufficient IP space between subnets for future growth. Plus, GCP lets you expand an existing subnet without affecting any existing VM’s IP address and with zero downtime. How cool is that?
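Subnet expansion is a one-liner. For example, growing the development subnet from a /24 to a /23 (region assumed to be us-central1, matching the instances above):

```shell
# Expand development from 192.168.1.0/24 to a /23 with no downtime.
gcloud compute networks subnets expand-ip-range development \
    --region us-central1 \
    --prefix-length 23
```

This is why leaving gaps between subnet ranges matters — the new /23 has to be free of overlaps with neighboring subnets.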

Shared VPC is a powerful feature that makes GCP more flexible and manageable for your organization. Which means you might just have enough time to go fishing on the weekend without a fire drill ping interrupting your overhead cast 🎣.

🕒 Now what?

  1. Learn more about Shared VPC here.
  2. Subscribe to the GCP YouTube channel where you’ll find a lot more on Cloud Networking, including my Networking End to End series.
  3. Follow me on Twitter to stay up to date on the latest on GCP.
  4. And check out Google Cloud events near you.

Written by Stephanie Wong

Google Cloud Developer Advocate and producer of awesome online content. Creator of the series, GCP Networking End-to-End; host of Google’s Next onAir. @swongful
