Direct VPC Egress with Cloud Run

Alistair Grew
Appsbroker CTS Google Cloud Tech Blog
5 min read · Aug 15, 2023

The obsolescence of the serverless VPC access connector?

Source: https://cloud.google.com/blog/products/serverless/announcing-direct-vpc-egress-for-cloud-run

Introduction

I am a big fan of Cloud Run, Google’s managed Knative-based serverless environment for containers. In my popular ‘Are Kubernetes days numbered?’ post I spoke about some of the new functionality that Google has recently developed to make it even more compelling:

  • Startup CPU boost
  • Always on CPU Allocation
  • Sidecar Containers
  • Cloud Run Jobs
  • Request Concurrency

So, when casually browsing LinkedIn before bed, I stumbled across a post by Wietse Venema (who literally wrote the book on Cloud Run) about a new feature, Direct VPC egress, and I was excited.

So what is Direct VPC Egress?

Put simply, this is a new way to connect Cloud Run instances to resources in a VPC network, such as Cloud SQL (over private IP), GCE, and Memorystore instances. To reuse the diagram from above:

Source: https://cloud.google.com/blog/products/serverless/announcing-direct-vpc-egress-for-cloud-run

But can’t I already do this with the serverless VPC access connector?

"So what?" you might be saying, "the serverless VPC access connector has let me do this for ages." And yes, you would be right. However, under the hood this connector has to provision hidden GCE instances to provide that connectivity, and those instances attract an additional 24/7 cost, which I feel goes against one of the advantages of 'serverless': elasticity, and only paying while you are serving.

Source: https://cloud.google.com/blog/products/serverless/announcing-direct-vpc-egress-for-cloud-run

These costs can add up as well: whilst a default 2–10 instance 'f1-micro' based connector providing 100 Mbps may start at around $14/month, an 'e2-standard-4' connector maxed out at 10 instances, providing 16 Gbps, costs a whopping $1,250/month. What makes this scarier is that the connector will scale up but not scale back down again, as per the warning below:

Setting up a Serverless Access Connector in the Console
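For context, this is roughly how such a connector is created and attached with gcloud. This is a sketch only: the connector name, service name, region, and IP range are placeholders, and the machine type and instance counts simply mirror the defaults mentioned above.

```
# Create a serverless VPC access connector (hidden GCE instances are
# provisioned behind the scenes and billed 24/7).
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28 \
  --machine-type=f1-micro \
  --min-instances=2 \
  --max-instances=10

# Attach the connector to a Cloud Run service so outbound traffic can
# reach resources inside the VPC.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 \
  --vpc-connector=my-connector
```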

Setting up Direct VPC Egress

First a quick disclaimer:

Direct VPC Egress is currently in preview so functionality may well change before GA release.

Firstly, let's get some limitations out of the way:

Source: https://cloud.google.com/run/docs/configuring/vpc-direct-vpc#limitations

There is nothing there that I find particularly unexpected, especially for a preview service, but the limitations are worth knowing regardless.

Setting it up from the console is as easy as, if not easier than, the serverless access connector. Expand the container networking and security section, select Networking, select 'Connect to a VPC for outbound traffic', and set your specific configuration as per below:

Configuring Direct VPC Egress in the Console
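If you prefer the CLI, the console steps above map onto a beta gcloud command along these lines at the time of writing. Treat this as a sketch: the service name, image, region, network, subnet, and tag are placeholders, and as a preview feature the flags may change before GA.

```
# Deploy a Cloud Run service with Direct VPC Egress (preview), sending
# only private-range (RFC 1918) traffic through the VPC.
gcloud beta run deploy my-service \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 \
  --network=default \
  --subnet=default \
  --network-tags=cloud-run-egress \
  --vpc-egress=private-ranges-only
```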

After deployment, if you wish to confirm your settings, they are easily found under Networking. One useful bit of information is that network tags are applied on a per-revision basis, which could allow some clever blue/green or canary testing logic:

Confirming network connectivity in the Console.
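The same details can be checked from the CLI, and the per-revision nature of the tags pairs naturally with Cloud Run's traffic splitting. A sketch, with hypothetical service and revision names:

```
# Inspect the deployed service, including its networking configuration.
gcloud run services describe my-service --region=us-central1

# Because network tags are set per revision, traffic splitting can be used
# to canary a revision carrying a new tag (and therefore different firewall
# behaviour) against the previous one.
gcloud run services update-traffic my-service \
  --region=us-central1 \
  --to-revisions=my-service-00002-new=10,my-service-00001-old=90
```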

Thoughts and Conclusion

So am I a fan? Yes, absolutely. The ease of setup is superb, and this really brings a much simpler mechanism for connecting Cloud Run into VPCs. As someone who is currently exploring modernising a client's PHP application onto Cloud Run, this is definitely something I will consider (once it is available in europe-west2!). I am also interested to see how this functionality might evolve; a logical jump is to make it available for Cloud Functions gen2, which runs on Cloud Run, and possibly even App Engine Standard.

Do I think it will replace the serverless access connector? I still see a few pros for the older method:

  • Use of fewer RFC 1918 addresses: a serverless access connector is ultimately scoped to a /28 subnet, whereas with Direct VPC Egress, at the current maximum of 100 supported instances, the recommendation is to allow 400 IP addresses, i.e. a /23 subnet.
  • Being able to route out via Cloud NAT. I have sometimes seen cases where serverless services need to present a single outbound IP address that appears on a whitelist for a 3rd-party service (though my preference is always strong auth over this!). This isn't currently possible with the new method.
  • Support for more than 100 instances. As mentioned above, there are some services that may need to scale out further. In that case, though, I would be inclined to use fewer, larger instances and increase the number of requests routed to each instance using request concurrency, as this also minimises cold starts; see the sketch after this list.
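As an illustration of that last point, tuning instance size and concurrency is just a couple of flags on the service. A sketch only; the service name, region, and exact values are placeholders rather than recommendations:

```
# Handle more load with fewer, larger instances rather than a wider
# scale-out: raise per-instance resources and concurrency, cap instances.
gcloud run services update my-service \
  --region=us-central1 \
  --cpu=4 \
  --memory=4Gi \
  --concurrency=250 \
  --max-instances=80
```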

All in all, though, I think this is an exciting new method that has the potential to encourage even more adoption of Cloud Run.

Until next time, keep it Googley :)

About CTS

CTS is the largest dedicated Google Cloud practice in Europe and one of the world’s leading Google Cloud experts, winning 2020 Google Partner of the Year Awards for both Workspace and GCP.

We offer a unique full-stack Google Cloud solution for businesses, encompassing cloud migration and infrastructure modernisation. Our data practice focuses on analysis and visualisation, providing industry-specific solutions for Retail, Financial Services, and Media and Entertainment.

We’re building talented teams ready to change the world using Google technologies. So if you’re passionate, curious and keen to get stuck in — take a look at our Careers Page and join us for the ride!

Alistair Grew

GCP Architect based in the Manchester (UK) area. Thoughts here are my own and don’t necessarily represent my employer.