Why I definitively switched from Cloud Functions to Cloud Run

guillaume blaquiere
Google Cloud - Community
8 min read · Jul 17, 2022

Serverless tools are really easy to use for any developer, and you can easily build applications and scale them from zero to a planet-scale solution!
On that journey, project management, deployment, automation and all the other industrialization topics must stay efficient to keep that advantage and to continue to grow smoothly.

On Google Cloud, Cloud Functions is (was?) one of the most popular services to deploy a simple piece of code, either in HTTP mode or in background mode (responding to events, like Pub/Sub messages or Cloud Storage events).

However, the introduction of Cloud Run in 2019, and the very fast and very good feature additions to the product, led me to reconsider the use of Cloud Functions and, slowly, to abandon it.

Here is why I abandoned Cloud Functions.

Product differences

I already talked about that topic in one of my very first articles. It was mainly cost-oriented, but it listed a few advantages for Cloud Functions.

Today, Cloud Functions no longer has any advantage.

Event management

The introduction of Eventarc totally erased the event management difference: it's now possible to process the same events as before, and even more, because more than 70 other Google Cloud products are supported by Eventarc.
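For example, routing Cloud Storage events to a Cloud Run service only takes one Eventarc trigger. A sketch of the command (trigger, service, bucket, region and service account names are placeholders):

gcloud eventarc triggers create my-trigger \
  --location=europe-west1 \
  --destination-run-service=my-service \
  --destination-run-region=europe-west1 \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=my-bucket" \
  --service-account=my-sa@my-project.iam.gserviceaccount.com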

Slice of CPU for cost reduction

The recent introduction of partial vCPU allocation allows you to assign less than 1 vCPU to your Cloud Run instances. That feature is especially interesting if your code mainly waits for API calls' responses and performs little processing.

Cloud Functions already allowed a fraction of a CPU; it's now the same thing with Cloud Run, but with the capability to also change the memory configuration independently of the CPU power.
More possible combinations!
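For instance, assigning half a vCPU and a small amount of memory to an existing service could look like this (service name is a placeholder):

gcloud run services update my-service \
  --cpu=0.5 \
  --memory=256Mi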

The Cloud Run killer features

Over the months, Cloud Run has gained more and more features, and it now surpasses Cloud Functions' capabilities. Here are some of the most important ones.

Serverless container platform

Containers are a popular and modern way to package applications. But the true advantage here is the ability to let developers configure and define their own runtime environment.

Languages, binaries, multi-process, (…) are now all possible, and there is no longer a limit on the supported languages or included dependencies. That's true freedom for developers!

That freedom is true for development, but also for portability. Because containers are a universal way to package an application, you can run them anywhere! On your computer, on a VM, on Kubernetes, … Portability is natively included!

Rollback and traffic splitting

Releases occur often, and faster and faster thanks to CI/CD pipelines and the DevOps mindset. Bugs, issues and version validations also come faster.

Cloud Run offers traffic splitting to easily dispatch a percentage of traffic to different revisions (see the commands sketched after this list):

  • The previous one in case of rollback
  • 50–50 for Blue/Green deployment
  • Gradually to the new one for canary release
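As a sketch, the corresponding gcloud commands could look like this (service and revision names are placeholders):

# Rollback: send 100% of the traffic back to a previous revision
gcloud run services update-traffic my-service \
  --to-revisions=my-service-00041-abc=100

# Canary: send 10% of the traffic to the latest revision
gcloud run services update-traffic my-service \
  --to-revisions=LATEST=10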

Min instances and cold start metric

Serverless is great for your wallet, and for the planet. The services scale to zero if you no longer use them: no server running, nothing to pay, no power wasted for nothing.

The consequence is the time to wait when a new instance starts. It's called a cold start. Depending on your container configuration, it can take a few milliseconds or several seconds (some examples here). Cloud Run now tracks that metric to help you understand the cold start duration and its impact on your service.

To prevent cold starts, you can also set min instances on Cloud Run to keep one or several instances warm. Of course, you will pay for them, but your service is already started and ready to serve requests!
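For instance, keeping one warm instance on an existing service (service name is a placeholder):

gcloud run services update my-service --min-instances=1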

Cloud Run jobs

The latest awesome addition to Cloud Run is Jobs. Like Cloud Run services, it's the ability to run containers in a serverless mode, but, this time, exposing an HTTP webserver is no longer required.

Cloud Run jobs run containers in parallel and stop when their work is done (or retry them in case of error).
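As an illustration, creating and executing a job could look like this (job, image and region are placeholders; at the time of writing, jobs are in preview and the gcloud beta component may be required):

gcloud run jobs create my-job \
  --image=gcr.io/my-project/my-job-image \
  --tasks=10 \
  --max-retries=3 \
  --region=europe-west1

gcloud run jobs execute my-job --region=europe-west1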

Beyond the simple product comparison

Beyond the raw feature list comparison, and as discussed in the introduction, the project lifecycle from development to deployment is important for any application.

Testing capability

In that article, I already demonstrated the testability advantages of Cloud Run: simply run your webserver, or your container, locally.
No emulator and no Functions Framework needed to run it! It's also a consequence of the portability explained before.
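For example, with the Python samples later in this article, you can run the code directly or run the container locally (image name is a placeholder):

# Run the webserver directly
python3 main.py

# Or build and run the container locally
docker build -t my-app .
docker run -p 8080:8080 my-app

# Then test it
curl http://localhost:8080/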

Terraform deployment

The use of Terraform to deploy my latest app has been the cornerstone that led me to abandon Cloud Functions forever!

With Cloud Functions, the Terraform module requires a ZIP file either locally present on your disk or on Cloud Storage.

A ZIP FILE!!!

That means you have to create it in your CI/CD pipeline and put it on Cloud Storage.

How do you create a correct ZIP structure? How do you manage the versions on Cloud Storage? How do you roll back?

Really??? A zip file?!!! No, it’s not serious!!

At a high level, a container has the same purpose: it's a way to package application code.
But it's an OCI structure: there is a registry to store containers, you can sign the manifests, you have protocols and interoperability between platforms, …

Cloud Run with Terraform runs very smoothly: you only have to deploy your container.
Because of popular standards, many tools can build containers for you: Docker through a Dockerfile, or dedicated libraries like Jib in Java or ko in Go. Only industry standards, no homemade/weak solution.
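As a sketch, a minimal Cloud Run service in Terraform only references a container image, with nothing to zip or upload (service, region and image names are placeholders):

resource "google_cloud_run_service" "default" {
  name     = "my-service"
  location = "europe-west1"

  template {
    spec {
      containers {
        image = "gcr.io/my-project/my-image:latest"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}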

Confession of weakness

The ultimate fact is Cloud Functions 2nd generation. That gen is great! You still have the language limitations, but you leverage all the Cloud Run features: longer timeouts, concurrency, CPU/memory scalability, rollback & traffic splitting, …

Great! But why and how?

Simply because Cloud Functions 2nd generation runs on top of Cloud Run!! And the runtime features are now exactly the same.

Yes, Cloud Run is a very strong and great product, and Cloud Functions had no choice but to embrace it to continue to live and evolve!

Why continue to use Cloud Functions?

I personally can’t answer that question. Cloud Functions, 1st or 2nd gen, no longer has any advantage. It’s the opposite: it limits your development, and therefore limits your innovation!

  • Limited set of supported languages
  • Limited supported event types (Eventarc support with 2nd gen only)
  • No runtime configuration
  • Limited portability
  • No easy local testing
  • No concurrency (only with 2nd gen)

And many other missing features (always-on CPU, min instances, committed use discounts, custom domains, gRPC/HTTP2/WebSocket support, …)

The Google Cloud team told me that it was a different developer experience.
Yes, it’s true:

No container to create, therefore no need to learn the Dockerfile syntax

But that advantage is very weak compared to all the other constraints and limitations, and to how easy today’s solutions make it to build standard, reliable and optimized containers for you.

In reality, Google Cloud automatically builds a container for your Cloud Functions thanks to buildpacks: Google Cloud created its own buildpacks for that purpose, now open sourced and based on the Cloud Native Buildpacks project (buildpacks.io), part of the CNCF.

You can also leverage buildpacks with Cloud Run by using the command gcloud run deploy directly. In the end, the only true difference is the addition of a webserver. Less than 10 lines of code in many languages:
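For example, in Python with Flask, such a webserver could look like this (a sketch, names are illustrative):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello World"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080)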

Does it really matter?

Cloud Functions to Cloud Run migration

Because I see no advantage, I recommend you start using Cloud Run for ALL your use cases and, if you have some time, migrate your existing functions to Cloud Run.

In both cases, you have to expose a webserver. In my case, I use Flask, and I added it as a dependency in the requirements.txt file.
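For example, a minimal requirements.txt for the samples below only needs Flask (no version pin shown here; pin the version you want):

Flask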

Set the max concurrency to 1 if you have concurrency issues with Cloud Run: Cloud Functions 1st gen has its concurrency set to 1.
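For example (service name is a placeholder):

gcloud run services update my-service --concurrency=1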

HTTP functions migration

Cloud Functions that answer HTTP requests are the simplest to migrate.

Legacy Cloud Functions code

File main.py

def Hello(request):
    return "Hello World"

Cloud Run migration solution

Rename your main.py to functions.py

Create that main.py file

import os
from flask import Flask, request
import functions

app = Flask(__name__)


@app.route('/')
def call_function():
    return functions.Hello(request)


# For local execution
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))

Note that you simply forward the request object to your function, nothing more.

If you use the buildpacks solution with Python instead of a Dockerfile (i.e. you use the command gcloud run deploy directly on your sources), you have to indicate the entrypoint of your container. For that, add a Procfile like this one:

web: python3 main.py

Of course, you can customize that entrypoint with parameters, use Gunicorn, or do whatever you want at startup.
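For instance, a Gunicorn-based Procfile could look like this (assuming gunicorn is added to requirements.txt):

web: gunicorn --bind :$PORT --workers 1 --threads 8 main:app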

Background functions migration

For that sample, I took a Cloud Function triggered by a Cloud Storage event, and migrated it to a Cloud Run service invoked by an Eventarc event on Cloud Storage.

Legacy Cloud Functions code

File main.py

def hellogcs(event, context):
    for key in event:
        print('event: {} -> {}'.format(key, event[key]))
    for key, value in vars(context).items():
        print('context: {} -> {}'.format(key, value))

Cloud Run migration solution

Rename your main.py to functions.py

Create that main.py file

import os

from flask import Flask, request

import functions

app = Flask(__name__)


class Object(object):
    pass


@app.route('/', methods=['POST'])
def call_function():
    # Rebuild the "resource" part of the legacy context from the CloudEvents headers
    resource = Object()
    resource.service = str.split(str.replace(request.headers['Ce-Source'], '//', '/'), '/')[0]
    resource.name = str.split(str.replace(request.headers['Ce-Source'], '//', '/'), '/', 1)[1] + request.headers['Ce-Subject']
    resource.type = request.get_json()['kind']

    # Rebuild the legacy "context" object from the CloudEvents headers
    context = Object()
    context.event_id = request.headers['Ce-Id']
    context.timestamp = request.headers['Ce-Time']
    context.event_type = request.headers['Ce-Type']
    context.resource = resource

    # Forward the event body and the rebuilt context to the legacy function code
    functions.hellogcs(request.get_json(), context)
    return "ok, see logs"


# For local execution
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))

Note that you have to extract the event data from the HTTP request and split it into 2 variables. The context being an object, the code is a little bit more complex. But you write it once, and then you can migrate all your functions!
And here again, you have to use the Procfile if you use the buildpacks solution.

Common Dockerfile packaging

If you prefer to use a custom container instead of buildpacks, you can use this generic Dockerfile.

FROM python:3.10-slim

ENV PYTHONUNBUFFERED True

WORKDIR /app

COPY requirements.txt .

RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .
ENV PORT 8080

CMD python3 main.py

And then deploy your code with a simple command:

gcloud run deploy
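A more explicit version of that command could look like this (service name and region are placeholders; --source . also works without a Dockerfile, thanks to buildpacks):

gcloud run deploy my-service \
  --source=. \
  --region=europe-west1 \
  --allow-unauthenticated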

Darwin evolution model

I was a big user of Cloud Functions because it changed my developer life. It’s still a good, reliable and scalable product. I personally have nothing against it, and the team puts a lot of effort into offering one of the best serverless products.

However, something better, stronger, easier and more portable now exists. It’s simply obvious to use it.
More and more developers are familiar with containers, and using them is no longer an additional cost.
Using a single product also leads to other advantages: you only have to train your team on one product, and you no longer have to choose between Cloud Functions and Cloud Run. Finally, there are fewer processes and best practices to maintain (because there is only one product), and you increase your overall efficiency.

As in the Darwinian evolution model, it’s not the weakest that disappear; it’s the species most adapted to their environment that dominate and drive the others to extinction.

Will it be the same with Cloud Functions?

guillaume blaquiere
Google Cloud - Community

GDE cloud platform, Group Data Architect @Carrefour, speaker, writer and polyglot developer, Google Cloud platform 3x certified, serverless addict and Go fan.