Knative and Cloud Run, portability in action

guillaume blaquiere
Jan 23 · 7 min read

Cloud Run was announced at Next ’19 in San Francisco. The year before, Knative was introduced by Google at the same event. Both promise the same thing: serverless and portability on any Kubernetes cluster where Istio and Knative are installed.

I had the chance to be on stage for this announcement at Next ’19, with a session titled: Container once, Serverless anywhere

Serverless anywhere, really? Is it true? Is it possible?

Let’s validate this!

Cloud Run

Cloud Run comes in two flavors: a fully managed version, and a GKE version that is part of Cloud Run for Anthos, which allows you to deploy Cloud Run services on any compliant cluster managed by Anthos: on GKE, on-premises, or on another cloud provider.

Managed version

gcloud beta run deploy hello --region us-central1 \
--image gcr.io/cloudrun-hello-go/hello \
--platform managed --allow-unauthenticated

After a few seconds, the service is deployed and the reachable URL is displayed. Test it!

curl https://hello-<project hash>.run.app/
# Result
This created the revision hello-00001 of the Cloud Run service hello in the GCP project <project>
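As an aside, the revision name seen here follows a simple pattern: the service name plus a zero-padded counter, hello-00001 being the first revision of the service hello. A quick sketch of that naming scheme:

```shell
# Reproduce the revision naming seen above: <service>-<5-digit counter>
SERVICE="hello"
REVISION_NUMBER=1
printf '%s-%05d\n' "$SERVICE" "$REVISION_NUMBER"
# prints: hello-00001
```

Each new deployment of the same service increments the counter and creates a new revision.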

Good. Let’s deploy the same thing on GKE.

Anthos with GKE version

Let’s start by deploying a GKE cluster compliant with Cloud Run.

gcloud beta container clusters create cloudrun-cluster \
--addons=HorizontalPodAutoscaling,HttpLoadBalancing,CloudRun \
--machine-type=n1-standard-2 \
--zone=us-central1-a \
--enable-stackdriver-kubernetes

Then deploy the service with the gcloud command line:

gcloud beta run deploy hello --cluster-location us-central1-a \
--image gcr.io/cloudrun-hello-go/hello \
--platform gke --connectivity external \
--cluster cloudrun-cluster

Time to test! But testing a Knative service on a Kubernetes cluster is not as easy as with the managed version. There are two things to get:

  • The ingress gateway, for getting the external IP. This represents the load balancer deployed to accept ingress traffic.
kubectl get svc istio-ingressgateway --namespace istio-system \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  • The deployment hostname. It is provided at the end of the deployment and is used to route the traffic to the correct service and pod.
kubectl get route.serving.knative.dev hello \
-o jsonpath='{.status.url}' | sed 's/http:\/\///g'

Now, put it all together and request the endpoint like this:

curl -H "Host: $(kubectl get route.serving.knative.dev hello \
-o jsonpath='{.status.url}' | sed 's/http:\/\///g')" \
$(kubectl get svc istio-ingressgateway --namespace istio-system \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Result
This created the revision hello-<hash> of the Cloud Run service hello in the GCP project <project>
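The Host header trick is needed because Istio routes requests by hostname, while curl targets the gateway’s raw IP. The scheme-stripping done with sed can also be sketched in pure shell (the URL below is a hypothetical example of what the route’s status.url returns):

```shell
# Hypothetical route URL, as returned by the Knative route's status.url
URL="http://hello.default.example.com"

# Strip the "http://" scheme to get the value for the HTTP Host header
# (equivalent to the sed 's/http:\/\///g' used above)
HOST="${URL#http://}"

echo "$HOST"
# prints: hello.default.example.com
```

With a real DNS entry pointing at the gateway IP, the Host header would not be needed; it is only a convenience for testing without DNS.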

Great! Cloud Run on GKE and Cloud Run managed are compatible. Fortunately (!!!), they are both Google Cloud services, and the same CLI command is used (with some changes for the service definition/specifics).

Universality of YAML

Kubernetes works with the YAML file format for declaring all the elements of the cluster to the master node. And, of course, Knative also uses this format for defining services.

Minimal YAML

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/cloudrun-hello-go/hello

Note: in many examples you will find the metadata.namespace value set. It is optional. If not set, the current namespace is used when you deploy, or you can explicitly define it with the --namespace param.
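For illustration, the minimal YAML with an explicit namespace would look like this (a sketch; the namespace name demo is hypothetical):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: demo   # hypothetical; optional, defaults to the current namespace
spec:
  template:
    spec:
      containers:
        - image: gcr.io/cloudrun-hello-go/hello
```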

Cloud Run managed

Use the gcloud command line to apply the YAML. For the Cloud Run managed version, there is no other way to interact with the fully managed cluster.

gcloud beta run services replace hello.yaml --platform managed \
--region us-central1

This command line applies the YAML file to create the service, or update it if it already exists, but it doesn’t allow you to customize the access mode. The replace command simply applies the YAML configuration and does not interact with the Google IAM service to allow unauthenticated users. Therefore, the test can be done in two ways:

  • Perform an authenticated request by adding an identity token in the header
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)"\
https://hello-<project hash>.run.app/
  • Allow all unauthenticated users with an IAM command and then perform an unauthenticated request
gcloud beta run services add-iam-policy-binding hello-yaml \
--member allUsers --role roles/run.invoker --region us-central1
curl https://hello-<project hash>.run.app/

The result is the same:

# Result
This created the revision hello-yaml-00001 of the Cloud Run service hello-yaml in the GCP project <project>

On the managed platform, the hello.yaml file has been correctly applied and the service deployed. Let’s go ahead with the other platforms.

GKE with Cloud Run for Anthos

gcloud beta run services replace hello.yaml --platform gke \
--cluster-location us-central1-a --cluster cloudrun-cluster

But this time, I don’t want to be tied to Google. That was required for the Cloud Run managed version, because I had no other way to interact with the managed cluster.

However, this time we have a Kubernetes cluster and we can use a dedicated tool for it: kubectl, the standard CLI tool for interacting with Kubernetes clusters.

Kubernetes follows a declarative model: we apply the configuration to the master node and it performs the appropriate actions.

kubectl apply -f hello.yaml

Now the service is created, and to test it we can reuse the previous GKE command lines. Don’t forget to change the name of the service to hello-yaml.

curl -H "Host: $(kubectl get route.serving.knative.dev hello-yaml \
-o jsonpath='{.status.url}' | sed 's/http:\/\///g')" \
$(kubectl get svc istio-ingressgateway --namespace istio-system \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Result
This created the revision hello-yaml-<hash> of the Cloud Run service hello-yaml in the GCP project <project>

Perfect! But this GKE cluster is not really agnostic of GCP, because of the Cloud Run add-on installed on it. Can we use something more neutral?

GKE with Istio and Knative

Let’s start with a plain GKE cluster, without the Cloud Run add-on.

gcloud beta container clusters create k8s-cluster \
--machine-type=n1-standard-2 \
--zone=us-central1-a \
--enable-stackdriver-kubernetes

Before installing Knative, Istio is required:

# Download Istio
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.1 sh -
# Go into directory
cd istio-1.3.1
# Install the Istio CRDs
for i in install/kubernetes/helm/istio-init/files/crd*yaml; \
do kubectl apply -f $i; done
# Install Istio with the demo profile (permissive mode, enough for the test)
kubectl apply -f install/kubernetes/istio-demo.yaml

Then, install Knative as on any Kubernetes cluster.

And, as before, use kubectl to apply the service configuration:

kubectl apply -f hello.yaml

Finally, test it

curl -H "Host: $(kubectl get route.serving.knative.dev hello-yaml \
-o jsonpath='{.status.url}' | sed 's/http:\/\///g')" \
$(kubectl get svc istio-ingressgateway --namespace istio-system \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Result
This created the revision hello-yaml-<hash> of the Cloud Run service hello-yaml in the GCP project <project>

Great! But we are still in a Google Cloud environment. Is that a bias in these tests? Let’s test somewhere else.

Any Kubernetes cluster?

I followed this tutorial on Medium, and it took about 20 minutes to deploy a Kubernetes cluster on AWS EKS, with several manual actions. You can also use eksctl to deploy your cluster with one line of CLI: easier and quicker, but not via the GUI.
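For reference, the one-line eksctl deployment mentioned above could look like this (a sketch; the cluster name, region and node count are assumptions, and the command requires configured AWS credentials):

```shell
# Create a 3-node EKS cluster in one command
# (name, region and node count are hypothetical; adjust to your account)
eksctl create cluster --name knative-cluster --region us-east-1 --nodes 3
```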


When your cluster is deployed and the nodes are linked to your master, I mean when the command kubectl get nodes returns 3 lines with Ready status, you can install Istio and Knative in the same way as previously on GKE.

And, as before, use kubectl to apply the service configuration:

kubectl apply -f hello.yaml

And, test it.

curl -H "Host: $(kubectl get route.serving.knative.dev hello-yaml \
-o jsonpath='{.status.url}' | sed 's/http:\/\///g')" \
$(kubectl get svc istio-ingressgateway --namespace istio-system \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Result
This created the revision hello-yaml-<hash> of the Cloud Run service hello-yaml

Note: the Istio Ingress Gateway provides a hostname in the EKS environment, not an IP like on GKE. The underlying implementation can differ slightly from one provider to another.
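To make the test command portable across providers, you can fall back from the IP to the hostname (a sketch with illustrative values; in a real cluster they would come from the kubectl jsonpath queries shown above):

```shell
# Values as a cluster would return them (illustrative):
# on GKE the ingress entry has an "ip", on EKS a "hostname"
INGRESS_IP=""                                # empty on EKS
INGRESS_HOSTNAME="abc123.elb.amazonaws.com"  # hypothetical EKS load balancer

# Use the IP when present, otherwise fall back to the hostname
ENDPOINT="${INGRESS_IP:-$INGRESS_HOSTNAME}"
echo "$ENDPOINT"
# prints: abc123.elb.amazonaws.com
```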

Serverless everywhere

Of course, in these examples and for easier testing, the hello container is platform independent, which allows us to avoid integration constraints: no IAM role is needed to reach specific resources like databases, storage or other serverless products.

In any case, it’s awesome! The Knative project is totally amazing and ensures great compatibility. The Kubernetes CLI commands with kubectl are the same for deploying and for requesting the service! “Container once, serverless anywhere” is true, and portability is a reality!

It’s a good foundation for the future. Events, debugging and other important features are coming to the Kubernetes, Knative and Cloud Run platforms. Stay tuned, the next releases will be great!


The source code is available on GitHub. The code is a fork of Cloud-Run-Hello, but without the HTML template.

Google Cloud - Community

A collection of technical articles published or curated by Google Cloud Developer Advocates. The views expressed are those of the authors and don't necessarily reflect those of Google.

guillaume blaquiere

Written by

GDE Google Cloud Platform, scrum master, speaker, writer and polyglot developer, Google Cloud platform 3x certified, serverless addict and Go fan.
