Creating a developer portal for Cloud Run with sidecar containers and Apigee

Daniel Strebel
Google Cloud - Community
15 min read · Jul 26, 2023

The recently announced multi-container feature in Cloud Run opens up a range of interesting sidecar-based use cases such as logging and monitoring agents, outbound connection proxies for databases, or the ability to front a Cloud Run deployment with a reverse proxy. In this blog post we explore how we can use an Envoy sidecar in front of a Cloud Run container. We use the Apigee Adapter for Envoy as an Envoy filter plugin to add API management capabilities such as self-service developer onboarding via a developer portal and provide centralized analytics and metrics for APIs that are exposed via Cloud Run or via Apigee as the central API management platform.

The solution consists of:

  • A Cloud Run service that hosts a traditional RESTful web application and is fronted by a vanilla Envoy proxy.
  • An Apigee Adapter for Envoy (a.k.a. remote service) that runs in GKE Autopilot and acts as a policy decision point (PDP) for accepting or rejecting calls to the Cloud Run service.
  • The Apigee API platform that is used to manage the API lifecycle of the Cloud Run service. Apigee also offers a turn-key developer portal where developers can obtain credentials for accessing an API.

Using these building blocks we can add API management capabilities to a Cloud Run service, enabling the following user journey:

  1. An API developer self-registers in the Apigee developer portal and obtains access credentials for the API.
  2. They call the Cloud Run endpoint and provide the credentials for authentication.
  3. In Cloud Run, the Envoy proxy container intercepts the request and makes a gRPC call to the Apigee Envoy adapter to authenticate the caller.
  4. The Apigee Envoy adapter verifies the request’s credentials and identifies the corresponding API product as defined in Apigee.
  5. If the credentials are valid and the client hasn’t exhausted their call quota, Envoy forwards the request to its co-located application container in the Cloud Run service.

To provide step-by-step instructions and highlight some details along the way, we will proceed with the following steps:

  1. Assert prerequisites and define the basic configuration
  2. Deploy an unauthenticated version of the Cloud Run service
  3. Deploy the Apigee Adapter for Envoy
  4. Create an Envoy Config to be used as the authenticating sidecar
  5. Re-deploy the Cloud Run service with the authenticating sidecar
  6. Configure an API Product in Apigee to allow access to the Cloud Run service
  7. Create credentials for accessing Apigee in the Apigee Developer Portal and test the end-to-end flow

Assert prerequisites and define the basic configuration

The instructions in this article assume that you have completed the following prerequisites:

  • Apigee runtime: Because the configuration of Apigee goes beyond the scope of this article, we assume you already have an Apigee runtime deployed. The instructions in this article target Apigee X or hybrid, but any Apigee version will work; the necessary CLI instructions for the Envoy adapter can be adapted as described in the documentation.
  • Permissions on the GCP project: We will create a range of different GCP resources that require different levels of permissions. Please ensure that you have the required permissions or are able to obtain them when needed.

To get started we will need to ensure that we have our gcloud environment set up and are correctly logged in:

gcloud auth list
# If your Google Account isn't listed run the following command:
# gcloud auth login

We can then define some basic parameters for our little example:

PROJECT_ID=my-project # The project that hosts the example (project IDs cannot contain underscores)
gcloud config set project $PROJECT_ID
NETWORK=apigee-cr-demo # A network name of your choice
REGION=europe-west1 # Region to host the Cloud Run service and the GKE Autopilot cluster
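
As an optional sanity check before creating any resources, you can confirm that gcloud points at the intended project:

# Should print the project ID we just configured
gcloud config get-value project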

Deploy a basic version of the Cloud Run service

In the first step we want to deploy a version of our RESTful service without any sidecars to test the Cloud Run setup and get us started. Since the implementation of the actual service is outside the scope of this article we will use a basic httpbin container as a placeholder for a more useful web service. To get started we execute the following snippet to create a network, serverless connector, custom service account and finally our Cloud Run service.

Note: If you read the Cloud Run deployment instructions carefully you will notice that they include the "--allow-unauthenticated" flag, which instructs Cloud Run not to require user or service account credentials to invoke the service. We do not require authentication at the level of Cloud Run because we will later use the Envoy adapter with an external authorization filter to perform more fine-grained access control.

# Enable the services
gcloud services enable run.googleapis.com vpcaccess.googleapis.com

# Create the network
gcloud compute networks create $NETWORK --subnet-mode=custom --project $PROJECT_ID
gcloud compute networks subnets create $NETWORK-serverless \
  --network=$NETWORK \
  --range=10.0.1.0/28 \
  --region=$REGION \
  --project=$PROJECT_ID
gcloud compute networks vpc-access connectors create $NETWORK --region $REGION --subnet $NETWORK-serverless

# Create a custom SA for Cloud Run
SA_NAME='demo-cloudrun'
SA_EMAIL="$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com"
if [ -z "$(gcloud iam service-accounts list --filter "$SA_EMAIL" --format="value(email)" --project "$PROJECT_ID")" ]; then
  gcloud iam service-accounts create $SA_NAME \
    --description="Multi-Container Cloud Run Demo" --project "$PROJECT_ID"
fi
gcloud projects add-iam-policy-binding \
  $PROJECT_ID \
  --member="serviceAccount:$SA_EMAIL" \
  --role='roles/secretmanager.secretAccessor'

CLOUD_RUN_NAME=apigee-envoy-cloudrun-example
gcloud run deploy $CLOUD_RUN_NAME --image docker.io/kennethreitz/httpbin \
  --allow-unauthenticated --vpc-connector $NETWORK \
  --service-account $SA_EMAIL --region $REGION --port 80

To validate our initial setup we can call the new Cloud Run endpoint:

CLOUD_RUN_URL=$(gcloud run services describe $CLOUD_RUN_NAME --region $REGION --format 'value(status.url)')
curl "$CLOUD_RUN_URL/json"

This should give you a response body similar to the following:

{
  "slideshow": {
    "author": "Yours Truly",
    ...
  }
}

Note: If you get an error message indicating permission problems, please validate that the Cloud Run service allows unauthenticated invocations by giving the allUsers member the run.invoker role.
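
If you need to add this binding manually, the following sketch using the standard Cloud Run IAM commands should do it:

# Allow unauthenticated invocations by granting run.invoker to allUsers
gcloud run services add-iam-policy-binding $CLOUD_RUN_NAME \
  --region $REGION \
  --member="allUsers" \
  --role="roles/run.invoker"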

Deploy the Apigee Adapter for Envoy

Our next step is to configure and deploy the Apigee Adapter for Envoy. The Apigee Adapter for Envoy is essentially an open-source policy decision point that offers a gRPC service for Envoy’s external authorization filter. To facilitate the configuration, we first install the CLI tooling:

mkdir -p apigee-envoy4cloudrun && cd apigee-envoy4cloudrun  
PLATFORM=linux # or PLATFORM=macOS
TOOL='apigee-remote-service-cli'
DOWNLOAD_LINK=$(curl -s https://api.github.com/repos/apigee/$TOOL/releases/latest \
  | grep "browser_download_url.*${PLATFORM}_64-bit.tar.gz" \
  | cut -d : -f 2,3 \
  | tr -d \" \
  | xargs)
echo "Downloading $DOWNLOAD_LINK"
curl $DOWNLOAD_LINK -L -o "${TOOL}_${PLATFORM}_64-bit.tar.gz"
mkdir -p $TOOL
tar -xf "${TOOL}_${PLATFORM}_64-bit.tar.gz" -C $TOOL
rm "${TOOL}_${PLATFORM}_64-bit.tar.gz"

Because the Apigee adapter synchronizes control plane artifacts, we need to supply the information required to reach the associated Apigee environment:

ORG=my-org
ENV=my-environment
RUNTIME=https://my-env-group-hostname.com

With the Apigee information specified, we can use the remote service CLI to generate the configuration and create a remote-service-config folder that contains the configuration together with the remaining Kubernetes resources that are required to deploy and expose the gRPC service on an internal load balancer.

./apigee-remote-service-cli/apigee-remote-service-cli provision --organization $ORG --environment $ENV --runtime $RUNTIME --token $(gcloud auth print-access-token) -v > apigee-envoy-adapter-config.yaml

./apigee-remote-service-cli/apigee-remote-service-cli samples create -c ./apigee-envoy-adapter-config.yaml --template istio-1.12 --out istio-config

echo "🏭 Create Remote Service Config"
mkdir -p ./remote-service-config
cp ./istio-config/apigee-envoy-adapter.yaml ./remote-service-config/02-apigee-envoy-adapter.yaml
cp ./apigee-envoy-adapter-config.yaml ./remote-service-config/01-apigee-config.yaml
cat <<EOF >./remote-service-config/00-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apigee
EOF
cat <<EOF >./remote-service-config/04-remote-service-ilb.yaml
apiVersion: v1
kind: Service
metadata:
  name: apigee-remote-service-envoy-ilb
  namespace: apigee
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app: apigee-remote-service-envoy
    org: $ORG
    env: $ENV
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
  - port: 5000
    name: grpc
  selector:
    app: apigee-remote-service-envoy
EOF
rm -r istio-config
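
As an optional sanity check: the provision step deploys a remote-token proxy to your Apigee runtime, and its JWKS endpoint (the same URI our Envoy configuration will reference later) should now return a key set:

# Expect a JSON Web Key Set in the response
curl -s "$RUNTIME/remote-token/certs"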

While we deploy our main application and the authentication sidecar on Cloud Run to benefit from its advanced scaling options, we will deploy the Envoy adapter on GKE Autopilot. This allows us to keep the adapter running and benefit from its caching of control plane resources. The deployment steps therefore start with creating the GKE Autopilot cluster in a dedicated subnet of the VPC that already hosts the Cloud Run serverless connector. With the cluster running, we apply the configuration for the Apigee Adapter for Envoy that we created above.

CLUSTER_NAME=gke-autopilot-cluster

gcloud services enable container.googleapis.com --project $PROJECT_ID
gcloud compute routers create $NETWORK-router \
  --network $NETWORK \
  --region $REGION \
  --project $PROJECT_ID
gcloud compute routers nats create $NETWORK-nat \
  --router-region $REGION \
  --router $NETWORK-router \
  --nat-all-subnet-ip-ranges \
  --auto-allocate-nat-external-ips
gcloud compute networks subnets create $NETWORK-gke \
  --network=$NETWORK \
  --range=10.0.16.0/20 \
  --region=$REGION \
  --project=$PROJECT_ID
gcloud container --project $PROJECT_ID clusters create-auto $CLUSTER_NAME --region $REGION --release-channel "regular" --enable-private-nodes --network $NETWORK --subnetwork $NETWORK-gke
gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION --project $PROJECT_ID
kubectl apply -f ./remote-service-config/
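
Before continuing, it is worth checking that the adapter pods come up correctly (using the app label from the manifests we just applied):

# The adapter pods should reach the Ready state
kubectl get pods -n apigee -l app=apigee-remote-service-envoy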

Because the external authorization filter needs to point at the Envoy adapter, we have to obtain the IP address that GKE assigned to the internal load balancer.

export REMOTE_SERVICE_IP=$(kubectl get services \
  --namespace apigee \
  apigee-remote-service-envoy-ilb \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo "Apigee Remote Service Adapter is running on ILB with IP: ${REMOTE_SERVICE_IP:-still provisioning - try again later}"

Note: If you see the message indicating that the load balancer is still being provisioned, please try again until you see an internal IP address in the output.

Lastly, we want to use Workload Identity to allow our Apigee Envoy adapter to send monitoring and logging information back to GCP. For that we create a Google service account, grant it the necessary roles, and bind it to the adapter’s Kubernetes service account.

gcloud iam service-accounts create apigee-envoy-adapter \
  --project=$PROJECT_ID

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:apigee-envoy-adapter@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/apigee.analyticsAgent"
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:apigee-envoy-adapter@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/logging.logWriter"
gcloud iam service-accounts add-iam-policy-binding apigee-envoy-adapter@$PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT_ID.svc.id.goog[apigee/apigee-remote-service-envoy]"
kubectl annotate serviceaccount apigee-remote-service-envoy \
  --namespace apigee \
  iam.gke.io/gcp-service-account=apigee-envoy-adapter@$PROJECT_ID.iam.gserviceaccount.com
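
For the annotation to take effect, the running adapter pods need to be recreated. A minimal sketch, assuming the deployment created by the sample manifests is named apigee-remote-service-envoy:

# Recreate the adapter pods so they pick up the Workload Identity binding
kubectl rollout restart deployment apigee-remote-service-envoy -n apigee
kubectl rollout status deployment apigee-remote-service-envoy -n apigee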

Create an Envoy Config to be used as the authenticating sidecar

With the Envoy adapter running, we can assemble our Envoy configuration that will drive the Envoy sidecar of the Cloud Run deployment.

The Envoy configuration routes all incoming traffic to the cloud-run-container cluster, as you can see in the route config excerpt below:

# Envoy Route Configuration that matches all hosts, paths and verbs

route_config:
  virtual_hosts:
  - name: default
    domains: "*"
    routes:
    - match: { prefix: / }
      route:
        cluster: cloud-run-container
...
# Definition of the cloud-run-container as running on the same host on port 80
clusters:
- name: cloud-run-container
  type: STRICT_DNS
  load_assignment:
    cluster_name: cloud-run-container
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: "127.0.0.1"
              port_value: 80 # depending on the upstream service

It also configures Envoy’s external authorization filter to point to the gRPC service that runs behind the internal load balancer, which we reference via the $REMOTE_SERVICE_IP placeholder, as you can see in the http_filters excerpt below.

# External Authorization filter that calls out to the Apigee Envoy adapter

http_filters:
...
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    grpc_service:
      envoy_grpc:
        cluster_name: apigee-remote-service-envoy
      timeout: 1s
    metadata_context_namespaces:
    - envoy.filters.http.jwt_authn
# Definition of the Envoy Adapter cluster
...
clusters:
...
- name: apigee-remote-service-envoy
  type: LOGICAL_DNS
  http2_protocol_options: {}
  load_assignment:
    cluster_name: apigee-remote-service-envoy
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: "$REMOTE_SERVICE_IP"
              port_value: 5000

To automatically inject the values for your Apigee runtime hostname and the Envoy adapter ILB IP address, you can run the following snippet.

mkdir -p envoy-config
RUNTIME_HOSTNAME=$(echo ${RUNTIME/https:\/\//})

cat <<EOF >./envoy-config/envoy.yaml
# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This is for Envoy 1.16+.
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: default
              domains: "*"
              routes:
              - match: { prefix: / }
                route:
                  cluster: cloud-run-container
          http_filters:
          # evaluate JWT tokens, allow_missing allows API Key also
          - name: envoy.filters.http.jwt_authn
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
              providers:
                apigee:
                  issuer: https://$RUNTIME_HOSTNAME/remote-token/token
                  audiences:
                  - remote-service-client
                  remote_jwks:
                    http_uri:
                      uri: https://$RUNTIME_HOSTNAME/remote-token/certs
                      cluster: apigee-auth-service
                      timeout: 5s
                    cache_duration:
                      seconds: 300
                  payload_in_metadata: https://$RUNTIME_HOSTNAME/remote-token/token
              rules:
              - match:
                  prefix: /
                requires:
                  requires_any:
                    requirements:
                    - provider_name: apigee
                    - allow_missing: {}
          # evaluate Apigee rules
          - name: envoy.filters.http.ext_authz
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
              transport_api_version: V3
              grpc_service:
                envoy_grpc:
                  cluster_name: apigee-remote-service-envoy
                timeout: 1s
              metadata_context_namespaces:
              - envoy.filters.http.jwt_authn
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          access_log:
          # collect Apigee analytics
          - name: envoy.access_loggers.http_grpc
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.grpc.v3.HttpGrpcAccessLogConfig
              common_config:
                transport_api_version: V3
                grpc_service:
                  envoy_grpc:
                    cluster_name: apigee-remote-service-envoy
                log_name: apigee-remote-service-envoy

  clusters:
  # main cloud run application
  - name: cloud-run-container
    type: STRICT_DNS
    load_assignment:
      cluster_name: cloud-run-container
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "127.0.0.1"
                port_value: 80 # depending on the upstream service

  # define cluster for Apigee remote service
  - name: apigee-remote-service-envoy
    type: LOGICAL_DNS
    http2_protocol_options: {}
    load_assignment:
      cluster_name: apigee-remote-service-envoy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "$REMOTE_SERVICE_IP"
                port_value: 5000
    common_lb_config:
      healthy_panic_threshold:
        value: 50.0
    health_checks:
    - timeout: 1s
      interval: 5s
      interval_jitter: 1s
      no_traffic_interval: 5s
      unhealthy_threshold: 1
      healthy_threshold: 3
      grpc_health_check: {}
    connect_timeout: 0.25s

  # define cluster for Apigee JWKS certs
  - name: apigee-auth-service
    connect_timeout: 2s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: apigee-auth-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "$RUNTIME_HOSTNAME"
                port_value: "443"
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: "$RUNTIME_HOSTNAME"
EOF
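
If you have Docker available locally, you can optionally validate the generated configuration with Envoy’s built-in validation mode before uploading it. A quick sketch, using the same Envoy image we will later run on Cloud Run:

# Validate the configuration without starting the proxy
docker run --rm -v "$PWD/envoy-config:/etc/envoy" \
  docker.io/envoyproxy/envoy:v1.20-latest \
  envoy --mode validate -c /etc/envoy/envoy.yaml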

To separate the Envoy configuration from the Envoy image, we store the configuration as a secret in Google Cloud’s Secret Manager. Alternatively, you could build a custom Envoy image with the configuration baked in.

gcloud services enable secretmanager.googleapis.com

gcloud secrets create envoy_config \
  --replication-policy="automatic"
gcloud secrets versions add envoy_config --data-file="./envoy-config/envoy.yaml"
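
To double-check that the configuration round-trips correctly, you can read back the latest secret version:

# Print the first lines of the stored Envoy configuration
gcloud secrets versions access latest --secret=envoy_config | head -n 20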

Deploy a Cloud Run service with an authenticating sidecar

With the configuration in place, we can now create a second revision of our Cloud Run service. This time we include an Envoy proxy as a sidecar to our httpbin service and use the Envoy configuration we just assembled to ensure that any incoming request passes the external authorization filter.

Some parts to highlight here: we make use of Cloud Run’s new multi-container functionality by including two entries in the containers array. We also declare the dependency of the httpbin service on the authenticating sidecar by adding the "run.googleapis.com/container-dependencies" annotation to the service YAML. Lastly, we mount the "envoy-conf-secret" secret that contains our Envoy configuration as a volume on the Envoy container. This way the Envoy configuration is picked up automatically and Envoy starts intercepting incoming HTTP requests.

To ensure the variables are set correctly you can use the following heredoc to generate the Cloud Run Service YAML.

cat <<EOF >cloudrun-service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: $CLOUD_RUN_NAME
  labels:
    cloud.googleapis.com/location: $REGION
  annotations:
    run.googleapis.com/launch-stage: BETA
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/container-dependencies: "{httpbin: [envoy-apigee]}"
        run.googleapis.com/vpc-access-connector: $NETWORK
    spec:
      serviceAccountName: $SA_EMAIL
      containers:
      - image: docker.io/envoyproxy/envoy:v1.20-latest
        name: envoy-apigee
        env:
        - name: ENVOY_LOG_LEVEL
          value: trace
        ports:
        - name: http1
          containerPort: 8080
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
        volumeMounts:
        - name: envoy-conf-secret
          readOnly: true
          mountPath: /etc/envoy
      - image: docker.io/kennethreitz/httpbin
        name: httpbin
        env:
        - name: PORT
          value: '80'
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
      volumes:
      - name: envoy-conf-secret
        secret:
          secretName: envoy_config
          items:
          - key: latest
            path: envoy.yaml
EOF

Since we now have a declarative description of our Cloud Run service, our deployment instructions have become as simple as executing the following command.

gcloud run services replace cloudrun-service.yaml --project $PROJECT_ID

For additional information on the YAML based service definition please see the Cloud Run documentation page.
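
To confirm that the new revision with both containers deployed successfully, you can list the service’s revisions:

# The newest revision should show as active
gcloud run revisions list --service $CLOUD_RUN_NAME --region $REGION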

To test our setup up to this point we can execute the following request:

curl "$CLOUD_RUN_URL/json" -I

If you get an error message like the one below, don’t panic! This works as intended. What is happening at this point is that the authorization filter forwarded the request to the Envoy adapter, which determined that the request didn’t contain the required credentials to be admitted.

HTTP/2 403 
x-cloud-trace-context: b0a897e8765cf72ffb1f9439f0127007;o=1
date: Fri, 02 Jun 2023 11:42:06 GMT
content-type: text/html
server: Google Frontend

Bonus: If you want to see what’s happening under the hood in the Envoy adapter and how the deny decision came about, you can open a new terminal and follow the service’s logs with the following command:

kubectl logs -f -n apigee -l app=apigee-remote-service-envoy

Configure an API Product in Apigee to allow access to the Cloud Run service

Because an example of a denied request is only really useful if we also show that the positive path works, we now want to demonstrate API access with valid credentials. For this we will use the Apigee management APIs to create an API product, a developer, and an app. In a real-world scenario you could build onboarding flows on top of these APIs or use the turn-key developer portal as described in the last step of this article.

TOKEN=$(gcloud auth print-access-token)

CLOUD_RUN_HOSTNAME=$(echo ${CLOUD_RUN_URL/https:\/\//})
curl -H "Authorization: Bearer ${TOKEN}" -H "Content-Type:application/json" "https://apigee.googleapis.com/v1/organizations/${ORG}/apiproducts" -d \
"{
\"name\": \"cloud-run-api-product\",
\"displayName\": \"Cloud Run API Product\",
\"approvalType\": \"auto\",
\"attributes\": [
{
\"name\": \"access\",
\"value\": \"public\"
}
],
\"description\": \"API Product for Cloud Run Example\",
\"environments\": [
\"${ENV}\"
],
\"operationGroup\": {
\"operationConfigs\": [
{
\"apiSource\": \"$CLOUD_RUN_HOSTNAME\",
\"operations\": [
{
\"resource\": \"/\"
}
],
\"quota\": {}
}],
\"operationConfigType\": \"remoteservice\"
}
}"
curl -H "Authorization: Bearer ${TOKEN}" -H "Content-Type:application/json" "https://apigee.googleapis.com/v1/organizations/${ORG}/developers" -d \
'{
"email": "test-user-cloud-run@google.com",
"firstName": "Thomas",
"lastName": "Tester",
"userName": "testingcloudrun"
}'
curl -H "Authorization: Bearer ${TOKEN}" -H "Content-Type:application/json" "https://apigee.googleapis.com/v1/organizations/${ORG}/developers/test-user-cloud-run@google.com/apps" -d \
'{
"name":"cloud-run-test-app",
"apiProducts": [
"cloud-run-api-product"
]
}'
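
To confirm that the artifacts were created as expected, you can read them back through the same management APIs:

# Read back the API product and the developer app
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://apigee.googleapis.com/v1/organizations/${ORG}/apiproducts/cloud-run-api-product" | jq -r ".name"
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://apigee.googleapis.com/v1/organizations/${ORG}/developers/test-user-cloud-run@google.com/apps/cloud-run-test-app" | jq -r ".name"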

With the Apigee artifacts created, we can finally get to the test that we have worked towards for quite some time. We use the credentials of the newly created developer app, which has access to the Cloud Run service, to call our Cloud Run endpoint. The Apigee Adapter for Envoy allows API keys or OAuth2 tokens to be used as credentials. For simplicity’s sake we use the API key here:

API_KEY=$(curl -H "Authorization: Bearer ${TOKEN}" -H "Content-Type:application/json" \
  "https://apigee.googleapis.com/v1/organizations/${ORG}/developers/test-user-cloud-run@google.com/apps/cloud-run-test-app" \
  | jq -r ".credentials[0].consumerKey")
curl "$CLOUD_RUN_URL/json" -H "x-api-key:$API_KEY"

This should again result in the same JSON response that we received in the initial unauthenticated invocation of Cloud Run.

{
  "slideshow": {
    "author": "Yours Truly",
    ...
  }
}
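
As mentioned above, the adapter also accepts OAuth2 tokens. A hedged sketch of the token-based variant, assuming the default endpoints of the remote-token proxy that was deployed during provisioning:

# Exchange the app's key and secret for a JWT at the remote-token proxy
API_SECRET=$(curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://apigee.googleapis.com/v1/organizations/${ORG}/developers/test-user-cloud-run@google.com/apps/cloud-run-test-app" \
  | jq -r ".credentials[0].consumerSecret")
JWT=$(curl -s -X POST "https://$RUNTIME_HOSTNAME/remote-token/token" \
  -H "Content-Type: application/json" \
  -d "{\"client_id\":\"$API_KEY\",\"client_secret\":\"$API_SECRET\",\"grant_type\":\"client_credentials\"}" \
  | jq -r ".token") # response field name assumed from the adapter samples
curl "$CLOUD_RUN_URL/json" -H "Authorization: Bearer $JWT"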

Create credentials for accessing Apigee in the Apigee Developer Portal (Bonus)

As an alternative to creating the developer and app resources via the Apigee APIs as done in the previous section, Apigee also provides an out-of-the-box developer portal. Using the developer portal, potential consumers of the API can register their apps themselves and directly obtain credentials to access the API.

Follow the instructions in the Apigee documentation to create a developer portal and add your new Cloud Run backed API to the API catalog.

The developer journey then looks as follows:

A developer accesses the developer portal and selects the API product they are interested in.

They then create a new application and select the API products they have identified in the previous step and want to create credentials for.

With the application created they now have access to the API key and secret that allows them to access the API as described before.

Summary and next steps

This article demonstrated how the new multi-container support in Cloud Run can be used to implement a sidecar pattern in the form of an authenticating Envoy proxy. We configured our Envoy proxy to use an external policy decision point in the form of the Apigee adapter for Envoy to decide if incoming requests should be accepted or denied. Lastly we explained how Apigee can be used to add self-service onboarding capabilities to Cloud Run and to bring Cloud Run services into a broader API management context.

If you are interested in trying the multi-container support for Cloud Run yourself, check out the release announcement with many more use case descriptions. For another example and a detailed walkthrough of how to use sidecars in Cloud Run to report custom metrics to Google Cloud Managed Service for Prometheus, you can head over to this tutorial in the Cloud Run documentation. Lastly, if you’re interested in the broader picture of how the latest features in Cloud Run are moving serverless forward, make sure you check out this video.

Special thanks to Geir Sjurseth and Richard Seroter for their highly appreciated input and comments on drafts of this post.
