Combining Cloud Run and GKE for Effortless Management

Julio Diez
Google Cloud - Community
5 min read · Jun 10, 2024

When it comes to orchestrating containerized applications, Google Kubernetes Engine (GKE) and Cloud Run are two powerful offerings from Google Cloud that cater to different needs and scenarios. Both platforms simplify the deployment and management of containerized workloads, yet each shines in distinct ways, making the decision of which to use a matter of matching the right tool to the right job.

Google Kubernetes Engine: The Powerhouse for Complex Workloads

GKE is a managed Kubernetes service that delivers all the benefits of Kubernetes without the operational overhead. It is particularly advantageous for organizations looking to run complex, large-scale applications. With GKE, you can harness the full power of Kubernetes, leveraging its robust features for automation, scaling, and self-healing. This makes it ideal for workloads that demand high availability, intricate networking, and fine-grained control over resource management.

Cloud Run: The Simplicity of Serverless for Microservices

For use cases that involve event-driven workloads, particularly those with unpredictable traffic patterns, Cloud Run is an excellent choice. Consider an e-commerce site with a surge in traffic during holiday sales, or a data processing job that runs at irregular intervals. In these scenarios, Cloud Run’s ability to scale instantly in response to demand ensures performance remains steady without over-provisioning resources.

For customers who want to avoid the burden of infrastructure management, or who are just getting started with containers, Cloud Run offers a streamlined solution. Its serverless architecture means you don't need to worry about the complexities of managing servers or clusters, so you can focus on developing your application. You deploy your container, and Cloud Run manages everything else, including auto-scaling your application up with demand and down to zero when idle, so you only pay for what you use. This makes it an ideal option for teams looking to accelerate development cycles and reduce operational costs.
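To make the scaling behavior concrete, here is a minimal Terraform sketch of a Cloud Run service configured to scale to zero. The service name, region, and container image are placeholders, not values from the blueprint:

# Sketch: a Cloud Run service that scales down to zero when idle.
resource "google_cloud_run_v2_service" "app" {
  name     = "demo-app"     # placeholder name
  location = "europe-west1" # placeholder region

  template {
    scaling {
      min_instance_count = 0  # scale to zero when there is no traffic
      max_instance_count = 10 # cap instance count during spikes
    }
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello" # sample image
    }
  }
}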

The Best of Both Worlds: GKE and Cloud Run with Kong API Gateway

In this article, I will demonstrate how to harness the combined power of GKE and Cloud Run, using Kong API Gateway as the central management tool. By deploying Kong on GKE, you gain access to advanced API management features such as traffic control, security enforcement, and real-time monitoring. This setup allows you to take full advantage of GKE’s robust container orchestration alongside Cloud Run’s effortless scaling for stateless applications, providing a versatile and cost-effective solution for managing your workloads.

The Cloud Foundation Fabric repository contains a Terraform blueprint that deploys all the components discussed in this article. While most of the code is in Terraform, the Kong configuration is conveniently kept in YAML, allowing for flexible customization.

# values-cp.yaml
# Do not use Kong Ingress Controller
ingressController:
  enabled: false

image:
  repository: kong/kong-gateway
  tag: "3.6.1.4"

# Mount the secret created earlier
secretVolumes:
  - kong-cluster-cert
...
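
As a rough sketch of how such a values file can be applied from Terraform, Kong's official Helm chart can be installed with the Helm provider. The release name and namespace below are assumptions, not the blueprint's actual values:

# Sketch: install the Kong control plane using the values file above.
resource "helm_release" "kong_cp" {
  name             = "kong-cp" # assumed release name
  namespace        = "kong"    # assumed namespace
  create_namespace = true
  repository       = "https://charts.konghq.com"
  chart            = "kong"

  values = [file("${path.module}/values-cp.yaml")]
}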

Deploying Kong API Gateway on GKE

The image below shows an architectural overview of what we are going to deploy.

First, we deploy the Kong API Gateway following the instructions in the official documentation. The recommended approach is to run Kong Gateway with separate control plane and data plane deployments, known as hybrid mode. In this mode, Kong Gateway secures control plane/data plane communication with mutual TLS (mTLS), achieved by generating a certificate and sharing it between both components.
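
The shared certificate can be generated in several ways. As one illustrative sketch, Terraform's tls and kubernetes providers can create a self-signed certificate with the kong_clustering common name that Kong's hybrid mode expects by default, stored as the kong-cluster-cert secret mounted by the values file above:

# Sketch: generate the shared cluster certificate for Kong hybrid mode.
resource "tls_private_key" "kong_cluster" {
  algorithm   = "ECDSA"
  ecdsa_curve = "P384"
}

resource "tls_self_signed_cert" "kong_cluster" {
  private_key_pem       = tls_private_key.kong_cluster.private_key_pem
  validity_period_hours = 8760 # one year
  allowed_uses          = ["digital_signature", "key_encipherment", "server_auth", "client_auth"]

  subject {
    common_name = "kong_clustering" # CN Kong expects by default in hybrid mode
  }
}

# Store the certificate as the secret mounted by both Kong deployments.
resource "kubernetes_secret" "kong_cluster_cert" {
  metadata {
    name      = "kong-cluster-cert"
    namespace = "kong" # assumed namespace
  }
  type = "kubernetes.io/tls"
  data = {
    "tls.crt" = tls_self_signed_cert.kong_cluster.cert_pem
    "tls.key" = tls_private_key.kong_cluster.private_key_pem
  }
}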

Once Kong is up and running on GKE, you can integrate it with various workloads deployed within the same cluster. This integration streamlines your API management, allowing you to enforce consistent policies and monitor traffic across all services. Whether you are running microservices, databases, or other applications, Kong on GKE ensures seamless communication and high availability.

Leveraging Cloud Run for Dynamic Workloads

In addition to deploying workloads on GKE, Cloud Run offers a unique advantage for handling dynamic and spiky workloads. Cloud Run automatically scales your stateless containers up and down, providing a serverless environment that only charges for active request handling. This makes it perfect for applications with unpredictable traffic patterns, such as user authentication services or event-driven APIs.

To set up this architecture, an internal Application Load Balancer (ALB) is used as a front end for Cloud Run, while Cloud DNS provides a custom HTTPS endpoint that Kong can route requests to. The internal ALB is configured with a certificate created using the Google Cloud Certificate Authority Service (CAS): a root CA certificate is generated, and a certificate signed by that CA is provisioned on the ALB. The root CA certificate is then provisioned in Kong to ensure secure communication with the ALB and Cloud Run.
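
A condensed Terraform sketch of the Cloud Run side of this wiring, with placeholder names, region, and hostname (the blueprint contains the full configuration, including the ALB itself and the CAS-issued certificate):

# Sketch: expose the Cloud Run service to the internal ALB via a serverless NEG.
resource "google_compute_region_network_endpoint_group" "cloudrun_neg" {
  name                  = "cloudrun-neg"
  region                = "europe-west1" # placeholder region
  network_endpoint_type = "SERVERLESS"

  cloud_run {
    service = google_cloud_run_v2_service.app.name
  }
}

# Sketch: a private DNS record so Kong can reach the ALB by a custom hostname.
resource "google_dns_record_set" "api" {
  managed_zone = google_dns_managed_zone.internal.name # assumed private zone
  name         = "api.example.internal."               # placeholder hostname
  type         = "A"
  ttl          = 300
  rrdatas      = [google_compute_address.alb.address] # assumed ALB frontend IP
}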

Finally, you configure Kong's HTTP routing via its Admin API so that requests are forwarded through the internal ALB to Cloud Run. All of this provisioning and configuration is done automatically by the Terraform blueprint to ease deployment and testing. For a production-ready installation, please refer to the official Kong Gateway documentation.
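
As an illustration of what that routing configuration might look like against Kong's Admin API, assuming it is reachable on localhost:8001 (for example through a port-forward), and with placeholder service, route, and hostname values:

# Sketch: register the ALB-fronted Cloud Run endpoint as a Kong service and route.
resource "null_resource" "kong_routing" {
  provisioner "local-exec" {
    command = <<-EOT
      curl -s -X POST http://localhost:8001/services \
        --data name=cloud-run-service \
        --data url=https://api.example.internal
      curl -s -X POST http://localhost:8001/services/cloud-run-service/routes \
        --data name=cloud-run-route \
        --data "paths[]=/cloudrun"
    EOT
  }
}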

Conclusion

The beauty of this setup lies in the seamless integration of Cloud Run with Kong API Gateway. Kong can route requests to both GKE-based services and those running on Cloud Run, creating a unified API management layer. This hybrid approach allows you to optimize resource utilization and cost-effectiveness, ensuring that each workload is deployed in the most suitable environment. By centralizing API management with Kong, you maintain control and visibility over all your services, regardless of where they are running. Whether you’re dealing with consistent high-traffic applications or services with variable demand, this setup provides the flexibility and reliability needed to support modern application requirements.

Julio Diez is a Strategic Cloud Engineer at Google Cloud, focused on Networking and Security.