Gateway to Kubernetes…

Sander Alberink
Appsbroker CTS Google Cloud Tech Blog
Nov 16, 2023
[Image: AI-generated image of a cyberpunk gateway to Kubernetes]

As part of my excursions into IPv6 support on Google Cloud, a colleague pointed me at some load-balancer options for Kubernetes clusters. One of the contestants was the new Gateway API (now GA, available on Kubernetes v1.24 and above). As I had my testbed already set up and wanted to see how this would work, I whipped up a quick PoC on my cluster.

What does the Gateway API attempt to accomplish?

The Gateway API can be viewed as a superset of the Ingress load-balancing functionality in Google Cloud. The existing Ingress API has a few downsides that lead to complexity when managing larger clusters or multi-cluster setups. The most important one, in my opinion, is that it places control of infrastructure resources in the hands of application developers, without any ability to share those resources across application teams. On Google Cloud, for example, this can lead to a large number of Global Cloud Load Balancers, one for every Ingress defined by an application team, which in turn complicates hosting different applications on the same domain.

The Gateway API tackles these and other use cases by splitting the Ingress in two: the Gateway (potentially managed by a separate infrastructure team) and the actual HTTP routes (managed by the application developers). In addition to the HTTP ingress supported by Google Cloud, the specification adds support for gRPC, TCP and UDP ingresses. The last two options are not yet supported by Google's implementation of the Gateway API, but I assume that support will eventually arrive in the TCP/UDP global load balancer.

How to build a Gateway?

What do we need to get started with this? Well, right off the bat we need a cluster capable of running the Gateway API. Luckily, on Google Cloud this is as simple as enabling the Gateway API on a supported cluster version (1.24+ for GKE Standard, 1.26+ for GKE Autopilot). Some other restrictions also apply.

gcloud container clusters update <clustername> \
    --gateway-api=standard \
    --location=<cluster region or zone>
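
If you're building a fresh cluster instead, the same flag can be passed at creation time (a sketch; fill in your own cluster name and location):

gcloud container clusters create <clustername> \
    --gateway-api=standard \
    --location=<cluster region or zone>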

We can check whether the Gateway API is enabled on a cluster by verifying the presence of the GatewayClass CRD: running kubectl get gatewayclass should yield output similar to the following:

NAME                               CONTROLLER                  ACCEPTED   AGE
gke-l7-global-external-managed     networking.gke.io/gateway   True       16h
gke-l7-regional-external-managed   networking.gke.io/gateway   True       16h
gke-l7-gxlb                        networking.gke.io/gateway   True       16h
gke-l7-rilb                        networking.gke.io/gateway   True       16h

This shows the load balancers the Google Cloud Gateway implementation makes available, with a selection of global and regional, external and internal load balancers.
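
Each class maps to a specific Cloud Load Balancer flavour; you can inspect one in more detail with a describe:

kubectl describe gatewayclass gke-l7-global-external-managed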

Now that we have our cluster prepped, let's deploy the gateway part of the equation:

kind: Namespace
apiVersion: v1
metadata:
  name: infra
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: infra
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
      namespaces:
        from: Same
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
      namespaces:
        from: All
    tls:
      mode: Terminate
      options:
        networking.gke.io/pre-shared-certs: apenrots
  addresses:
  - type: NamedAddress
    value: ipv6-lb
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: redirect
  namespace: infra
spec:
  parentRefs:
  - namespace: infra
    name: external-http
    sectionName: http
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https

This bit of YAML accomplishes the following:

  • The gateway is deployed in the infra namespace
  • It implements a HTTP-to-HTTPS redirect for all traffic
  • It allows contribution of HTTPRoutes from all deployed Kubernetes namespaces on the cluster
  • And it terminates TLS using a named certificate
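
Note that the gateway references two pieces of pre-existing Google Cloud infrastructure: the named address ipv6-lb and the pre-shared certificate apenrots. If you want to replicate this, something along the following lines should create them first (a sketch; substitute your own names and domain):

# Reserve a global external IPv6 address for the load balancer
gcloud compute addresses create ipv6-lb --global --ip-version=IPV6

# Create a Google-managed certificate to pre-share with the gateway
gcloud compute ssl-certificates create apenrots \
    --domains=google-ipv6.apenrots.info --global

After applying the manifest, the gateway takes a few minutes to be programmed; kubectl describe gateway external-http --namespace infra shows its status conditions and the assigned address.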

Next up, a pod and service contributing a route:

apiVersion: v1
kind: Pod
metadata:
  name: site-v1
  labels:
    app.kubernetes.io/name: site-v1
spec:
  containers:
  - name: whereami
    image: us-docker.pkg.dev/google-samples/containers/gke/whereami:v1.2.21
    ports:
    - containerPort: 8080
    env:
    - name: METADATA
      value: "site-v1"
---
kind: Service
apiVersion: v1
metadata:
  name: site-v1
spec:
  selector:
    app.kubernetes.io/name: site-v1
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: site-v1
spec:
  parentRefs:
  - kind: Gateway
    name: external-http
    namespace: infra
  hostnames:
  - "google-ipv6.apenrots.info"
  rules:
  - backendRefs:
    - name: site-v1
      port: 8080

The pod and service specs are simple; the interesting part is the HTTPRoute document. It contributes a route to the named Gateway external-http in the infra namespace for the hostname google-ipv6.apenrots.info and attaches it to the service site-v1. Simple enough, isn't it?
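
A quick smoke test is in order before moving on. Since whereami echoes its METADATA value in its JSON response, a curl against the hostname should come back tagged site-v1 (assuming DNS for the domain points at the named address):

# The HTTP listener redirects to HTTPS (-L follows the redirect);
# the response body should contain "site-v1"
curl -L http://google-ipv6.apenrots.info/

Let's amp it up a little: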

kind: Namespace
apiVersion: v1
metadata:
  name: cts
---
apiVersion: v1
kind: Service
metadata:
  name: site-v2
  namespace: cts
spec:
  selector:
    app: site
    version: v2
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-v2
  namespace: cts
spec:
  replicas: 2
  selector:
    matchLabels:
      app: site
      version: v2
  template:
    metadata:
      labels:
        app: site
        version: v2
    spec:
      containers:
      - name: whereami
        image: us-docker.pkg.dev/google-samples/containers/gke/whereami:v1.2.21
        ports:
        - containerPort: 8080
        env:
        - name: METADATA
          value: "site-v2"
---
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: site-v2
  namespace: cts
spec:
  parentRefs:
  - kind: Gateway
    name: external-http
    namespace: infra
  hostnames:
  - "google-ipv6.apenrots.info"
  rules:
  - matches:
    - path:
        value: /site2
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: site-v2
      port: 8080

Here we deploy a service and a deployment in their own cts namespace, and we contribute an HTTPRoute to reach them. The route is attached to the named gateway again, but it responds to requests under /site2. In addition, the request URL is rewritten to remove the /site2 path prefix before the request is forwarded to the site-v2 service. Nice!
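
With both routes attached to the same gateway and hostname, you can see the fan-out in action (again assuming DNS is in place):

# Served by site-v1 via the route in the default namespace
curl https://google-ipv6.apenrots.info/

# Served by site-v2 in the cts namespace; the gateway strips /site2 before forwarding
curl https://google-ipv6.apenrots.info/site2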

Gateway drugs…

We can spice things up some more by adding additional Google Cloud goodness such as Cloud Armor, and by tweaking our backend config through a GCPBackendPolicy associated with the service. Let's start by creating a Cloud Armor policy and an SSL policy:

gcloud compute security-policies create cts-cloud-armor-policy
gcloud compute security-policies update cts-cloud-armor-policy --enable-layer7-ddos-defense
gcloud compute ssl-policies create cts-ssl-policy --profile=MODERN

Then let's create a backend policy as follows:

apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: cts-backend-policy
  namespace: infra
spec:
  default:
    securityPolicy: cts-cloud-armor-policy
    timeoutSec: 40
  targetRef:
    group: ""
    namespace: cts
    kind: Service
    name: site-v2

This attaches a backend configuration to the service site-v2, with an adjusted timeout of 40 seconds and our newly created Cloud Armor policy.

Note: this is not yet visible in the Google Cloud console; you'll have to check through kubectl, for example with kubectl describe gcpbackendpolicies.networking.gke.io --namespace infra.
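
To double-check on the Google Cloud side, the backend service generated by the Gateway controller should eventually show the policy settings (the backend service name is auto-generated, so list first):

# Find the backend service created for site-v2...
gcloud compute backend-services list

# ...and verify that securityPolicy and timeoutSec reflect the policy
gcloud compute backend-services describe <backend-service-name> --global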

Lastly, let's add the SSL config we just created to the gateway:

apiVersion: networking.gke.io/v1
kind: GCPGatewayPolicy
metadata:
  name: infra-gateway-policy
  namespace: infra
spec:
  default:
    sslPolicy: cts-ssl-policy
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: external-http
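
As with the backend policy, this is easiest to verify through kubectl for now (assuming the same CRD naming convention as above):

kubectl describe gcpgatewaypolicies.networking.gke.io --namespace infra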

Migrating to Gateway API

The Kubernetes SIG Network working group has made a conversion tool, ingress2gateway, available that allows easy in-place conversion of an Ingress to the equivalent Gateway resources. However, it does not currently support Google Cloud load balancers; hopefully this will be remedied soon.
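
For a taste of what that workflow looks like with a currently supported provider, the tool can be installed and run roughly as follows (a sketch based on the project's README; flags may change as the tool matures):

# Install the conversion tool (requires a Go toolchain)
go install github.com/kubernetes-sigs/ingress2gateway@latest

# Print Gateway API equivalents for the Ingresses in the current kube context
ingress2gateway print --providers=ingress-nginx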

Gateway to the future

I hope these examples give some insight into what is possible using the Gateway API on Google Cloud. I haven't touched upon any of the multi-cluster Gateway goodness that has also landed in GA on Google Cloud, nor on the possibilities for regional or internal load balancing. The Gateway API specification has even more interesting use cases in store, but the non-HTTP functionality remains to be implemented in Google Cloud. Google has not announced when the remaining functionality will land, but I expect we will see some preview functionality soon enough. And now back to my IPv6 playground…
