Expose Kubernetes Applications using Gateway API

Patrik Hörlin
Published in Predictly on Tech
Dec 25, 2023

The long-lived Ingress resource has been with us since the early days of Kubernetes, and it has always been somewhat of a pet peeve of mine. The main issue I have found with it is the way it combines typical infrastructure concerns, shared across the platform, with application- and team-specific needs.

Gateway for public access to your Kubernetes applications
Photo by Victor Lu on Unsplash

Luckily, the new Gateway resource solves this issue for us. Let’s have a look at it.

The problem with Ingress

Here is a definition of an Ingress that exposes the echoserver service.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-gw-api
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: lb-ingress-ip
  namespace: ingress-gw-api
  annotations:
    cnrm.cloud.google.com/project-id: predictly-conf-frontend-lab
spec:
  addressType: EXTERNAL
  description: GKE load balancer IP
  location: global
  networkTier: PREMIUM
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lb-ingress
  namespace: echoserver
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "lb-ingress-ip"
spec:
  tls:
  - secretName: lb-ingress-predictly-se
  rules:
  - host: lb-ingress.predictly.se
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoserver
            port:
              number: 8080

Applying this first creates a global external IP address to use as the frontend for the classic load balancer that is created as part of the Ingress. Secondly, it registers that all calls made to lb-ingress.predictly.se, over both HTTP and HTTPS, are sent to the echoserver service.

The main problem with this is how it mixes infrastructure-related concerns (managing a global public IP) with application-specific concerns (exposing an application on a given path).

Ideally, we would be able to split these concerns into separate deployment units, managed by different teams and with independent life cycles. This would reduce the risk of one team accidentally removing access to all of the production workloads.

There are other problems as well, such as having to define the Ingress in the same namespace as the workload, which makes it hard to share.

Gateway API

Using the new Gateway API, we can separate the two concerns of infrastructure and application much better. Instead of having one resource that governs all aspects, we can split it into distinct units.

Gateway API architecture, from GKE official documentation

I will not spend too much time discussing each concept here; in summary:

  • Google provides a handful of different GatewayClass implementations
  • A platform administrator sets up Gateway instances using one of these implementations
  • An application developer (Service Owner above) defines an HTTPRoute that exposes their service through a specific Gateway

By leveraging this resource, we can see how we split the different concerns of exposing applications in a much better way. A single application developer now has a much smaller area of concern and cleaner focus on their deliverable.

Setting it up

To use the Gateway API, we must first enable it on the cluster; in this example I am using Terraform.

# GKE cluster
resource "google_container_cluster" "euw3_gke1" {
  provider = google-beta
  name     = "euw3-gke1"
  project  = var.GOOGLE_PROJECT_ID
  location = var.region

  ...

  gateway_api_config {
    channel = "CHANNEL_STANDARD"
  }
}
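If you do not use Terraform, the same setting can be enabled on an existing cluster with gcloud. The cluster name below matches the Terraform example; the region is my assumption (the `euw3` prefix suggests europe-west3), so adjust it to your own cluster.

```shell
# Enable the Gateway API controller (standard channel) on an existing GKE cluster.
# Cluster name taken from the Terraform example above; region is an assumption.
gcloud container clusters update euw3-gke1 \
  --location europe-west3 \
  --gateway-api=standard
```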

This update is safe to do on an existing cluster; it does not force a complete rebuild. Once done, we can see which GatewayClasses are available:

patrik@euw3-admin-host:~/ingress$ kubectl get gatewayclasses
NAME                               CONTROLLER                  ACCEPTED   AGE
gke-l7-global-external-managed     networking.gke.io/gateway   True       6d12h
gke-l7-gxlb                        networking.gke.io/gateway   True       6d12h
gke-l7-regional-external-managed   networking.gke.io/gateway   True       6d12h
gke-l7-rilb                        networking.gke.io/gateway   True       6d12h

We can now go ahead and create our first Gateway, i.e. the resource that will allow external traffic into our cluster.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-gw-api
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: gateway-ip
  namespace: ingress-gw-api
  annotations:
    cnrm.cloud.google.com/project-id: predictly-conf-frontend-lab
spec:
  addressType: EXTERNAL
  description: GKE gateway IP
  location: global
  networkTier: PREMIUM
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: gw-ingress
  namespace: ingress-gw-api
  annotations:
    networking.gke.io/certmap: gw-ingress-demo
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
      namespaces:
        from: All
  addresses:
  - type: NamedAddress
    value: gateway-ip

This resource declares a listener on port 443, i.e. an HTTPS listener that will use certificates from the certificate map gw-ingress-demo. The load balancer is global and uses IP Anycast, so a client's connection terminates at the closest Google edge location and the traffic then travels on Google's internal network.

Since we don’t want our application teams to expose their applications over plain HTTP, we have not defined an HTTP listener. Finally, the allowedRoutes section allows teams to attach HTTPRoutes from any namespace to this Gateway instance.
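If `from: All` is too permissive for your organisation, allowedRoutes can instead restrict attachment to labelled namespaces. A sketch of that variant (the label name here is my own, not part of the setup above):

```yaml
# Only allow HTTPRoutes from namespaces carrying an opt-in label.
allowedRoutes:
  kinds:
  - kind: HTTPRoute
  namespaces:
    from: Selector
    selector:
      matchLabels:
        gateway-access: "true"  # hypothetical label chosen by the platform team
```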

At this point, we can see the resource in the Google Cloud console.

Gateway ingress in Google Cloud console

Exposing a Service

Now that the Gateway has been set up, we can let our teams start using it. First, let’s look at the actual deployment and service:

apiVersion: v1
kind: Namespace
metadata:
  name: echoserver
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  namespace: echoserver
spec:
  selector:
    matchLabels:
      app: echoserver
  replicas: 2
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: k8s.gcr.io/e2e-test-images/echoserver:2.5
        imagePullPolicy: Always
        name: echoserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: echoserver
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

A namespace, a deployment and a service. Notice the annotation on the service: Google still supports container-native load balancing via Network Endpoint Groups (NEGs), which routes traffic from the load balancer directly to the pods.
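As far as I know, the GKE NEG controller materialises these as ServiceNetworkEndpointGroup resources in the service's namespace, so you should be able to verify that the NEGs were created:

```shell
# List the NEGs the controller created for services in this namespace (GKE-specific CRD).
kubectl get svcneg -n echoserver
```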

kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  namespace: echoserver
  name: echoserver
spec:
  parentRefs:
  - kind: Gateway
    name: gw-ingress
    namespace: ingress-gw-api
  hostnames:
  - "gw-ingress.predictly.se"
  rules:
  - backendRefs:
    - name: echoserver
      port: 8080

With this configuration, all requests to https://gw-ingress.predictly.se are directed to the echoserver service. We can also see that this HTTPRoute wants to attach to the Gateway named gw-ingress in the ingress-gw-api namespace.

patrik@euw3-admin-host:~/ingress$ curl https://gw-ingress.predictly.se -k


Hostname: echoserver-d46bc6b9-h59qv

Pod Information:
-no pod information available-

Server values:
server_version=nginx: 1.14.2 - lua: 10015

Request Information:
client_address=35.191.13.201
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://gw-ingress.predictly.se:8080/

Request Headers:
accept=*/*
host=gw-ingress.predictly.se
user-agent=curl/7.88.1
via=1.1 google
x-cloud-trace-context=b4f6789785edd4c896e1663d6536e9fb/14557885252934923619
x-forwarded-for=35.198.181.226,34.128.166.39
x-forwarded-proto=https

Request Body:
-no body in request-
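The rules section can also match more selectively than the catch-all above. A sketch of a path-prefix match (the /echo path is hypothetical, not part of the setup above):

```yaml
# Route only /echo traffic to echoserver; unmatched requests get the
# Gateway's default 404 response.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /echo
  backendRefs:
  - name: echoserver
    port: 8080
```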

We can examine the HTTPRoute in the GKE console

Gateway in GKE console

The corresponding load balancer instance is what actually handles the incoming traffic.

Load balancer in Networking console

Conclusion

By leveraging the new Gateway API, we can securely split the concerns of exposing our applications to external parties between different teams in our organisation.

The Gateway API also supports many other features, such as HTTP to HTTPS redirects, CDN caching, etc. More on these in upcoming articles.
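As a taste of the redirect support: assuming the Gateway above gained an additional port 80 listener named "http", an HTTPRoute can issue the redirect via a filter. This is a sketch based on the upstream Gateway API specification, not something deployed in the setup above:

```yaml
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: redirect-to-https
  namespace: ingress-gw-api
spec:
  parentRefs:
  - kind: Gateway
    name: gw-ingress
    namespace: ingress-gw-api
    sectionName: http  # assumes a port 80 listener named "http" was added
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https  # implies a 301/302 to the same host and path over HTTPS
```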
