NGINX Ingress or GKE Ingress?

Glen Yu
4 min read · Jan 23, 2022


There are tons of ingress controllers out there in the Kubernetes ecosystem, so how do you know which one is right for you? In this article I will describe two popular options for Google Kubernetes Engine (GKE) and help ease your decision-making process.

What is “Ingress”?

An Ingress is an API object that manages external access to services in a cluster, typically over HTTP(S). It is made up of two components:

  1. Ingress resource that provides the routing rules
  2. Ingress Controller that implements said rules

Each Ingress Controller has its own feature set and settings that are typically configured through annotations. For the remainder of the article I will simply refer to the deployment of both components as “Ingress”.

Deployment setup

For this example I will be using the fake-service container from Nic Jackson. It is a very versatile and lightweight container capable of setting up multiple microservice HTTP endpoints all from the same image!

  • web.yaml (and a similar one for payments.yaml and currency.yaml for a total of 3 microservices):
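A minimal web.yaml might look like the sketch below (the image tag, replica count, and environment values are illustrative; fake-service's NAME and MESSAGE variables control the name and body it echoes back — swap in payments or currency to produce the other two manifests):

```yaml
# Sketch of web.yaml -- illustrative values, not an exact reproduction
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nicholasjackson/fake-service:v0.23.1
        ports:
        - containerPort: 9090   # fake-service listens on :9090 by default
        env:
        - name: NAME
          value: "web"
        - name: MESSAGE
          value: "Response from web"
---
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 9090
```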

The cloud.google.com/neg: '{"ingress": true}' annotation can be ignored for now. It describes the default behavior of GKE Ingress and will not apply to the NGINX Ingress example.

NGINX Ingress

NGINX Ingress has been around a long time and many of us have probably even used an NGINX reverse proxy at some point. NGINX is very popular so you probably will not be surprised to learn that there are a few NGINX-based Ingress Controllers out there, but the one I will be focusing on in this article is the one maintained by the Kubernetes team, ingress-nginx.

I will be using Helm to install the Ingress Controller:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx

This will provision an External TCP/UDP Network Load Balancer. Once you have an external IP, you can apply the Ingress rules:
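A minimal set of rules for the three services might look like this (the host and ports are illustrative):

```yaml
# Sketch of the NGINX Ingress rules -- illustrative, not an exact reproduction
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-rules
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: web
      port:
        number: 80
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /payments
        pathType: Prefix
        backend:
          service:
            name: payments
            port:
              number: 80
      - path: /currency
        pathType: Exact
        backend:
          service:
            name: currency
            port:
              number: 80
```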

Here is an example that takes advantage of NGINX’s rewrite feature which will take mysite.example.com/hello/api/path and rewrite it to mysite.example.com/api/path and route the traffic to the payments backend service:
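A sketch of such a rule, using the capture-group form of ingress-nginx's rewrite-target annotation ($2 expands to whatever the second capture group matched, so /hello/api/path becomes /api/path):

```yaml
# Sketch of a rewrite rule -- illustrative, not an exact reproduction
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /hello(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: payments
            port:
              number: 80
```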

NOTE: To test the Ingress rules, you will need to include the correct host header in your curl command; otherwise it will hit the defaultBackend instead.
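For example (the IP below is a placeholder; substitute the external IP assigned to your load balancer):

```shell
# Placeholder IP -- use the EXTERNAL-IP of the nginx-ingress controller Service
EXTERNAL_IP=203.0.113.10
curl -H "Host: mysite.example.com" http://$EXTERNAL_IP/hello/api/path
```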

{
  "name": "payments",
  "uri": "/api/path",
  "type": "HTTP",
  "ip_addresses": [
    "10.0.0.14"
  ],
  "start_time": "2022-01-20T18:32:54.233376",
  "end_time": "2022-01-20T18:32:54.233479",
  "duration": "102.974µs",
  "body": "Response from payments",
  "code": 200
}

GKE Ingress

Unlike NGINX, there is no setup required for a GKE Ingress Controller, so you just have to apply the rules. Below is an example of a GKE Ingress rule:
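Something along these lines (the host and service names mirror the earlier examples; the details are illustrative):

```yaml
# Sketch of a GKE Ingress rule -- illustrative, not an exact reproduction
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gke-rules
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 80
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /payments
        pathType: ImplementationSpecific
        backend:
          service:
            name: payments
            port:
              number: 80
      - path: /currency
        pathType: ImplementationSpecific
        backend:
          service:
            name: currency
            port:
              number: 80
```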

NOTE: You may notice that I used Prefix and Exact pathType in the NGINX Ingress rules but ImplementationSpecific in the GKE Ingress ones. There is no particular reason for this; ImplementationSpecific is the default, and matching is up to the controller. If you find that your Ingress rules are not matching the way you expect, you can specify the pathType explicitly.

At a glance, you may think it does not look that much different from the NGINX version — and you would be right. What differs is what lies underneath. Remember the cloud.google.com/neg: '{"ingress": true}' annotation in the Kubernetes Service? This creates an NEG after the Ingress is created, but what exactly is an NEG?

Network Endpoint Groups (NEGs)

NEGs are a feature you will only find in GKE. Traditionally, the load balancer would distribute traffic to the VM backends serving an application. With a microservice architecture, however, the node that receives the traffic does not necessarily host the pod that provides the service, which often requires extra hops and adds latency.

(If you would like to dive deeper into GCP’s load balancers, I recommend this article from a fellow GDE)

Image from Google

Google introduced NEGs as a way to enable container-native load balancing. Through NEGs, which are integrated with the ingress controller running on GCP, the load balancer gains visibility into the pods. This allows the load balancer to route traffic to — and perform health checks against — the pods directly! To look at this from a traditional point of view, NEGs are akin to the Managed Instance Groups (MIGs) of old, and Kubernetes pods are the VMs of old.

Which Ingress Controller is right for me?

The answer to this question boils down to whether you need the rewrite (or other) functionality that only NGINX Ingress provides, as GKE Ingress has no rewrite capability.

With GKE Ingress, there is no setup required, and with NEGs you get improved latency — win-win! By default, the kubernetes.io/ingress.class: "gce" annotation provisions an external HTTP(S) load balancer, but you can deploy an internal HTTP(S) load balancer instead by setting ingress.class: "gce-internal".
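On the Ingress metadata, that looks like the following (a fragment, not a full manifest; an internal load balancer also requires a proxy-only subnet in your VPC):

```yaml
# Fragment -- switches the GKE Ingress to an internal HTTP(S) load balancer
metadata:
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
```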

Additional features with GKE Ingress

Because GKE Ingress is tightly integrated with your Google Cloud VPC network, there is a long list of additional features you can add in a BackendConfig which your Kubernetes Service can reference in its annotations. Through the BackendConfig, you can attach features such as Cloud CDN, Cloud Armor security policies, Identity-Aware Proxy (IAP), etc. See here for a complete list of features.
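A sketch of what this could look like, enabling Cloud CDN and attaching a Cloud Armor policy (the policy name is a placeholder for a policy you have already created):

```yaml
# Sketch of a BackendConfig and the Service annotation that references it
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: web-backendconfig
spec:
  cdn:
    enabled: true
  securityPolicy:
    name: my-armor-policy   # placeholder Cloud Armor policy name
---
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "web-backendconfig"}'
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 9090
```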


Glen Yu

Cloud Engineering @ PwC Canada. I'm a Google Cloud GDE, HashiCorp Ambassador and HashiCorp Core Contributor (Nomad). Also an ML/AI enthusiast!