Kubernetes ingress: use the built-in GCE controller in GKE [2/3]

Damiano Giampaoli · omni:us · Jul 26, 2018

In our first post about the Kubernetes ingress, we compared two different ingress implementations, GCE and nginx, highlighting the pros and cons of using one controller instead of the other.

In this post, we are going to describe all the steps required to set up a Kubernetes ingress using the Google Kubernetes Engine (GKE), which provides default ingress-controller support that relies on Google Compute Engine (GCE) resources.

GKE and GCE are components of the Google Cloud Platform (GCP), so if you are running ingress on AWS or on-premises, the code reported here will not work. Having a look at it could still be interesting, though, if you are new to ingress.

Let’s start with a diagram:

  • The yellow ovals are the components we have to configure ourselves, by applying k8s manifest files
  • The white sticky notes represent the k8s manifest files
  • The grey boxes represent GKE and the ingress-controller it provides
In yellow, the components to configure; in grey, the support from GKE

On paper, the only step needed with the GCE controller is to create the ingress configuration itself. In reality, there are a bunch of other things that must be checked and taken into account in order to have a fully operational ingress.

This article assumes that you have already set up what is represented by the Backend yellow oval, that is:

  • a group of webapps or microservices running as pods that you want to expose to the internet
  • a NodePort service exposing each of those pods (a sketch follows this list)

NOTE that services of type LoadBalancer can't be used with the current (April 2018) GCE ingress implementation.

With that in place, this article explains how to create a basic HTTP API layer for an enterprise system running on Kubernetes.
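
As a reference, here is a minimal sketch of what one of those NodePort services could look like; the service name, label, and ports are illustrative and not taken from the actual test system manifests:

apiVersion: v1
kind: Service
metadata:
  name: echo-server           # illustrative name, matching the test system below
spec:
  type: NodePort              # required by the GCE ingress-controller
  selector:
    app: echo-server          # assumes the pods are labeled app: echo-server
  ports:
  - port: 80                  # port the ingress forwards traffic to
    targetPort: 8080          # port the container actually listens on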

What does “ingress-controller implemented with GCE” mean?

An ingress can be seen as the Kubernetes abstraction of an L7 load balancer (where L7 refers to layer 7 of the OSI model).

But as explained in the overview section of the GCE ingress-controller README [4], there is no single resource in GCE representing an L7 load balancer. So, expect to find some new global forwarding rules, URL maps, and instance groups in your GCP dashboard.
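
For instance, once the ingress described below is up, you can list some of the pieces the controller has provisioned with gcloud (the exact resource names will vary per cluster):

$ gcloud compute forwarding-rules list
$ gcloud compute url-maps list
$ gcloud compute backend-services list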

So, long story short, what needs to be done?

Here is a todo list to follow in order to successfully create an ingress:

Base step: you need to have some backends to expose through an ingress… if you don't have them yet, there's no reason to use an ingress at all. If you want a quick, basic environment to test with, create a new 1.8.x cluster and run this manifest.

Then:

  1. Create a basic ingress with forwarding rules
  2. Be sure to have properly configured the healthcheck services on the backends
  3. The default timeout is 30 seconds. In order to increase it, you have to use the GCP console

… and don’t forget that after each change to the ingress configuration, you have to wait up to 10 minutes before having it fully operational. In the meantime, you can get the 503 error calling the endpoints managed.

Let’s explain each step in detail now.

1. Apply a basic config with forwarding rules only

In order to create a scenario to test our ingress, we have built a small test system.

The test system's manifest files can be found in the omni:us repository on GitHub; the system is composed of 3+1 backends:

  • 3 are explicitly configured by us: two instances of an echo server, called echo-server and echo-server-2, plus an HTTP server called test-webserver
  • 1 more backend is then created by the GCE ingress-controller: the default backend, handled by k8s and used when the path doesn't match any of the 3 rules specified

Below, we show only the manifest file used to deploy the ingress configuration and its related forwarding rules to our test system. After connecting to your k8s cluster, apply the file below, called ingress-manifest.yaml:
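
What follows is a minimal sketch of that manifest rather than the original file: the paths and the rule for each backend are illustrative, and on a 1.8.x cluster the extensions/v1beta1 Ingress API applies.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"    # make sure the GCE controller handles it
spec:
  rules:
  - http:
      paths:
      - path: /echo
        backend:
          serviceName: echo-server        # NodePort service of the first echo server
          servicePort: 80
      - path: /echo2
        backend:
          serviceName: echo-server-2      # NodePort service of the second echo server
          servicePort: 80
      - path: /web
        backend:
          serviceName: test-webserver     # NodePort service of the http server
          servicePort: 80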

$ kubectl apply -f ingress-manifest.yaml

Then, to monitor the ingress status, keep a terminal open on:

$ watch kubectl describe ingress

The describe command summarizes the status of the deployed ingress. Take note of the url-map value, because we will use it later for monitoring through the GCP dashboard.
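
A quick way to retrieve that value from the command line is the url-map annotation that the GCE controller writes on the ingress object (the ingress name default-ingress is illustrative):

$ kubectl get ingress default-ingress -o yaml | grep url-map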

2. Set healthcheck services on the backends

In order to use the Kubernetes ingress feature as a reverse proxy, every backend needs to provide a health service which responds with a 200 when the base path of the host is called.

Without such a service, Kubernetes marks the backend as UNHEALTHY, and attempting to hit it through the ingress returns a 502 error, as reported in [1][2].

In the output of the describe ingress command from the previous paragraph, all 4 backends are listed, but one of the 4 is marked as Unknown: Kubernetes still has to mark it as HEALTHY or UNHEALTHY.

This constraint can be tedious:

  • Imposing a health service on the backend by design sounds like a good choice. Still, when there is a need to deploy quick-and-dirty containers to run specific experiments, it can happen that you forget to add the health service.
  • Imposing its interface as well (a 200 response strictly on the base path) is surely too much; declaring a readinessProbe gives back some control over the path, as sketched below.
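
The GCE ingress-controller can pick up the path of an HTTP readinessProbe when it creates the GCE health check (with some caveats documented in the README [4]). Here is a sketch of the relevant pod spec fragment, with an illustrative image, port, and path:

containers:
- name: echo-server
  image: gcr.io/google-containers/echoserver:1.8   # illustrative image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz         # the controller can reuse this path for the health check
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10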

3. Configure the timeouts

Each backend related to a reverse proxy rule has its own timeout setting.

By default, that timeout is set to 30 seconds. At first glance, that sounds like a well-chosen default. However, we hit a bottleneck during testing when some of our scientific processes took more than 30 seconds.

Day after day, we deal with REST services that trigger a chain of deep learning models, which can end up taking more time than we initially estimated, sometimes as long as a few minutes.

Have a look at [3] to read how to change the timeout using the GCP web interface.
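
If you prefer the command line, the same setting can also be changed with gcloud: find the backend service created for your rule with the list command, then raise its timeout (the backend-service name below is illustrative):

$ gcloud compute backend-services list
$ gcloud compute backend-services update k8s-be-31234--e940260f95a267b9 --global --timeout=300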

Run some requests

Okay, the ingress now seems to be working, so let's run some requests with Apache ab against those endpoints and see what happens.

The test cluster I created has only two very small nodes, so I don't expect great performance under load.

For each of the 3 backends, run some requests like:

$ ab -n 100 -c 3 http://35.186.233.211/echo

and play a bit with the number of concurrent requests (-c).
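
For example, a small sweep over the three backends could look like this (the /echo2 and /web paths follow the illustrative manifest from Step 1):

$ ab -n 100 -c 3 http://35.186.233.211/echo
$ ab -n 100 -c 10 http://35.186.233.211/echo2
$ ab -n 100 -c 20 http://35.186.233.211/web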

After running some benchmarks, go to the Load Balancer section of the GCP dashboard and search for the load balancer using the url-map id we got at the end of Step 1.

[Chart for the load balancer k8s-um-default-ingress--e940260f95a267b9: number of requests per second handled by the system, grouped by backend]

Conclusions

In our previous post, we gave the nginx ingress-controller a thumbs up, but we would still like to keep an eye on the GCE controller, especially to see whether certain limitations get removed. It would then be a nifty tool in the toolbox of a tech team that loves Kubernetes (just like us at omni:us!).

Check the GCE ingress-controller GitHub project README [4] for all of the documented limitations and future improvements.

References

[1] Read the section Remarks in
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer

[2] About the 502 HTTP error returned by an unhealthy backend: https://stackoverflow.com/questions/42101808/ingress-gives-502-error

[3] Configure ingress timeouts from the GCP web dashboard
https://stackoverflow.com/questions/36200528/how-to-configure-ingress-request-timeouts-on-gke

[4] Potential improvements list for ingress-gce
https://github.com/kubernetes/ingress-gce
