Global ingress in practice on Google Container Engine — Part 1: Discussion

Christopher Grant
Google Cloud - Community
6 min read · Sep 18, 2017

In this article I’ll cover a variety of challenges I faced, and the solutions I figured out, when deploying a real app to a global federated cluster using the GCE ingress controller. I’ve covered how to set up Global Kubernetes in 3 steps in a separate article; this article will focus on how to use it once it’s set up. In part 1 I’ll discuss the concepts, and in part 2 we’ll do an end-to-end deployment with real code.

Contents:
- Simple Example
- Variations
- Global IP
- Annotations
- Health Checks
- Node port type and port
- Path contexts
- Cluster balancing

Having multiple deployed services respond under one domain name is a common practice in larger applications. With Kubernetes you can expose Deployments as independent Services using ClusterIP, NodePort, or LoadBalancer types. You can also expose multiple Services as a single virtual entity using Ingress resources.

In theory Ingress resources are straightforward and easy to use, but in practice there can be a steeper learning curve. In this article we’ll review the basics of creating an Ingress resource and some quirks you’ll encounter in real life.

Simple Example

There are three main resources involved in this process: the Deployment, its Service, and the Ingress itself. Let’s review a simple Hello World ingress.

From the Kubernetes documentation for the Ingress resource we can see many of the key elements.

Ingress example YAML from the docs:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Just covering what’s here: any requests to host foo.bar.com will be processed by the rules contained in that block. Requests to foo.bar.com/foo will route to service s1 on port 80, and requests to foo.bar.com/bar will route to service s2 on port 80.
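If you want to try this yourself, the manifest can be applied and inspected with kubectl. A quick sketch, assuming it’s saved as ingress.yaml and that services s1 and s2 already exist:

kubectl apply -f ingress.yaml

# The ADDRESS column will populate once GCE finishes
# provisioning the load balancer (this can take a few minutes)
kubectl get ingress test -w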

Variations not explicitly listed in the docs

All Hosts
If you don’t want to deal with the hostname, you can omit the host value, and the path rules alone will be evaluated for any and all hosts/IPs.

Here’s an example of that:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Default Backend
The docs discuss a Single Service Ingress option, which uses a default backend and no rules. You can combine this with path rules to define both your own default backend and additional rules. Here’s an example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Here any request to this ingress for /foo will go to service s1, any request for /bar will route to s2, and all others will route to testsvc.

It’s important to note that if you don’t define a default backend, Kubernetes will create one for you behind the scenes. Also, the backend created for you exists in only one cluster. You’ll see this when looking at the load balancer backends created in GCP later.
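A quick way to see this for yourself, assuming the ingress above is named test, is to describe the ingress and list the backend services GCP created for it:

# The auto-created default backend shows up in the ingress description
kubectl describe ingress test

# It also appears among the load balancer backend services in GCP
gcloud compute backend-services list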

Global IP

This is another important point for global ingress. You’ll need to explicitly create and use a global IP from Google. The default ephemeral IPs are only regional and won’t be able to support backend services from different regions.

I spent a very long time on this one. DON’T MISS IT.

From the command line, create a global IP:

gcloud compute addresses create ingress-ip --global
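To confirm the address really is global (and to grab the IP for your DNS records):

# A regional address here would mean the ingress can't span regions
gcloud compute addresses describe ingress-ip --global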

Then in your ingress.yaml, reference it with an annotation like so:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ingress-ip
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Ingress Annotations

In the metadata section of the Ingress definition, various annotations can be provided to help Kubernetes better understand your intentions.

Ingress Controller
Depending on where and how you deploy Kubernetes, you can choose what will actually act on the ingress YAML definition you provide. Many of the documents refer to nginx as the ingress controller, but for this example I’ll be showing how to use the native GCE controller.

While not required, it’s a good practice to add an annotation noting which controller you intend to use. This is helpful when, for example, there are multiple options within a given environment.

Because I want to use GCE as the ingress controller, I’ll specify it explicitly using the kubernetes.io/ingress.class: "gce" annotation:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "gce"
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80

Health Checks

By default the ingress will set up load balancer health checks by pinging the root of your service. This is important to know in case you choose not to route "/" on your service. To ensure your app registers its health correctly, either provide a "/" route or configure liveness and readiness probes for your needs.

I ended up just leaving the root context in the app for simplicity.
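If you’d rather not serve "/", the readiness probe route is the alternative: the GCE controller can adopt an HTTP readiness probe on the pod as the load balancer’s health check. Here’s a minimal sketch; the /healthz path and the image name are placeholders for whatever your app actually exposes:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: myrepo/app1:v7
        ports:
        - containerPort: 80
        # With this in place the load balancer health check can poll
        # /healthz instead of defaulting to "/"
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10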

NodePort

Here’s another item that made complete sense only after working through it. First off, using the GCE ingress controller requires a service exposed as type NodePort; ClusterIP alone won’t work. Secondly, when deploying in a federated cluster, the node port for your container needs to be the same in all clusters, so we need to define it explicitly. By default each service would get a random port for type NodePort, but in a federated model we need them to be the same so the health check is accurate. Once up and running, you’ll see the health check polling the same port on all nodes for your app.

To define this, we’ll specify the exact values we want in the Service definition for our Deployment. Here’s an example:

apiVersion: v1
kind: Service
metadata:
  name: s1
  labels:
    app: app1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30041
  selector:
    app: app1

You’ll need to define a different node port for each service so there are no collisions.
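For instance, a matching definition for the second service might look like this, assuming a second app labeled app2; 30042 is arbitrary, it just has to differ from s1’s 30041:

apiVersion: v1
kind: Service
metadata:
  name: s2
  labels:
    app: app2
spec:
  type: NodePort
  ports:
  - port: 80
    # Any unused port in the default 30000-32767 range works,
    # as long as no two services share one
    nodePort: 30042
  selector:
    app: app2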

Path Contexts

This was probably the most annoying issue I faced, and it only reared its head with a real app. So far all the demos and hello world apps had worked fine: /foo routes to svc1, and /bar to svc2. When I deployed a real app, however, things weren’t as clear. I would get the main page for my services, but everything else would revert to the default load balancer backend. Come to find out, there is a [quirk with ingress](https://github.com/kubernetes/contrib/issues/885) in that the nginx and GCE ingress controllers don’t work the same.

Basically, on nginx /foo matches anything with a prefix of /foo, whereas the GCE controller sees it as an exact mapping. To fix this we need to add * mappings to our rule paths in the ingress YAML, as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "gce"
    ## ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /foo/*
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
      - path: /bar/*
        backend:
          serviceName: s2
          servicePort: 80

By adding the additional mappings, any request for /foo or /foo/baz/bar will correctly route to service s1, and any request for /bar or /bar/baz/foo will route to service s2.
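A quick spot check from the command line, with <ingress-ip> standing in for the global IP reserved earlier:

# The exact path and deeper paths should now both hit s1
curl http://<ingress-ip>/foo
curl http://<ingress-ip>/foo/baz/bar

# And likewise for s2
curl http://<ingress-ip>/bar/baz/foo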

Cluster Balancing

For the most part Kubernetes will try to balance the clusters so apps are evenly distributed across them, but you can express your intentions using the federation.kubernetes.io/deployment-preferences annotation, as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 4
  template:
    metadata:
      annotations:
        federation.kubernetes.io/deployment-preferences: |
          {
            "rebalance": true,
            "clusters": {
              "east-cluster": {
                "minReplicas": 1
              },
              "west-cluster": {
                "minReplicas": 1
              }
            }
          }
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: myrepo/app1:v7
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi

The annotation above asks Kubernetes to rebalance, keeping a minimum of one replica in the east cluster and a minimum of one replica in the west cluster.
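Once things settle, you can spot-check the spread per cluster. Assuming your kubectl contexts are named after the clusters (adjust to your own context names):

kubectl --context=east-cluster get pods -l app=app1
kubectl --context=west-cluster get pods -l app=app1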

Conclusion

Kubernetes ingress controllers are a powerful tool, and with a little extra insight they become a simple resource to manage as well. Using the GCE controller type lets you implement ingress quickly and easily on Google Container Engine with no additional resources needed.

I hope this was helpful. Be sure to check out the setup guide and the end-to-end demo.
