Kubernetes Nginx Ingress Controller for On-Premise Environments

Cagri Ersen
Oct 29, 2019


In my previous post, I mentioned that I would write about nginx ingress controller integration for on-premise kubernetes clusters. If you have such a cluster, you probably want to run an ingress controller in order to expose your apps over HTTP(S), since ingress resources are one of the cleverest ways to achieve this.

So here’s the details:

1. What is Ingress?

As explained in the kubernetes documentation, Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

In other words, when you want to expose applications that reside in the k8s cluster, you configure an ingress resource that defines which inbound traffic goes to which application.

A typical flow of an ingress is as below:

(Figure: a typical ingress traffic flow)

As you can see, it’s a load balancer that routes specific traffic to specific targets in an automated fashion. You just create an ingress resource with the desired definitions, and it does its magic for you.

In this way you can expose as many services as you want from the same IP address. Since this is an L7 load balancer, it supports host-based (e.g. app1.domain.com) and path-based (e.g. /basket) routing, SSL termination, and so on.

Note: Ingress is not the only option for exposing applications outside of the cluster. But since this is not a kubernetes networking post, I won’t cover everything here. If you want to learn more about this topic, https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0 is a good start.

An ingress resource manifest looks something like the example below.
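Here’s a minimal sketch; tea-svc and coffee-svc are assumed to be existing services listening on port 80:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80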

In this example, any HTTP request that contains cafe.example.com as the hostname will be routed to one of two separate services, based on its path. While http://cafe.example.com/tea will be routed to the tea-svc service, http://cafe.example.com/coffee will hit coffee-svc.

2. What is an Ingress Controller?

As its name implies, an ingress controller is the component responsible for managing ingress resources. In other words, if you want your ingress resources to work, you need an ingress controller running on your cluster.

Most cloud providers’ kubernetes services ship with a default ingress controller implementation based on their load balancer solution. For example, when you run your cluster on Google Kubernetes Engine (GKE), the default ingress controller manages Google HTTP(S) Load Balancers. Whenever you create an ingress resource, the GKE ingress controller spins up a GLB to handle the routes defined in that resource. So if you are on a cloud provider, you probably don’t need to think about it; but if your environment is on-premise, then you have to deploy an ingress controller yourself.

Though there are many ingress controllers out there, one of the most popular is the nginx ingress controller, imho. So that’s the one we’ll integrate in this post.

3. Nginx Ingress Controller

Basically, the nginx ingress controller manages a bunch of nginx pods in order to apply the nginx configuration defined by your ingress resources. When you create an ingress, the controller translates the ingress definitions into nginx configuration parameters and applies them to the nginx pods. In other words, it automates nginx configuration based on your ingress resources.

There are two different versions of the controller. One is maintained by the kubernetes community and published from a github repo (its documentation sits at https://kubernetes.github.io/ingress-nginx/). The other is maintained by NGINX, Inc. and can be reached at https://github.com/nginxinc/kubernetes-ingress.

Although we’ll use the kubernetes version of the controller in this story, you can use either of them based on your requirements. If you want to take a look at their differences, just check the comparison document.

3.1. How does it work?

To set up the nginx ingress controller on your cluster, all you need to do is apply the k8s manifests provided by the project. So, for a classic integration with default configurations, you can apply the manifest published at https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
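For example, you can apply it directly from the URL:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml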

This manifest creates a namespace for the ingress controller resources, some configmaps to hold default nginx configurations, a service account with related RBAC resources, and a deployment to run the nginx pods; it also creates a service for the nginx deployment to route traffic via a nodePort.

Actually, this setup is OK and there is nothing wrong with it. But if we use a DaemonSet instead, we don’t need the additional nodePort-typed service, since we can use the hostPort parameter with a DaemonSet resource. In order to do this, I’ve cloned the official manifests to my bitbucket repository and converted them to use a DaemonSet.

If you check the related manifest, you can see our definition:
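Here’s an abridged sketch of that DaemonSet (the image tag and labels follow the upstream manifests of the time, so treat them as illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443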

As you know, a DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.

Also, in the ports section we declare that we want to use hostPort, so that the pods will be reachable via the nodes’ IPs. In other words, our nginx pods will be accessible as nodeIP:80 or nodeIP:443.

Note: Again, if you don’t know much about the differences between nodePort and hostPort, please check https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0

3.2 Integration

OK, let’s create the necessary resources to integrate the controller into our cluster.

First, clone the repo:

git clone https://bitbucket.org/secopstech/nginx-ingress-controller.git

Then, apply the manifests that sit in the configs directory:

kubectl apply -f nginx-ingress-controller/configs/

After a few seconds, all the resources needed by the ingress controller should be created in the ingress-nginx namespace.

To check them, run:

kubectl get all -n ingress-nginx

Output should be something like:
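Something like this, abridged and with illustrative names, IPs and ages:

NAME                                        READY   STATUS    RESTARTS   AGE
pod/default-http-backend-5c6cb84d4-x2f8r    1/1     Running   0          2m
pod/nginx-ingress-controller-4vx7z          1/1     Running   0          2m
pod/nginx-ingress-controller-8k2dm          1/1     Running   0          2m
(...one controller pod per node, 6 in total...)

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/default-http-backend   ClusterIP   10.102.4.191   <none>        80/TCP    2m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   AGE
daemonset.apps/nginx-ingress-controller   6         6         6       6            6           2m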

As you can see, we got 6 nginx ingress controller pods (because my test environment has 6 nodes in total). There is also another pod that serves as the default backend: a service that handles all URL paths and hosts the nginx controller doesn’t understand (i.e., all requests that are not mapped to an Ingress).

3.3 Test the controller

In order to check the controller’s functionality, we’ll create a test application, which resides in the test-app directory of the repo. This folder has some manifests to deploy a simple web service with an ingress definition.

Let’s look at the example ingress manifest:
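A sketch of the two resources it defines (the ingress names, namespace, backend service names, and TLS secret name are assumptions):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: ingress-test
spec:
  tls:
  - hosts:
    - web.domain.com
    secretName: domain-com-tls
  rules:
  - host: web.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: ingress-test
spec:
  tls:
  - hosts:
    - api.domain.com
    secretName: domain-com-tls
  rules:
  - host: api.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 80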

This will create two ingress resources to handle routes for web.domain.com and api.domain.com requests. It also provides SSL termination for our services (with a self-signed certificate for *.domain.com).
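If you ever need to generate such a certificate yourself, a self-signed wildcard cert and its secret can be created roughly like this (the secret name and namespace are assumptions; they must match whatever the ingress references):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=*.domain.com"
kubectl create secret tls domain-com-tls --key tls.key --cert tls.crt -n ingress-test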

Now, create the resource by using kubectl:

kubectl apply -f nginx-ingress-controller/test-app/

And check its status:
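Assuming the test-app manifests put everything in the ingress-test namespace:

kubectl get all -n ingress-test

The output should look something like this (abridged; names, hashes and IPs will differ):

NAME                       READY   STATUS    RESTARTS   AGE
pod/api-7d9d95c6c5-lkh2x   1/1     Running   0          40s
pod/web-6f766f8bc4-m9wrq   1/1     Running   0          40s

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/api   ClusterIP   10.100.12.87    <none>        80/TCP    40s
service/web   ClusterIP   10.109.240.12   <none>        80/TCP    40s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api   1/1     1            1           40s
deployment.apps/web   1/1     1            1           40s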

So we got two deployments, their services, and other related resources for our test-app. If your nginx ingress controller is up and running, you can now access your web and api pods via the https://web.domain.com and https://api.domain.com URLs. (Note that our SSL termination uses a self-signed certificate, so your browser will complain about it; you can ignore the warning.)
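Since domain.com is just a placeholder, you can also test without touching DNS by pinning the hostnames to one of your node IPs with curl’s --resolve flag (the node IP below is from my environment):

curl -k --resolve web.domain.com:443:10.10.10.11 https://web.domain.com/
curl -k --resolve api.domain.com:443:10.10.10.11 https://api.domain.com/

The -k flag tells curl to accept the self-signed certificate.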

So, the ingress controller setup is finished. If you want to delete the test-app deployment, just delete the ingress-test namespace with kubectl delete ns ingress-test

4. Considerations

In this section, I want to point out some production considerations.

4.1 DaemonSet with nodeSelector

In our example we use a DaemonSet for the nginx-ingress-controller, so all of our nodes run an nginx-ingress-controller pod. As I said before, with this setup our ingress resources are accessible from every node’s IP address.

For example, my k8s nodes’ IP addresses are 10.10.10.1{1,2,3,4,5,6}, and the api.domain.com and web.domain.com services are reachable from all of these IPs. So I can put an external load balancer in front of these domains and distribute the traffic across all 6 nodes.

If you have only a few k8s nodes, this won’t be a problem, but as your cluster grows over time, load balancer management and compute resource planning might become more difficult. (You probably don’t want to run dozens of nginx ingress controller pods in total.) So, at that point, a DaemonSet with its “one pod per node” logic might not be a suitable way to run an ingress controller.

In this situation, you can consider running the pods on specific nodes by using the nodeSelector parameter in your DaemonSet. For example, if we want to run our nginx pods on only three of our nodes, we can label those nodes and tell the DaemonSet to run the pods on the nodes which carry that label.

To do this, add a label to your nodes:

kubectl label nodes node1 ingress-controller-node=true
kubectl label nodes node2 ingress-controller-node=true
kubectl label nodes node3 ingress-controller-node=true

Then change the DaemonSet manifest spec something like below:
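Only the relevant part is shown here; note that nodeSelector sits at the pod spec level, next to containers:

spec:
  template:
    spec:
      nodeSelector:
        ingress-controller-node: "true"
      containers:
        - name: nginx-ingress-controller
          ...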

As you’ve noticed, we add the label as a nodeSelector in the pod spec section. After applying this manifest, the nginx pods will run only on our three labeled nodes.

4.2 Nginx Custom Configurations

When an nginx pod starts, its configuration file is injected through a configmap, and the default configmap provided by the project is something like below:
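(As published in the upstream manifests around this release:)

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx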

As you can see, there are no nginx configuration parameters in it. While nginx pods ship with a default nginx configuration that comes from the image, sometimes you might want to adjust the configuration to your needs. To do this, you can add your specific parameters within a data section:
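For example, with a few proxy-related keys from the documented configmap options (the values here are illustrative, not recommendations):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
  proxy-body-size: "50m"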

In this example, we set some proxy-related config parameters in our ConfigMap. So whenever an nginx pod starts, it will take these parameters into account as well.

Note: The above parameters are given as an example and are not for production use.

All supported parameters are documented on https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/

Also, if you want to configure an ingress resource individually, you can use Annotations in ingress resource manifests as explained on https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
