Using NGINX Ingress Controllers on Kubernetes on CentOS 7

Goran Osim · The Startup · Jun 18, 2019

Disclaimer: This is my first post on Medium. I’ll try not to make it a word salad.

Kubernetes is pretty awesome. It changed the game in how we deploy, scale, and maintain applications. While my own tenure with Kubernetes isn’t that long, it doesn’t take an expert to realize that the learning curve can be steep. There are many fantastic people creating all manner of guides, tutorials, and examples for it. Many of these examples use cloud providers, because of how quick and easy it is to get a cluster up and running and begin deploying.

But what if you don’t have the luxury of a cloud provider? Many people need to run Kubernetes outside of the big three cloud providers, or on bare-metal clusters. I’ve learned the hard way that deploying Kubernetes, and the many available add-ons, isn’t as trivial as some tutorials and guides make it appear. This difficulty is exactly what cloud providers smooth over for their customers, and what they charge their fees for.

The Problem

Deploying on bare metal, you typically don’t have a load balancer built into Kubernetes. If you wish to expose an application to users outside of your cluster, you’re forced to use NodePort services, which, apart from leaving you with less-than-eye-pleasing addresses for your applications, have your users hitting your Kubernetes worker machines directly on a specified port. Depending on your deployment and customer requirements, this also potentially raises some security concerns. So how do we route traffic into our Kubernetes cluster?

The Solution

In comes the Kubernetes Ingress API Object to save the day. The Ingress is an API object that manages external access to the services in a cluster. It can provide load balancing, SSL termination and name-based virtual hosting.

The Ingress is only half of the solution, though, and by itself has no real effect. You need an Ingress Controller to actually execute the routing you define in an Ingress. While there are a number of Ingress controllers to choose from, Kubernetes formally supports only ingress-gce/GLBC and ingress-nginx. We’ll be using NGINX’s own Kubernetes Ingress Controller, nginxinc/kubernetes-ingress, which is a separate project from the community ingress-nginx.

Big Picture

[Hand-drawn diagram of the traffic flow described below. Forgive my terrible handwriting.]

This diagram shows our goal workflow.

  1. Our clients will hit any number of application URLs that we have, which will all route to one or more HAProxy machines.
  2. The HAProxy machine will route and load balance all requests it receives to our Kubernetes worker nodes.
  3. Our worker nodes each have an NGINX Ingress Controller running and the incoming requests get routed to the controller.
  4. The Ingress Controller looks at the incoming request, tries to match it to an established ingress resource definition it has and routes it to the correct service.

Seems simple enough, right? Let’s get into actually doing it!

Prerequisites

You’re going to need a couple of things before we get started. As I’m already past my “Word Salad” clause, I won’t cover how to get everything on this list in this post.

  1. A working Kubernetes cluster consisting of at least one Master Node and one Worker Node.
  2. One machine to be used as the HAProxy.
  3. Optional: A domain with which you can create subdomains for your application. I’ll be using one to show-case this example more easily.

Disclaimer: I’ll be hosting this example on AWS for ease of deployment, but I won’t be using any of AWS’s load balancing, so as to show you how to use NGINX Ingress Controllers when those solutions aren’t available to you.

The Ingress Controller

Here’s my environment. I’ve set my cluster up using kubeadm.
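For reference, here’s roughly what my setup reports; the node names, counts, and versions below are illustrative and yours will differ:

```
$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master     Ready    master   12d   v1.14.3
k8s-worker-1   Ready    <none>   12d   v1.14.3
```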

Let’s clone the NGINX Ingress Controller repository.
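This assumes the nginxinc/kubernetes-ingress repository; the directory layout varies between releases, but in releases from around this time the installation manifests live under deployments/:

```
git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
```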

From here, we’re simply going to follow the installation instructions.

Create a namespace and a service account for the Ingress controller:
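In the release I used, the repository provides a single manifest for both (file paths may differ in other releases):

```
kubectl apply -f common/ns-and-sa.yaml
```

This creates the nginx-ingress namespace and a service account of the same name.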

Create a secret with a TLS certificate and a key for the default server in NGINX. If you have your own certificates you can use them here. I’m going to use the self-signed cert:
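The repository ships a manifest containing a self-signed certificate and key for exactly this case:

```
kubectl apply -f common/default-server-secret.yaml
```

If you have your own certificate, put its base64-encoded cert and key into that secret instead.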

Create the NGINX ConfigMap:
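This ConfigMap holds the controller’s NGINX customization options; the defaults are fine for our purposes:

```
kubectl apply -f common/nginx-config.yaml
```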

I have RBAC enabled in my cluster, as will most people, so we also need to run the following.
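This sets up a cluster role and binds it to the service account we created earlier:

```
kubectl apply -f rbac/rbac.yaml
```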

We’ve laid the groundwork for the controller and now we’re ready to deploy it. You can choose to deploy it as a Deployment or a DaemonSet. I’ll be deploying it as a DaemonSet.
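The repository provides a manifest for each option; the Deployment equivalent lives under deployment/ in the same tree:

```
kubectl apply -f daemon-set/nginx-ingress.yaml
```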

And we’ll make sure that it’s running properly.
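Each worker should be running one controller pod in the nginx-ingress namespace (the pod name suffix below is illustrative):

```
$ kubectl get pods --namespace=nginx-ingress
NAME                  READY   STATUS    RESTARTS   AGE
nginx-ingress-h5x2k   1/1     Running   0          40s
```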

HAProxy

So it’s running? Great! Like everything else in Kubernetes, to be able to route traffic to the Ingress controller, you have to expose it. If you chose to deploy the controller as a Deployment, you can expose it via a NodePort service. If you’re using a DaemonSet, like I am, ports 80 and 443 of the Ingress controller container are mapped to the same ports on the node where the container is running, so we don’t need to create anything extra to enable routing to the controller.

Next, we need to install and configure HAProxy.
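On CentOS 7, HAProxy is available straight from the base repositories:

```
sudo yum install -y haproxy
```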

Edit the /etc/haproxy/haproxy.cfg file to point HAProxy at your worker machines. HAProxy will forward all requests it receives to the worker machines. Here’s my config file. Note: You will need to change the IP address of your worker(s) from the ones I have listed to correspond to your machines.
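Here’s a minimal sketch of what that file can look like, forwarding TCP on ports 80 and 443 to two hypothetical workers at 10.0.0.11 and 10.0.0.12 (placeholder addresses):

```
defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Plain HTTP traffic to the Ingress controllers on the workers
frontend http_in
    bind *:80
    default_backend k8s_workers_http

backend k8s_workers_http
    balance roundrobin
    server worker1 10.0.0.11:80 check
    server worker2 10.0.0.12:80 check

# HTTPS passthrough; the Ingress controller terminates TLS itself
frontend https_in
    bind *:443
    default_backend k8s_workers_https

backend k8s_workers_https
    balance roundrobin
    server worker1 10.0.0.11:443 check
    server worker2 10.0.0.12:443 check
```

Running in mode tcp on port 443 means HAProxy just passes the encrypted stream through and the Ingress controller handles TLS termination.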

And now we’re ready to start HAProxy.
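Enable it so it survives reboots, then start it and check its status:

```
sudo systemctl enable haproxy
sudo systemctl start haproxy
sudo systemctl status haproxy
```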

Routing Traffic

The following section can be done purely with IP addresses, but I’ve made a domain for this write-up, showmeyour.codes, to represent a domain a company might have. We need a subdomain to represent an application that we’ve deployed; this subdomain will direct our users to our application. I have my domain pointed at Amazon’s Route 53 service, which here is just acting as my DNS and allowing me to route traffic to my HAProxy instance.

Here I’ll make a subdomain, example.showmeyour.codes, and create an A record that points at my HAProxy machine’s IP address.
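A quick sanity check once the record exists; 203.0.113.10 below is a placeholder for your HAProxy machine’s public IP:

```
$ dig +short example.showmeyour.codes
203.0.113.10
```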

You could do this for any number of additional applications that are in your Kubernetes environment. They would all point to the same HAProxy instance. Depending on your traffic and requirements, you might need more HAProxy instances.

Sample Application

We need an application to actually route traffic to. I’m going to be very boring and just do a simple NGINX deployment that serves the default “Welcome to nginx!” page as our application.

Let’s deploy it and create a service for it.
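A minimal sketch of what those files can look like; the example-app and example-app-svc names are hypothetical stand-ins for whatever you call yours:

```
# example-app.yaml: a plain NGINX Deployment plus a ClusterIP Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-app-svc
spec:
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: 80
```

Apply it with kubectl apply -f example-app.yaml.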

Finally, we need an Ingress for the application. The Ingress defines the route for the subdomain we just created and points it at our application’s service. You’ll have to change the host to whatever your domain or IP address is.
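A sketch of that Ingress, using the extensions/v1beta1 API that clusters of this era serve (newer clusters use networking.k8s.io/v1, which has a slightly different schema):

```
# example-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.showmeyour.codes   # change to your domain or IP
    http:
      paths:
      - path: /
        backend:
          serviceName: example-app-svc
          servicePort: 80
```

Apply it with kubectl apply -f example-ingress.yaml.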

Checking our work

All right! Let’s test our hard work out and try hitting our app’s subdomain in our browser.
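You can do the same check from a terminal. With the DNS record in place, a plain curl against the subdomain should return the welcome page; if DNS hasn’t propagated yet, the second form exercises the same path by setting the Host header directly (again, 203.0.113.10 stands in for your HAProxy IP):

```
curl http://example.showmeyour.codes/

curl -H "Host: example.showmeyour.codes" http://203.0.113.10/
```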

Nice! It worked. How about one more? I’m going to create the awesome subdomain, awesome.showmeyour.codes. I’ll use another NGINX image, nginxdemos/hello, to display a different landing page with some more information. For consistency, I’ll give you the config files and the commands to create it again, but leave out the Route 53 subdomain creation.
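Here’s the same sketch for the second app, mirroring the first; the awesome-app names are again hypothetical:

```
# awesome-app.yaml: nginxdemos/hello Deployment, Service, and Ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awesome-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: awesome-app
  template:
    metadata:
      labels:
        app: awesome-app
    spec:
      containers:
      - name: hello
        image: nginxdemos/hello
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: awesome-app-svc
spec:
  selector:
    app: awesome-app
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: awesome-ingress
spec:
  rules:
  - host: awesome.showmeyour.codes
    http:
      paths:
      - path: /
        backend:
          serviceName: awesome-app-svc
          servicePort: 80
```

Create everything with kubectl apply -f awesome-app.yaml.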

All right, let’s hit our new subdomain in the browser.

Success!

That pretty much covers it! We now have a way to add more subdomains that correspond to new applications we deploy in our Kubernetes cluster. We can expose applications running in our cluster without cloud-provider load-balancing solutions and without letting our users hit a port on our worker nodes directly, instead using DNS and HAProxy to handle our traffic.

I hope you’ve found this guide informative; it was definitely fun for me to learn! Please feel free to leave any questions or feedback in the comments!
