Nginx-Ingress on private clusters
Over the last few days I ran into the problem of having to set up an nginx-ingress on a private cluster of Red Hat VMs. As I am quite new to Kubernetes and operations, I wanted to share my experiences, hoping that they might be valuable for you. My starting point:
- A working Kubernetes cluster with three Red Hat Linux nodes
- A node.js GraphQL server running in the cluster but not reachable from the outside
- A motivated developer in front of his computer
What we wanted to achieve:
- Reach the GraphQL server from the outside (first HTTP, then HTTPS)
- Only the necessary code checked into version control
- Use nginx-ingress, as it supports both HTTP and TCP based routing (as opposed to Traefik)
As we want to write and maintain only the configuration that is important to us, we chose to rely on helm for our pod specifications. Helm calls a unit of configuration a chart. A chart can consist of multiple pods / services / ingresses / config maps / secrets, which are written in Go template syntax so that global information can be added in easily.
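To give you an idea of that template syntax, here is a minimal sketch of a chart template; the names `.Values.appName`, `.Values.servicePort`, and the target port are assumptions for illustration:

```yaml
# templates/service.yaml — a minimal chart template. Helm fills in
# {{ .Release.Name }} and the .Values.* references from values.yaml.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-graphql
spec:
  selector:
    app: {{ .Values.appName }}
  ports:
    - port: {{ .Values.servicePort }}
      targetPort: 4000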
If you are not familiar with helm, please read through their quickstart guide, I’ll wait here.
Ok, done? I am glad you took the time. Let’s go!
So first we need to install the nginx-ingress by requiring it as a dependency of our chart in the requirements.yaml.
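A sketch of such a requirements.yaml; the version and repository URL here are assumptions — pin whatever is current when you run `helm dependency update`:

```yaml
# requirements.yaml — pull in nginx-ingress as a chart dependency.
dependencies:
  - name: nginx-ingress
    version: 1.6.0
    repository: https://kubernetes-charts.storage.googleapis.com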
If you install the chart now and get all pods of all namespaces, you see one nginx pod running. Now we only need to configure it to our needs. Here is the point I found hard to understand: there are two resources for you to find out what to do:
- nginx-ingress docs: these give you information on how to configure the ingress in general.
- nginx-ingress chart readme: this gives you information on what options are available for configuration.
My setup might very well differ from yours, but I found it useful to have examples at hand. The configuration of helm charts (if you define them as a dependency) is done in the containing chart's values.yaml.
The first thing we wanted was for our cluster to be reachable on every node, so when the DNS entry changes or one node dies we can still reach the cluster. To have it reachable from outside the cluster, besides defining an ingress we also need to enable the hostNetwork option.
Last but not least, we want the endpoint records in the controller to be the same as on the ingress service, therefore we need to enable the publishService option. The complete config looks like this:
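A sketch of the relevant values.yaml section; the key names follow the nginx-ingress chart readme, and `kind: DaemonSet` is my assumption for getting one controller pod per node:

```yaml
# values.yaml — configuration of the nginx-ingress chart dependency.
nginx-ingress:
  controller:
    # Run one controller pod on every node.
    kind: DaemonSet
    # Bind to the host's network so the controller is reachable
    # on each node's IP from outside the cluster.
    hostNetwork: true
    # Publish the ingress service's address into the ingress
    # resources' endpoint records.
    publishService:
      enabled: true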
An ingress controller won't help if we have no ingress defined, so let's write a definition for it:
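A sketch of such an ingress template; the secret name `graphql-tls`, the service name `graphql-server`, and its port are assumptions — they must match your GraphQL server's service:

```yaml
# templates/ingress.yaml — route /graphql to the GraphQL service
# and terminate TLS using an existing secret.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: graphql-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - {{ .Values.host }}
      secretName: graphql-tls
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - path: /graphql
            backend:
              serviceName: graphql-server
              servicePort: 4000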
Now you need to add the host key to your values.yaml, and our GraphQL server should be available under /graphql. The cluster is now also reachable via HTTPS, as we specified a secret containing the TLS certificate and key (you may set it up like this) in the ingress definition. The nginx-ingress picks these up and terminates SSL, so we are good to go.
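For reference, such a TLS secret is a plain `kubernetes.io/tls` secret; the name `graphql-tls` is an assumption, and the data values are placeholders for your own certificate and key:

```yaml
# Equivalent to:
#   kubectl create secret tls graphql-tls --cert=tls.crt --key=tls.key
apiVersion: v1
kind: Secret
metadata:
  name: graphql-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>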
Maybe you rely on TCP / UDP services, so I just want to show you real quick how you may specify them:
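The nginx-ingress chart exposes `tcp` and `udp` maps in its values for this; the chart turns them into the controller's tcp-services / udp-services config maps. The services and ports below are examples, not part of this setup:

```yaml
# values.yaml — expose raw TCP/UDP services through the controller.
# Format: <external port>: "<namespace>/<service>:<service port>"
nginx-ingress:
  tcp:
    5432: "default/postgres:5432"
  udp:
    53: "kube-system/kube-dns:53"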