Kubernetes Ingress with NGINX

François Fernandès
Jan 29 · 7 min read

Modern software architectures often consist of many small units (e.g. microservices). Some serve internal purposes, while others have to be exposed to a broader audience, be it within a private network or even over the internet. But how do you expose those services securely and consistently? Assigning a dedicated IP address to every exposed service might not be possible, given the limited number of available IP addresses. And what about security? Should every single service be accessible from the outside world?

Classical system architectures resolved these questions with firewalls and reverse proxies that routed requests to the appropriate target located in a secure private network. But these configurations were often maintained manually, and changing or adding routing rules could take a long time to be applied.

Now imagine that you could write your own routing rules, that those rules got applied within seconds, and that they could even be part of the application's source code base. Even more, endpoints like REST APIs, static content, or dynamic web frontends could be exposed through a single IP address, possibly serving content for multiple domain names.

This is exactly what Ingress does and where it shines.

What is an Ingress?

Ingress is a Kubernetes resource type that can be applied just like other resources. Its purpose is to define how cluster-external requests are routed to cluster-internal services: an Ingress maps URLs (hostname and path) to services inside the cluster.

Let's assume that a service named frontend-svc should be made available under the domain sample-service.example.com. This would require an Ingress resource definition like the following:
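Here is such a minimal definition, shown against the networking.k8s.io/v1beta1 API that the extended example later in this article also uses:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: digitalfrontiers-sample-ingress
spec:
  rules:
  - host: sample-service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          servicePort: 80
```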

At first glance, the example above might seem relatively complex. But once you understand the structure of the Ingress resource, it is quite simple. Let's split the example into smaller sections:

Header (Lines 1–4)
This is the common header for all Kubernetes resources. It identifies the resource as kind: Ingress and provides some metadata, such as the resource name.

Rules (Lines 6 & 7)
This is the actual core of the Ingress definition. This section defines how incoming requests should be mapped based on the requested hostname. In our example, the Ingress definition targets the hostname sample-service.example.com.

Path Mapping (Lines 9–13)
The path mapping specifies how request paths are mapped to the actual backends. Backends are Services deployed in the cluster, identified by their name and a port.

An Ingress definition might even consist of multiple rules, and rules with multiple paths. Let's pretend that we would like to expose the REST API that the frontend is based on under /api, backed by the service backend-svc on port 8081:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: digitalfrontiers-sample-ingress
spec:
  rules:
  - host: sample-service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-svc
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend-svc
          servicePort: 8081

This Ingress definition will route requests on /api to http://backend-svc:8081/api. It is important to note that the full request path is preserved by default: given a path mapping with path: /api/v1/resources/, a request for /api/v1/resources/ab0394a will result in a call to the backend with the full path (/api/v1/resources/ab0394a).
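The matching behavior described above can be sketched in a few lines of Python. This is an illustration only, not actual controller code; the real controller compiles such rules into NGINX configuration:

```python
# Illustrative sketch of Ingress path matching, not actual controller code.
# Rules are checked most-specific prefix first; the matched backend receives
# the request with its full, unmodified path.
ROUTES = [
    ("/api", ("backend-svc", 8081)),
    ("/", ("frontend-svc", 80)),
]

def resolve(path):
    """Return ((service, port), forwarded_path) for the first matching prefix."""
    for prefix, backend in ROUTES:
        if path.startswith(prefix):
            return backend, path  # full path preserved by default
    return None, path
```

For example, resolve("/api/v1/resources/ab0394a") yields the backend-svc backend together with the unmodified path, mirroring the behavior described above.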

Unlike for other resource types, Kubernetes has no built-in support for Ingresses. This might sound strange, considering I described Ingress as a Kubernetes-native resource. But the Ingress is just a description of how routing should be performed; the actual logic has to be performed by an “Ingress Controller”.

Ingress Controller

The Ingress is the definition of how routing should be done, but the execution of those rules has to be performed by an “Ingress Controller”. Because of this, creating Ingress resources in a Kubernetes cluster won't have any effect until an Ingress Controller is available.

The Ingress Controller is responsible for routing requests to the appropriate services within the Kubernetes cluster. How an Ingress Controller executes this task is not explicitly defined by Kubernetes, so an Ingress Controller can handle requests in whatever way works best for the cluster.

The Ingress Controller monitors the Kubernetes cluster for new or changed Ingress resources. Based on those resources, it sets up the required infrastructure to route requests accordingly.
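Conceptually, this monitoring boils down to a reconciliation loop. The following Python sketch illustrates the idea only; real controllers use the Kubernetes watch API and, in the NGINX case, regenerate and reload nginx.conf:

```python
# Conceptual sketch of a controller's reconciliation step (illustration only).
def reconcile(ingresses, current_config):
    """Derive the desired routing table from all observed Ingress resources
    and apply it if it differs from the currently active configuration."""
    desired = {}
    for ingress in ingresses:
        for rule in ingress["rules"]:
            for path in rule["paths"]:
                desired[(rule["host"], path["path"])] = path["backend"]
    if desired != current_config:
        # In a real controller: regenerate and reload the proxy configuration.
        current_config = desired
    return current_config
```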

NGINX Ingress Controller

From here on, we'll focus on the NGINX Ingress Controller, an Ingress Controller implementation based on the popular NGINX HTTP engine. Although there are many different Ingress Controller implementations, the NGINX-based one seems to be the most commonly used. It is a general-purpose implementation that is compatible with most Kubernetes cluster deployments.

A simplified view of the deployment is sufficient to understand the most important concepts. The two central parts are the Ingress Controller itself and an associated Service of type LoadBalancer. The Service will receive a public IP under which the Ingress Controller will be made available. Requests to this IP will be handled by the Ingress Controller and forwarded to the actual services according to the Ingress resources.
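A sketch of such a Service might look as follows; this is simplified and the names and labels are illustrative, the actual resource is part of the provider-specific deployment covered below:

```yaml
# Simplified sketch of the LoadBalancer Service in front of the controller;
# names and labels are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```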

Note that there is no DNS handling in any way within this structure. Mapping domain names to the load balancer IP is out of scope for the NGINX Ingress Controller; typically this is configured outside the cluster. The most common configuration is to create a wildcard mapping that points all subdomains to that particular IP. As an example, let's pretend you're working on a web application that shall be reached under the domain my-app.com. In this case, a DNS record should be created that resolves all DNS queries for my-app.com and its subdomains (like www.my-app.com, services.my-app.com, …) to the IP assigned to the LoadBalancer Service mentioned above. All requests will then arrive at that IP, and the NGINX Ingress Controller will perform the routing.
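In a BIND-style zone file, such a wildcard mapping might look like this (203.0.113.10 is a hypothetical load balancer IP from the documentation address range):

```
; hypothetical records for my-app.com
my-app.com.    300  IN  A  203.0.113.10
*.my-app.com.  300  IN  A  203.0.113.10
```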

Deploying the NGINX Ingress Controller

The deployment of the NGINX Ingress Controller is pretty straightforward. It consists of two parts:

  • Base deployment of the Ingress Controller, required for all Kubernetes clusters
  • Provider-specific deployment, depending on the actual provider. Here we'll focus on the generic cloud deployment. (AWS and bare-metal clusters in particular require additional configuration, which is well documented at https://kubernetes.github.io/ingress-nginx/deploy/)

General Process

The deployment is straightforward and uses existing resource definitions that can be applied to the cluster. Even though the resource definitions come from a trustworthy source (they are part of the Kubernetes organisation on GitHub), external resources should always be inspected. This not only ensures that no unexpected content is deployed, but also gives you a basic understanding of how the deployed content is structured.

In all cases where I simply write a kubectl apply -f https://<some-url>.yaml, you should translate that to the following sequence:

curl https://<some-url>.yaml                         #1
kubectl apply --dry-run -f https://<some-url>.yaml   #2
kubectl apply -f https://<some-url>.yaml             #3
  1. Use curl to download and inspect the resource
  2. Before applying the resource, perform a dry run. This will validate the resource and tell you what will be deployed to the cluster.
  3. Perform the actual deployment

With those general precautions out of the way, let's start with the actual deployment.

Step-By-Step Deployment

As stated before, the deployment consists of two parts. We start off by deploying the base components required for all Kubernetes cluster types. The deployment can be done using the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.0/deploy/static/mandatory.yaml

This deploys the biggest part of the whole infrastructure and thus creates a number of resources in the Kubernetes cluster:

namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

The most important part is the NGINX Ingress Controller Pod. You can watch the controller becoming available using

kubectl get pods --all-namespaces --watch -l "app.kubernetes.io/name=ingress-nginx"

The only missing piece is how the Ingress Controller will be exposed. As stated before, in the generic case this is achieved by a Service of type LoadBalancer. Deploying it is the last step in the deployment of the Ingress Controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.0/deploy/static/provider/cloud-generic.yaml

As a result, the LoadBalancer Service will be deployed and receive an external IP address. To find out the public IP, inspect the service in the ingress-nginx namespace:

kubectl -n ingress-nginx get service

It might take Kubernetes a moment to request and assign an external IP to your service. Once the EXTERNAL-IP column in the output shows an assigned address, the NGINX Ingress Controller is ready to serve.

Conclusion

This was a short introduction to Ingress and Ingress Controllers, giving you a basic understanding of the concept. Once you've wrapped your head around it, Ingress resources are an efficient way to expose your services.

Digital Frontiers — Das Blog

This is the blog of Digital Frontiers GmbH & Co. KG (http://www.digitalfrontiers.de). Here we publish about topics that interest and move us.

Thanks to Benedikt Jerat

François Fernandès

Written by

Senior Solution Architect at digitalfrontiers, Member of the Board at PDF Association

