Kubernetes Ingress — explained

Part 1

According to the official Kubernetes documentation:

"Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available."

ref: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

In plain terms, Kubernetes is an orchestration tool for managing distributed systems resiliently. It provides scaling and failover for your applications, offers deployment patterns, and much more.

[Image: ingress controller]

Kubernetes Ingress:

An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination, and name-based virtual hosting.

Let us take an example of an online business platform.

[Image: Online business platform (PIC1)]

In the above image, you can see that the online platform's business logic (BL) pod is deployed in the deployment layer, and the NoSQL database is deployed in another pod. To establish a connection between the database and the BL, we have deployed a DB service of type ClusterIP (internal, not exposed to the internet).
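A minimal sketch of such a DB service (the names, labels, and MongoDB-style port here are assumptions for illustration, not taken from the diagrams):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: obp-db            # hypothetical name for the database service
spec:
  type: ClusterIP         # the default type; reachable only from inside the cluster
  selector:
    app: obp-db           # assumed label on the NoSQL database pods
  ports:
    - port: 27017         # port the service exposes inside the cluster
      targetPort: 27017   # container port the database listens on
```

The BL pod can then reach the database at obp-db:27017 through the cluster's internal DNS.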

However, deploying a ClusterIP service for the database alone is not enough to reach the application from outside. We also have to expose the BL deployment to the internet through another service, of type NodePort, which gives us a port on every node (by default allocated from the 30000-32767 range).
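A sketch of such a NodePort service, assuming the BL pods carry a hypothetical label app: obp-bl and serve HTTP on port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: obp-bl            # hypothetical name for the business-logic service
spec:
  type: NodePort
  selector:
    app: obp-bl           # assumed label on the BL pods
  ports:
    - port: 80            # service port inside the cluster
      targetPort: 8080    # assumed container port of the BL pod
      nodePort: 30080     # must fall in the default 30000-32767 range
```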

Now, with a node's external IP and the service's node port, we can access the application:

http://<external-ip-address>:<node-port>/<context-root>

Though this approach works for small tests, such as a developer checking a specific API, it is not practical for end users, who would have to type the IP address and port number every time they access the application.

This approach also poses a challenge when we add more endpoints, such as:

/order, /wishlist, /customer-support, /home, etc.

When we deploy these pods in the cluster, the architecture will look like the diagram below.

[Image: multiple services to be exposed to the internet (PIC2)]

Now we have three different external IPs and ports, each assigned to a different service deployed in the cluster.

So, how do we resolve this? How do we accept all requests via a single static endpoint (a DNS name) and route the traffic to different APIs based on routing rules?

There are a variety of solutions a DevOps team can implement to handle this situation.

For example, we can set up a proxy between the DNS name and the services to send traffic to the designated node ports, as shown below.

[Image: proxy server set up on-prem in your data center (PIC3)]

OBP: Online Business Platform

If your application is set up in a public cloud environment, the setup will look like the diagram below.

The network load balancer (NLB) will send the traffic to the OBP service, which can distribute it to one or more pods.

[Image: on public clouds (PIC4)]

When we deploy a Kubernetes Service of type LoadBalancer to a public cloud like GCP, the cloud provider creates the OBP load balancer service as well as a network load balancer with a static IP, which can be mapped to a DNS address. Upon receiving requests from the internet, the load balancer forwards them to the configured node ports.
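Changing the service type is enough to get this behaviour; a sketch, reusing the hypothetical obp-bl labels from earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: obp-bl
spec:
  type: LoadBalancer      # asks the cloud provider to provision a network load balancer
  selector:
    app: obp-bl           # assumed label on the BL pods
  ports:
    - port: 80            # port exposed by the cloud load balancer
      targetPort: 8080    # assumed container port of the BL pod
```

Once the cloud provider has provisioned the load balancer, kubectl get service obp-bl shows the external IP that can be mapped to DNS.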

But we still have not catered for the different context routes pointing to different deployments in the same cluster (PIC2).

Now, if we apply the same concept as PIC4 to the new deployments, we will have multiple network load balancers with different IP addresses, as shown below:

[Image: multiple network load balancers with different IP addresses]

To address this, we could add yet another proxy above the NLBs. But then the design and architecture become too complex and too expensive, because of the multiple NLBs, SSL certificates, and so on. Maintaining the infrastructure also becomes a challenge, because there are too many moving parts.

The Kubernetes solution for these scenarios is Ingress. It is a single Kubernetes object, defined in a YAML file, that provides a single entry point to different APIs through different paths. It also allows us to implement SSL termination in our Kubernetes cluster.
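A minimal sketch of such an Ingress, routing the endpoints from PIC2 to hypothetical backend services (obp-order, obp-wishlist, obp-home) under an assumed host name:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: obp-ingress
spec:
  rules:
    - host: obp.example.com          # assumed DNS name of the platform
      http:
        paths:
          - path: /order
            pathType: Prefix
            backend:
              service:
                name: obp-order      # hypothetical service behind /order
                port:
                  number: 80
          - path: /wishlist
            pathType: Prefix
            backend:
              service:
                name: obp-wishlist   # hypothetical service behind /wishlist
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: obp-home       # hypothetical default backend
                port:
                  number: 80
```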

Take a look at the below picture:

[Image: Ingress Controller (PIC5)]

In this picture, you can see that we have deployed a service called an ingress controller. Notice that this ingress controller is inside the Kubernetes cluster, not outside it.

But what is an Ingress Controller? How does it work?

If you look back at PIC3, you will notice that we deployed a proxy server. That proxy server can be Nginx, HAProxy, etc.

The ingress controller works in the same way, but with some additional features provided by Kubernetes itself. The list of ingress controllers that can be installed in a Kubernetes cluster can be found here:

https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

The most commonly used ingress controllers are:

  1. Nginx
  2. Istio
  3. AKS Application Gateway Ingress Controller
  4. AWS ALB Ingress Controller
  5. GCE load balancer

The set of routing rules that an ingress controller acts on is deployed as a separate object, called an Ingress resource.
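Because the routing rules live in a declarative Ingress resource, the SSL mentioned earlier is also just configuration; a sketch, assuming the certificate and key live in a hypothetical Secret named obp-tls:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: obp-ingress-tls
spec:
  tls:
    - hosts:
        - obp.example.com        # assumed DNS name from the earlier sketch
      secretName: obp-tls        # hypothetical Secret holding the TLS certificate and key
  rules:
    - host: obp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: obp-home   # hypothetical default backend
                port:
                  number: 80
```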

The controllers themselves are deployed via declarative definition files, such as a deployment YAML.

