How to expose a Kubernetes application

Bartłomiej Poniecki-Klotz
Published in weles-ai · Nov 24, 2022
Securely expose Kubernetes Applications to the outside world

Finishing the code of a Kubernetes application is only the first step to having users log in and try it; now it's time to expose it. Especially in greenfield projects, the decision of how to expose your applications to the world is extremely important, because it directly impacts user experience, security and team velocity.

In this article, we explore three ways to expose applications deployed on Kubernetes and talk about the architecture drivers of each. In the demo part, we configure a cluster and expose an application using Ingress.

The Kubernetes cluster can be as small as a single node on your laptop or as big as a production-grade deployment hosting thousands of microservices across data centres.

We can always use port-forwarding to access Pods or Services, right? Jokes aside, the three ways to expose applications in Kubernetes are:

  • Node Port
  • Load Balancer
  • Ingress

Node Port

NodePort is one of the Service types in Kubernetes. Upon creation, the Kubernetes control plane allocates a port from the range 30000–32767 and opens this port on all Nodes of the Kubernetes cluster. This approach requires little to no configuration on the Kubernetes side. In a secured enterprise environment where all ports are closed by default, you need to change the firewall/iptables configuration on each Node.

Accessing Kubernetes application via Node Port
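For reference, a minimal NodePort Service manifest could look like this (the app: demo selector, ports and nodePort value are illustrative, not taken from the demo below):

apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo          # must match your application's Pod labels
  ports:
  - port: 80           # Service port inside the cluster
    targetPort: 80     # container port
    nodePort: 30080    # optional; omit to let Kubernetes pick from 30000-32767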

Pros:

  • simplicity
  • no need for additional software or hardware components

Cons:

  • need to expose the cluster to a public network
  • need to “manage the ports” so each consumer connects to the proper port
  • need to reconfigure Kubernetes node machines at each application exposure
  • no way to abstract crosscutting concerns like enforcing HTTPS

Node Port alone can be used with great success in a small cluster for internal use. For production-grade Kubernetes clusters with a public-facing application, we prefer other ways to expose the application.

Load Balancer

LoadBalancer is another type of Kubernetes Service. When such a Service is created, the cluster provisions a public-facing component. The Cloud Controller Manager (CCM) embeds cloud-specific knowledge and uses it to create the needed resources. Public cloud providers offer managed Kubernetes clusters such as EKS or AKS; there, when you create a Service of type LoadBalancer, the CCM also creates a Network Load Balancer, which is the only part of the solution exposed publicly.

Accessing Kubernetes application via Load Balancer
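A minimal LoadBalancer Service sketch (the selector and ports are illustrative; on a managed cloud the CCM provisions the external Network Load Balancer for you):

apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  selector:
    app: demo        # must match your application's Pod labels
  ports:
  - port: 80         # port exposed on the external Load Balancer
    targetPort: 80   # container port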

Pros:

  • supports hardware or software Network Load Balancers
  • Load Balancer is the only public-facing component

Cons:

  • creation of multiple Load Balancer objects/machines, and each of them should be deployed in an HA setup
  • additional cost for each Load Balancer in the cloud

Load Balancers are one of the most frequently used ways to expose applications. The CCM works well for public and private cloud deployments. Unfortunately, the cost of multiple Network Load Balancers is significant and in some deployments can be higher than the cost of the cluster itself.

Ingress

Ingress is a way to expose multiple applications on the same endpoint using an Application Load Balancer. In public cloud-managed Kubernetes clusters, the CCM creates the Application Load Balancer outside of the cluster and manages it for you. Even a production-grade Kubernetes cluster usually uses only one or two Load Balancers (public-facing and internal).

In a bare-metal deployment, you are responsible for creating and operating the Ingress Controller. The Ingress Controller is the entry point to the cluster and needs to be scaled accordingly. In the public cloud, the Application Load Balancer is managed by the provider; you only pay for the time and usage.

Accessing Kubernetes application via Ingress

The Ingress Controller is usually a Kubernetes application exposed using a Network Load Balancer. The most commonly used implementation is the NGINX Ingress Controller, which works as a reverse proxy. Additionally, it watches for changes to resources like Ingress Rules or Virtual Servers. When a new Ingress Rule is created, the Ingress Controller generates a new configuration and applies it to the reverse proxy. This process can take up to a few minutes. The Ingress Rule should be created in the same namespace as the application it exposes.
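You can watch this reload happen by following the controller logs while a rule is created (the label below matches the microk8s NGINX Ingress Controller used in the demo later in this article):

$ microk8s kubectl logs -n ingress -l name=nginx-ingress-microk8s -f
...
I1118 08:30:13.143352 7 controller.go:166] "Configuration changes detected, backend reload required"
I1118 08:30:13.276578 7 controller.go:183] "Backend successfully reloaded"
...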

Pros:

  • Load Balancer is the only public-facing component
  • the Load Balancer can be shared between multiple applications
  • possibility to abstract crosscutting concerns like enforcing HTTPS in a single place

Cons:

  • an additional component, the Ingress Controller, is needed, adding compute requirements and latency to requests
  • the Ingress Controller needs to be monitored and scaled accordingly

Ingress is the most widely used pattern in production-grade Kubernetes clusters. It mitigates the issue of having multiple Load Balancers and their costs. The trade-off is that the Ingress Controller needs to be monitored, scaled and treated as part of the core infrastructure, because it can be a single point of failure for multiple applications in the cluster.

Demo

We build a Kubernetes cluster with Ingress and a Load Balancer in front of it. The Load Balancer provides an external IP for the Ingress Controller, and the Ingress Controller forwards requests to the proper Service inside the cluster based on the Ingress Rules. A simple but powerful setup that shows how the Ingress Controller works.

The demo works on Ubuntu 22.04 with Microk8s as a Kubernetes distribution.

Kubernetes cluster

Install Microk8s

$ sudo snap install microk8s --classic
microk8s (1.25/stable) v1.25.3 from Canonical✓ installed
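Optionally, wait until the cluster reports readiness before enabling add-ons:

$ microk8s status --wait-ready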

Configure the Ingress Controller and Load Balancer in the Kubernetes cluster. Set the range of IPs that MetalLB can use as cluster external IPs.

$ microk8s enable ingress metallb:10.64.140.43-10.64.140.49
Infer repository core for addon ingress
Infer repository core for addon metallb
Enabling Ingress
...
Ingress is enabled
Enabling MetalLB
...
MetalLB is enabled

Check if the Ingress Controller was created successfully:

$ microk8s kubectl get po -n ingress
NAME                                      READY   STATUS    RESTARTS   AGE
nginx-ingress-microk8s-controller-2fzv8   1/1     Running   0          3h2m

The log of the Ingress Controller shows it started. The Ingress Controller log is a great source of information when troubleshooting incorrectly working rules or checking the underlying server response.

$ microk8s kubectl logs nginx-ingress-microk8s-controller-2fzv8 -n ingress
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.2.0
Build: a2514768cd282c41f39ab06bda17efefc4bd233a
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

...
I1118 08:30:13.142513 7 nginx.go:299] "Starting NGINX process"
I1118 08:30:13.142608 7 leaderelection.go:248] attempting to acquire leader lease ingress/ingress-controller-leader...
I1118 08:30:13.143352 7 controller.go:166] "Configuration changes detected, backend reload required"
I1118 08:30:13.171231 7 leaderelection.go:258] successfully acquired lease ingress/ingress-controller-leader
I1118 08:30:13.171334 7 status.go:84] "New leader elected" identity="nginx-ingress-microk8s-controller-2fzv8"
I1118 08:30:13.276578 7 controller.go:183] "Backend successfully reloaded"
I1118 08:30:13.276709 7 controller.go:194] "Initial sync, sleeping for 1 second"
...

Create the Service of type LoadBalancer for the Ingress Controller.

$ cat <<EOF | microk8s kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
EOF
service/ingress created

Check if MetalLB provided an external IP for the Ingress Controller:

$ microk8s kubectl get svc -n ingress
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
ingress   LoadBalancer   10.152.183.201   10.64.140.43   80:31761/TCP,443:31879/TCP   5m35s

Take note of the external IP; it is the address you use to connect to the exposed application. We use it later in the demo.
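As a convenience, you can capture the external IP into a shell variable with a standard kubectl jsonpath query:

$ EXTERNAL_IP=$(microk8s kubectl get svc ingress -n ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $EXTERNAL_IP
10.64.140.43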

Deploy application

We need an application to expose. Do not use NGINX for testing, because it is easy to mistake an error response from the Ingress Controller for an error response from the application. We use the httpd server with its “It works!” response. We deploy a Pod and a Service of type ClusterIP; ClusterIPs are internal to the cluster.

$ microk8s kubectl create ns test
namespace/test created
$ microk8s kubectl create deployment demo --image=httpd --port=80 -n test
deployment.apps/demo created
$ microk8s kubectl expose deployment demo -n test
service/demo exposed
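A quick sanity check that the Pod reached the Running state (the Pod name suffix will differ in your cluster):

$ microk8s kubectl get po -n test
NAME                    READY   STATUS    RESTARTS   AGE
demo-xxxxxxxxxx-xxxxx   1/1     Running   0          30s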

Let’s check the application response using port-forwarding. The command below exposes the “demo” Service’s port 80 on localhost port 8080. We access the application in the browser at http://localhost:8080

$ microk8s kubectl port-forward svc/demo -n test 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Access the service using port-forwarding — “It works!”

Expose application using Ingress Rule

At this stage, the demo application is not accessible outside of the cluster. We expose it by creating an Ingress Rule in the “test” namespace.

In a cluster there can be multiple ingress classes available, and each Ingress Controller can be exposed in a different subnet. We could use the default ingress class by not defining the “ingressClassName” property. This way, we risk that the default ingress class changes without us noticing, which could expose internal applications. It’s good practice to set it explicitly.

$ microk8s kubectl get ingressclass
NAME     CONTROLLER             PARAMETERS   AGE
public   k8s.io/ingress-nginx   <none>       6h22m
nginx    k8s.io/ingress-nginx   <none>       6h22m

In the last step, we create the Ingress Rule using “public” as the ingress class name. We expose the Service called “demo” and its port 80. We are not defining the expected URL, but you can read here how to do this.

$ cat <<'EOF' | microk8s kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1  # quoted heredoc keeps $1 literal
spec:
  ingressClassName: public
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Exact
        backend:
          service:
            name: demo
            port:
              number: 80
EOF

The address is attached to the demo-ingress rule after around a minute. If it’s not, check the controller Pod logs to see why. The log messages there are essential for troubleshooting Ingress Rule issues.

$ microk8s kubectl get ingress -n test
NAME           CLASS    HOSTS   ADDRESS     PORTS   AGE
demo-ingress   public   *       127.0.0.1   80      49s

Now it’s time to see our application exposed on the external IP. If you need to check the IP again, go to the Load Balancer Service in the ingress namespace and get its External IP.
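You can verify it from the terminal as well (the exact response body may vary between httpd versions):

$ curl http://10.64.140.43/demo
<html><body><h1>It works!</h1></body></html>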

Access the service using Ingress — “It works!”

The biggest difference between Ingress and a Network Load Balancer is that, thanks to the reverse proxy, you can expose multiple applications on the same endpoint.
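For example, a single Ingress Rule can route two paths to two different backends. In this sketch, “demo2” is a hypothetical second Service that would need to exist in the same namespace:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-demo-ingress
  namespace: test
spec:
  ingressClassName: public
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
      - path: /demo2
        pathType: Prefix
        backend:
          service:
            name: demo2   # hypothetical second application
            port:
              number: 80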

What happens when the path is wrong or our rule definition is incorrect?
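You can trigger this yourself by requesting a path that no rule matches; the Ingress Controller’s default backend answers:

$ curl -i http://10.64.140.43/unknown
HTTP/1.1 404 Not Found
...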

Access unknown URL — Ingress Controller “404 Not Found” response

In case of errors, we look at the Ingress Controller log to see if our rule was ingested and synced. We can also find information about processed requests, the Ingress Controller’s decisions and backend application response codes, which helps a lot during troubleshooting.

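Using the same logs command as during the installation check:

$ microk8s kubectl logs nginx-ingress-microk8s-controller-2fzv8 -n ingress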
I1119 18:01:58.177613       7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"test", Name:"demo-ingress", UID:"bd242cdb-4b41-45ca-b5af-bc411b182357", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3535", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
192.168.0.191 - - [19/Nov/2022:18:04:02 +0000] "GET /demo HTTP/1.1" 200 45 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:107.0) Gecko/20100101 Firefox/107.0" 348 0.001 [test-demo-80] [] 10.1.100.4:80 45 0.000 200 cbb86581d0152303fa7eb8c7e5ff71e1

Troubleshooting your Kubernetes issues is challenging. You can use AI terminal tools to help you with it.

Keep exploring

In this article, we looked at three ways to expose a Kubernetes application and the architecture drivers behind them.

  • Node Port — simple and great for single-node deployments; Pod eviction or Node failure can change the IP at which the application is reachable
  • Load Balancer — simple thanks to Kubernetes-native integration with cloud providers; each application requires a separate Load Balancer, which can cost a lot when you expose multiple applications and pay per Load Balancer
  • Ingress — an enhanced setup in which a single Load Balancer and a reverse proxy expose multiple applications; the Ingress Controller is a crucial component and needs to be carefully monitored and scaled according to usage

I encourage you to expose your own application and experiment with Node Ports, Load Balancers and Ingress Controllers.

Here are some ideas of what you can add to your Ingress demo:

  • enforce HTTPS by adding a TLS section to the Ingress Rule
  • add host-based routing so two domains reach different Services
  • expose a second application on another path of the same Load Balancer

Play around, have fun and never stop exploring.

For more MLOps Hands-on guides, tutorials and code examples, follow me on Medium and contact me via social media.
