Exposing Container Applications in AKS: Service type Load-balancer

Sijomon ND
4 min read · Feb 17, 2024


PART-1


Introduction

Exposing a containerized application in Kubernetes means making it accessible and reachable to users or other services, both within and beyond the cluster. Configuring how you expose your application requires careful consideration. Kubernetes offers mechanisms like Services, Ingress and Load Balancers to guide incoming traffic to your containerized application securely and efficiently. This series will focus on a few methods of exposing applications in AKS clusters.

In Part 1 of the series, we explore connecting to an application via the Azure Load Balancer. I will walk you through setting up a Service of type LoadBalancer, and we will also discuss some pros and cons.

In Part 2, I will provide an overview of ingress controllers, their possible use cases, and their limitations in an AKS cluster. While there are many ingress controller options, I will be using the NGINX Ingress Controller.

Part 3 provides a similar walk-through of AGIC, which lets AKS use Azure's own Application Gateway, a Layer 7 load balancer, to expose your application to the Internet.

I have added all the artifacts that we are going to use in the gist below:

https://gist.github.com/sijomon/7f5b8a886d380f3dc3d6bb8086fd5682

Load-Balancer

To expose an application in an Azure Kubernetes Service (AKS) cluster using an external load balancer, you need to create a Kubernetes service of type LoadBalancer. This automatically provisions a public IP address and a load-balancing rule for your application. Please keep in mind this is a Layer 4 load balancer.
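As an aside, the same Service type can also provision a private (internal) load balancer on your VNet instead of a public one, via an Azure-specific annotation. A minimal sketch (the service name here is illustrative; the annotation is the one documented by Azure):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app   # illustrative name
  annotations:
    # Tells AKS to create the frontend on an internal VNet IP
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
    - protocol: TCP
      port: 80
```

The rest of this walkthrough uses the default public load balancer.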

1. Create a namespace called demo

kubectl create namespace demo
namespace/demo created

2. Create a deployment that runs your application pods

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 8080
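A readinessProbe is worth adding to the container spec above, since the Service we create later only routes traffic to pods that report Ready. A minimal sketch to merge under the container definition (node-hello responds on the root path, so probing / works; the timing values are illustrative):

```yaml
          readinessProbe:
            httpGet:
              path: /          # node-hello serves on the root path
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```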

3. Apply the deployment to your cluster by running the following command:

kubectl apply -f demoapp.yml
deployment.apps/demo-app created

The deployment has created three pods. You can also confirm the rollout with kubectl rollout status deployment/demo-app -n demo.

kubectl get pods -n demo -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
demo-app-854657487c-8jv5v   1/1     Running   0          54s   10.244.0.10   aks-agentpool-22188564-vmss000000   <none>           <none>
demo-app-854657487c-j7bsb   1/1     Running   0          54s   10.244.1.14   aks-agentpool-22188564-vmss000001   <none>           <none>
demo-app-854657487c-s2hkc   1/1     Running   0          54s   10.244.0.9    aks-agentpool-22188564-vmss000000   <none>           <none>

But it’s not reachable from outside the cluster. That’s where a Load Balancer comes in.

4. Let’s define the load balancer service in a file called demoapp-service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
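If you need the external IP to survive the service being deleted and recreated, you can create a static public IP in Azure first and ask the service to use it. A sketch, assuming a pre-created static IP of 20.0.0.100 (illustrative); the resource-group annotation is only needed when the IP lives outside the cluster's node resource group:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: demo
  annotations:
    # Only needed if the static IP is in a different resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.100   # pre-created static public IP (illustrative)
  selector:
    app: demo-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

Note that loadBalancerIP is deprecated in newer Kubernetes releases in favour of provider annotations, but it remains the commonly documented approach for AKS.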

5. Apply the service to your cluster by running the following command

kubectl apply -f demoapp-service.yaml
service/demo-app created

Within a minute or so, the load balancer should be up and running. Wait for the service to be assigned an external IP address by Azure; you can watch for it with kubectl get service demo-app -n demo --watch.

6. You can check the status of the service by running the following command:

kubectl get service demo-app -n demo

NAME       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
demo-app   LoadBalancer   10.0.85.127   20.70.0.191   80:30121/TCP   3m7s

7. Browse to the external IP: http://20.70.0.191

Voila! Azure served your application to you. But the browser flags the site as insecure.

Nobody wants to load an insecure website, so it's time to discuss the pros and cons of this implementation.

Cons.

Azure Load Balancer operates at Layer 4 of the OSI model (TCP and UDP), so it lacks intelligent, content-based routing for URL or HTTP traffic.

Since the AKS load balancer works at L4, it doesn't support SSL offloading. You have to either handle SSL termination in the service layer or depend on another application for it, which becomes tedious as you scale your microservices.

Cost: public IPs and load-balancing rules come with a price tag. If you expose each service via its own Azure Load Balancer frontend, the bill adds up quickly.

Pros.

As you saw in this post, it is easy to configure, and the AKS integration allocates a public IP for each service you expose.

It gives you the flexibility to choose your protocol stack. Many ingress controllers don't support all the protocols in the OSI stack; in those cases an L4 load balancer is very useful, especially for telecom workloads.
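For example, exposing a UDP workload, something most HTTP-oriented ingress controllers can't do, is just a matter of setting the protocol on the service. A sketch with an illustrative name and SIP-style port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-app        # illustrative name
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: udp-app
  ports:
    - protocol: UDP
      port: 5060       # e.g. SIP signalling (illustrative)
      targetPort: 5060
```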

Continue to Part 2.
