HAProxy — Smart way for load balancing in Kubernetes

Sujit Thombare
5 min read · May 11, 2022


The business continues to demand highly available, scalable new services, which we need to be able to release very quickly. Kubernetes helps meet these expectations with a container orchestration, management, and scaling system for deploying microservices and micro-frontend applications.

Why Kubernetes?

Docker containers provide the best way to package your microservices and deploy them to production servers, but every application has lots of microservices, and managing each microservice (container) for scaling is a challenging task. Here Kubernetes comes to the rescue with:

  1. Container and Storage orchestration
  2. Auto Scaling
  3. Self-healing mechanism on failure
  4. Secrets and configuration management
  5. Load balancer

Here, I want to elaborate on how Kubernetes handles load balancing and on a way to implement a smarter load balancer with HAProxy.

Load Balancer
Load balancing is a core networking technique used to distribute traffic efficiently among multiple backend services. Load balancers improve application availability and responsiveness and prevent server overload.


There are two main approaches for load balancing in Kubernetes:

A. NodePort:
NodePort exposes a Kubernetes service on the external network by opening a port on each node and mapping it to an internal port. Internally, NodePort creates a ClusterIP service with an internal port so the service can be reached from within the cluster, and it additionally exposes an external port mapped to that internal port:

NodePort: layer-4 load balancing

The YAML file for a NodePort service:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service-name
spec:
  type: NodePort
  selector:
    app: service-app-name
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32007

A NodePort service provides layer-4 routing: it only reads the IP address and port and, based on the routing table, redirects traffic to the respective Kubernetes pods. However, almost every application demands a smarter way of load balancing by URL or subdomain, for example routing /api requests to the microservices and /web-app requests to the web application pods. This need is addressed by an Ingress controller.

B. Ingress Controller

An Ingress controller is the smarter way to handle load balancing in a cluster. It works at layer 7, so it can route requests based on the requested URL. The Ingress controller sits in front of the other services and acts as a smart router for the application cluster.

There are many providers of Ingress controllers for Kubernetes clusters. Here I want to explain how to install an Ingress controller backed by HAProxy.

HAProxy Server (High Availability Proxy Server)

Load Balancing with HAProxy Server

HAProxy is a popular open-source load balancer and reverse proxy. Its most common advantage is improving performance and reliability by distributing traffic across application servers.

Install Ingress Controller with HAProxy Server

Prerequisites
1. Kubernetes Cluster
2. Docker environment

I have used the Ubuntu operating system for the installation steps that follow.

Step 1 — Install the Helm package manager
We can install Helm using the script below. Refer to the official documentation for more details.

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Step 2 — Install HAProxy Server

Install the HAProxy ingress controller using the Helm package manager. Below are the commands. First, add the chart repository:

helm repo add haproxytech https://haproxytech.github.io/helm-charts

Refresh the list of charts:

helm repo update

Now install the ingress controller using Helm:

helm install myingress-controller haproxytech/kubernetes-ingress

You can verify the installation with:

kubectl get services

In the output, the ingress controller service exposes three ports: 80 for HTTP, 443 for HTTPS, and 1024 for the stats dashboard, which gives an overview of sessions, pods, and cluster monitoring. Because the service is of type NodePort, each of these is mapped to a high external port (the values below are from my installation and will differ on yours). You can use the respective port with the external public IP (basically the public IP of the machine where you are executing the scripts), like:

HTTP URL — http://<external IP>:30996
HTTPS URL — https://<external IP>:32627
Dashboard URL — http://<external IP>:30308
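These NodePort numbers are assigned randomly by Kubernetes, so they will differ on every installation. If you prefer stable, predictable ports, the Helm chart can pin them through a values file. Below is a minimal sketch, assuming the haproxytech chart exposes controller.service.nodePorts keys; verify the exact key names against the chart's own values.yaml for your chart version.

```yaml
# values.yaml (key names assumed; check the chart's values.yaml)
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080   # fixed NodePort for HTTP traffic
      https: 30443  # fixed NodePort for HTTPS traffic
      stat: 31024   # fixed NodePort for the stats dashboard
```

Then pass the file during installation: helm install myingress-controller haproxytech/kubernetes-ingress -f values.yaml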

Dashboard

Step 3 — Deploy services with ClusterIP
As we don't want to expose the service ports to the outside world, we deploy the services as ClusterIP services. Below are the YAML files for the services.

# service1.yml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro1-deployment
  labels:
    app: micro1-hapi-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: micro1-hapi-app
  template:
    metadata:
      labels:
        app: micro1-hapi-app
    spec:
      containers:
        - name: micro1-hapi-app
          image: <microservice-docker-image-name>
          ports:
            - containerPort: 8080
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: lb-service-1
spec:
  selector:
    app: micro1-hapi-app
  ports:
    - port: 8080

Create the second service YAML file as below:

# service2.yml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro2-deployment
  labels:
    app: micro2-hapi-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: micro2-hapi-app
  template:
    metadata:
      labels:
        app: micro2-hapi-app
    spec:
      containers:
        - name: micro2-hapi-app
          image: <microservice-docker-image-name>
          ports:
            - containerPort: 8080
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: lb-service-2
spec:
  selector:
    app: micro2-hapi-app
  ports:
    - port: 8080

Now deploy both services to the Kubernetes cluster:

kubectl apply -f service1.yml
kubectl apply -f service2.yml

Step 4 — Create an Ingress for the above services
Now we need to create an Ingress resource for the above two services. It holds the configuration where we specify each route and its respective backend service. Create the Ingress below and deploy it to the Kubernetes cluster.

# ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-controller
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
    - http:
        paths:
          - path: /v1/api
            pathType: Prefix
            backend:
              service:
                name: lb-service-1
                port:
                  number: 8080
          - path: /web-app
            pathType: Prefix
            backend:
              service:
                name: lb-service-2
                port:
                  number: 8080

Apply the Ingress with the command below:

kubectl apply -f ingress.yml

We have now successfully deployed the HAProxy ingress controller on the Kubernetes cluster. We can check the output on the aforementioned external IP and port numbers, as below:

Backend service on the /v1/api route
Frontend service on the /web-app route

Conclusions

We have discussed only the basic setup and configuration of HAProxy. It offers many configuration options to manage workloads by path as well as by subdomain. By enabling HAProxy, we have also enhanced the power of the Kubernetes cluster: we no longer need to expose every service with NodePort, since smart load balancing is now managed from within the cluster itself.
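As a concrete illustration of the subdomain routing mentioned above, the Ingress below routes by host name instead of path. It is only a sketch: the host names are hypothetical, and it reuses the lb-service-1 and lb-service-2 services defined earlier.

```yaml
# host-ingress.yml (host names are hypothetical examples)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-based-ingress
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
    - host: api.example.com        # requests to this subdomain...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: lb-service-1 # ...go to the API service
                port:
                  number: 8080
    - host: app.example.com        # web traffic on its own subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: lb-service-2
                port:
                  number: 8080
```

Deploy it with kubectl apply, just like the path-based Ingress in Step 4.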
