Load Balancer: On K8s

Mansi Dadheech
4 min read · Jun 24, 2020


Suppose we have an operating system running an Apache webserver on it. If a client wants to connect to the webserver, it has to know the server's IP and port.
There is a possibility that one OS can't accommodate thousands of clients, and because of this the clients won't get proper connectivity.
A threshold is a limit we set for a webserver, for example 95% usage of RAM and CPU. For this we need a proper monitoring tool which can monitor whether the OS has reached its threshold or not.
In AWS there is a service of this kind used for monitoring, known as CloudWatch.

As load increases we can launch replicas of that OS.

As we now have 3 OSs, the challenge is that we would have to give the clients three different IPs, which is not feasible. So we never give IPs to the client. Instead we place an intermediate program between the client and the servers which can communicate with the servers on behalf of the client; this is known as a Reverse Proxy.

That program is known as the Load Balancer or FrontEnd Server.
Those operating systems are known as BackEnd Servers.

How does the FrontEnd Server come to know where to send requests?
When we launch a new OS, say with IP2, we have to contact the FrontEnd Server and register IP2 as a BackEnd Server.

Registering manually is known as a Static Entry.
But it is quite tedious, as whenever an OS launches we first have to check its IP and then update the load balancer.

Registering dynamically, i.e. the load balancer comes to know about an OS automatically whenever it launches, is known as Discovery.

EndPoint: It is like one end of a communication channel, i.e. when a client wants to interact with another system, the static IP and port it connects to are known as the endpoint. This is the way the load balancer manages the servers and knows how many backend servers are running.

Usually client requests are sent to the backend servers using a round-robin algorithm.

For EC2, the load balancing service is known as Elastic Load Balancer.
And load balancing in K8s is done through a Service.

K8s Service

In Kubernetes we launch pods, and by using a controller we can relaunch a pod if any failure occurs. But this relaunching can change the pod's IP. We can resolve this issue by using a Service.
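For example, on a cluster that already has pods and a Service running, a quick check would look something like:

    kubectl get pods -o wide    # each pod shows its own IP, which changes when the pod is relaunched
    kubectl get svc             # the Service keeps a stable name and ClusterIP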

There are 3 ways to implement a Service (the field that selects between them is shown in the snippet after this list):

  • ClusterIP: If, inside the cluster, we don't want to depend on changing pod IPs, we connect the client to the Service's endpoint instead; this is known as ClusterIP.
    It is like a load balancer, but it doesn't allow outside connectivity. It is isolated and only supports connectivity inside the cluster.
  • NodePort: It has the same setup as ClusterIP, but it also allows outside connectivity by NATing and PATing (a port on every node is forwarded to the Service).
  • LoadBalancer: It is used when we have a multi-node cluster and want an external load balancer (for example from the cloud provider) in front of the nodes.
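At the manifest level, the only thing that changes between these three is the spec.type field (ClusterIP is the default when the field is omitted):

    spec:
      type: ClusterIP      # or NodePort, or LoadBalancer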

Creating a Service:

To create a Service we write a file named service.yml:
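The original file isn't reproduced here, but a minimal sketch of a service.yml of type LoadBalancer could look like this (the name myweb-lb and the label app: webserver are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: myweb-lb          # placeholder name
    spec:
      type: LoadBalancer      # ask the cluster for an external load balancer
      selector:
        app: webserver        # assumed label; must match the pods to balance across
      ports:
        - port: 80            # port the Service exposes
          targetPort: 80      # port the webserver container listens on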

By applying this file we get a Service of type LoadBalancer:
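Roughly, the commands are (the output and external IP will differ on your cluster):

    kubectl apply -f service.yml
    kubectl get services        # the TYPE column shows LoadBalancer for the new Service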

If the MySQL pod fails and is relaunched, its IP can change. So between the MySQL and WordPress pods we create a Service of ClusterIP type, so that WordPress can always reach the database while outside clients cannot connect to it directly.

Now we create the above setup.
For this we first create the MySQL part (its Service, PVC, and Deployment):
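The original file isn't shown here; a sketch in the style of the standard Kubernetes WordPress/MySQL example (the file and resource names, such as mysql-deployment.yaml, wordpress-mysql, mysql-pv-claim and mysql-pass, and the storage size are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress-mysql
    spec:
      type: ClusterIP              # internal-only, as described above
      selector:
        app: wordpress
        tier: mysql
      ports:
        - port: 3306               # MySQL port, reachable only inside the cluster
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-pv-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi             # placeholder size for the database volume
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress-mysql
    spec:
      selector:
        matchLabels:
          app: wordpress
          tier: mysql
      strategy:
        type: Recreate             # stop the old pod before starting the new one
      template:
        metadata:
          labels:
            app: wordpress
            tier: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:5.7
              env:
                - name: MYSQL_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-pass          # Secret generated by kustomization.yaml (see below)
                      key: password
              ports:
                - containerPort: 3306
              volumeMounts:
                - name: mysql-persistent-storage
                  mountPath: /var/lib/mysql     # the MySQL data lives on the PVC
          volumes:
            - name: mysql-persistent-storage
              persistentVolumeClaim:
                claimName: mysql-pv-claim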

In the above code we are creating a Service and a PVC for the MySQL pod, so that the database behind WordPress stays permanent for the WordPress clients.

Now we have to create the WordPress setup:
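Again a sketch in the same style (the file name wordpress-deployment.yaml and the resource names are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
    spec:
      type: LoadBalancer           # exposes WordPress to outside clients
      selector:
        app: wordpress
        tier: frontend
      ports:
        - port: 80
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wp-pv-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi             # placeholder size for the WordPress files
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress
    spec:
      selector:
        matchLabels:
          app: wordpress
          tier: frontend
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: wordpress
            tier: frontend
        spec:
          containers:
            - name: wordpress
              image: wordpress:5.4-apache
              env:
                - name: WORDPRESS_DB_HOST
                  value: wordpress-mysql        # the ClusterIP Service created above
                - name: WORDPRESS_DB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-pass
                      key: password
              ports:
                - containerPort: 80
              volumeMounts:
                - name: wordpress-persistent-storage
                  mountPath: /var/www/html      # the WordPress files live on the PVC
          volumes:
            - name: wordpress-persistent-storage
              persistentVolumeClaim:
                claimName: wp-pv-claim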

Kustomization: It is a file where we declare the sequence of our resources and the secret keys.

In this file we add the MySQL database password (as a generated Secret) and the sequence in which our resource files are applied.
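A minimal kustomization.yaml in this style (the password literal is a placeholder you replace with your own):

    secretGenerator:
      - name: mysql-pass
        literals:
          - password=YOUR_PASSWORD        # placeholder; use your own MySQL password
    resources:
      - mysql-deployment.yaml             # the MySQL Service, PVC and Deployment above
      - wordpress-deployment.yaml         # the WordPress Service, PVC and Deployment above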

We run this file as:
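Assuming kubectl 1.14+, which supports kustomize natively, something like:

    kubectl apply -k ./          # creates the Secret, then the MySQL and WordPress resources
    kubectl get pods             # check that both pods come up
    kubectl get services         # note the external IP/port of the WordPress Service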

Thanks For Reading and Thanks to Vimal Sir!!
