Balance is the key!
Amongst everything going on in your life, you might have realized at some point how important balance is. Having balance in life helps us stay healthy, both physically and mentally. No, I am not trying to give a wisdom talk here. I am just paving the path towards introducing a technique called “Load Balancing” that is used extensively in cloud computing.
What is load balancing? Load balancing is the process of distributing network traffic and requests over multiple servers (computing units) in a pool of servers, also called a server farm. I learned about load balancing while researching for a literature survey on the topic. A load balancer does this work for us.
A load balancer can be:
- A physical device running on specialized hardware, or a virtual instance running in software.
- Deployed in application delivery controllers (ADCs) designed to improve application performance.
- A process executing a load balancing algorithm to handle network traffic.
So what makes load balancing so important? Modern high-traffic websites must serve millions of requests from users or clients for text, images, video, or other data in a fast and reliable manner. One approach might be to keep adding servers to the farm, but that alone is not efficient. Here our hero, the load balancer, comes into the picture. It acts as a “traffic cop” sitting in front of your servers, routing incoming requests to the servers best able to handle them, i.e. the ones with the least load on them. This maximizes the speed and capacity utilization of the server farm as a whole.

I did mention that balance helps health: load balancing makes sure no server is overworked, thus ensuring the health of each server. On top of that, the load balancer takes on the responsibility of running frequent health checks on the server farm. Hence load balancing delivers the performance and security necessary for sustaining a complex IT infrastructure.
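To make the “traffic cop” idea concrete, here is a minimal Python sketch of routing each request to the least-loaded healthy server. The server names, the `healthy` flag, and the load counter are purely illustrative assumptions, not any real product's API:

```python
# A toy "traffic cop": send each request to the healthy server
# that currently carries the least load.

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True   # flipped by periodic health checks
        self.load = 0         # e.g. number of in-flight requests

def route(servers):
    """Pick the healthy server with the least current load."""
    candidates = [s for s in servers if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy servers in the farm")
    target = min(candidates, key=lambda s: s.load)
    target.load += 1          # this request now counts against it
    return target

farm = [Server("web1"), Server("web2"), Server("web3")]
farm[1].healthy = False       # a health check marked web2 as down
print(route(farm).name)       # -> web1 (web1 and web3 tie; min picks the first)
print(route(farm).name)       # -> web3 (it now has the least load)
```

In a real balancer the health checks run on a timer and the load metric might be connections, CPU, or response latency, but the routing decision has this same shape.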
Hardware vs Software load balancers
Hardware load balancers are:
- High-performance appliances that are capable of securely processing multiple gigabits of traffic from various types of applications.
- May contain built-in virtualization capabilities, which allow for more flexible multi-tenant architectures and isolation between tenants.
Software load balancers are:
- Capable of replacing load balancing hardware while providing equivalent functionality and greater flexibility.
- May run on common hypervisors, in containers, or as Linux processes with minimal overhead on bare-metal servers.
- Highly configurable for different use cases, which can save space and reduce hardware costs.
Any load balancer will follow a particular algorithm to control the network traffic on the server farm. These load balancing algorithms are of two types:
Static load balancing algorithm:
Looking at the word ‘static’ in the name, we can guess that this type depends on parameters fixed in advance and does not consider the current state of the system. These algorithms require prior information about the system. Here the servers finish their assigned tasks and report back to the distributing node, signaling that they are ready for more tasks.
Static load balancing has the drawback that once a load is assigned to a server, it cannot be transferred to another server. Another major drawback is that the algorithm keeps assigning loads without considering the current state of the servers. Examples of static load balancing algorithms are: the Round Robin algorithm, Threshold algorithm, Central Manager algorithm, and Randomized algorithm.
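To see how simple a static algorithm can be, here is a minimal Python sketch of Round Robin: requests are handed to servers in a fixed cyclic order, with no regard for each server's current load — exactly the drawback noted above. The server names are illustrative:

```python
# Static Round Robin: cycle through the farm in a fixed order,
# ignoring how busy each server currently is.
from itertools import cycle

servers = ["web1", "web2", "web3"]
next_server = cycle(servers).__next__

# Eight incoming requests walk through the farm in strict rotation.
assignments = [next_server() for _ in range(8)]
print(assignments)
# -> ['web1', 'web2', 'web3', 'web1', 'web2', 'web3', 'web1', 'web2']
```

If web2 happens to be swamped, Round Robin sends it every third request anyway — which is why dynamic algorithms exist.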
Dynamic load-balancing algorithm:
Contrary to static load balancing algorithms, dynamic load balancing algorithms closely monitor the traffic in the network and assign load to the servers accordingly. The word ‘dynamic’ implies that the output depends on the current state of the system. Such an algorithm comprises three strategies:
- Transfer strategy- It decides which tasks are eligible for transferring to other nodes for efficient processing.
- Location strategy- Maintains a queue data structure to keep track of the servers with no or few tasks. This strategy allocates such servers to execute a transferred task.
- Information strategy- Acts as the information center of the whole network or server farm, and is hence responsible for providing the location and transfer strategies at each server with the information they need.
Such an algorithm can be executed under three forms of control: Centralized, Distributed, and Semi-Distributed. In centralized control, one node holds the right to distribute the load among the other nodes of the network. In distributed control, the responsibility for load allocation lies with every node in the network. In semi-distributed control, the nodes are divided into clusters, and each cluster has a central node that monitors load distribution within it.
Some examples of such algorithms are: the Central Queue algorithm, Local Queue algorithm, and Least Connection algorithm.
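As an example of a dynamic algorithm, here is a minimal Python sketch of Least Connection: each new request goes to the server with the fewest active connections, and the counts change as connections open and close. The server names and counters are illustrative, not tied to any real load balancer:

```python
# Dynamic Least Connection: route to whichever server currently
# has the fewest active connections.

connections = {"web1": 0, "web2": 0, "web3": 0}

def assign(conns):
    """Route a new connection to the least-connected server."""
    target = min(conns, key=conns.get)
    conns[target] += 1
    return target

def release(conns, server):
    """A connection on `server` has finished."""
    conns[server] -= 1

print(assign(connections))    # -> web1 (all tied at 0; min picks the first)
print(assign(connections))    # -> web2
release(connections, "web1")  # web1 finishes its request early
print(assign(connections))    # -> web1 again: it now has the fewest
```

Unlike Round Robin, the decision here reacts to the current state of the farm, which is exactly what makes the algorithm “dynamic”.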
In our beautifully structured 7-layer OSI model, load balancing usually happens at layer 4 or layer 7.
Layer 4 (L4): An L4 load balancer works at the transport layer. It directs traffic based on data from network- and transport-layer protocols, such as the IP address and TCP or UDP ports, and makes routing decisions from this data alone. It can also perform NAT (Network Address Translation).
Layer 7 (L7): This load balancer acts at the highest layer, the application layer, and adds content switching to load balancing. This allows routing decisions based on parameters like the HTTP header, the uniform resource identifier (URI), the SSL session ID, and HTML form data.
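To illustrate content switching, here is a minimal Python sketch that picks a server pool based on the request's URL path prefix — one of the L7 parameters mentioned above. The paths and pool names are hypothetical:

```python
# L7 content switching: inspect the request (here, just its URL path)
# and choose a server pool accordingly.

POOLS = {
    "/images": ["img1", "img2"],   # static-content servers
    "/api":    ["api1", "api2"],   # application servers
}
DEFAULT_POOL = ["web1", "web2"]    # everything else

def route_l7(path):
    """Return the server pool for a request, based on its path prefix."""
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route_l7("/images/logo.png"))  # -> ['img1', 'img2']
print(route_l7("/index.html"))       # -> ['web1', 'web2']
```

An L4 balancer could never make this distinction, since it only sees IP addresses and ports; inspecting the URL requires terminating the connection at the application layer.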
GSLB (Global Server Load Balancing): It extends L4 and L7 capabilities to servers in different geographic locations, enabling large volumes of traffic to be distributed efficiently across them.
Yes, now we know why balancing is so supreme. It’s just essential everywhere be it toggling between work and relaxation, learning to ride a bicycle, or just getting relevant search information from the World Wide Web.