Reverse Proxy & Load Balancer

Şafak ÜNEL
3 min read · Oct 13, 2021


Level: Beginner

Reverse Proxy

A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. It accepts a request from a client, forwards it to a server that can fulfill it, and returns the server’s response to the client.

In horizontally scalable systems, there is a recurring problem: the client needs a way to reach multiple instances of an application.

Let’s look at an example. Our application has multiple instances, and any of them can process client requests. This means that a client could potentially reach any of the instances below.

Without Reverse Proxy

This is only possible if clients can use the particular DNS address of each origin server to access the application. But this does not scale. If there were a single application instance serving client requests, the client could use that one address; with multiple machines, however, it is impractical for us to distribute all of these addresses to clients manually, because we want to scale the application and increase the number of instances over time.

In order to solve this problem, we place a new component in between, called a reverse proxy. Any request destined for the servers then goes through the reverse proxy instead.

Reverse Proxy

In this case, clients do not need to know the DNS names of all the servers. It is enough to know the DNS name of the reverse proxy, which forwards each incoming request to the relevant server.
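To make the forwarding step concrete, here is a minimal sketch of a reverse proxy using only the Python standard library. The port numbers and the response body are hypothetical; a real deployment would use a dedicated proxy such as Nginx rather than this toy.

```python
# Minimal reverse proxy sketch (standard library only).
# Ports 9000/9001 and the origin's response are hypothetical examples.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:9001"  # origin server the client never contacts directly


class OriginHandler(BaseHTTPRequestHandler):
    """A stand-in origin server that answers every GET the same way."""

    def do_GET(self):
        body = b"hello from origin"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


class ProxyHandler(BaseHTTPRequestHandler):
    """Accepts the client's request, forwards it upstream, relays the response."""

    def do_GET(self):
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass


origin = HTTPServer(("127.0.0.1", 9001), OriginHandler)
proxy = HTTPServer(("127.0.0.1", 9000), ProxyHandler)
threading.Thread(target=origin.serve_forever, daemon=True).start()
threading.Thread(target=proxy.serve_forever, daemon=True).start()

# The client only knows the proxy's address (port 9000), never the origin's.
with urllib.request.urlopen("http://127.0.0.1:9000/") as resp:
    print(resp.read().decode())  # hello from origin
```

The key point is the last line: the client addresses only the proxy, and the proxy decides which origin server actually fulfills the request.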

Large websites and content delivery networks use reverse proxies, together with other techniques, to balance the load between internal servers.

Make no mistake: a reverse proxy can be useful even in front of a single web server or application server. You can think of the reverse proxy as a website’s “public face.” Its advantages include:

  • Distributing requests across multiple origin servers
  • Protecting against attacks such as DDoS, because attackers can only hit the proxy server, not the application’s origin servers
  • Compressing server responses before returning them to the client, which reduces the bandwidth they require and speeds their transit over the network
  • Terminating SSL/TLS encryption
  • Caching content
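The compression advantage is easy to see in isolation. Below is a small sketch using gzip from the Python standard library on a hypothetical, repetitive HTML payload; real proxies apply the same idea per response when the client advertises gzip support.

```python
# Sketch of the bandwidth saving from compressing a response body.
# The payload is a made-up, highly repetitive HTML fragment.
import gzip

response_body = b"<html>" + b"<p>hello</p>" * 1000 + b"</html>"
compressed = gzip.compress(response_body)

print(len(response_body))  # 12013 bytes uncompressed
print(len(compressed))     # far smaller, since the payload repeats
```

The more repetitive the response (HTML, JSON, CSS), the larger the saving; already-compressed formats such as JPEG gain little.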

Load Balancer

A load balancer distributes incoming client requests among a group of servers, in each case returning the response from the selected server to the appropriate client.

Unlike a reverse proxy, deploying a load balancer only makes sense when you have multiple servers.

As the volume of requests to our application increases, handling it efficiently on a single server becomes difficult. We can solve this problem by scaling our system horizontally, which means adding more machines to our pool of resources.

These servers usually serve the same content, and the load balancer’s task is to distribute the workload among them so as to make the best use of capacity, prevent overload on any single server, and deliver the fastest possible response to the client.
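The simplest distribution strategy is round robin: each new request goes to the next server in the pool, wrapping around at the end. A minimal sketch, with hypothetical server addresses:

```python
# Round-robin distribution: cycle through the server pool endlessly.
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical addresses
rr = cycle(servers)

picks = [next(rr) for _ in range(6)]
print(picks)
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2', '10.0.0.3']
```

Other common strategies weight servers by capacity or pick the server with the fewest active connections, but all of them share this shape: the balancer, not the client, chooses the target.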

At the same time, load balancers run health checks on the servers and send requests only to healthy ones. This improves the user experience and reduces failures.
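Combining the two ideas, a health-check-aware balancer is a round robin that simply skips servers currently marked unhealthy. The class below is a toy sketch under that assumption; the server names and the `mark_down`/`mark_up` hooks are invented for illustration (a real balancer would flip them based on periodic probe results).

```python
# Toy load balancer: round robin over only the servers marked healthy.
class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)  # start by assuming every server is up
        self.i = 0

    def mark_down(self, server):
        # In a real balancer this would be triggered by a failed health probe.
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def pick(self):
        # Advance round-robin, skipping unhealthy servers.
        for _ in range(len(self.servers)):
            server = self.servers[self.i % len(self.servers)]
            self.i += 1
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")


lb = LoadBalancer(["a", "b", "c"])
lb.mark_down("b")  # pretend a health check on "b" just failed
print([lb.pick() for _ in range(4)])  # ['a', 'c', 'a', 'c']
```

Note that traffic keeps flowing to the remaining servers while "b" is down, and resumes to "b" as soon as `mark_up` restores it to the healthy set.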

There are two kinds of load balancers: hardware-based and software-based.

The most popular software-based load balancers include:

  • Nginx
  • Apache
  • HAProxy
