Run your application on multiple servers and rely on an Nginx load balancer

ali jalali
3 min read · May 3, 2020


As you know, one of the top trends in software engineering is application scaling.

scale application on the server

Imagine you have an application that responds to 1,000 requests per second. What will you do if the number of requests grows to 10,000? Is it efficient to keep adding resources to the same server?

In software architecture, we have two types of scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more resources (CPU, RAM) to the server so it can respond to more requests. Horizontal scaling means adding more servers with ordinary resources and deploying the application on all of them to increase its capacity.

horizontal and vertical scaling comparison

So which approach is better? Is adding resources to a single server really the best solution?

Nginx is the solution

Nginx is a powerful web server that works with many types of applications and offers many features for deploying your projects. It can be configured as a reverse proxy or an HTTP cache, but one of the most important jobs Nginx handles is load balancing.

So what is load balancing, and how can it solve our scaling problem?

Load balancer

If we decide to scale our application, adding new resources like RAM or CPU to one machine is not efficient. Instead of this difficult and expensive approach, we add simple new servers and scale the application horizontally.

How can Nginx help us?

Assume you have an application. First, you run copies of the app on different servers; each copy has its own IP address and port to connect to.

Now Nginx is needed to distribute the incoming requests. It can run on a separate server, or on one of the same servers using a different port.

How to configure my Nginx server?

Server configuration for load balancing:

In this configuration, our application is running on 3 different servers, each with its own IP address and port.

  • All of these directives must be declared inside the http configuration block
upstream LB {
    server 86.96.26.3:5000;
    server 103.203.65.2:5000;
    server 27.50.18.42:5000;
}

Proxy setting:

server {
    listen 8080;
    location / {
        proxy_pass http://LB;
    }
}
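Putting the two pieces together, a minimal standalone nginx.conf could look like the sketch below. The upstream name LB, the IPs, and the ports are the ones from the example above; the empty events block is required for a complete Nginx configuration file:

```nginx
events {}

http {
    # Pool of application servers to balance across
    upstream LB {
        server 86.96.26.3:5000;
        server 103.203.65.2:5000;
        server 27.50.18.42:5000;
    }

    # Entry point: clients connect here, Nginx forwards to the pool
    server {
        listen 8080;
        location / {
            proxy_pass http://LB;
        }
    }
}
```

After reloading Nginx (nginx -s reload), every request to port 8080 on the proxy host is forwarded to one of the three upstream servers.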

Weighted servers

So if you have 3 servers and one of them has more resources than the others, is it okay to distribute requests equally?

In this situation, we should distribute requests unevenly across the servers by declaring the weight field.

upstream LB {
    server 86.96.26.3:5000 weight=3;
    server 103.203.65.2:5000;
    server 27.50.18.42:5000;
}

With this configuration, out of every 5 requests that reach the proxy server, 3 go to the first server and one goes to each of the other two servers.

Types of load balancing in the Nginx configuration:

1- round-robin: The default load-balancing method. Requests are distributed sequentially across the servers: with 3 servers, the first request goes to the first server, the second to the second server, and so on.

2- least_conn: This method makes the proxy server send each request to the server that currently has the fewest active connections. It is an efficient way to balance load without defining weights.
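As a sketch, switching to this method only takes one extra directive at the top of the upstream block (same example IPs as above):

```nginx
upstream LB {
    least_conn;  # pick the server with the fewest active connections
    server 86.96.26.3:5000;
    server 103.203.65.2:5000;
    server 27.50.18.42:5000;
}
```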

3- ip_hash: This method hashes the client's IP address so that each client is always sent to the same server. It is useful for stateful applications such as socket servers, where each client has its own connection and must keep working with the same server until it disconnects.
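It is enabled the same way, with a single directive inside the upstream block (again using the example IPs):

```nginx
upstream LB {
    ip_hash;  # same client IP -> same upstream server
    server 86.96.26.3:5000;
    server 103.203.65.2:5000;
    server 27.50.18.42:5000;
}
```

Note that weight and these method directives all live inside the same upstream block, so you can combine ip_hash or least_conn with weighted servers if needed.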

I hope this article was useful.

Thanks for reading.


ali jalali

Software engineer and experienced web developer. Expert in NoSQL databases and Linux servers.