[PoC3] NGINX in front of an NGINX

Mert Çalışkan
2 min read · Jul 15, 2015


To validate a high-throughput system design, I’m executing a series of PoCs. I already tested NGINX under heavy request load, and it seems fine to use as a load balancer. So the product will probably have NGINX facing all requests and delegating them to an array of servers.

To demonstrate this, I put an NGINX in front of another NGINX and sent requests to the front one with http-requester. I used the AMI ami-a8221fb5 and the m4.xlarge instance type for all 3 instances.

Yeah, I like Keynote and creating simple sketches with it :)

To configure the NGINX on instance B to delegate requests to instance A, edit nginx.conf and add the upstream and proxy_pass parts shown below.

http {
    upstream backend {
        server <instance-a-internal-ip-address>;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

http-requester is invoked with the parameters below (exactly the same as in the rest of the PoCs).

java -jar http-requester-0.0.1.jar 10.0.0.112 80 1000000 1000000 200

The resulting numbers saturate at roughly ~11k req/sec, which is pretty low compared to a standalone NGINX installation placed under the same fire.

6011 req/sec
10723 req/sec
10621 req/sec
10623 req/sec
10565 req/sec
10624 req/sec
10614 req/sec
10621 req/sec
10599 req/sec
10623 req/sec
10606 req/sec
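For reference, averaging the steady-state samples (dropping the 6011 req/sec warm-up reading) puts the sustained rate at about 10.6k req/sec; a quick awk one-liner confirms it:

```shell
# average the steady-state samples (first warm-up reading of 6011 dropped)
printf '%s\n' 10723 10621 10623 10565 10624 10614 10621 10599 10623 10606 \
  | awk '{ sum += $1 } END { printf "avg: %.1f req/sec\n", sum / NR }'
# → avg: 10621.9 req/sec
```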

The CPU was not the bottleneck on either instance A or B; network utilization was probably lower than expected, but I still need to figure out the root cause.
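One knob worth ruling out (an assumption on my part, not something measured in this PoC): out of the box, NGINX proxies each request to the upstream over a fresh HTTP/1.0 connection, so connection setup/teardown between instances B and A can cap throughput well below what either instance serves alone. Enabling keepalive toward the upstream is the usual first tweak:

```
http {
    upstream backend {
        server <instance-a-internal-ip-address>;
        keepalive 64;    # pool of idle connections kept open to the upstream
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
            proxy_set_header Connection "";  # clear the Connection header so it isn't closed per request
        }
    }
}
```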


Mert Çalışkan

Opsgenie Champion at Atlassian. Oracle Java Champion. AnkaraJUG Lead. Author of Beginning Spring & PrimeFaces Cookbook.