[PoC1] NGINX under attack with heavy HTTP requests

Mert Çalışkan
3 min read · Jul 15, 2015


A US Army officer, despondent about a deadly mistake he made, investigates a female chopper commander’s worthiness for the Medal of Honor.

When I look back, I see it's been almost 20 years since I watched this title at the theater. Wow, time flies… But one thing never changes: you need to show courage under fire! :) Now it's NGINX's turn to do the same.

To get my high-throughput system design validated, the one that should handle 30k HTTP requests/sec (REST services, to be precise), I've started executing a series of PoCs on Amazon. There will be a wide variety of implementations; I'll start from dead-simple ones and work up to sophisticated framework integrations, so the result will hopefully be a series of posts.

To test the high load, I thought I should start by serving a simple HTML file, say 20k times per second. So this is numbered as PoC 1. It uses an NGINX installation that serves its default bundled index.html. The reason I'm using NGINX is that I chose it as the load balancer in the final product, and I can say it's pretty fast even with the default configuration. To get NGINX installed on your EC2 instance, you can follow the steps stated here. I'll stick with AMI id ami-a8221fb5 on an m4.xlarge machine, as given in the installation instructions.

To generate the high-load HTTP requests, I've implemented a custom http-requester. It's a Java client, and its build contains the maven-shade-plugin, so just invoke the clean install goals to create yourself a fat jar. Its usage is as follows:
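For reference, wiring the shade plugin into a pom.xml looks roughly like this. This is a generic sketch, not copied from the project; the plugin version and the com.example.HttpRequester main class are placeholders of my own:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- Make the fat jar runnable with java -jar by setting Main-Class -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.HttpRequester</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this bound to the package phase, `mvn clean install` produces the fat jar with all dependencies bundled.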

java -jar http-requester-0.0.1.jar <ip-address> <port> <rate-limit> <queue-size> <thread-count>

where ip-address and port are the values of the NGINX server. rate-limit caps the number of requests sent out per second; you can say, for instance, do not exceed 20k requests/sec. Guava is used for its implementation. queue-size states the size of the queue that the threads will consume from. thread-count defines how many threads will generate the requests.
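These parameters map onto a fairly standard rate-limited worker pool. Here is a minimal sketch of that core loop; the class and member names are mine, not the project's, and where the real client uses Guava's RateLimiter I've substituted a stdlib-only token bucket so the snippet stays self-contained:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: the real http-requester uses Guava's RateLimiter instead
// of this hand-rolled Semaphore-based token bucket.
class LoadGenerator {
    private final Semaphore permits = new Semaphore(0);
    private final ScheduledExecutorService refiller =
            Executors.newSingleThreadScheduledExecutor();
    private final ExecutorService workers;
    private final BlockingQueue<Runnable> queue;   // <queue-size>
    final AtomicLong sent = new AtomicLong();      // for req/sec reporting

    LoadGenerator(int rateLimit, int queueSize, int threadCount) {
        this.queue = new ArrayBlockingQueue<>(queueSize);
        this.workers = Executors.newFixedThreadPool(threadCount);
        // Release <rate-limit> permits per second, in 100 ms slices,
        // so the workers can never exceed the requested rate for long.
        refiller.scheduleAtFixedRate(
                () -> permits.release(Math.max(1, rateLimit / 10)),
                0, 100, TimeUnit.MILLISECONDS);
        // <thread-count> workers consume tasks from the queue.
        for (int i = 0; i < threadCount; i++) {
            workers.submit(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        Runnable request = queue.take(); // pull from queue
                        permits.acquire();               // respect rate limit
                        request.run();                   // fire the request
                        sent.incrementAndGet();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    void submit(Runnable request) throws InterruptedException {
        queue.put(request);
    }

    void shutdown() {
        refiller.shutdownNow();
        workers.shutdownNow();
    }
}
```

In the actual client, each Runnable would issue an HTTP GET against ip-address:port, and a reporter thread would print the per-second delta of the sent counter, which is what produces the req/sec lines shown below.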

After bundling the http-requester, I installed it on the NGINX server itself. With the parameters given below, I got some promising results.

java -jar http-requester-0.0.1.jar localhost 80 1000000 1000000 200

The output of the http-requester was:

5979 req/sec
14182 req/sec
18744 req/sec
27450 req/sec
39255 req/sec
39320 req/sec
39345 req/sec
38269 req/sec
38801 req/sec
40115 req/sec
38701 req/sec
40216 req/sec
40730 req/sec
38581 req/sec
37245 req/sec

So I almost reached 40k requests/sec on localhost, which is cool! You can see that the number saturated at some point, which is an expected result when measuring throughput.

My second try-out was to install the http-requester on another EC2 instance and see what network utilization would bring. I used the same parameters, and the results were as follows:

5824 req/sec
14935 req/sec
36618 req/sec
36537 req/sec
36836 req/sec
36385 req/sec
36858 req/sec
35467 req/sec
22570 req/sec
22597 req/sec
22599 req/sec
22591 req/sec

Looks like the numbers saturated at 22k/sec. It feels like somebody is slapping me in the face, because there is a drop after a constant 36k/sec. I couldn't find the exact reason for this, but I suspect that Amazon doesn't always give you what's promised.


Mert Çalışkan

Opsgenie Champion at Atlassian. Oracle Java Champion. AnkaraJUG Lead. Author of Beginning Spring and PrimeFaces Cookbook.