[PoC5] Handling 20k/sec requests on a Netty-based RESTEasy web service proxied through NGINX

A young cop must prevent a bomb exploding aboard a city bus by keeping its speed above 50 mph.

With the last try-out, I was stuck at 9k requests/sec on REST web services proxied through an NGINX installation. And yet a single NGINX installation on its own was able to handle 22k requests/sec. Likewise, when requesting the REST web services hosted on Netty directly, I was able to handle 21k requests/sec on an m4.xlarge EC2 machine.

My configuration should surely be able to do better during the rush hour. And how could one forget the acting of Keanu Reeves? It's obvious that more configuration is needed to become Jack Traven! :)

So the tuning had to be done either on the NGINX side or on the Linux side. Investigating the case, it occurred to me that the connections between NGINX and the upstream server (in our case, the machine running the Netty-based REST web service) needed to be tuned with keep-alive.

But hey, what’s the deal with Keep-alive?

We first need to understand (or refresh) the 3-way handshake mechanism of TCP. The flow of the handshake is as follows:

  1. The client sends a SYN packet.
  2. The server sends back a SYN-ACK packet, which stands for synchronize-acknowledge. It states that the server acknowledged the SYN packet it received.
  3. The client sends an ACK packet back to the server.

Once this flow completes, a TCP connection is established and active on both client and server. When the network latency between the client and server is considered, say 10ms, establishing a TCP connection would take 3 x 10ms = 30ms.

If we take a typical web page, it would probably contain around 30 artifacts, from images to CSS or JS files. So establishing the TCP connections for one single page would take 30 x 30ms = 900ms, almost a second. What if the network latency is around 100ms? In that case we would wait 30 x 300ms = 9 seconds just establishing the connections, and that doesn't even include retrieving the content of the artifacts.
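To make the handshake cost tangible, here is a minimal, self-contained Java sketch (the class and method names are mine, not from the PoC code) that times N fresh TCP connections against a throwaway local server. On loopback the round trips are nearly free, but over a 10ms link each connect would pay the ~30ms described above:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeCost {

    // Times n fresh TCP connections to a throwaway local server and
    // returns the elapsed time in milliseconds. Each new Socket pays
    // a full 3-way handshake before it is usable.
    static long timeFreshConnections(int n) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            // Accept (and immediately discard) connections in the background.
            Thread acceptor = new Thread(() -> {
                try {
                    while (true) {
                        server.accept().close();
                    }
                } catch (IOException ignored) {
                    // server socket closed, stop accepting
                }
            });
            acceptor.setDaemon(true);
            acceptor.start();

            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
                    // connection established and immediately thrown away
                }
            }
            return (System.nanoTime() - start) / 1_000_000;
        }
    }

    public static void main(String[] args) throws IOException {
        // ~30 artifacts on a typical page, one fresh connection each
        System.out.println("30 fresh connections: " + timeFreshConnections(30) + " ms");
    }
}
```

On a real network you would multiply the loopback result by the actual round-trip time; the point is that the cost scales linearly with the number of connections you open.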

Keep-alive, available as a header in HTTP/1.0 and the default behavior since HTTP/1.1, allows multiple HTTP requests to reuse the same TCP connection. A keep-alive connection is also known as a persistent connection. So if I enable it, my REST web service should perform better under heavy load, since no time will be wasted establishing a TCP connection per request.
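To see persistent connections at work from the JVM side, here is a small, self-contained sketch (the class name KeepAliveDemo and the local com.sun.net.httpserver server are my own illustration, not part of the PoC setup). It counts how many distinct client ports, i.e. distinct TCP connections, the server sees for several sequential requests made with HttpURLConnection, which uses keep-alive by default:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeepAliveDemo {

    // Sends the given number of sequential GET requests to a local server
    // and returns how many distinct client-side ports (= TCP connections)
    // the server observed. With keep-alive working, this stays well below
    // the number of requests.
    static int distinctPortsFor(int requests) throws IOException {
        Set<Integer> ports = ConcurrentHashMap.newKeySet();
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            ports.add(exchange.getRemoteAddress().getPort());
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/");
            for (int i = 0; i < requests; i++) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (InputStream in = conn.getInputStream()) {
                    in.readAllBytes(); // fully drain so the connection can be pooled
                }
            }
        } finally {
            server.stop(0);
        }
        return ports.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("5 requests used " + distinctPortsFor(5) + " TCP connection(s)");
    }
}
```

Whether the pool actually reuses a connection depends on the response body being fully drained before the next request, which is why the stream is read to the end above. NGINX-to-upstream keep-alive follows the same idea, just between the proxy and the backend instead of between the JVM client and a server.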

Keep-alive can be enabled using the "Connection: Keep-Alive" HTTP header. In order to enable this in NGINX, we need to do some configuration in the upstream part of nginx.conf (see the code snippet below).

I used the AMI ami-a8221fb5 and the instance type m4.xlarge for creating all 3 instances, given as follows:

The netty-rest-simple application was executed on instance A. NGINX was installed on instance B. The http-requester was executed on instance C. I edited nginx.conf of instance B and added the keep-alive related parts shown in the snippet below.

http {
    upstream backend {
        server <instance-a-internal-ip-address>:8080;
        keepalive 8;                                  # added: idle keep-alive connections per worker
    }

    server {
        listen 80;                                    # port 80, matching the client invocation below

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;                   # added: upstream keep-alive requires HTTP/1.1
            proxy_set_header Connection "Keep-Alive"; # added
        }
    }
}

The number 8 defines the maximum number of idle keep-alive connections to an upstream server that remain open in the cache of each worker process; 8 open connections will be enough for our try-out. Since m4.xlarge has 4 vCPUs and the value of the worker_processes key is auto, we'll have 4 worker processes configured within NGINX. So up to 8 x 4 = 32 idle connections can be kept open, which is enough to handle the load.

There is no need to do any configuration in /etc/sysctl.conf for this PoC.

My client, http-requester, was invoked with the parameters below (exactly the same as in the rest of the PoCs).

java -jar nginx-client-0.0.1-SNAPSHOT.jar 80 1000000 1000000 200

The output numbers were pretty satisfying this time, saturating at around 20k requests/sec.

1596 req/sec
2363 req/sec
2523 req/sec
2653 req/sec
2956 req/sec
3025 req/sec
3361 req/sec
3478 req/sec
3551 req/sec
3823 req/sec
3989 req/sec
3861 req/sec
3964 req/sec
3859 req/sec
3823 req/sec
3931 req/sec
3876 req/sec
3926 req/sec
3740 req/sec
3910 req/sec
4021 req/sec
4226 req/sec
4383 req/sec
4547 req/sec
4746 req/sec
5055 req/sec
5748 req/sec
6097 req/sec
6834 req/sec
7389 req/sec
7445 req/sec
8121 req/sec
8262 req/sec
8714 req/sec
9065 req/sec
9387 req/sec
9706 req/sec
9704 req/sec
8764 req/sec
13310 req/sec
13728 req/sec
14003 req/sec
13799 req/sec
14711 req/sec
15487 req/sec
14645 req/sec
14989 req/sec
15352 req/sec
14808 req/sec
14582 req/sec
14710 req/sec
14461 req/sec
14994 req/sec
15297 req/sec
14910 req/sec
14238 req/sec
15651 req/sec
16742 req/sec
17489 req/sec
18355 req/sec
18846 req/sec
19513 req/sec
20959 req/sec
19577 req/sec
20456 req/sec
20874 req/sec
20869 req/sec
20652 req/sec
