Rohit Gupta
Jul 31 · 5 min read

For a startup, before scaling infrastructure either horizontally or vertically, we need to make sure that current resources are being used properly and that there is no performance bottleneck caused by the application configuration. The primary aim of the engineering team is to ensure minimal resources are used in the day-to-day running of any system we design and deploy.

We faced a similar issue: our deployed system served over a million users on a daily basis, with spurts of users arriving sporadically. This meant that simply deploying more servers or scaling the existing ones wouldn’t be an optimal solution.

This blog post is about tuning Nginx to improve performance, i.e. to increase the RPS (requests per second) of an HTTP API. Here, I talk about the optimizations we made to the deployed system so that it could handle tens of thousands of requests per second without incurring a huge cost overhead.


Scenario: We needed to run an HTTP API (written in Python using Flask) proxy-passed through Nginx, and high throughput was needed. Also, the content of the API changes only once a day.

optimization
/ɒptɪmʌɪˈzeɪʃ(ə)n/
noun
the action of making the best or most effective use of a situation or resource.

We used supervisor to run the WSGI server with the following configuration:

  • Gunicorn with Meinheld workers
  • Number of Workers: CPU Count * 2 + 1
  • Bind to a Unix socket instead of an IP; it’s slightly faster.

So finally, the supervisor command looked something like this:

gunicorn api:app --workers=5 --worker-class=meinheld.gmeinheld.MeinheldWorker --bind=unix:api.sock
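For completeness, a supervisor program entry along these lines would run that command (the program name and directory here are illustrative, not our actual values):

```ini
[program:api]
command=gunicorn api:app --workers=5 --worker-class=meinheld.gmeinheld.MeinheldWorker --bind=unix:api.sock
directory=/path/to/app        ; working directory containing api.py
autostart=true                ; start when supervisord starts
autorestart=true              ; restart the workers if they die
stopsignal=TERM               ; let Gunicorn shut down gracefully
```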

We iterated on the Nginx configuration and tested what worked best for us.

To benchmark the API, we used wrk with the following command:

wrk -t20 -c200 -d20s http://api.endpoint/resource

Default configuration

First, we load tested the API without any changes and got the following stats:

Running 20s test @ http://api.endpoint/resource
  20 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   192.48ms  274.78ms   1.97s    87.18%
    Req/Sec    85.57     29.20    202.00    72.83%
  33329 requests in 20.03s, 29.59MB read
  Socket errors: connect 0, read 0, write 0, timeout 85
Requests/sec:   1663.71
Transfer/sec:      1.48MB

Default config update

Let’s update the default config of Nginx, i.e. nginx.conf, located at /etc/nginx/nginx.conf

nginx.conf (/etc/nginx/nginx.conf)
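The original file isn’t reproduced here; a tuned nginx.conf along these lines reflects the kind of changes described (the specific values are common tuning defaults, not necessarily the exact ones we used):

```nginx
user www-data;
worker_processes auto;          # spawn one worker per CPU core
worker_rlimit_nofile 65535;     # raise the per-worker open-file limit

events {
    worker_connections 65535;   # max simultaneous connections per worker
    multi_accept on;            # accept as many new connections as possible at once
    use epoll;                  # efficient event notification on Linux
}

http {
    sendfile on;                # zero-copy file transfers
    tcp_nopush on;              # send headers and file start in one packet
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 1000;    # reuse each keepalive connection for many requests

    access_log /var/log/nginx/access.log combined;
    error_log  /var/log/nginx/error.log;

    gzip on;                    # compress responses

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```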

After making changes to the Nginx configuration, we should run a config test:

sudo nginx -t

If the test is successful, we are good to restart Nginx to apply the changes:

sudo service nginx restart

With these configuration changes, we load tested the API and got the following result:

Running 20s test @ http://api.endpoint/resource
  20 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   145.80ms  237.97ms   1.95s    89.51%
    Req/Sec   107.99     41.34    202.00    66.09%
  42898 requests in 20.03s, 39.03MB read
  Socket errors: connect 0, read 0, write 0, timeout 46
  Non-2xx or 3xx responses: 2
Requests/sec:   2141.48
Transfer/sec:      1.95MB

These configurations reduced timeouts and increased the RPS (requests per second), but not by much.

Adding Nginx cache

Since, in our case, the content of the endpoint is refreshed only once a day, this looks like a good situation to cache the API response.

But, with caching comes its invalidation… one of the two hard things.

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

We opted for the minimal solution: purging the cache directory with a cronjob after the content is updated in the downstream system.
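An /etc/crontab entry along these lines does the purge (the schedule here is illustrative; it should fire shortly after the downstream update):

```crontab
# purge the Nginx cache every day at 00:05, after the daily content update
5 0 * * * root rm -rf /data/nginx/cache/*
```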

From here on, all the heavy lifting will be done by Nginx, so now we gotta make sure Nginx is supercharged…

To add caching in Nginx, we need to add a few directives to our Nginx app’s configuration file.
Before that, we need to create a directory to store the cache data:

sudo mkdir -p /data/nginx/cache

Changes in the Nginx app’s configuration:

Cache proxy pass (nginx’s app conf)
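The exact file isn’t reproduced here; a proxy-pass server block with caching enabled would look roughly like this (the zone name, sizes, validity period, and socket path are assumptions for illustration):

```nginx
# cache storage: 10 MB of keys in shared memory, entries valid up to a day
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=api_cache:10m
                 max_size=200m inactive=1d use_temp_path=off;

upstream app {
    server unix:/path/to/api.sock;   # Gunicorn bound to the Unix socket
}

server {
    listen 80;
    server_name api.endpoint;

    location / {
        proxy_cache api_cache;                  # use the zone defined above
        proxy_cache_valid 200 1d;               # cache successful responses for a day
        proxy_cache_use_stale error timeout updating;  # serve stale on upstream trouble
        add_header X-Cache-Status $upstream_cache_status;  # handy for debugging HIT/MISS
        proxy_pass http://app;
    }
}
```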

With this change in configuration, we load tested the API and got the following result:

Running 20s test @ http://api.endpoint/resource
  20 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.88ms    5.44ms   88.91ms   81.36%
    Req/Sec     1.59k    500.04     2.95k    62.50%
  634405 requests in 20.06s, 589.86MB read
Requests/sec:  31624.93
Transfer/sec:     29.40MB

So we got nearly a 19x performance boost just by adding caching.

Nginx cache in RAM

Let’s go one step further. Currently, our cache data is stored on disk; what if we store it in RAM instead? In our case, the response data is limited and doesn’t have a large response size.

So first, we need to create a directory where the RAM cache will be mounted:

sudo mkdir -p /data/nginx/ramcache

To mount the created directory in RAM with tmpfs, use the following command:

sudo mount -t tmpfs -o size=256M tmpfs /data/nginx/ramcache

This mounts /data/nginx/ramcache in RAM, allocating 256 MB to it.

If you want to unmount it, simply run:

sudo umount /data/nginx/ramcache

And to re-create the cache directory in RAM automatically after a reboot, we need to update /etc/fstab. Add the following line to it:

tmpfs /data/nginx/ramcache tmpfs defaults,size=256M 0 0

Note: we also need to update the value of proxy_cache_path to point at the ramcache directory (/data/nginx/ramcache).
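Concretely, only the path in the directive changes; something like this (the other parameters are assumptions carried over from a typical cache setup):

```nginx
proxy_cache_path /data/nginx/ramcache levels=1:2 keys_zone=api_cache:10m
                 max_size=200m inactive=1d use_temp_path=off;
```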
After updating the configuration, we load tested the API again and got the following result:

Running 20s test @ http://api.endpoint/resource
  20 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.57ms    5.69ms  277.76ms   92.94%
    Req/Sec     1.98k    403.94     4.55k    71.77%
  789306 requests in 20.04s, 733.89MB read
Requests/sec:  39387.13
Transfer/sec:     36.62MB

Storing the cache in RAM gave a significant improvement: roughly 24x over the initial stage.

Buffered access logging

We store the access log of the proxy-passed application. Instead of writing every entry to disk immediately, we can keep log entries in a buffer and write them to disk only:

  • if the next log line does not fit into the buffer
  • if the buffered data is older than specified by the flush parameter

This reduces the frequent disk writes that otherwise happen on every request. To enable it, we simply add the buffer and flush parameters with appropriate values to the access_log directive:

Buffer log before writing to disk
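The resulting directive, with a 256 KB buffer and a 10-second flush interval (the log path here is illustrative):

```nginx
# write access log entries to a 256 KB buffer; flush to disk when it
# fills up or when the oldest buffered entry is 10 seconds old
access_log /var/log/nginx/access.log combined buffer=256k flush=10s;
```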

So, as per the above config, access logs will initially be written to the buffer and flushed to disk only when the buffer size reaches 256 KB or the buffered data becomes older than 10 seconds.
Note: Here, combined is the log_format name.

After performing load testing again, we got the following result:

Running 20s test @ http://api.endpoint/resource
  20 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.21ms    3.19ms   84.83ms   83.84%
    Req/Sec     2.53k    379.87     6.02k    77.05%
  1009771 requests in 20.03s, 849.31MB read
Requests/sec:  50413.44
Transfer/sec:     42.40MB

This significantly increased the RPS, to about 30x the initial figure.

Conclusion

In this blog post, we discussed the process of optimizing the Nginx configuration to improve RPS. RPS increased from 1663 to ~50413 (about a 30x improvement), which gave us the high throughput we needed.
Tuning default settings can really improve the performance of a system. We’ll end this post with a quote:

Make It Work. Make It Right. Make It Fast.

— Kent Beck

Resources

Staqu Dev Logs

While we work on cutting edge products for enterprises and audience at large, this publication sheds some insights into the rapid prototyping, code marathons and optimizations that Staqu’s engineers constantly churn out!
