Traffic Shaping with Nginx Rate Limiting

If you run a website, brute-force login attacks are a constant threat: an attacker repeatedly tries different username/password combinations until one of them works. A login endpoint that accepts requests at an unlimited rate is an easy target, so it is worth putting a defence in place.

There are several ways to harden a login endpoint. One of them is rate limiting, with Nginx acting as a reverse proxy.

Rate limiting slows down the flow of incoming requests and can reject requests outright beyond a specific threshold. Putting Nginx in front of your application as a reverse proxy is all you need to apply it.

An example of a typical Nginx reverse proxy config:

upstream testapp {
    server 127.0.0.1:9000;
}

server {
    ...

    location /login/ {
        proxy_pass http://testapp;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

To enable rate limiting, add the following line inside the http {} block of your config file (limit_req_zone is only valid in the http context):

limit_req_zone $binary_remote_addr zone=login:5m rate=10r/s;

This creates a shared-memory zone called “login” that keeps per-IP-address state for clients hitting the rate-limited URLs. 5 MB (5m) is enough for roughly 80,000 client states (about 40,000 on platforms where a state takes 128 bytes rather than 64). Requests are limited to 10 per second (10r/s).
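A quick sanity check of the zone size, using the nginx documentation's figure of about 64 bytes per stored state (128 bytes on some platforms):

```python
# zone=login:5m reserves 5 MB of shared memory for client state.
zone_bytes = 5 * 1024 * 1024

# Per the nginx docs, one state occupies about 64 bytes (128 on some platforms).
for state_size in (64, 128):
    print(f"{state_size}-byte states: ~{zone_bytes // state_size}")
```

That works out to roughly 80,000 (64-byte) or 40,000 (128-byte) tracked clients before Nginx starts evicting the oldest entries.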

Note: the rate must be an integer. To set the limit to half a request per second, you’d use 30r/m (30 requests per minute).
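For example, a zone that allows one request every two seconds could be declared like this (the zone name “trickle” is made up for illustration):

```nginx
limit_req_zone $binary_remote_addr zone=trickle:1m rate=30r/m;
```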

To put this limit to work, we use the limit_req directive. It can appear in the http {}, server {}, or location {} context, depending on the level at which you want to apply the limit.

The example below applies the limit at the location level:

location /login/ {
    limit_req zone=login burst=5;
    proxy_pass http://testapp;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Here we’re rate limiting the /login/ URL.

You can think of burst as a short queue. A request that exceeds the rate limit is not refused immediately; it is delayed and served at the configured rate. Only when more requests are waiting than the burst parameter allows does Nginx start rejecting them with a 503 error.
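The delay/reject behaviour can be sketched with a small simulation. This is plain Python mimicking the leaky-bucket accounting, not Nginx's actual source code; the constants mirror the article's rate=10r/s and burst=5:

```python
# Simplified model of Nginx's limit_req leaky-bucket accounting.
# "excess" counts how far a client is ahead of its allowed rate.

def classify(arrival_times, rate, burst):
    """Return a verdict per request: 'ok', 'delayed', or 'rejected'."""
    last_t, excess = None, 0.0
    verdicts = []
    for t in arrival_times:
        if last_t is not None:
            # The bucket drains at `rate` requests per second.
            excess = max(excess - rate * (t - last_t), 0.0)
        last_t = t
        if excess + 1 > burst + 1:        # queue (burst) already full
            verdicts.append("rejected")    # Nginx answers 503 here
            continue
        excess += 1
        verdicts.append("ok" if excess <= 1 else "delayed")
    return verdicts

# 11 requests arriving in the same instant, with rate=10r/s and burst=5:
print(classify([0.0] * 11, rate=10, burst=5))
```

One request is served immediately, five are queued (delayed), and the rest are rejected with 503 — which is exactly the behaviour described above.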

If you don’t want excess requests to be delayed (i.e., serve requests within the burst immediately and answer anything beyond it with a 503 straight away), add the nodelay option:

location /login/ {
    limit_req zone=login burst=5 nodelay;
    proxy_pass http://testapp;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}

The following configuration limits the request rate per client IP address and, at the same time, the overall request rate of the virtual server:

limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;
limit_req_zone $server_name zone=perserver:10m rate=10r/s;
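Declaring the zones alone does nothing; each zone also has to be referenced with a limit_req directive, for example in the server block (the burst values here are illustrative):

```nginx
server {
    ...
    limit_req zone=perip burst=5 nodelay;
    limit_req zone=perserver burst=10;
}
```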

Reload Nginx with your new config and you’re good to go. It’s worth validating the configuration first with nginx -t:

nginx -t && /etc/init.d/nginx reload

(On systemd-based distributions, systemctl reload nginx does the same.)

To verify it works, fire a burst of requests at the rate-limited URL and print the HTTP status code of each response (replace the hostname with your own server):

for i in {0..10}; do
curl -k -s -o /dev/null -w "%{http_code}\n" https://yourserver.example.com/login/
done

You should see 200 responses for the requests that fit within the rate and burst, and 503 responses once the queued requests exceed the burst value.

For more accurate measurements, you could use a performance-testing tool like JMeter to fire a known number of requests per second and capture the results.

Nginx lets you limit requests and shape traffic at various levels: server-level rate limits, per-location rate limits, global rate limits, rate limits based on request header values, and more.

Keep a watch on the blog for more posts in the series:
* server-level rate limits (limit requests per host)
* location-wise rate limits (limit requests per path) and global rate limits (limits for the whole server)
* applying rate limits based on request header values
* using JMeter to analyse and capture the results of rate limits
