Juggling HTTP requests with Nginx – Part 2: the location directive

Bipul Jain
3 min read · Dec 1, 2019


Introduction

This post is a continuation of Part 1. Here I will talk about using the error_page directive and a few related ones, and how to ensure that your service keeps returning 200 even when one upstream starts failing.

Assumptions:

This setup has been tested where you run one monolithic app for all your services, but have assigned a dedicated set of machines to each service or set of endpoints.

A simple flow would look like this: all app servers are instances of the same process type, and to isolate services, a dedicated set of nodes handles each particular request type, as in the sketch below.
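
A minimal sketch of that routing, with hypothetical upstream names and server addresses, could look like:

upstream product1 {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

upstream product2 {
    server 10.0.1.1:8080;
    server 10.0.1.2:8080;
}

server {
    listen 80;

    # Each endpoint set goes to its own pool of identical app servers
    location /api/product_1/ { proxy_pass http://product1; }
    location /api/product_2/ { proxy_pass http://product2; }
}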

Scenario:

Now consider that the nodes under upstream product1 start taking too much traffic, or are unable to handle the load, and after some time start rejecting requests.
For example: in Rails with Passenger, requests start failing when the queue overflows; with Jetty/Java, the requests simply time out.
In such a scenario we still wouldn't want the user to see 5xx's, because upstream product2 and product3 are sitting there handling their standard load just fine.

So is it possible to handle these failures by rerouting the requests to the other upstream servers, so that the end result for the user is still the same?
It's similar to how our body starts burning fat to survive when there is not enough energy in the system. It shouldn't go on for long, but it helps the body survive.

Some terminology you need to read up on: the location directive (and named locations), which sets configuration depending on the request URI.

http://nginx.org/en/docs/http/ngx_http_core_module.html#location
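
As a minimal illustration (the upstream name here is hypothetical): the @ prefix marks a named location, which is never matched against incoming URIs and can only be reached through an internal redirect such as error_page or try_files.

location @fallback {
    # Only reachable via internal redirects (error_page, try_files, ...)
    proxy_pass http://backup_pool;
}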

and the proxy_intercept_errors directive, which determines whether proxied responses with codes greater than or equal to 300 should be passed to the client, or be intercepted by Nginx for processing with the error_page directive.

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
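
These two directives work as a pair: without proxy_intercept_errors on, a proxied 5xx is passed straight through to the client and the error_page line never fires. A minimal sketch, assuming a hypothetical upstream named primary_pool:

location /api/ {
    proxy_intercept_errors on;           # let Nginx act on upstream errors
    error_page 500 503 504 @fallback;    # internal redirect on these codes
    proxy_pass http://primary_pool;
}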

Nginx Config:

upstream service_1 {
    server unix:/webapps/portfolio/run/gunicorn.sock;
}

upstream service_2 {
    server unix:/webapps/jagdambay/run/gunicorn.sock;
}

server {
    listen 80;
    server_name juggler2.bipuljain.com;

    location /api/product_1/ {
        # On these upstream error codes, hand the request to the fallback
        error_page 500 503 504 @service1_fallback;
        proxy_intercept_errors on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://service_1;
    }

    location /api/product_2/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://service_2;
    }

    # Named location: only reachable via the error_page redirect above
    location @service1_fallback {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        # Log fallback traffic separately so it can be alerted on
        access_log /webapp/portfolio/logs/fallback_access.log;
        proxy_pass http://service_2;
    }
}

Explanation:

Here the /api/product_1/ endpoint is designed to fail on some requests: it returns a JSON body {"status":"ok"} with a 200, and {"status":"error"} with a 500.

And there is another upstream, service_2, which is always healthy and returns 200 for every request.

On /api/product_1/ I have used error_page together with proxy_intercept_errors on. If the upstream server returns any of the error codes 500, 503, or 504, Nginx takes control of the request and reroutes it to the named location @service1_fallback, which in turn proxy_passes it to service_2.

Logging is also enabled for the fallback path, so add alerts on the growth of that log file.
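
To make those alerts easier to write, one option (a sketch, assuming you can add a log_format to your http block; the format name and fields here are my own) is a dedicated format that records which upstream actually served the rerouted request:

# In the http block:
log_format fallback_fmt '$time_iso8601 "$request" status=$status upstream=$upstream_addr';

# In the @service1_fallback location:
access_log /webapp/portfolio/logs/fallback_access.log fallback_fmt;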

There is another blog post by Nginx that talks about a similar case.

https://www.nginx.com/blog/capturing-5xx-errors-debug-server/

There is more to this series.

In the next part, I will talk about how you can make your system defensive and resilient against such issues.
