Why you should consider an Nginx reverse proxy for your Microservice

Oliver Mascarenhas
Published in Code Uncomplicated · 4 min read · May 5, 2022

A few use-cases to consider for putting an Nginx reverse proxy in front of your Microservice

Nginx Reverse Proxy

So far we’ve looked at setting up our API, testing it, and monitoring it. Now we take it a step further with Nginx. All the use-cases described below can be achieved within our Go application with some amount of code; however, web servers like Nginx are very efficient at these tasks and can handle them with a few lines of configuration.

Setting up Nginx as a reverse proxy to an API server is a fundamental use-case. Reverse proxying helps us control:

  • Accessibility to endpoints
  • Routing
  • SSL termination (Not discussed in this post)
  • Load Balancing
  • Throttling / Rate limiting

Reverse proxy / URI routing

Using the proxy_pass directive in Nginx, we can control which URIs are accessible. In this example, the Swagger documentation endpoints we generated in Part 1 will not be accessible via Nginx.

With the help of nested location directives, we can route to different API backends or to different versions of the same backend, as illustrated in the snippet below.

The nesting allows us to specify common properties like the log location and proxy headers. Additionally, we can present multiple microservices as a single service to API clients.
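Since the original gist isn’t reproduced here, below is a minimal sketch of such a configuration. The backend addresses, ports, and route prefixes are assumptions for illustration; only the /api/ routes are proxied, so the Swagger endpoints stay unreachable through Nginx.

```nginx
server {
    listen 80;

    # Common properties shared by all API routes.
    access_log /var/log/nginx/api_access.log;

    location /api/ {
        # Header forwarded to every nested backend.
        proxy_set_header Host $host;

        # Nested locations route to different backends / versions.
        location /api/v1/ {
            proxy_pass http://127.0.0.1:8080;   # assumed v1 backend
        }

        location /api/v2/ {
            proxy_pass http://127.0.0.1:8081;   # assumed v2 backend
        }
    }

    # No location block for /swagger/, so the documentation
    # endpoints are not exposed through the proxy.
}
```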

Minimal server configuration

The way Nginx works, requests received from the client terminate at the web server and a new request is sent to the backend. This results in the loss of HTTP headers; to make these headers available to the backend, we use the proxy_set_header directive.
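As a sketch (the header names below are the conventional ones, not necessarily those used in the original snippet), the directive is used like this:

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8080;               # assumed backend address
    # Pass the original request details through to the upstream service.
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```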

Load balancer

Modern cloud vendors provide SLAs around “availability”; it’s common to read about the four nines, or 99.99%, of availability. Having a load balancer in front of your API is the first step towards achieving this. Firstly, it helps distribute the burden of processing among multiple instances of the application, and more importantly, clients can continue to access your service if one of the backend instances goes offline due to errors or upgrades.

The upstream directive allows us to specify a group of servers among which requests will be load balanced. Round-robin is the default load-balancing method when no other method is specified. The other methods available in the open-source version of Nginx are:

  • Least Connections — least_conn
  • IP Hash — ip_hash
  • Generic Hash — hash $request_uri

Using max_fails and fail_timeout, we can configure the number of failed attempts before deeming a server offline and the time after which a connection should be reattempted. The modified server configuration snippet is as follows.
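A sketch of what this could look like; the backend ports, the least_conn method, and the failure thresholds are assumed values for illustration:

```nginx
upstream api_backend {
    least_conn;                                           # pick the server with the fewest active connections
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;   # assumed application instances
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location /api/ {
        proxy_set_header Host $host;
        proxy_pass http://api_backend;    # requests are balanced across the upstream group
    }
}
```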

Load-balancing

Error pages

Web servers like Nginx have default error pages; however, when working with APIs, it’s essential to send a response that’s appropriate for the API client’s consumption.

Another strategy is to map all 404 errors to 400. This prevents unauthorized clients from discovering the API structure. We can go a step further and map all 5xx errors to a 503, preventing server-side exceptions and stack traces from propagating to the client.
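One way to express this in Nginx is sketched below; the JSON error bodies and their location on disk are assumptions:

```nginx
# Intercept error responses coming back from the backend as well.
proxy_intercept_errors on;

# Map 404s to 400 so clients cannot probe the API structure.
error_page 404 =400 /errors/400.json;

# Collapse all server-side errors into a generic 503.
error_page 500 501 502 503 504 =503 /errors/503.json;

location /errors/ {
    internal;                       # only reachable via error_page redirects
    root /usr/share/nginx/html;     # assumed path holding the static JSON error bodies
}
```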

Yes, you can configure this in your microservice, and we have done so in Part 1 of this series. However, configuring it at the Nginx level serves as a catch-all. It also ensures you have a consistent response across microservices.

Rate Limiting

It is essential to set up rate limits to protect your downstream application from being overwhelmed with requests. In today’s cloud environments, all services have rate limits set; you may have come across errors like “TooManyRequestsException” when using AWS services.

Nginx implements the leaky bucket algorithm for rate limiting. We start by defining a request limit zone with the limit_req_zone directive and then use the limit_req directive within the location block we wish to limit.

The Nginx documentation recommends adding the burst and nodelay parameters to the limit_req directive for most deployments.
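Putting those pieces together, here is a sketch with an assumed zone name, rate, and burst size:

```nginx
http {
    # Shared memory zone keyed on client IP: 10 MB of state, 10 requests/second (assumed values).
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        location /api/ {
            # Allow short bursts of up to 20 requests and serve them without queueing delays.
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;                  # reply with 429 Too Many Requests instead of the default 503
            proxy_pass http://127.0.0.1:8080;      # assumed backend
        }
    }
}
```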

Final configuration

Bringing it all together
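As a rough skeleton (the full, working version lives in the repository linked below), the pieces fit together in nginx.conf roughly like this; hostnames, ports, and limit values are assumptions:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    upstream api_backend {
        least_conn;
        server app1:8080 max_fails=3 fail_timeout=30s;   # assumed container hostnames
        server app2:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;

        # Consistent API-friendly error responses.
        proxy_intercept_errors on;
        error_page 404 =400 /errors/400.json;
        error_page 500 501 502 503 504 =503 /errors/503.json;

        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            proxy_set_header Host            $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://api_backend;
        }

        location /errors/ {
            internal;
            root /usr/share/nginx/html;    # assumed path for static JSON error bodies
        }
    }
}
```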

All the configuration explained above is set up via Docker. You can find the entire example here: https://github.com/oliversavio/youtube-vid-code/tree/nginx/go-api-starter.

I will make a video explaining the Docker-specific configuration sometime in the near future. If that’s something you’re looking forward to, please subscribe to my YouTube channel and hit the bell icon for instant notifications of new videos.

References & Further reading

Support

Any support would be appreciated if you find my content helpful.
