
Restructuring a monolith 101: Nginx proxy by subfolder

MrManafon
Homullus
Jul 3, 2020


It’s really a very simple issue. When you attempt to split up the work of a single, orthodox monolith without downtime or a greenfield rewrite, you quickly realize that one of the most basic (and simplest to solve) problems is: “How can I create multiple small gateways that handle different paths, without the users noticing?”

Skip to the bottom to see the full configuration.

If you have a large Python or Symfony monolith, you might want to split it up into smaller chunks, or spin up multiple instances that are load balanced by directory (for example, a separate, speedy machine just for admin API queries).
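As a minimal sketch, directory-based routing on the gateway might look something like the block below; the host names, ports and paths here are invented for illustration, not taken from a real setup:

server {
    server_name example.com;
    listen 80;

    # Admin API traffic goes to its own, beefier backend.
    location ^~ /api/admin/ {
        proxy_pass http://admin-backend.internal:8081$request_uri;
    }

    # Everything else stays on the original monolith.
    location / {
        proxy_pass http://monolith.internal:8080;
    }
}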

Let’s look at it differently for a moment and think of a simpler but analogous example. Forget all you know about microservices and microfrontends. We have a very old application on our hands, say a Joomla one. As was often the case with these apps, it is probably hosted manually on some Hetzner box and has no version control or disaster recovery protocols. A bunch of developers have worked on it over the last 10 years, and one of them installed (as is often the case) a forum system like IPS.

The fastest thing we can do to improve performance is to split these two obviously separate apps, which somebody shoved onto the same machine, onto two separate machines. This not only gives us a fair bit of wiggle room for horizontally scaling the system, but also lets us manage them as completely separate systems.

You can do this manually, but just as an example, I started by separating the IPS code into its own repository and properly containerising it into a set of containers that follow the single responsibility principle. Yay, we instantly got even more scaling and almost a CI/CD pipeline. I can now easily update the PHP version and turn on the JIT, customize OPcache, set up Redis, use MariaDB instead of MySQL, update Node, and customize the Varnish layer.

Yay, we’ve opened a Pandora’s container. The good kind, I mean.

Here’s the problem though: how do I reroute the parent Nginx based on folder?

Good question. Since our new little server has its own Nginx and listens on port 8080, we have to reroute all traffic that hits the old gateway’s /forum/ URL towards this new server.

If we edit the server block in the old server’s Nginx configuration, we can easily add a new location block that explicitly covers only this subfolder:

server {
    server_name homullus.com;
    listen *:443 ssl http2;
    listen *:80;
    listen [::]:80;

    # more old blocks
    # ...
    # ...

    location ^~ /forum/ {
        proxy_pass http://123.456.789:8080$request_uri;
    }
}

As you can see above, I have matched any URI that starts with /forum/ and, instead of pointing it towards a local directory, pointed it towards an external IP address (that of the other server).

If we now request a static file at /forum/favicon.ico, we should get back the file that exists only on the forum’s server.
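For that to work, the forum box’s own Nginx just has to listen on 8080 and serve the app from a matching /forum/ path. A rough, hypothetical sketch of that local server block (the document root and PHP-FPM socket path are my guesses, not IPS specifics):

# On the forum server itself, not on the gateway.
server {
    listen 8080;
    root /var/www;            # assumes the IPS code lives in /var/www/forum
    index index.php;

    location /forum/ {
        try_files $uri $uri/ /forum/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # hypothetical socket path
    }
}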

How can we make it play with PHP?

Another good question. As with other backend languages, PHP apps have to know which URI was requested, as well as additional details like the port, protocol, hostname, etc. This is actually very simple: let’s forward some headers built from data that only the gateway knows. Edit the same block:

location ^~ /forum/ {
    proxy_pass http://123.456.789:8080$request_uri;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-Proto https;
}

These are the most commonly used headers, literally the details I mentioned above. Now, when we hit any dynamic page under the /forum/ folder, PHP will match the route easily.

Pro use case: this works with one static server, but locally my Docker containers change IPs all the time.

Here is a nasty little trick. If we hardcode the IP within proxy_pass, it won’t change, obviously. If we write a domain name instead of the IP, for example http://forumserver.homullus.com, it still won’t work, because Nginx resolves the name once, when the configuration is loaded, and caches the result. So if the forum server changes its IP, Nginx won’t care and will keep trying to hit the old one.

set $upstream_forum forumserver.homullus.com;
proxy_pass http://$upstream_forum:8080$request_uri;

Instead of using the hostname directly, we can store it in a variable and use the variable in proxy_pass. When proxy_pass contains a variable, Nginx resolves the name at request time through its resolver and only caches the answer for as long as the DNS record’s TTL (or the valid= override) allows, so if the forum server’s IP changes, the new address gets picked up. Of course, I would advise you to set up a proper load-balanced Nginx on a static IP, but hey, this is primitive, takes two minutes, and works until you build something better.
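If you do go the proper route later, the sketch below is roughly what it could look like: an upstream group with a couple of forum instances behind the same /forum/ location. The forum-1.internal and forum-2.internal hosts are placeholders, not part of the original setup.

# In the http context of the gateway's configuration.
upstream forum_backend {
    server forum-1.internal:8080;
    server forum-2.internal:8080;
}

# Inside the existing server block.
location ^~ /forum/ {
    # No URI part after the upstream name, so the original
    # request URI (including /forum/) is passed through unchanged.
    proxy_pass http://forum_backend;
    proxy_set_header Host $host;
}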

Docker trivia: by default, Docker runs its own embedded DNS resolver at 127.0.0.11, so if you want Nginx to use it, just specify:

resolver 127.0.0.11;

Now our Nginx will resolve runtime names through Docker’s embedded DNS, which in turn forwards anything it does not know about to the outside world.
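Putting the Docker bits together, a minimal sketch of the local variant could look like this; the valid=10s override is my own example, and without it Nginx simply honours the TTL of the DNS record:

location ^~ /forum/ {
    # Ask Docker's embedded DNS and re-check the answer every 10 seconds.
    resolver 127.0.0.11 valid=10s;

    # Hostname in a variable, so Nginx resolves it at request time.
    set $upstream_forum forumserver.homullus.com;
    proxy_pass http://$upstream_forum:8080$request_uri;
}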


Finally, let’s see the end product.

That’s it. The overhead is insanely small and well optimised, since Nginx is in fact a very good reverse proxy and load balancer. You probably have larger problems than overhead.

location ^~ /forum/ {
    set $upstream_forum forumserver.homullus.com;
    proxy_pass http://$upstream_forum:8080$request_uri;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-Proto https;

    proxy_redirect off;
    proxy_connect_timeout 90s;
    proxy_read_timeout 90s;
    proxy_send_timeout 90s;

    proxy_buffering off;
    proxy_buffer_size 128k;
    proxy_buffers 100 128k;
}

This was very easy, and it allows you to restructure your application into multiple smaller servers/apps that don’t have to share languages, containers, servers, anything. As long as it speaks HTTP, it can be proxied via the gateway.
