Raspberry Pi: Cloudflare Tunnel: Design High Availability in Services

Life-is-short--so--enjoy-it
5 min read · Feb 21, 2024

Achieving High Availability of Services by adding redundancy


Intro

I have been working on building an infrastructure in my private network that can expose Web Services to the Internet.

To expose the Web Services in my private network to the Internet, I chose Cloudflare Tunnel. To me, it is safer than directly exposing the non-static public IP assigned by my ISP.

During the POC, Cloudflare Tunnel worked well except for one thing: it is a single point of failure. If the Cloudflare Tunnel daemon dies, external traffic can't be routed properly. In short, the intended services become unreachable.

Introduced Cloudflare Tunnel replicas

To rescue Cloudflare Tunnel from being a single point of failure ( SPOF ), I enabled the free high-availability ( HA ) feature of Cloudflare Tunnel: replicas.

I chose Cloudflare Tunnel replicas over Cloudflare Load Balancer because replicas are free.

Cloudflare Tunnel replicas don't provide advanced load balancing, but they at least provide failover ( fallback ) if the currently active Cloudflare Tunnel daemon dies.

At most, 100 Cloudflare Tunnel replicas can be created.

Those Cloudflare Tunnel replicas can run on multiple VMs or machines. In my case, I brought up two Raspberry Pi 4 boards and ran a Cloudflare Tunnel replica on each Pi.

All Cloudflare Tunnel replicas share one Tunnel: each replica runs with the same tunnel ID and credentials file.
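For example, here is a minimal sketch of how the two replicas can be started, assuming a hypothetical tunnel name homelab-tunnel and the config file shown later in this post. Running the same tunnel from a second cloudflared instance is what creates a replica.

# Run on Raspberry Pi 4 #1 and #2 alike: same config file, same
# credentials, same tunnel name. Each extra cloudflared process that
# runs the same tunnel is registered as a replica automatically.
cloudflared tunnel --config /etc/cloudflared/config.yml run homelab-tunnel

# Verify: each replica shows up as its own connector with its own connections.
cloudflared tunnel info homelab-tunnel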

With this design, if either the Cloudflare Tunnel daemon or the physical Raspberry Pi dies, the other Raspberry Pi takes over the load.

NOTE: Cloudflare Tunnel replicas are meant to support failover, not to share load. Therefore, this replica approach works only as long as a single Cloudflare Tunnel replica can serve the whole incoming traffic.

Cloudflare Tunnel: High Availability with Replicas — https://medium.com/@life-is-short-so-enjoy-it/raspberry-pi-cloudflare-tunnel-high-availability-ha-with-replicas-13eaddb016df

Redundancy in Backend Applications

High Availability is also required on the Backend Application Services. Several reasons call for service redundancy here:

  1. service maintenance
  2. machine failure
  3. service failure
  4. etc.

In my case, I brought up the Backend Application Services on two Raspberry Pi 5 boards that I recently purchased from a local store. Each Raspberry Pi 5 hosts multiple Backend Applications in Docker, as sketched below.

Two replicas for Backend Application Services
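As a minimal sketch ( the service name and image below are hypothetical placeholders ), each Raspberry Pi 5 can run an identical copy of a Backend App with Docker Compose; deploying the same file on both hosts yields the two replicas shown above.

# docker-compose.yml ( hypothetical example )
services:
  backend-app:
    image: my-backend-app:latest   # placeholder image name
    restart: unless-stopped        # restart on failure or reboot
    ports:
      - "8080:8080"                # the port Nginx targets later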

How to Route Traffic from Cloudflare Tunnel to the Backend App?

The simplest way to route the incoming traffic from Cloudflare Tunnel to the Backend App is to use the localhost network. It is simple, and it was my first approach during the POC.

If the Cloudflare Tunnel config below is used, the incoming traffic to the Cloudflare Tunnel daemon is redirected to the Backend App on localhost.

This approach works, but there are two major issues:

  1. Hotspot App: all incoming traffic heads to one specific Backend App, so that App gets overloaded.
  2. Not able to scale horizontally.

tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/tunnel_cred.json

# ref: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
ingress:
  - hostname: wowamazon.party
    service: http://127.0.0.1:8080
  - hostname: "*.wowamazon.party"
    service: http://127.0.0.1:8080
  - service: http_status:404
Cloudflare Tunnel: Running the App on the same machine where the Cloudflare Tunnel daemon runs

Added Nginx as Load Balancer and Reverse Proxy

To handle the two issues ( hotspot and scalability ), I added Nginx to the two Raspberry Pi 4 where the Cloudflare Tunnel daemons run.

The previous Cloudflare Tunnel config was reused. With this config, all incoming traffic goes to the Nginx on localhost ( 127.0.0.1:8080 ).

And Nginx does two things:

  • Load Balancing across the Backend Apps ( with Passive Health Check )
  • Reverse Proxying

This design with redundancy keeps the services up even if:

  • One of the Cloudflare Tunnel daemons dies.
  • One of the Backend Apps ( or even the machine ) dies.

This design mitigates the single point of failure ( SPOF ) issue that I initially talked about.

In terms of the Health Check: since I use the free, open-source version of Nginx, there is no Active Health Check ( that is an NGINX Plus feature ), so I used the Passive Health Check as in the config below.

tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/tunnel_cred.json

# ref: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
ingress:
  - hostname: wowamazon.party
    service: http://127.0.0.1:8080   # Nginx now listens here
  - hostname: "*.wowamazon.party"
    service: http://127.0.0.1:8080
  - service: http_status:404

Cloudflare Tunnel: Routing all incoming traffic to Nginx on localhost

upstream backend_servers {
    # ref: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/#passive-health-checks
    # Passive Health Check: after 3 failed attempts ( max_fails ), a server
    # is considered unavailable for 30s ( fail_timeout ).
    server 192.168.128.21:8080 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.128.22:8080 weight=1 max_fails=3 fail_timeout=30s;
}

server {
    include global/wb_restrictions.conf;

    # Listen where the Cloudflare Tunnel ingress points ( 127.0.0.1:8080 ).
    listen 8080;
    http2 on;

    # https://nginx.org/en/docs/http/server_names.html
    server_name wowamazon.party *.wowamazon.party;

    location / {
        # Proxy requests to the backend servers
        proxy_pass http://backend_servers;
        proxy_redirect off;

        include global/wb_proxy.conf;
    }
}
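A quick way to exercise the Passive Health Check ( a sketch, reusing the hypothetical backend-app container from earlier and the upstream IPs above ): stop the Backend App on one Pi 5 and confirm that Nginx keeps serving from the remaining upstream.

# On 192.168.128.21: take one backend down.
docker stop backend-app

# On one of the Raspberry Pi 4: request through Nginx. After max_fails
# failed attempts, 192.168.128.21 is skipped for fail_timeout ( 30s )
# and all traffic is served by 192.168.128.22.
curl -i -H "Host: wowamazon.party" http://127.0.0.1:8080/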

In Conclusion

There was one thing I was not sure about in the traffic routing. Based on the Cloudflare Tunnel config ( ingress ), the incoming traffic's hostname is checked against the rules from top to bottom. When a matching hostname is found, the traffic is routed to the defined service ( e.g. http://127.0.0.1:8080 ), which is Nginx.

Nginx then does its reverse proxying based on the hostname in the request, but the service address ( http://127.0.0.1:8080 ) itself doesn't carry the hostname.

So, I wasn't sure whether Nginx would be able to reverse proxy correctly based on the server_name config. ( I should check what's in the requests, by the way. )

Based on my simple testing, the traffic routed to Nginx still had the hostname, so Nginx was able to reverse proxy correctly as I expected. This makes sense: cloudflared forwards the original Host header to the local origin service, and server_name matches on that header.
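This can also be checked locally without Cloudflare in the loop ( a sketch; the hostname comes from the configs above ): send a request to the same localhost target that cloudflared uses, carrying the public hostname in the Host header, just like cloudflared does.

# The wildcard rule "*.wowamazon.party" should match via server_name as well.
curl -i -H "Host: app.wowamazon.party" http://127.0.0.1:8080/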


Gatsby Lee | Data Engineer | City Farmer | Philosopher | Lexus GX460 Owner | Overlander