Build your own cloud-agnostic TCP/UDP LoadBalancer for your Kubernetes apps
Recently I’ve been working on a Kubernetes migration for a project with a specific requirement: TCP/UDP load-balancing. I deployed the project onto several managed Kubernetes providers (GKE, EKS, and DO), since I needed to analyze my application’s performance based on the cluster’s location and each provider’s services.
Keeping the application cloud-agnostic while needing UDP load-balancing turned out to be difficult: most cloud providers’ load balancers do not support UDP, and the workarounds tend to be cloud-specific.
A simple solution that I found for this issue is to create a reverse proxy system on a standalone server (I used an instance of EC2) using Nginx and route UDP and TCP traffic directly to my Kubernetes services.
The idea is simple and requires a minimal change in your Kubernetes resources. You can take the following steps to build your LoadBalancer:
- Create a NodePort service for your application
- Create a small server instance and run Nginx with LB config
1. Create a NodePort service for your application
There are three main types of Kubernetes services by which you can expose your application pods.
ClusterIP: this kind of service exposes your application internally to other resources on your cluster. You can’t access the application externally unless you create another resource (e.g. an Ingress). This is the default service type in Kubernetes.
LoadBalancer: the LB service exposes your application publicly through a public IP by creating an instance of your cloud provider’s load balancer. You can either assign it your own static IP or get an ephemeral one. The protocols supported by this type vary with your cloud provider’s load-balancer characteristics.
NodePort: this type of service exposes your application on every cluster node, making it accessible through each node’s IP on a static port. This type supports multi-protocol services.
Hopefully, the definitions above make it clear that for our use case we need the NodePort type. We can create a NodePort service for our application using the following service.yaml file. In this example, my pod’s containerPort is 443, and I need both UDP and TCP traffic routed to that port.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  namespace: my-app-namespace
  name: my-app-np-service
spec:
  type: NodePort
  ports:
    - name: tcp-port
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30010
    - name: udp-port
      port: 443
      targetPort: 443
      protocol: UDP
      nodePort: 30011
  selector:
    app: my-app
This service exposes the my-app pods’ port 443 through nodeIP:30010 with the TCP protocol and through nodeIP:30011 with UDP.
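If you want to sanity-check the service before moving on, the usual kubectl commands work (the names here match the manifest above):

```shell
# Apply the manifest and confirm both NodePort mappings are registered
kubectl apply -f service.yaml
kubectl get service my-app-np-service -n my-app-namespace
# The PORT(S) column should show 443:30010/TCP,443:30011/UDP
```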
The next step, you guessed it, is to load-balance incoming traffic from your server to your nodes (port 30010 for TCP LB and port 30011 for UDP LB).
2. Create a small server instance and run Nginx with LB config
For this step, you can get a small server instance from any cloud provider. To reduce cost, you can look at cheaper, lesser-known cloud services.
Once you have the server, ssh in and run the following to install Nginx:
$ sudo yum install nginx
In the next step, you will need your node IP addresses, which you can get by running:
$ kubectl get nodes -o wide
Note: If you are using a private cluster without external access to your nodes, you will need to set up a point of entry for this (e.g. a NAT gateway).
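To avoid copying addresses by hand, you can also generate the upstream entries for the config below with a short shell loop. This is just a sketch: the IPs are placeholders for your own node addresses, and the jsonpath query assumes your nodes expose an ExternalIP.

```shell
# Placeholder node IPs; on a real cluster you could populate this with:
#   NODE_IPS=$(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}')
NODE_IPS="203.0.113.11 203.0.113.12 203.0.113.13"

# Emit one `server <ip>:<port>;` line per node for each upstream block
for ip in $NODE_IPS; do echo "    server ${ip}:30010;"; done   # tcp_backend entries
for ip in $NODE_IPS; do echo "    server ${ip}:30011;"; done   # udp_backend entries
```

Paste the generated lines into the matching upstream blocks in nginx.conf.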
Now you need to add the following to your nginx.conf (run sudo vi /etc/nginx/nginx.conf):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_backend {
        server <node ip 1>:30010;
        server <node ip 2>:30010;
        server <node ip 3>:30010;
        ...
    }

    upstream udp_backend {
        server <node ip 1>:30011;
        server <node ip 2>:30011;
        server <node ip 3>:30011;
        ...
    }

    server {
        listen 443;
        proxy_pass tcp_backend;
        proxy_timeout 1s;
    }

    server {
        listen 443 udp;
        proxy_pass udp_backend;
        proxy_timeout 1s;
    }
}
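Before starting, it’s worth checking the config for syntax errors. Note that the stream block requires Nginx to be built with the stream module; most distro packages include it, though some ship it as a dynamic module you have to enable with a load_module directive.

```shell
# Validate the configuration before (re)starting Nginx
sudo nginx -t
```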
Now you can start your Nginx by running:
$ sudo /etc/init.d/nginx start
In case you had already started your Nginx before making changes to the config file, run the following to restart it:
$ sudo netstat -tulpn # Find the PID of the nginx master process
$ sudo kill -2 <PID of nginx>
$ sudo service nginx restart
And now you have yourself a UDP/TCP LoadBalancer, which you can access through <server IP>:443.
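You can do a quick end-to-end check with netcat from any machine (substitute your server’s IP; keep in mind that a UDP “open” result only means nothing actively refused the datagram):

```shell
# TCP: completes a handshake through the tcp_backend upstream
nc -vz <server IP> 443
# UDP: sends a datagram through the udp_backend upstream
nc -vzu <server IP> 443
```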
Enjoy!