Kubernetes TCP load balancer service on premise (non-cloud)
I have been playing with Kubernetes (k8s) 1.9 for quite a while now, and here I will explain how to load balance ingress TCP connections for an on-premise k8s cluster running on virtual machines or bare metal.
Motivation
I would like to set up a load balanced TCP connection served outside of my kubernetes cluster.
Sounds simple. With the current version of Kubernetes, v1.9, we have the following options to redirect ingress traffic from outside into the cluster:
(a) Kubernetes Ingress
(b) Kubernetes service LoadBalancer
Let's discuss in detail how these options help us achieve our goal.
- The Kubernetes Ingress resource supports only HTTP rules, and only on the standard HTTP ports 80 and 443; other ports are not supported for now, as the official Kubernetes Ingress documentation points out.
- The Kubernetes LoadBalancer service type does the job, but only for supported cloud providers.
Simplest approach
On Google Kubernetes Engine (GKE), deploying a ghost application on port 2368 and exposing it with a LoadBalancer service is straightforward, and a wget to <external_ip>:2368 works as expected.
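For reference, here is a minimal sketch of what such a Service could look like, assuming the ghost pods carry a label like app: ghost (the Service name and the selector are illustrative, not from the original setup):

```yaml
# Hedged sketch: expose the ghost pods on port 2368 via a cloud LoadBalancer.
# The Service name and the app: ghost selector are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: ghost-lb
spec:
  type: LoadBalancer
  selector:
    app: ghost
  ports:
  - protocol: TCP
    port: 2368
    targetPort: 2368
```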
However, in the case of on-premise hosts, one could use the --external-ip parameter and provide the IP address of one of the Kubernetes nodes to route TCP traffic into the service.
This works, even to the extent that when the pod is moved to another host, Kubernetes is still able to route traffic. But because the service is attached to a single node IP, we have problems if that node itself goes down. I would call this the poor man's LoadBalancer service.
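For illustration, the same idea expressed as a kubectl command; the deployment name and the node IP below are placeholders, not values from my cluster:

```sh
# Poor man's LoadBalancer: expose the deployment on the IP of one kubernetes node.
# "ghost" and 10.0.0.11 are placeholders for your deployment name and a node IP.
kubectl expose deployment ghost --port=2368 --target-port=2368 --external-ip=10.0.0.11
```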
On premise challenges
Cloud-first approach: The priority of the Kubernetes project is clearly the cloud vendors; on-premise is secondary.
Alternatives: There are some proposals, such as https://github.com/kubernetes/kubernetes/issues/36220, which is still open.
Cloud Controller Manager: This is something I have not explored yet :)
NodePort vs HostPort vs HostNetwork
In simple words, NodePort, HostPort and HostNetwork all expose ports on the Kubernetes physical or virtual nodes. NodePort is the recommended way and is managed by Kubernetes services. Its official definition goes:
Kubernetes master will allocate a port from a flag-configured range (default: 30000–32767), and each Node will proxy that port
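For comparison, a minimal NodePort Service sketch (the names, selector and the explicit nodePort value are illustrative):

```yaml
# NodePort sketch: kubernetes allocates a port in 30000-32767 unless one is
# pinned explicitly as below. Names and selector are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: ghost-nodeport
spec:
  type: NodePort
  selector:
    app: ghost
  ports:
  - port: 2368
    targetPort: 2368
    nodePort: 30368   # must fall inside the configured range
```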
With HostPort you can force-reserve a port on the Kubernetes nodes. Whether this works depends on the CNI plugin with which your cluster is implemented [1].
Don’t use hostPort unless it is absolutely necessary (for example: for a node daemon). It specifies the port number to expose on the host. When you bind a Pod to a hostPort, there are a limited number of places to schedule a pod due to port conflicts.
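If you do need it, a hostPort is declared on the container's port entry, roughly like this (the image name is a placeholder):

```yaml
# hostPort sketch: container port 446 is bound directly on the node the pod lands on.
apiVersion: v1
kind: Pod
metadata:
  name: tcp-echo-hostport
spec:
  containers:
  - name: tcp-echo
    image: example/tcp-echo   # placeholder image
    ports:
    - containerPort: 446
      hostPort: 446
```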
With HostNetwork, on the other hand, the pod gets access to the node's network namespace. This is similar to HostPort but does not depend on the CNI.
HostNetwork — Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
This gives us the benefit of exposing container ports at the host level, which is good. On the flip side, it means the port is occupied by the container, and no other service can run on that host on the same port, which is a problem as well.
HostNetwork and DaemonSet
As explained, with hostNetwork one can run pods that reserve ports at the node level. By running those pods as a DaemonSet, we end up with the service available on every node, after which we can simply point our on-premise load balancer at the nodes to perform the load balancing.
Setting up Pods and DaemonSets
Ensure that your test setup has at least two worker nodes to test load balancing, assuming that the master node is tainted with NoSchedule. In my case, as you can see, I have a five-node cluster running version 1.9.2.
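You can verify both points quickly; <master-node> below is a placeholder for your master's node name:

```sh
# Check the node count and confirm the master carries the NoSchedule taint.
kubectl get nodes
kubectl describe node <master-node> | grep -i taint
```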
Let's use a very simple tcp-echo server and run it as a DaemonSet, passing port 446 as an argument. This tcp-echo server replies with the same message along with the hostname of the node where it runs.
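A DaemonSet for this would look roughly like the sketch below; the image is a placeholder for whichever tcp-echo image you use, while the DaemonSet name matches the tcp-server-ds referenced later in the troubleshooting tips:

```yaml
# Hedged sketch of the tcp-echo DaemonSet. hostNetwork: true makes every pod
# bind port 446 directly on its node; the image is a placeholder.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tcp-server-ds
spec:
  selector:
    matchLabels:
      app: tcp-server
  template:
    metadata:
      labels:
        app: tcp-server
    spec:
      hostNetwork: true
      containers:
      - name: tcp-server
        image: example/tcp-echo:latest   # placeholder image
        args: ["446"]                    # port passed as an argument
```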
Create it with kubectl and verify with -o wide that the pods are running on all nodes.
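Assuming the manifest above is saved as tcp-server-ds.yaml, that boils down to:

```sh
# Create the DaemonSet and confirm a pod is running on every node.
kubectl create -f tcp-server-ds.yaml
kubectl get ds tcp-server-ds
kubectl get pods -o wide
```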
Now it's time to test the nodes individually. I used netcat (nc) to test our echo server.
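Something along these lines, with node1.example.com standing in for one of your node FQDNs:

```sh
# Each node should echo the message back together with its hostname.
echo "hello" | nc node1.example.com 446
```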
Next, configure your own on-premise load balancer to point at the nodes. Describing how to configure a vendor-specific load balancer is out of scope for this write-up; however, we can test the setup with our own external Nginx load balancer.
Setting up Nginx TCP load balancer (Optional)
A good set of instructions on how to set up your own Nginx TCP load balancer is available in the official Nginx documentation [2].
I will be setting up the host lb.example.com to act as our load balancer with Nginx.
Note: We will be using the Nginx stream module. If you want to use open source Nginx and not Nginx Plus, you might have to compile your own Nginx with the --with-stream option, which is what I did.
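Roughly, the build looks like this (paths and any extra modules omitted; adapt to your environment):

```sh
# Build open source nginx with the stream module enabled.
./configure --with-stream
make
sudo make install
```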
Here is the Nginx conf I used for testing this; I just defined the Kubernetes nodes' FQDNs as stream backends.
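The stream block looked roughly like the sketch below; node1 to node3 are placeholders for the actual node FQDNs, and 446 is the port reserved by the DaemonSet:

```nginx
# Hedged sketch of the TCP load balancing config; round-robin is the default.
stream {
    upstream tcp_echo {
        server node1.example.com:446;
        server node2.example.com:446;
        server node3.example.com:446;
    }

    server {
        listen 446;
        proxy_pass tcp_echo;
    }
}
```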
Once Nginx is running successfully, let's test it.
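The same netcat check, this time pointed at the load balancer:

```sh
# Each new connection should land on a different node, so the echoed hostname
# should change between runs.
echo "hello" | nc lb.example.com 446
echo "hello" | nc lb.example.com 446
```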
I have configured Nginx to use round-robin, and as you can see, every new connection ends up on a different host/container. Also note that the container hostname is the same as the node hostname; this is due to hostNetwork.
Drawbacks
- As already discussed, defining hostNetwork reserves the host's port(s) for the containers running in the pod.
- The load balancer should not be a single point of failure; in our Nginx case, it is.
- Also, every time a node is added to or removed from the Kubernetes cluster, the load balancer has to be updated as a separate additional step.
Troubleshooting tips
- kubectl logs <tcp server pod>: Ensure that the tcp server started successfully.
- kubectl describe ds tcp-server-ds: Check for problems with the DaemonSet.
Conclusion
This way, one can set up a Kubernetes cluster to route ingress and egress TCP connections from and to the outside of the cluster. As discussed above, there are definitely ways to improve this, but if you are on-premise it is something you can do today.
References
[1] HostPort depends on CNI implementation: https://github.com/kubernetes/kubernetes/issues/31307
[2] Set up your own Nginx TCP load balancer: https://www.nginx.com/resources/admin-guide/tcp-load-balancing/
HostPort: https://kubernetes.io/docs/concepts/configuration/overview/#services
NodePort: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
More reading: http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/