SETTING UP KUBERNETES CLUSTER WITH MULTIPLE CONTROL PLANE NODES BEHIND HAPROXY

Murat Bilal
8 min read · Oct 29, 2023


In this article I demonstrate how to build a Kubernetes cluster with three control-plane nodes and two worker nodes behind HAProxy.

According to the Kubernetes documentation, you can set up an HA cluster:

  • With stacked control plane nodes, where etcd nodes are colocated with control plane nodes
  • With external etcd nodes, where etcd runs on separate nodes from the control plane.

I am using the stacked control plane option, where etcd members are colocated with the control plane nodes.

A stacked HA cluster is where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by kubeadm that run control plane components.

Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager. The kube-apiserver is exposed to worker nodes using a load balancer.

Each control plane node creates a local etcd member and this etcd member communicates only with the kube-apiserver of this node. The same applies to the local kube-controller-manager and kube-scheduler instances.

This topology couples the control planes and etcd members on the same nodes. It is simpler to set up than a cluster with external etcd nodes, and simpler to manage for replication.

However, a stacked cluster runs the risk of failed coupling. If one node goes down, both an etcd member and a control plane instance are lost, and redundancy is compromised. You can mitigate this risk by adding more control plane nodes.

You should therefore run a minimum of three stacked control plane nodes for an HA cluster.

Here are the steps for deploying a stacked cluster:

  1. Deploy Ubuntu 22.04 VMs with 2 GB RAM, 2 vCPUs, and a 20 GB disk each. The minimal install option is enough. For the Kubernetes cluster I am using v1.27.

My VM IP addresses:

controller1  10.252.54.45
controller2 10.252.54.133
controller3 10.252.54.88
worker1 10.252.55.221
worker2 10.252.54.106
haproxy 10.252.54.81

2. On all nodes, install containerd as the container runtime.

sudo apt install containerd -y
sudo mkdir /etc/containerd
sudo bash -c 'containerd config default > /etc/containerd/config.toml'

sudo systemctl start containerd
sudo systemctl enable containerd

alcalab@controller1:~$ sudo ctr version
Client:
Version: 1.7.2
Revision:
Go version: go1.20.3

Server:
Version: 1.7.2
Revision:
UUID: 5cd52fc2-b80c-4a30-9bab-af89035df0f2
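
A note on the cgroup driver: on recent Kubernetes versions kubeadm defaults the kubelet to the systemd cgroup driver, while the default containerd config ships with SystemdCgroup = false. Steps 13 and 14 below fix this on the workers; applying the same change on the control plane nodes as well (a minimal sketch below, same idea as in those steps) can help avoid control plane pods restarting repeatedly.

# Switch containerd to the systemd cgroup driver and restart it so the change takes effect
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd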

3. Install the Kubernetes tools on all nodes (kubeadm, kubelet, kubectl):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

4. Disable swap on all nodes

sudo swapoff -a

# Also disable swap permanently in /etc/fstab so it stays off after a reboot
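# For example, one way to comment out the swap entry (assumes a standard Ubuntu fstab; back the file up first):
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/swap/ s/^/#/' /etc/fstab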

5. Log in to the haproxy VM. Install and configure HAProxy:

sudo apt-get update
sudo apt-get install haproxy -y

# Edit the HAProxy configuration file (usually located at /etc/haproxy/haproxy.cfg)
# Add below config
frontend k8s-control-plane
    bind 10.252.54.81:6443
    mode tcp
    option tcplog
    default_backend k8s-control-plane

backend k8s-control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server control-plane-1 10.252.54.45:6443 check
    server control-plane-2 10.252.54.133:6443 check
    server control-plane-3 10.252.54.88:6443 check
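
# Optionally, validate the configuration syntax before restarting (haproxy -c checks the config file)
sudo haproxy -c -f /etc/haproxy/haproxy.cfg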

sudo service haproxy restart
sudo service haproxy status

Oct 28 19:04:13 haproxy haproxy[1384]: [WARNING] (1384) : Server k8s-control-plane/control-plane-1 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-c>
Oct 28 19:04:13 haproxy haproxy[1384]: [WARNING] (1384) : Server k8s-control-plane/control-plane-2 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-c>
Oct 28 19:04:14 haproxy haproxy[1384]: [WARNING] (1384) : Server k8s-control-plane/control-plane-3 is DOWN, reason: Layer4 connection problem, info: "Connection refused at initial connection step of tcp-c>

# The backends are reported DOWN because no kube-apiserver is running yet; this is expected at this stage

6. Edit /etc/hosts on all VMs. For example, on controller1:

alcalab@controller1:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 controller1
10.252.54.45 controller1
10.252.54.133 controller2
10.252.54.88 controller3
10.252.54.106 worker2
10.252.55.221 worker1
10.252.54.81 haproxy

7. On all nodes, enable IPv4 forwarding and let iptables see bridged traffic:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

# Verify that the br_netfilter, overlay modules are loaded by running the following commands:
lsmod | grep br_netfilter
lsmod | grep overlay

# Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following command:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

8. Log in to controller1 and initialize the first control plane node:

sudo kubeadm init --control-plane-endpoint=10.252.54.81:6443 --upload-certs

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 10.252.54.81:6443 --token ou152l.svboxp5m0gxvsu27 \
--discovery-token-ca-cert-hash sha256:d64a4fee845fa7cb499b7d113bb773b8c04a4e29999b3ac8c6c518f1f595d449 \
--control-plane --certificate-key 8921bddc74fade19ce3e752c2f3f8c596cb7185949f512bcdcf7b724c3d68481
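
Note that the join token and the certificate key expire (by default after 24 hours and 2 hours, respectively). If you need to join nodes later, you can generate fresh values on controller1, for example:

# Print a new worker join command with a fresh token
sudo kubeadm token create --print-join-command
# Re-upload the control plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs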

9. Monitor the first control plane node with “kubectl get nodes” and “kubectl get pods -A” (after copying the admin kubeconfig as shown in the output above). Everything should be Ready and Running.

10. Apply the Calico network plugin from the first control plane node:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
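
Before joining the other nodes, you can watch the Calico and CoreDNS pods come up, for example:

kubectl get pods -n kube-system -w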

11. Log in to controller2 and run the command below to join control plane node 2 to the cluster.

# Use the control-plane join command printed by kubeadm init. In our case this is controller2.

sudo kubeadm join 10.252.54.81:6443 --token ou152l.svboxp5m0gxvsu27 \
--discovery-token-ca-cert-hash sha256:d64a4fee845fa7cb499b7d113bb773b8c04a4e29999b3ac8c6c518f1f595d449 \
--control-plane --certificate-key 8921bddc74fade19ce3e752c2f3f8c596cb7185949f512bcdcf7b724c3d68481

# Monitor with kubectl get nodes and kubectl get pods -A until all are ready and running.
# 10.252.54.81 is our haproxy IP

12. Log in to controller3 and run the command below to join control plane node 3 to the cluster.

# Use the control-plane join command printed by kubeadm init. In our case this is controller3.

sudo kubeadm join 10.252.54.81:6443 --token ou152l.svboxp5m0gxvsu27 \
--discovery-token-ca-cert-hash sha256:d64a4fee845fa7cb499b7d113bb773b8c04a4e29999b3ac8c6c518f1f595d449 \
--control-plane --certificate-key 8921bddc74fade19ce3e752c2f3f8c596cb7185949f512bcdcf7b724c3d68481

# Monitor with kubectl get nodes and kubectl get pods -A until all are ready and running.
# 10.252.54.81 is our haproxy IP

13. Log in to worker node 1 and run the commands below.

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
# Switch containerd to the systemd cgroup driver and restart it so the change takes effect
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd

sudo kubeadm join 10.252.54.81:6443 --token w8egbz.ugc5hdxftaz7iop6 --discovery-token-ca-cert-hash sha256:53ee4ee84eb80e9046b030732fa67c0936c936821e736085f6d1a55150ba965b

# Monitor with kubectl get nodes and kubectl get pods -A until all are ready and running.
# 10.252.54.81 is our haproxy IP

14. Log in to worker node 2 and run the commands below.

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
# Switch containerd to the systemd cgroup driver and restart it so the change takes effect
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd

sudo kubeadm join 10.252.54.81:6443 --token w8egbz.ugc5hdxftaz7iop6 --discovery-token-ca-cert-hash sha256:53ee4ee84eb80e9046b030732fa67c0936c936821e736085f6d1a55150ba965b

# Monitor with kubectl get nodes and kubectl get pods -A until all are ready and running.
# 10.252.54.81 is our haproxy IP

15. Check nodes and system pods from controller1:

alcalab@controller1:~$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-85578c44bf-vkc75 1/1 Running 0 132m 172.16.214.66 controller1 <none> <none>
calico-node-cqqln 1/1 Running 25 (11m ago) 91m 10.252.54.106 worker2 <none> <none>
calico-node-htg22 1/1 Running 0 143m 10.252.54.45 controller1 <none> <none>
calico-node-lg79p 1/1 Running 7 (26m ago) 36m 10.252.55.221 worker1 <none> <none>
calico-node-nhdkz 1/1 Running 0 97m 10.252.54.88 controller3 <none> <none>
calico-node-rlppr 1/1 Running 0 104m 10.252.54.133 controller2 <none> <none>
coredns-5d78c9869d-sswnz 1/1 Running 0 132m 172.16.214.67 controller1 <none> <none>
coredns-5d78c9869d-w8pw7 1/1 Running 0 144m 172.16.214.65 controller1 <none> <none>
etcd-controller1 1/1 Running 84 144m 10.252.54.45 controller1 <none> <none>
etcd-controller2 1/1 Running 20 104m 10.252.54.133 controller2 <none> <none>
etcd-controller3 1/1 Running 0 97m 10.252.54.88 controller3 <none> <none>
kube-apiserver-controller1 1/1 Running 94 144m 10.252.54.45 controller1 <none> <none>
kube-apiserver-controller2 1/1 Running 0 104m 10.252.54.133 controller2 <none> <none>
kube-apiserver-controller3 1/1 Running 17 97m 10.252.54.88 controller3 <none> <none>
kube-controller-manager-controller1 1/1 Running 77 (103m ago) 144m 10.252.54.45 controller1 <none> <none>
kube-controller-manager-controller2 1/1 Running 0 104m 10.252.54.133 controller2 <none> <none>
kube-controller-manager-controller3 1/1 Running 12 97m 10.252.54.88 controller3 <none> <none>
kube-proxy-57bxm 1/1 Running 0 104m 10.252.54.133 controller2 <none> <none>
kube-proxy-896v4 1/1 Running 23 (25m ago) 93m 10.252.55.221 worker1 <none> <none>
kube-proxy-pwhsn 1/1 Running 23 (11m ago) 91m 10.252.54.106 worker2 <none> <none>
kube-proxy-vcsz2 1/1 Running 0 144m 10.252.54.45 controller1 <none> <none>
kube-proxy-vzjm5 1/1 Running 0 97m 10.252.54.88 controller3 <none> <none>
kube-scheduler-controller1 1/1 Running 76 (103m ago) 144m 10.252.54.45 controller1 <none> <none>
kube-scheduler-controller2 1/1 Running 0 104m 10.252.54.133 controller2 <none> <none>
kube-scheduler-controller3 1/1 Running 14 97m 10.252.54.88 controller3 <none> <none>

alcalab@controller1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controller1 Ready control-plane 3h10m v1.27.7
controller2 Ready control-plane 150m v1.27.7
controller3 Ready control-plane 143m v1.27.7
worker1 Ready <none> 139m v1.27.7
worker2 Ready <none> 137m v1.27.7
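
The worker nodes show <none> under ROLES. This is only cosmetic, but if you want a worker role displayed you can label them, for example:

kubectl label node worker1 node-role.kubernetes.io/worker=
kubectl label node worker2 node-role.kubernetes.io/worker=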

16. Check haproxy status:

alcalab@haproxy:~$ echo "show stat" | sudo nc -U /run/haproxy/admin.sock | cut -d "," -f 1,2,8-10,18 | column -s, -t
# pxname svname stot bin bout status
k8s-control-plane FRONTEND 33564 40481017 444896960 OPEN
k8s-control-plane control-plane-1 11620 23858568 264448949 UP
k8s-control-plane control-plane-2 2711 10281107 135957292 UP
k8s-control-plane control-plane-3 638 2213922 44490719 UP
k8s-control-plane BACKEND 33564 40481017 444896960 UP

17. Test your Kubernetes cluster by deploying nginx. Log in to controller1 and run the commands below:

kubectl create deployment nginx-app --image=nginx --replicas=2

alcalab@controller1:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-app-5c64488cdf-5629d 1/1 Running 0 52m
nginx-app-5c64488cdf-qvz9t 1/1 Running 0 52m

18. Expose the deployment as NodePort:

kubectl expose deployment nginx-app --type=NodePort --port=80
service/nginx-app exposed

alcalab@controller1:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h19m
nginx-app NodePort 10.103.18.197 <none> 80:31032/TCP
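
The NodePort (31032 here) is assigned dynamically; you can also read it straight from the service, for example:

kubectl get svc nginx-app -o jsonpath='{.spec.ports[0].nodePort}'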

19. Try to access the service on port 31032 with curl:

alcalab@controller1:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-app-5c64488cdf-5629d 1/1 Running 0 55m 172.16.189.82 worker2 <none> <none>
nginx-app-5c64488cdf-qvz9t 1/1 Running 0 55m 172.16.235.135 worker1 <none> <none>

alcalab@controller1:~$ curl http://10.252.55.221:31032
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

alcalab@controller1:~$ curl http://worker2:31032
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
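
When you are done testing, you can clean up the demo resources:

kubectl delete service nginx-app
kubectl delete deployment nginx-app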

Congratulations! You have successfully set up a highly available Kubernetes cluster on Ubuntu 22.04. Explore further by deploying more complex applications and services on your Kubernetes cluster.
