Kubernetes 3-Node Cluster Setup!

Jeganathan Swaminathan ( jegan@tektutor.org )
Published in tektutor
6 min read · Mar 16, 2022

In this article, you will learn how to set up a 3-node Kubernetes cluster on your local workstation or server.

This blog helps you set up a Kubernetes 1.21 cluster with Docker Community Edition.


Kubernetes High Level Architecture

Operating System

CentOS 8.x reached its End of Life on 31st Dec 2021, and Red Hat stopped software updates for it on 31st Jan 2022. Hence, I would suggest using CentOS 7.x, as it will reach its End of Life only on 30th June 2024.

For this setup, we will need 3 Virtual Machines or Physical Machines with CentOS 7.9 64-bit pre-installed. You may create the Virtual Machines using KVM, VirtualBox, VMware, etc.

Recommended System Configuration for each Virtual Machine

Quad Core Processor

16 GB+ RAM

250 GB+ HDD (Storage)

Installing Docker Community Edition on the Master, Worker1 and Worker2 Virtual Machines

sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

Let’s start Docker service as shown below on master and all worker nodes.

sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp docker

Let’s disable virtual memory in master and all worker node virtual machines.

sudo swapoff -a

To permanently disable virtual memory (recommended for this setup), comment out the line that defines the swap partition in /etc/fstab.

sudo vim /etc/fstab
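If you prefer to do this without opening an editor, a sed one-liner like the following should work (a sketch; it assumes the swap line contains the word swap surrounded by whitespace, and it backs up the original file as /etc/fstab.bak):

```shell
# Comment out any swap entry in /etc/fstab; the original is saved as /etc/fstab.bak
sudo sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
```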

Let’s disable SELinux.

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Changing the hostnames of master, worker1 and worker2 Virtual Machines.

Login to the machine that you wish to use as master node in Kubernetes Cluster and change its hostname to master.tektutor.org.

You may replace tektutor.org with your preferred domain name.

sudo hostnamectl set-hostname master.tektutor.org

Repeat this on worker1 Virtual Machine

sudo hostnamectl set-hostname worker1.tektutor.org

Let’s also update the hostname of worker2 Virtual Machine.

sudo hostnamectl set-hostname worker2.tektutor.org

Updating the /etc/hosts file on master, worker1 and worker2 with their IP addresses

sudo vim /etc/hosts

You need to replace the IP address and hostnames of your master, worker1 and worker2 appropriately.

192.168.167.151 master.tektutor.org
192.168.167.135 worker1.tektutor.org
192.168.167.136 worker2.tektutor.org
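If you prefer to append these entries non-interactively, a heredoc like the following should work (replace the IP addresses and hostnames with your own):

```shell
# Append the cluster hostnames to /etc/hosts (adjust IPs/hostnames to your setup)
sudo tee -a /etc/hosts <<'EOF'
192.168.167.151 master.tektutor.org
192.168.167.135 worker1.tektutor.org
192.168.167.136 worker2.tektutor.org
EOF
```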

Master Node Firewall configurations

In the below commands, the IP address 192.168.0.0/16 should be replaced with the subnet of your VMs as appropriate.

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --permanent --zone=trusted --add-source=192.168.0.0/16
sudo modprobe br_netfilter
sudo systemctl daemon-reload
sudo systemctl restart firewalld
sudo systemctl status firewalld
sudo firewall-cmd --list-all

Worker Node Firewall configurations

In the below commands, the IP address 192.168.0.0/16 should be replaced with the subnet of your VMs as appropriate.

sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --permanent --zone=trusted --add-source=192.168.0.0/16
sudo modprobe br_netfilter
sudo modprobe overlay
sudo systemctl daemon-reload
sudo systemctl restart firewalld
sudo systemctl status firewalld
sudo firewall-cmd --list-all

Configure IPTables to see bridge traffic in Master and Worker Nodes

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

Configure Docker to use the systemd cgroup driver by editing /etc/docker/daemon.json (e.g., sudo vim /etc/docker/daemon.json) with the below content.

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
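If you would rather write the file from the shell than open an editor, the same content can be created with a heredoc (this is just a non-interactive way of doing the step above, not a different configuration):

```shell
# Write the Docker daemon configuration non-interactively
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
```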

Make sure you restart Docker as shown below to apply the above configuration changes.

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl enable docker && sudo systemctl start docker

Install kubectl, kubeadm and kubelet on the Master & Worker nodes

curl -LO https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubeadm
curl -LO https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubelet
curl -LO https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl
chmod +x ./kube*
sudo mv ./kube* /usr/local/bin

Configure kubelet in Master and Worker Nodes

As root user, edit the file /etc/sysconfig/kubelet and append the below and save it.

KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
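The same append can be done from the shell without an editor (a sketch; tee -a adds the line to the end of the file):

```shell
# Append the cgroup settings to /etc/sysconfig/kubelet
echo 'KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice' \
  | sudo tee -a /etc/sysconfig/kubelet
```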

You may now enable the kubelet service as shown below

sudo systemctl enable --now kubelet

Restart Docker and Kubelet

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet

Install the kubeadm dependency conntrack to enable logical network communication.

sudo yum install -y conntrack 

Bootstrapping Master Node as root user

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Do the below steps as a non-admin user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you wish to run kubectl commands as a non-admin user, you need to append the below line to the $HOME/.bashrc file.

export KUBECONFIG=~/.kube/config

In order to apply the export changes done in the $HOME/.bashrc file, you need to run the below command.

source ~/.bashrc
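The append and reload can also be done in one go from the shell (assuming bash and the default ~/.bashrc location):

```shell
# Persist KUBECONFIG for the non-admin user and apply it to the current shell
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc
```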

In order to access the cluster without issues after a machine reboot, add the below line to /root/.bashrc. Do this as the root user.

export KUBECONFIG=/etc/kubernetes/admin.conf

In order to apply the export changes done in /root/.bashrc, you need to run this manually.

source /root/.bashrc

Save your join token in a file on the Master Node. The token is different on every system and changes each time you run kubeadm init, so save your join command for reference before you clear your terminal screen. The token below is only an example; replace it with your own.

I normally save the token in a file named token.txt. However, this is optional.
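One way to capture it without copy-pasting from the terminal is to regenerate the join command and redirect it into a file (a sketch; token.txt is just an arbitrary filename):

```shell
# Save the current join command to token.txt for later use on the worker nodes
sudo kubeadm token create --print-join-command | tee token.txt
```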

kubeadm join 192.168.154.128:6443 --token 5zt7tp.2txcmgnuzmxtgnl \
--discovery-token-ca-cert-hash sha256:27758d146627cfd92079935cbaff04cb1948da37c78b2beb2fc8b15c2a5adba

In case you forgot to save your join token and cleared the terminal screen, no worries; you can regenerate it on the Master Node.

sudo kubeadm token create --print-join-command

On the Master Node

kubectl get nodes
kubectl get po -n kube-system -w

Installing Calico CNI in Master Node

To learn more about Calico CNI, refer https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

On the Master Node, watch the pod creation after installing Calico.

kubectl get po -n kube-system -w

Press Ctrl+C to come out of watch mode.

On the Worker1 and Worker2 Nodes

kubeadm join 192.168.154.128:6443 --token 5zt7tp.2txcmgnuzmxtgnl \
--discovery-token-ca-cert-hash sha256:27758d146627cfd92079935cbaff04cb1948da37c78b2beb2fc8b15c2a5adba

On the Master Node

At this point, you should see all 3 nodes in the Ready state.

kubectl get nodes

Congratulations, your 3-node Kubernetes cluster is all set!

Troubleshooting

In case you have trouble setting up the master, you could perform a kubeadm reset on all nodes as shown below, clean up the folders, and try kubeadm init on the master node again.

kubeadm reset

You need to manually remove the below folders before you attempt to bootstrap the master node again.

sudo rm -rf /etc/cni/net.d
sudo rm -rf /etc/kubernetes
rm -rf $HOME/.kube
sudo rm -rf /root/.kube


My other articles

Using Metal LB in a bare metal /on-prem K8s setup

https://medium.com/@jegan_50867/using-metal-lb-on-a-bare-metal-onprem-kubernetes-setup-6d036af1d20c

Using Nginx Ingress Controller in bare-metal Kubernetes setup

https://medium.com/@jegan_50867/using-nginx-ingress-controller-in-kubernetes-bare-metal-setup-890eb4e7772



Freelance Software Consultant & Corporate Trainer. I deliver training & provide consulting: DevOps, K8s, OpenShift, TDD/BDD, CI/CD, Microservices, etc.