Installation Series — Article 4

Installing Kubernetes — Kubeadm Part 1

Ayman Abu Qutriyah
Published in Kubernetes DevOps
6 min read · Nov 19, 2023


One Master node, ContainerD, Calico, and multi-worker node implementation

Calico cat using kubeadm

In my previous articles, we went through installing different flavors of Kubernetes using various methods; in this article, we will install upstream CNCF Kubernetes using kubeadm.

kubeadm is a tool provided by the Kubernetes project to help with the installation and configuration of Kubernetes clusters. It provides a straightforward, fast way to set up a minimum viable Kubernetes cluster following best practices. Since it comes from the Kubernetes project itself, under CNCF stewardship, using it keeps the installation vendor-independent and aligned with upstream best practices.

We will be installing a single-node control plane on Ubuntu 22.04, and will then add two worker nodes using the same Ubuntu version. We will use containerd as the container runtime and Calico as the CNI (Container Network Interface) plugin.

The first step is to fully update the system:

sudo apt update && sudo apt -y full-upgrade

The second step is installing kubelet, kubeadm, and kubectl. To give a brief description of each: kubelet is a core component of Kubernetes that runs on every node in the cluster, both control-plane and worker nodes; its primary role is to ensure that containers are running in a Pod. kubeadm is the tool that bootstraps the control plane and joins nodes to the cluster. kubectl is a command-line tool that allows us to run commands against Kubernetes clusters; it is used to deploy applications, inspect and manage cluster resources, and view logs.

We start by adding the Google-hosted Kubernetes apt repository to install the packages:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update

Update: The apt.kubernetes.io repository above has since been deprecated in favor of pkgs.k8s.io. On newer releases such as Ubuntu 24.04, use the below to add the repositories instead:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

Afterward, we can install the three packages (and hold them, so routine upgrades do not move them out of step with the cluster):

sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

To verify the installed versions of the packages:

kubectl version --client && kubeadm version

As we can see, at the time of this installation attempt the latest version is 1.28.2:

It is recommended to disable Linux swap to avoid stability and performance issues; by default, the kubelet will refuse to start while swap is enabled. To disable it permanently, edit /etc/fstab and comment out the swap line:
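If you prefer not to edit the file by hand, a sed one-liner can comment out any swap entry; this is a sketch, so check the result (a .bak backup is kept) before rebooting:

```shell
# Prefix every fstab line containing a standalone "swap" field with '#'
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```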

then run:

sudo swapoff -a
sudo mount -a

As we can see now (running free -h or swapon --show should report no active swap), the swap is disabled.

Next, install and configure the prerequisites for containerd. You can find these requirements in the official documentation on kubernetes.io; they mainly relate to kernel modules and sysctl settings.

sudo modprobe overlay

The overlay module is a Linux kernel module that implements the overlay filesystem. It's used by container runtimes to efficiently manage and layer Docker images and containers. If you are using a container runtime that relies on the overlay filesystem (like Docker or ContainerD), ensuring that this module is loaded can be crucial for proper operation. For Containerd, which we use in this tutorial, the overlay filesystem is often used as the storage driver. While modern Linux distributions typically load this module by default, explicitly loading it can ensure compatibility and prevent potential issues.
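The same goes for br_netfilter, which is needed further below. To make sure both modules are loaded on every boot rather than only in the current session, the official docs declare them in a modules-load.d file; a sketch:

```shell
# Declare the modules so systemd loads them automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```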

Additionally, we load the br_netfilter module, which allows iptables to see bridged container traffic, and set the sysctl params required by Kubernetes:

sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.bridge.bridge-nf-call-ip6tables=1

In a Kubernetes context, IP forwarding is crucial because the nodes need to forward traffic from pods to other pods and external networks. Without IP forwarding enabled, the networking in Kubernetes won’t function correctly, as the nodes won’t be able to send traffic to pods located on other nodes or to services outside the cluster.

sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf ## To make it permanent
sudo sysctl -p
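The two bridge parameters set earlier are likewise not persistent by default. A common pattern, used in the official docs, is a dedicated sysctl.d file; a sketch:

```shell
# Persist the bridge-related parameters across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply all sysctl settings without rebooting
sudo sysctl --system
```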

Now we need to install the container runtime; as we stated earlier, we will be using containerd:

sudo apt-get install -y containerd

Let's write the default configuration file to the path containerd reads from, then restart and enable the service:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

Now let's enable the kubelet service:

sudo systemctl enable kubelet

Now let's initialize the control plane. Since we are using a single control-plane node here, we can run the setup without specifying a control-plane endpoint; we simply run the below command, where the pod network CIDR 192.168.0.0/16 matches Calico's default pool:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

We should get an output like the below:

Now, to be able to use the kubectl command-line tool, we have to copy the admin kubeconfig to the location kubectl reads by default:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now, if we list the control-plane pods, which normally live in the kube-system namespace, with kubectl get pods -n kube-system, we will see pods like the below:

Note that all pods are running except the CoreDNS pods, which need the network plugin to be ready before they can start.

Now, let's install a network plugin to get the cluster networking ready; as we stated earlier, we will be using Calico as our CNI:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml

Now a bunch of CRDs will be created, along with the Tigera operator pod in the tigera-operator namespace.

Then we install the custom resources for Calico, which the operator turns into the actual Calico controllers:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml

To check the created pods, we can run kubectl get pods -n calico-system:

Now, if we return to check the CoreDNS pods with kubectl get pods -n kube-system, we see them running:

Conclusion

This concludes the first part of installing Kubernetes using kubeadm. The next story will cover how to add worker nodes to the single-control-plane cluster we have just created.


Ayman Abu Qutriyah

IT Infrastructure Architect / Red Hat Architect with extensive experience in DevOps, PaaS, Kubernetes, and open-source technologies.