How to create a Kubernetes Cluster using KubeADM in Ubuntu

Arjun B Nair
6 min read · Aug 10, 2022


Introduction

Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice “fast paths” for creating Kubernetes clusters. kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines.

Prerequisites

Server provisioning:
Two servers need to be provisioned: one for the master node, which needs at least 2 vCPUs and 2 GiB of memory, and one for the worker node, which needs at least 1 vCPU and 1 GiB of memory. On a public cloud such as AWS, you can use a t2.medium for the master node and a t2.micro for the worker node.

Operating system:
Since this document is specific to Ubuntu-based systems, choose Ubuntu 18.04 or 20.04 (amd64) machines.
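Before provisioning, you can quickly confirm a machine meets the control-plane minimums. The sketch below is an illustration: the 2-vCPU / 2-GiB thresholds follow kubeadm's documented minimums, and the memory threshold is lowered slightly (1700 MiB) to allow for kernel-reserved RAM.

```shell
#!/bin/bash
# Sanity-check this machine against the kubeadm control-plane minimums
# (2 vCPUs, 2 GiB RAM). Thresholds here are a sketch, not an official check.
cpus=$(nproc)
mem_mib=$(free -m | awk '/^Mem:/{print $2}')
echo "CPUs: $cpus, Memory: ${mem_mib} MiB"
if [ "$cpus" -ge 2 ] && [ "$mem_mib" -ge 1700 ]; then
  echo "meets control-plane minimums"
else
  echo "below control-plane minimums"
fi
```

Run the same check with lower thresholds (1 vCPU, 1 GiB) on the worker node.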

Steps

Master node

  1. First, copy the script below into a file, say run.sh, make it executable by running chmod +x run.sh, and run it with ./run.sh.
#!/bin/bash
sudo apt update -y && sudo apt upgrade -y
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker.service
sudo systemctl start docker.service
# Load the kernel modules Kubernetes needs, persisting them across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by the setup; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
# Verify that the modules are loaded and the sysctl values took effect
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Configure Docker to use the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Add the Kubernetes apt repository and install kubelet, kubeadm and kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo hostnamectl set-hostname master-node
# Remove containerd's default config (which disables CRI) and restart it
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd

2. Next, we have to configure a Container Network Interface (CNI) plugin. You can configure any CNI according to your preference; this document covers three of them: Calico, Flannel, and Antrea. You only need to install one CNI per cluster. The official Kubernetes website lists the available CNI plugins and explains how to configure them.

Calico

Run the following commands and Calico will be installed.

kubeadm init --pod-network-cidr=192.168.0.0/16

Before running the next commands, make sure to complete step 3 below first.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-

Flannel

Run the following commands and Flannel will be installed.

kubeadm init --pod-network-cidr=10.244.0.0/16

Before running the next command, make sure to complete step 3 below first.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Antrea

Unlike Calico and Flannel, with Antrea you pass the CIDR range of your infra network, with a /16 mask, as the value of --pod-network-cidr=<CIDR Range for Pods>.

Run the following and Antrea will be installed.

kubeadm init --pod-network-cidr=<CIDR range for Pods>/16

Before running the next command, make sure to complete step 3 below first.

kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
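Whichever CNI you pick, it can save a failed kubeadm init to sanity-check the CIDR value beforehand. The snippet below is a hypothetical illustration (the sample value and the regex are not part of any CNI); it only checks that the value has the dotted-quad /16 shape discussed above.

```shell
#!/bin/bash
# POD_CIDR is a hypothetical example; replace it with your infra network's /16.
POD_CIDR="10.10.0.0/16"
# Check the basic a.b.c.d/16 shape before handing the value to kubeadm init.
if echo "$POD_CIDR" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/16$'; then
  echo "looks like a /16 CIDR: $POD_CIDR"
else
  echo "not a /16 CIDR: $POD_CIDR"
fi
```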

3. Run the below commands to set up the kubeconfig for kubectl. (The first three commands are for a regular user; the export line is the alternative for the root user.)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
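To confirm kubectl will pick up the right file, you can check which kubeconfig is in effect. This is a small sketch using the same paths as above:

```shell
#!/bin/bash
# kubectl uses $KUBECONFIG if it is set, otherwise ~/.kube/config.
cfg="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$cfg" ]; then
  echo "kubeconfig found: $cfg"
else
  echo "kubeconfig missing: $cfg"
fi
```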

4. After setting up the CNI, your master node will be ready. You can run the following command to view all your Pods initializing and running.

kubectl get pods --all-namespaces

Worker Node

1. As on the master node, copy the script below into a file, make it executable, and run it.

#!/bin/bash
sudo apt update -y && sudo apt upgrade -y
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable docker.service
sudo systemctl start docker.service
# Load the kernel modules Kubernetes needs, persisting them across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by the setup; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
# Verify that the modules are loaded and the sysctl values took effect
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Configure Docker to use the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Add the Kubernetes apt repository and install kubelet, kubeadm and kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo hostnamectl set-hostname worker-node-01
# Remove containerd's default config (which disables CRI) and restart it
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd

After running the script, the worker node will be ready to join the Kubernetes cluster.

Joining the Worker Node to the Kubernetes Cluster

To join the worker node to the Kubernetes cluster, follow the steps below.

On the master node, create a token using kubeadm and generate a join command.

kubeadm token create --print-join-command

This prints a complete ‘kubeadm join’ command. Run it on the worker node, and the node will join the Kubernetes cluster.
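The generated command has the shape sketched below (the IP, token, and hash here are made-up placeholders, not real credentials). If you script the join, you can pull the token and CA cert hash out of it like this:

```shell
#!/bin/bash
# A made-up example of what `kubeadm token create --print-join-command` emits.
join_cmd='kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1111111111111111111111111111111111111111111111111111111111111111'
# Extract the value that follows each flag.
token=$(echo "$join_cmd" | awk '{for (i = 1; i < NF; i++) if ($i == "--token") print $(i + 1)}')
ca_hash=$(echo "$join_cmd" | awk '{for (i = 1; i < NF; i++) if ($i == "--discovery-token-ca-cert-hash") print $(i + 1)}')
echo "token=$token"      # → token=abcdef.0123456789abcdef
echo "ca_hash=$ca_hash"
```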

You can run ‘kubectl get nodes’ from the master node to check whether the node has joined the cluster or not.

Removing a Worker Node from the Kubernetes Cluster

  1. First, you need to drain the node so its workloads are evicted. Run the following command:
kubectl drain <node_name> --ignore-daemonsets

2. Next, run the following command to remove the worker node from the cluster.

kubectl delete node <worker node name>

3. Next, go to the worker node server and run the following command so that the kubeadm state is reset on that instance as well.

kubeadm reset

4. Next, we need to remove the CNI data from the worker node. Go to the directory below and remove any files associated with the CNI.

cd /etc/cni/net.d/
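The four removal steps can be collected into one sketch. DRY_RUN=1 (the default here) only prints the commands so you can review them before running anything; the node name is a placeholder, and remember that steps 3-4 belong on the worker itself.

```shell
#!/bin/bash
NODE="${NODE:-worker-node-01}"
DRY_RUN="${DRY_RUN:-1}"
# Print the command in dry-run mode; execute it otherwise.
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}
# Steps 1-2 run on the master node:
run kubectl drain "$NODE" --ignore-daemonsets
run kubectl delete node "$NODE"
# Steps 3-4 run on the worker node itself:
run sudo kubeadm reset
run sudo rm -rf /etc/cni/net.d
```

Set DRY_RUN=0 only once the printed commands look right for your cluster.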


Arjun B Nair

DevOps Engineer with experience in cloud engineering, automation, Python programming, SQL, Networking, Containers, etc.