A Journey With Kubernetes: Part 2 — Introducing Virtual Machines
The adventure continues…
In my previous post, we explored the struggles and triumphs I went through while creating a Kubernetes cluster out of Raspberry Pis. Although there were quite a few issues getting it up and running, the end result was exciting and very workable. In this post, I am going to build on what I learned, but instead of Raspberry Pis I will be using virtual machines on a server. The goal is to get everything up and running on x64 architecture and to see whether the cluster is still usable after a power loss.
The server I will be working with is a Dell PowerEdge R710 with seventy-two gigabytes of RAM and three terabytes of storage in RAID 1, running Ubuntu Server 20.04. The virtual machines will run Ubuntu Server 20.04 as well. I will not go into detail about creating the virtual machines; just note that, like the POC I ran during the semester, we will have one primary node and three worker nodes. In our case here, I am using QEMU/KVM with virt-manager (VMware and VirtualBox should work just fine as well). It’s also helpful to have SSH installed.
To start off, it’s important to refresh the package lists with:
sudo apt update
With the package lists refreshed, Docker can be installed:
sudo apt install docker.io
To check that Docker did in fact get installed, run:
docker --version
Set Docker to launch when the system is booted:
sudo systemctl enable docker
To check that Docker is running:
sudo systemctl status docker
In case Docker isn’t running, this command will start it up:
sudo systemctl start docker
Next, it’s time to get Kubernetes installed. Before that can be done, I have to add the Google Cloud signing key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
After that, the Kubernetes package repository can be added to the software sources:
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Once added, Kubernetes can be installed:
sudo apt install kubeadm kubelet kubectl
Now that all packages are installed, swap has to be disabled before creating the cluster:
sudo swapoff -a
One thing I learned the hard way is that this command does not persist across reboots: when the virtual machine restarts, swap is re-enabled. To make the change permanent, I commented out the line beginning with /swap in the /etc/fstab file using the Nano text editor.
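If you’d rather not open an editor, a sed one-liner does the same thing. Here it is demonstrated on a throwaway copy of the file (the /swap.img entry is a sample; the exact swap line in your /etc/fstab may differ). On a real node you would run the same sed against /etc/fstab with sudo:

```shell
# Sample fstab stand-in; on a real node, target /etc/fstab instead.
cat > /tmp/fstab.demo <<'EOF'
/dev/sda1 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Comment out every entry whose filesystem type is "swap"
# (idempotent: already-commented lines keep a single leading #).
sed -i -e '/\sswap\s/ s/^#*/#/' /tmp/fstab.demo

grep swap /tmp/fstab.demo   # -> #/swap.img none swap sw 0 0
```

After the edit, the root filesystem line is left alone and only the swap entry is disabled, so the machine comes back up with swap off.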
With swap disabled, I ran the following command to initialize and create the cluster. Note that it took a few minutes to complete:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Output of the cluster being created successfully:

Running this command will show all nodes in the cluster:
kubectl get nodes
Output:

You’ll notice that the STATUS column says NotReady. This is because the cluster is missing networking software. Before that issue can be tackled, these commands need to be executed to set up the kubeconfig file that kubectl uses to talk to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
With that in place, Flannel can be installed, adding a network layer to the cluster so the different nodes can communicate with each other:
kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
The final step is to add the three worker nodes to the cluster. This was accomplished by running the bash script that I created here. Essentially, the script installs Docker and Kubernetes and disables swap (basically, what we did earlier ;)). After logging into the node em-worker01 and running the script, I ran the following command to get the node to join the cluster:
sudo kubeadm join 192.168.168.17:6443 --token h17syp.f8wwnr9bawpfl4bo --discovery-token-ca-cert-hash sha256:4663cd769b14427c4d33cd9e2fa94cb0832e836c921a58554143658233784ae4
After running this command, we get the following output:

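The script itself isn’t reproduced in this post, but based on the description it would look roughly like this sketch — the same Docker, Kubernetes, and swap steps from earlier collected into one file (a reconstruction for reference, not the exact script):

```shell
# Sketch of a worker bootstrap script: the same install and swap steps
# shown earlier, collected into one file. Written out here (not executed)
# so it can be copied to each worker node and run there.
cat > worker-setup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

sudo apt update
sudo apt install -y docker.io
sudo systemctl enable --now docker

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt install -y kubeadm kubelet kubectl

# Disable swap now and keep it off across reboots.
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab
EOF
chmod +x worker-setup.sh
```

Once copied to a worker, running it leaves the node ready for the kubeadm join command above.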
If you’re like me and you forget to copy the join command after initializing the cluster, running this on the primary node will print it again:
kubeadm token create --print-join-command
For the remaining two nodes (em-worker02 and em-worker03), I repeated the same steps. After all three worker nodes had joined the cluster, I went back to the primary node and re-ran the command to list all nodes:
kubectl get nodes
It took a couple of tries, but eventually the output looked like this:

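Rather than re-running the command by hand until everything settles, the wait can be scripted. Here is a small helper that checks whether every node’s STATUS column reads Ready, shown against sample output so it works without a live cluster (the node names and version in the sample are illustrative). On the primary node you would feed it live output instead: `all_ready "$(kubectl get nodes)"`.

```shell
# Returns success only when every node's STATUS column is exactly "Ready".
all_ready() {
  # Skip the header row, pull column 2, and fail if any value isn't Ready.
  ! printf '%s\n' "$1" | tail -n +2 | awk '{print $2}' | grep -qv '^Ready$'
}

# Sample output; on a real cluster use: all_ready "$(kubectl get nodes)"
sample='NAME          STATUS     ROLES    AGE   VERSION
em-primary    Ready      master   12m   v1.21.0
em-worker01   NotReady   <none>   2m    v1.21.0'

if all_ready "$sample"; then echo "cluster ready"; else echo "still waiting"; fi
# -> still waiting (em-worker01 has not finished joining yet)
```

Wrapping the check in a loop with a short sleep gives a simple “wait until ready” step for scripts.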
At this point, the cluster is ready to use. The big perk is that it is up and running on x64 architecture, so I shouldn’t have any issues with a lack of Docker images. It’s also worth mentioning that the cluster was able to survive a server restart. This was a relief to see, since it was quite annoying having to rebuild everything after a restart on the Raspberry Pis. In the next part of this series, I will attempt to get multiple databases deployed to the shiny new cluster. Cheers!
Special thanks to the Kubernetes documentation and Stack Overflow for helping me figure this out!