And now: let’s spin up a Vagrant Kubernetes Cluster

Martien van den Akker · Nerd For Tech · Oct 5, 2023
Kubernetes Architecture

It has already been a few months since I wrote about a Kubernetes-related subject: using Docker as a K8S Container Runtime. That article was a follow-up in my journey to create a multi-node Vagrant/VirtualBox project for a Kubernetes cluster. Now, I conclude with a description of my Vagrant project that does the setup of a Kubernetes cluster.

My “Install-Kubernetes-on-your-own-laptop” series consists of the following articles:

The steps below are mainly based on the labs in the Udemy course Certified Kubernetes Administrator (CKA) with Practice Tests. I also used the Kubernetes documentation: getting started with a production environment. Another good resource is How to Start a Kubernetes Cluster From Scratch With Kubeadm and Kubectl.

I wrapped the different steps into scripts, run by Vagrant provisioners.

Prerequisites and where to find them

Some of my current colleagues who might be interested in this article use Apple MacBooks. Currently, those come with an Apple M2 chip. Since I work with VirtualBox VMs running Oracle Linux 8 on an Intel chipset, this probably won’t work directly on those Apples.

The complete project can be found in my GitHub project Vagrant — Oracle Linux 8 — Kubernetes. It has been there for a while already, but I only now found the time to write about it.

To use the project, you’ll need:

  • Vagrant: My current version is 2.3.7
  • VirtualBox: My current version is 7.0.10
  • Git: My current version is 2.42.0
  • An Intel x86-64 (64-bit) computer with a minimum of 16 GB of RAM and about 50 GB of free disk space.

The software can easily be installed and maintained using Chocolatey:

Chocolatey GUI showing Vagrant and VirtualBox
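For example, from an elevated PowerShell or Cmder prompt, using the package names as they appear in the public Chocolatey repository:

choco install vagrant virtualbox git -y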

How to spin up

Make a fork of my GitHub Vagrant — Oracle Linux 8 — Kubernetes project and clone that, or clone my repository directly.
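For example — replace the account placeholder with your own fork; the repository name matches the vagrant-ol-kubernetes project referenced later in this article:

$ git clone https://github.com/<your-github-account>/vagrant-ol-kubernetes.git
$ cd vagrant-ol-kubernetes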

Then edit the settings.yml file. See Modularizing Multi-machine Vagrant projects for the meaning of the different settings.

Having done that, open a command line window, a PowerShell, or a Cmder window, and issue the following command:

vagrant up

Then wait until the VMs are created and the main provisioners have run.

After the base provisioning is done, you might want to use the takesnapshots.cmd script to take snapshots of all the nodes. This enables you to revert to this stage when a follow-up provisioner fails.
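The takesnapshots.cmd script presumably wraps the standard Vagrant snapshot commands; doing it manually for the master node looks like this (the snapshot name is just an example):

$ vagrant snapshot save kubemaster-1 base-provisioned
$ vagrant snapshot restore kubemaster-1 base-provisioned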

Since I deliberately set the run property of all the Kubernetes install provisioners to never, you need to run each provisioner one by one with:

vagrant provision --provision-with ${provisioner-name}

When you’re confident about the running of the different provisioners, you can update the run property of all of them to once and give it a go. It should then end up with a complete, working Kubernetes environment in one run.

Explanation of the configuration steps

The following provisioners are currently defined in the provisioners.yml file:

Provisioners in the project

The provisioners prepLinux, initFileSystem, addOracleUser, and installDocker are already described in the first two articles.

The others are left for this article. The setup of the Kubernetes cluster in this project follows the instructions in the Kubernetes Docs — Production environment. As a disclaimer, I should state that this project would not suffice for a real production environment.

If you take a look at the Vagrantfile, you’ll see that some of the provisioners are defined on both the master and worker nodes.

These are:

  • setupHosts: Setup Hosts
  • updateDNS: Update DNS
  • setupBridgedTraffic: Setup Bridged Traffic
  • installCRIDocker: Install CRI Docker
  • installKubeCLIs: Install Kube CLIs and service

A few are only defined for the master node:

  • kubeadmInit: Initialize using Kubeadm
  • installWeave: Install the Network Policy Provider Weave
  • genJoinClusterScript: Generate a Join Cluster Script for the worker nodes

The last one is only defined for the worker nodes:

  • joinCluster: Join the worker nodes to the Cluster

I’ll explain each of these provisioners below.

Setup Hosts

The first step in the setup is to make sure that the cluster nodes, the master and the workers, can reach each other. This is done in the setupHosts provisioner that makes use of the setup-hosts.sh script.

This script will determine the IP address of the current NIC (Network Interface Card). It will add this address to the /etc/hosts file. To get to the other nodes, it will add the assumed IP addresses of the other nodes to the /etc/hosts file.
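A minimal sketch of that idea — the NIC name and the worker addresses are assumptions based on the settings used elsewhere in this article, and the actual setup-hosts.sh may differ:

# Register this node's host-only NIC address in /etc/hosts
MY_IP=$(ip -4 addr show enp0s8 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo "${MY_IP} $(hostname)" | sudo tee -a /etc/hosts

# Add the assumed addresses of the other cluster nodes
cat <<EOF | sudo tee -a /etc/hosts
192.168.56.11 kubemaster-1
192.168.56.12 kubeworker-1
192.168.56.13 kubeworker-2
EOF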

Run this provisioner with:

$ vagrant provision --provision-with setupHosts

Update DNS

The script run by this provisioner adds Google’s public DNS servers to the resolved.conf file. By doing so, the hosts can resolve the addresses needed to download the container images and other components.
See also Docker docs — Networking Overview: DNS services.
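A minimal sketch, assuming systemd-resolved is in use and using Google’s public resolvers 8.8.8.8 and 8.8.4.4 (the actual script may configure this differently):

# Point systemd-resolved at Google's public DNS servers
sudo sed -i 's/^#\?DNS=.*/DNS=8.8.8.8 8.8.4.4/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved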

Run this provisioner with:

$ vagrant provision --provision-with updateDNS

Setup Bridged Traffic

This provisioner configures Linux for the network plugin. The script automates the steps as described in Kubernetes docs — Install and configure prerequisites.

The modules br_netfilter and overlay are enabled. The Kubernetes bridged networking properties are set in the /etc/sysctl.d/k8s.conf.
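The commands the script automates are roughly those from that documentation page (a sketch; the actual script may differ in details):

# Load the required kernel modules now and at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system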

Run this provisioner with:

$ vagrant provision --provision-with setupBridgedTraffic

Install CRI Docker

In the past, Kubernetes used Docker by directly issuing Docker commands: it used the Docker APIs to create, start, and stop containers, etc. I may not state this completely accurately, but you can read more about it in FAQ: What’s the deal with dockershim and cri-dockerd?

In the current architecture, Kubernetes uses the standard Container Runtime Interface to work with containers. Docker still works with Kubernetes; however, you’ll need a CRI-conformant third-party adapter, which is cri-dockerd.

This provisioner and its script use the instructions on How to install cri-dockerd and migrate nodes from dockershim to do the install on an Oracle Linux 8 platform. Something like this may not be necessary when using other container engines, like containerd, as they may “talk” Container Runtime Interface natively.
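As a rough sketch of what such an install can look like on an RPM-based system — the version number and exact artifact name below are assumptions, so check the cri-dockerd releases page, and note the actual script may do this differently:

# Install a cri-dockerd release package (version/artifact name are examples)
CRI_DOCKERD_VERSION=0.3.4
sudo dnf install -y \
  https://github.com/Mirantis/cri-dockerd/releases/download/v${CRI_DOCKERD_VERSION}/cri-dockerd-${CRI_DOCKERD_VERSION}-3.el8.x86_64.rpm
# Enable the service and socket that expose the CRI socket at /run/cri-dockerd.sock
sudo systemctl enable --now cri-docker.service cri-docker.socket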

Run this provisioner after the installDocker provisioner with:

$ vagrant provision --provision-with installDocker
$ vagrant provision --provision-with installCRIDocker

Install Kube CLIs and service

The fun part of actually installing Kubernetes is near. However, to be able to do so, we need three components. Two of them are Command-Line Interfaces and one is actually an OS service:

  • kubelet: this is an agent that communicates with the container runtime engine to start or stop containers, create or drop volume mounts, and register the network-related changes to the networking component.
  • kubeadm: this is the Kubernetes administration tool. We use this to initialize and maintain the Kubernetes cluster.
  • kubectl: once the Kubernetes cluster is up and running, we use this interface to “talk” to the cluster. To tell Kubernetes to create the resources we want.

Besides the container runtime engine, the kubelet is about the only Kubernetes component that runs as a direct OS service, because it has to communicate with the different components on the OS level. In Oracle terms, you could compare it with an Oracle WebLogic Nodemanager or an Oracle Enterprise Manager Agent: it has quite comparable responsibilities.
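On an RPM-based distribution like Oracle Linux 8, installing these three typically boils down to the repository-plus-dnf steps from the Kubernetes docs. A sketch, with the minor version pinned purely as an example (the provisioning script may pin a different version):

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet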

Run this provisioner with:

$ vagrant provision --provision-with installKubeCLIs

Initialize using Kubeadm

This is the first provisioner that is only defined to run on the master node, because it initializes the so-called Control Plane of Kubernetes.

It uses kubeadm to create pods for the Kubernetes components like the kube-apiserver, etcd, kube-controller-manager, kube-scheduler, etc. These run only on the master nodes.

The actual creation of the Kubernetes cluster is done through the kubeadm init command:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.11 --cri-socket unix:///run/cri-dockerd.sock

Here you see that it advertises the api-server address as the address of the control plane. Also, it defines cri-dockerd.sock as the CRI socket, and it defines an IP address range for the pod network.

Another way of doing this would be to have a kubeadm-config.yaml like:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.1
controlPlaneEndpoint: "kubemaster-1:6443"
networking:
  podSubnet: 10.244.0.0/16

And then run:

$ kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out

Where the output is saved to the kubeadm-init.out file.

The current version of the provisioning script uses the first method. It copies the resulting /etc/kubernetes/admin.conf to the ~/.kube/config file. This way, the kubectl command is initialized to log on to the Kubernetes cluster.
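Copying the admin kubeconfig is the standard post-init step that kubeadm itself suggests; for the oracle user it looks roughly like this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config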

Run this provisioner with:

$ vagrant provision --provision-with kubeadmInit

It should run only on kubemaster-1.

Install the Network Policy Provider Weave

Networking is important for a Kubernetes environment. Pods need to get their own IP addresses, which we want to abstract using services. We may want to expose those services in a way that we can get to them through the public IP addresses of the nodes. And we may want to control which pods/services are allowed to reach which other pods/services. For instance, databases shouldn’t get connections directly from the evil internet.

Kubernetes has outsourced this to network policy providers like Weave, Calico, or Cilium. See for instance: Kubernetes Docs — Install a Network Policy Provider.

This provisioner installs Weave. It runs the install-weave.sh script, which determines the latest and greatest version of Weave, downloads Weave’s YAML manifest, and feeds it to kubectl apply.

This provisioner also only runs on the master node. It only needs to be run once for the complete cluster, since it uses kubectl to tell Kubernetes that we want the Weave components on all the nodes. Kubernetes then figures out how to make sure that all the nodes will have the proper Weave-related pods running. At the Initialize using Kubeadm step above, we initialized kubectl on the master node. So, it makes sense to run it only once on the master node.
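A minimal sketch of that approach — the GitHub API call and the release asset name are assumptions, and the actual install-weave.sh may differ:

# Determine the latest Weave Net release and apply its DaemonSet manifest
WEAVE_VERSION=$(curl -sL https://api.github.com/repos/weaveworks/weave/releases/latest \
  | grep -oP '"tag_name":\s*"\K[^"]+')
kubectl apply -f "https://github.com/weaveworks/weave/releases/download/${WEAVE_VERSION}/weave-daemonset-k8s.yaml"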

Run this provisioner with:

$ vagrant provision --provision-with installWeave

It should run only on kubemaster-1.

Generate a Join Cluster Script for the worker nodes

The worker nodes are also configured using kubeadm. This has to be done per worker node. In the output of the kubeadm init command, you’ll find the kubeadm join command to do this. This command includes a token and a hash of the Certificate Authority certificate, both generated during the Kubernetes initialization.

The token, however, is only valid for a limited time (by default 24 hours). So you’ll need to do the join within that window. When installing Kubernetes in one go, this won’t be much of a problem. However, in real life, you might want to extend your cluster at a later time. So, I wanted to have a script that generates a script to join the cluster on demand, and also to be able to have this done automatically in a provisioner.

The current version of the script used by this provisioner, gen-join-cluster-scr.sh, lists the current tokens, grabs the first one, and sets the $KUBE_TOKEN variable with it. To generate a new token, you could use the command:

$ sudo kubeadm token create

As an improvement, this could be added to the script.

The template of the script is join-cluster.sh.tpl; gen-join-cluster-scr.sh uses the envsubst command to expand that file into the actual join-cluster.sh script used by the worker nodes. The script is stored in the /vagrant shared folder, which is accessible to all the nodes in the Vagrant project.

Since this script uses kubectl to get the token, and openssl to get the public key of the CA certificate from the master node, this provisioner is run only on the Master node.
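A hedged sketch of that idea — the variable names (apart from $KUBE_TOKEN), the template location, and the target path are assumptions; the real gen-join-cluster-scr.sh may differ:

# Grab the first existing bootstrap token
export KUBE_TOKEN=$(sudo kubeadm token list | awk 'NR==2 {print $1}')

# Derive the sha256 hash of the cluster CA certificate's public key
export CA_CERT_HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

# Expand the template into the shared folder, where the workers can pick it up
envsubst < /vagrant/scripts/join-cluster.sh.tpl > /vagrant/scripts/join-cluster.sh
chmod +x /vagrant/scripts/join-cluster.sh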

Run this provisioner with:

$ vagrant provision --provision-with genJoinClusterScript

It should run only on kubemaster-1.

Join the worker nodes with the Cluster

For every worker node, this provisioner runs the join-cluster.sh script that was generated on the master node. It uses the kubeadm join command to create the Kubernetes components needed on the worker node and register them with the control plane.
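The generated command looks roughly like this, with the token and hash as placeholders and the api-server address from the kubeadm init step above:

$ sudo kubeadm join 192.168.56.11:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket unix:///run/cri-dockerd.sock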

Having run this one, you now have a running Kubernetes cluster.

Run this provisioner with:

$ vagrant provision --provision-with joinCluster

It should run only on the two worker nodes.

Check out your cluster

At this point, you will have a working Kubernetes cluster. So, let’s check it out.

First, go to the shell of the kubemaster:

$ vagrant ssh kubemaster-1

Then do a sudo to oracle:

[vagrant@kubemaster-1 ~]$ sudo su - oracle

Check out the nodes:

[oracle@kubemaster-1 ~]$ kubectl get nodes
List of the nodes

Next, list the deployments, DaemonSets, and pods in all the namespaces:

[oracle@kubemaster-1 ~]$ kubectl get deploy,ds,po -A -o wide
The pods on all the namespaces

Notice that the kube-proxy and weave-net pods are running on all the nodes.

Let’s create a deployment with 3 pods:

[oracle@kubemaster-1 ~]$ kubectl create deploy my-first-nginx --image=nginx --replicas=3

Then check out the created deployment and pods:

[oracle@kubemaster-1 ~]$ kubectl get deploy,pod -o wide
Deployment pods

Notice here that the pods only run on the kubeworker nodes; in my case, two on kubeworker-1 and one on kubeworker-2. This is because kubemaster-1 has a taint that prevents scheduling “worker” pods:

[oracle@kubemaster-1 ~]$ kubectl describe nodes | grep Taints:
kubemaster-1 taints, preventing scheduling pods
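With a default kubeadm setup, that output typically looks like this (only the control-plane node carries the NoSchedule taint):

Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>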

Conclusion

To me, this was a fun exercise: getting the bits and pieces into my own project.

There are several choices to make:

  • I chose Docker since I already had some install scripts for it. But, I might add another provisioner for containerd.
  • I used Weave here, but you could also choose Calico or Cilium. I could add a provisioner for one of these as well.

When I add other components to choose from, I will create provisioners for those. And I’ll try to document those in my GitHub project vagrant-ol-kubernetes. You can choose the combination of your liking.

But you can, of course, add provisioners yourself as well. I would be interested to see them.
