Setting up a local Kubernetes cluster from scratch

Prerequisites

Install the following:

- Vagrant
- A virtual machine provider such as VirtualBox

Vagrantfile

We will be using a Vagrantfile to get a cluster with one control-plane node and two worker nodes. This simplifies the creation of the cluster while not hiding away the details of how a cluster gets created.
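
A minimal Vagrantfile for this setup might look like the sketch below. The node names match the ones used throughout this article; the base box, private-network IPs, resource sizes, and the provision.sh script are assumptions, not the article's exact file.

# Sketch of a Vagrantfile for one control-plane node and two workers.
# Box, IPs, sizing, and provision.sh are assumed, not taken from the article.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  ["k8s-cp", "k8s-worker-1", "k8s-worker-2"].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.56.#{10 + i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
        vb.cpus = 2
      end
      # Assumed script that installs containerd, kubeadm, kubelet,
      # and kubectl on every node.
      node.vm.provision "shell", path: "provision.sh"
    end
  end
end

kubeadm init is later driven by a ClusterConfiguration, saved as kubeadm-config.yaml on the control-plane node. It pins the Kubernetes version, points the control-plane endpoint at k8s-cp, and reserves a subnet for pod networking: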

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.22.1
controlPlaneEndpoint: "k8s-cp:6443"
networking:
  podSubnet: 10.244.0.0/16

Vagrant up!

Start the cluster with:

vagrant up
==> k8s-worker-2: Machine booted and ready!
==> k8s-worker-2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> k8s-worker-2: flag to force provisioning. Provisioners marked to run always will still run.
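
Before moving on, you can confirm that all three VMs are up; vagrant status should list k8s-cp, k8s-worker-1, and k8s-worker-2 as running.

vagrant status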

Accessing the control plane

You can then ssh into the control plane:

vagrant ssh k8s-cp
# Reset if needed
sudo kubeadm reset
sudo kubeadm init --config ./kubeadm-config.yaml
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install a CNI
# Weaveworks
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Alternative: Calico (we found it buggy as of 2022)
# kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
# kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
# Restart daemons
sudo systemctl daemon-reload
sudo systemctl restart kubelet
vagrant@k8s-cp:~$ alias k=kubectl
vagrant@k8s-cp:~$ k get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8s-cp   Ready    control-plane,master   8d    v1.22.1
vagrant@k8s-cp:~$ k get po -n kube-system
NAME                             READY   STATUS    RESTARTS       AGE
coredns-78fcd69978-sz5mb         1/1     Running   0              2m13s
coredns-78fcd69978-td5lg         1/1     Running   0              2m13s
etcd-k8s-cp                      1/1     Running   0              2m21s
kube-apiserver-k8s-cp            1/1     Running   0              2m20s
kube-controller-manager-k8s-cp   1/1     Running   0              2m21s
kube-proxy-dbqhm                 1/1     Running   0              2m13s
kube-scheduler-k8s-cp            1/1     Running   0              2m20s
weave-net-njrsl                  2/2     Running   1 (114s ago)   2m13s
# If the Weave pod is restarting, inspect it with:
k describe po weave-net-njrsl -n kube-system
# Generate the join command; use its output on the worker nodes
sudo kubeadm token create --print-join-command

Add worker nodes

Go back to your local shell and open a connection to the first worker node:

vagrant ssh k8s-worker-1
kubeadm join k8s-cp:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
vagrant@k8s-cp:~$ k get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-cp         Ready    control-plane,master   4m11s   v1.22.1
k8s-worker-1   Ready    <none>                 27s     v1.22.1
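
The second worker joins the same way; assuming the token from earlier is still valid, run the same join command there:

vagrant ssh k8s-worker-2
kubeadm join k8s-cp:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx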

Summary

In summary, we have created a local Kubernetes cluster from scratch using a Vagrantfile. We booted up a control plane, installed a container network interface, and then added the worker nodes to the cluster using the join command. We now have the foundation for running k8s exercises locally!
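
As a quick smoke test, you can schedule a throwaway pod and check that it lands on one of the workers (the pod name nginx here is arbitrary):

vagrant@k8s-cp:~$ k run nginx --image=nginx
vagrant@k8s-cp:~$ k get po -o wide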
