Creating multiple Kubernetes clusters using Kind — Part 1
Kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
In this blog, we will focus on creating two Kubernetes clusters (sitea and siteb) using kind and, at a high level, cover:
- How to write config files and create clusters from them
- How to inspect the clusters and understand what kind did from a Docker perspective
- How to interact with the clusters
Let’s install kind as per the instructions mentioned here.
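If you want a quick sketch of the install instead, one common approach on Linux is to download the release binary from the kind site (the version and architecture below are illustrative; pick the ones that match your setup):
$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
$ chmod +x ./kind
$ sudo mv ./kind /usr/local/bin/kind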
How to write config files and create clusters from them:
Kind needs a config file that describes the cluster configuration: how many nodes the cluster should have, how many of them are control plane nodes, which ports need to be opened, which host folders need to be mounted into the nodes, which Kubernetes version to use, and so on.
Kind uses the kindest/node image to create the Kubernetes nodes, as shown below:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kindest/node <none> 3da3ccb2a738 2 weeks ago 926MB
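Incidentally, the tag shows up as <none> here. If a specific Kubernetes version is needed, the node image can be pinned explicitly when creating a cluster (the tag below is only illustrative):
$ kind create cluster --image kindest/node:v1.27.1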
Now, let us make use of this and write simple config files for our clusters sitea and siteb.
sitea:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32333
    hostPort: 32333
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp
- role: worker
- role: worker
- role: worker
siteb:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32334
    hostPort: 32334
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp
- role: worker
- role: worker
In the above config files, the number of role entries under nodes defines how many nodes are deployed in the cluster, and each role entry defines whether that node acts as a control plane or a worker. We can also specify port mappings and mount points under a role as needed, as shown in the sketch below.
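For example, a host folder can be mounted into a node with extraMounts under its role (the paths below are purely illustrative):
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /path/on/host
    containerPath: /path/in/node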
We need to pass the sitea and siteb files to kind as follows:
$ kind create cluster --name sitea --config sitea
On success, kind prints the following while creating the sitea cluster:
Creating cluster "sitea" ...
✓ Ensuring node image (kindest/node:v1.27.1) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-sitea"
You can now use your cluster with:
kubectl cluster-info --context kind-sitea
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
$ kind create cluster --name siteb --config siteb
Creating cluster "siteb" ...
✓ Ensuring node image (kindest/node:v1.27.1) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-siteb"
You can now use your cluster with:
kubectl cluster-info --context kind-siteb
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
If you observe, there are only 2 worker nodes in the siteb configuration. The reason is the error below, which pops up when 3 worker nodes are added to the second cluster. While I suspect this is a limitation of my system, please feel free to comment if you can think of any other reason :)
Error:
Creating cluster "siteb" ...
✓ Ensuring node image (kindest/node:v1.27.1) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✗ Joining worker nodes 🚜
Deleted nodes: ["siteb-worker3" "siteb-worker" "siteb-worker2" "siteb-control-plane"]
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged siteb-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I0603 17:48:03.474240 264 join.go:412] [preflight] found NodeName empty; using OS hostname as NodeName
Inspect the clusters and understand what kind did from docker perspective:
Kind stands for Kubernetes IN Docker, which means it deploys Kubernetes nodes inside Docker containers. Now let's take a look at what we have deployed from the Docker perspective.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ee07c3c4db9 kindest/node:v1.27.1 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes siteb-worker
02ca90a0cb6a kindest/node:v1.27.1 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes siteb-worker2
1ec24eda468c kindest/node:v1.27.1 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 0.0.0.0:32334->32334/tcp, 127.0.0.1:38483->6443/tcp siteb-control-plane
251a2d8efea9 kindest/node:v1.27.1 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes 0.0.0.0:32333->32333/tcp, 127.0.0.1:44293->6443/tcp sitea-control-plane
ac194c3ed31a kindest/node:v1.27.1 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes sitea-worker3
2a77735b3d17 kindest/node:v1.27.1 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes sitea-worker2
233f221d1aaf kindest/node:v1.27.1 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes sitea-worker
The names of these containers clearly tell us which kind of Kubernetes node each one is running as. Also, the sitea config file has opened port 32333, which shows up in the docker ps output for the sitea control plane, and likewise port 32334 for the siteb control plane. Kind not only creates containers but also a Docker network for these containers to run in; that can be covered in depth in a later blog. For now, we have an understanding of what containers are created by the kind create command.
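If you are curious about that network already, kind attaches all of these node containers to a dedicated Docker bridge network, which recent kind versions name kind by default; assuming that default name, you can peek at it with:
$ docker network ls
$ docker network inspect kind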
How to interact with the clusters:
Now that we have created 2 clusters, we can view them as follows:
$ kind get clusters
sitea
siteb
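kind itself can also list the node containers belonging to a particular cluster, which should mirror what we saw in docker ps:
$ kind get nodes --name sitea
$ kind get nodes --name siteb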
We can download kubectl to interact with these clusters. Now that both clusters are created, this is what happens when we try to get the nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
siteb-control-plane Ready control-plane 16m v1.27.1
siteb-worker Ready <none> 15m v1.27.1
siteb-worker2 Ready <none> 15m v1.27.1
kubectl is pointing to the siteb cluster, which means all commands issued with kubectl are applied to the siteb cluster only. Now, to interact with the sitea cluster, we need to change the context as follows:
$ kubectl config use-context kind-sitea
Switched to context "kind-sitea".
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
sitea-control-plane Ready control-plane 19m v1.27.1
sitea-worker Ready <none> 19m v1.27.1
sitea-worker2 Ready <none> 19m v1.27.1
sitea-worker3 Ready <none> 19m v1.27.1
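Alternatively, if we do not want to keep switching the current context, the target cluster can be passed per command with the --context flag:
$ kubectl get nodes --context kind-siteb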
We can see that after changing the context, the get nodes command now runs against the sitea cluster. Kind writes the cluster credentials into the kubeconfig file at $HOME/.kube/config, which stores the contexts. We can look at the contents of that file to see how clusters and users are mapped to contexts and what the current context is.
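For example, the available contexts and the current context can be listed with:
$ kubectl config get-contexts
$ kubectl config current-context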
Note: When we create a cluster using kind, for example myfirstcluster, its context is stored as kind-myfirstcluster in the config file, so when you change contexts you need to add the kind- prefix in front of the cluster name.
A cluster can be deleted using the command kind delete cluster --name <clustername> as follows:
$ kind delete cluster --name sitea
Deleting cluster "sitea" ...
Deleted nodes: ["sitea-control-plane" "sitea-worker3" "sitea-worker2" "sitea-worker"]
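If you want to clean up everything at once, recent kind versions also support deleting all clusters in one go:
$ kind delete clusters --all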
That's it for this blog! Let's see how these kind clusters communicate with each other in the next blog.
If you are interested in exploring kind further, here is the quick start guide from which all of this is referenced.