Over the years, numerous tools have been developed to bootstrap a Kubernetes cluster. A considerable proportion of these tools focus on delivering a holistic and smooth developer experience for cluster installation while exposing flags for advanced configuration (e.g. Kubespray, kops, Cluster API). Undoubtedly, this is the state of the art when it comes to production-ready clusters; however, these install mechanisms prove ponderous and time-consuming during the product development and testing stages. The end-user community needed lightweight bootstrap tools that enable speedy and reliable infrastructure provisioning in local environments.
Nowadays, the most prominent tools that simplify cluster creation on a local machine are minikube, kind, MicroK8s, and k3s. This blog post focuses on kind as a provisioning tool and its advanced configuration options for tailoring the networking layer.
Kind is an open-source tool that creates Kubernetes clusters using Docker containers as nodes. The two prerequisites for using kind are Docker and Go (1.11+).
In the past years, Docker has become the de facto application packager, allowing software to be tested in an isolated environment (aka a container). As such, most engineers will be familiar with operating the tool, but most importantly, it will already be available in their local environments.
Similarly, Go has gained a lot of traction as a development language. To install Go, follow the instructions here.
At its core, kind uses a base image built on top of Ubuntu. This provides good coverage of the package dependencies required to run systemd, containerd, and the Kubernetes components. To differentiate between the different flavors of Kubernetes, a node image is built on top of the base one; it contains the necessary binaries and configuration for the cluster.
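As a sketch of how node images are consumed, a specific Kubernetes flavor can be pinned in the cluster configuration by setting a node image per node (the image tag below is illustrative; pick a node image published for your kind release):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # illustrative tag; use a node image published for your kind version
  image: kindest/node:v1.25.3
```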
Once the kind CLI is installed on your local environment, a cluster can be created with a single command:
kind create cluster
Absolutely and magnificently easy!
By default, kind uses kindnetd to provide the network overlay functionality in a cluster. However, use cases may arise in which it is required to mirror the CNI plugin used in the production cluster, such as Canal, Calico, Cilium, or flannel. This is necessary to ensure code standardization, portability, and stability when workloads are propagated through different environments.
In these circumstances, kind provides a suite of advanced configuration options to customize the CNI plugin integration. The example below showcases how flannel can be installed as the default networking component.
First, it is mandatory to disable the installation of kindnetd as one of the core components of the cluster. Create a config.yaml file with the following content, where the disableDefaultCNI flag is used:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # the default CNI will not be installed
  disableDefaultCNI: true
To create the cluster, use the following command:
### create a cluster with no CNI pod
kind create cluster --config=config.yaml
At this stage, the node will be in a “NotReady” state and some of the pods (e.g. DNS) will be “Pending”. The networking layer now needs to be installed. In this example, flannel will be installed:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: Feel free to install the CNI plugin of your choice here
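One detail worth noting: the default flannel manifest assumes the pod network 10.244.0.0/16, which happens to match kind’s default podSubnet. If you override podSubnet in the kind config, the Network value in flannel’s net-conf ConfigMap must be kept in sync. A minimal sketch:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  # must match the Network value in flannel's net-conf ConfigMap
  podSubnet: "10.244.0.0/16"
```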
Whilst the cluster node will now be in a Ready state, some of the pods will fail with this error:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "81217ecd1e2aab8c948089ee9ddc15d88611ba36e49fc1f3d94133ef4400c3a8": failed to find plugin "flannel" in path [/opt/cni/bin]
This error is caused by the fact that the flannel CNI plugin binary is not available on the kind cluster node, which, as a reminder, is a Docker container running locally.
In this case, we need to mount the flannel CNI plugin into the cluster node. But before that, let’s build the CNI plugins locally:
git clone https://github.com/containernetworking/plugins.git
cd plugins
# this will build the CNI binaries in bin/
./build_linux.sh
Now, we can mount the CNI plugins as a volume to the kind Docker container:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # the default CNI will not be installed
  disableDefaultCNI: true
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /full/path/plugins/bin
    containerPath: /opt/cni/bin
Rebuild your kind cluster and all the components should be up and running!
Additionally, kind allows further tailoring of the networking layer, with parameters such as the CIDRs for pods and services, the API server address and port, and even the kube-proxy mode. For the full list of configuration options, follow this guide.
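As a sketch of these options, a single kind config can set custom CIDRs, the API server endpoint, and the kube-proxy mode (the values below are illustrative; adjust them for your environment):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # illustrative CIDRs; they must not overlap with your local networks
  podSubnet: "10.110.0.0/16"
  serviceSubnet: "10.115.0.0/16"
  # where the API server is exposed on the host
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
  # one of "iptables" (default) or "ipvs"
  kubeProxyMode: "ipvs"
```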
Lightweight bootstrapping tools have initiated a new era for developing and deploying applications to Kubernetes clusters. It is remarkable how tools such as kind can fortify and automate feature testing and propagation through multiple environments. And this can only be complementary to a smooth and simple developer experience, whilst still allowing a custom flavor of the networking layer.