Lightweight, multi-node, multiple local Kubernetes clusters on your Linux machine

Cristian Posoiu
7 min read · May 19, 2020


An easy way to run multiple Kubernetes clusters locally, with light resource consumption

Getting into the task

Many people these days work on, or develop for, Kubernetes clusters.
When I started, a long time ago, I had to install Kubernetes (and everything on top of it) on some bare-metal machines. And of course, I also wanted to be able to test things on my local machine.
I chose kubeadm for the bare-metal install.
For the local development environment setup, I don’t recall exactly all the options available at that time, but I think there were at least:

  • kubeadm
  • minikube
  • Juju, with LXC/LXD nodes

Important to keep in mind, because it is reflected in the solutions I checked and adopted: I’m a Linux user.

Wanted features

The features I was searching for:
a) simple to start/stop a cluster whenever I was working (or not working) on a project
b) multiple clusters, since I was working on multiple Kubernetes-based projects
c) multi-node clusters. Yes, I also wanted to be able to check how deployed software would behave when a node failed, or when a deployment had more than one replica
d) not much CPU/memory overhead beyond what the Kubernetes control plane itself uses. Think laptops, for example.
e) the ability to select the Kubernetes version

Quick rundown on alternatives

Let’s quickly examine some of the tools that existed at that time (2018?).

Kubeadm — while a great tool, running it directly on my development box would give me a single cluster with a single node.

Features rundown:
a) no, b) no, c) no, d) yes, e) yes

Minikube — it started a full VirtualBox virtual machine on my desktop. Sometimes my computer would peg a CPU at 100% while doing nothing (most probably VirtualBox’s fault, not minikube’s). Also, I don’t think it supported multi-node clusters.
Features rundown: a) yes, b) yes, c) no, d) no, e) yes

Juju — I had worked with LXC/LXD system containers for a long time and liked (and still like) their versatility, so I wondered whether I could use LXD containers as Kubernetes nodes. That would immediately earn a ‘yes’ on many items from the features list.
I found that Ubuntu’s Juju knew how to install Kubernetes that way, and I tried it. Out of more than three attempts, I think it kind of worked only once! When it did not work, Juju would just sit there for tens of minutes, waiting for something to happen!? Also, having Juju still running and visibly consuming CPU cycles after the install did not seem right to me.
Feature rundown:
a) hmm, yes? (it worked only once)
b) haven’t tested, most probably yes
c) yes
d) kind of yes
e) ?

Decision

The closest to what I wanted was Juju. The fact that the Juju/LXD combination worked once gave me hope. So I embarked on building my own shell script to start multi-node clusters with LXD and kubeadm. It was not that easy: I bumped into many issues, like kernel modules, root directories being mounted, container permissions, BTRFS storage pools, DHCP, and so on.

Initially it was a single, medium-sized shell script. At some point I decided I should share it with others, so I made the script more “presentable” (easier to develop on, enhance, and maintain). At the same time, I decided to allow it to be extended through addons.

Remember, though: since LXD runs only on Linux, this is a Linux-only solution, unlike some of the others.

Updated contenders list

As an update, here is what can be used for local development nowadays:

Minikube — it looks pretty complex now, and I’d like to try its Docker driver to see how it goes.
I do have a feeling that multi-node support is not in (yet).

Microk8s — easy to install, lightweight.
Features rundown:
a) yes. If I remember well, though, I did not like that after I said ‘stop’, something was still running.
b) no
c) no. You can “join” multiple running instances, though those most probably have to be on different VMs/physical machines?
d) 99% yes. See the comment at a)
e) yes

Kind — uses Docker containers as nodes.
I still need to test it, but based on a quick read of its docs, features rundown:
a) yes
b) yes
c) no
d) probably yes
e) kind of? Not clear from the documentation

k3s — seems a bit different in its intended usage and architecture. I have not tried it so far.
Based only on a quick look at their homepage and a few documentation pages, features rundown:
a) probably. I think they create systemd service files, meaning you could say “systemctl stop k3s”.
b) no
c) probably not. Their home page mentions “server”, “agent”, and “On a different node”. I hope you can at least run an agent and a server on the same machine…
d) yes.
e) probably not

Installing the script

The repository with the code is at https://github.com/cr1st1p/k8s-on-lxd

Until a distribution-specific installer is created, just “git clone” the repository and ensure the script k8s-on-lxd.sh is in your $PATH (or create a symlink to it, if not).

Running the script

Before creating a cluster, you will need to run a setup phase first:

k8s-on-lxd.sh --setup

This will first double-check that you have the necessary programs available (kubectl, LXD, jq, sort, etc.) and, for a couple of Linux distributions, even give suggestions on how to install them if they are not found.

Then it starts setting up LXD: it creates an LXD profile and begins the longest part (a few minutes), building some base LXD images. This is done once per Kubernetes version. Things can also be sped up by copying the images from somewhere else, for example your other computer or a colleague’s.
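As a sketch of that copy trick: LXD images can be exported to a tarball and imported elsewhere. The image alias below is hypothetical (run ‘lxc image list’ to find the real one), and the lxc commands are shown as comments since they need a running LXD daemon:

```shell
# Sketch: copy a prebuilt base image between machines instead of rebuilding it.
# The alias below is an assumption; 'lxc image list' shows the actual names.
ALIAS="k8s-base-1.18.2"
# On the machine that already has the image (typically produces ALIAS.tar.gz):
#   lxc image export "$ALIAS" "$ALIAS"
# Copy the tarball over, then on the other machine:
#   lxc image import "${ALIAS}.tar.gz" --alias "$ALIAS"
echo "base image alias assumed: $ALIAS"
```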

Now you are ready to start your first cluster. By the way, for now, clusters are built of one master node and N worker nodes. Standard pods will not be scheduled on master nodes, so you’ll need at least one worker node.

k8s-on-lxd.sh --name cluster1 --master

This will use the previously built images and start a new cluster with one master node, using the script’s default Kubernetes version (1.18.2 at the time of writing; more about versions later). The script will also do a few checks, like for swap or free disk space.

If successful, it should end with something like:

INFO:  Enjoy your kubernetes experience! 
INFO:  Feel free to report bugs, feature requests and so on, at https://github.com/cr1st1p/k8s-on-lxd/issues

In case of problems, the script should print some appropriate information or point you to a troubleshooting document.

Now, start one worker node:

k8s-on-lxd.sh --name cluster1 --worker

It should end on a note similar to the master’s.

And if, like me, you want more nodes, just run the same command again! There we have it: multi-node! See the nodes:

kubectl --context lxd-cluster1 get nodes
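With more than one worker, you can actually exercise the node-failure scenario mentioned earlier, by stopping a worker’s LXD container. A sketch, where the container name is a guess (run ‘lxc list’ to see the real names); the infrastructure commands are commented out since they need the cluster running:

```shell
# Simulate a worker node failure by stopping its LXD container.
# The container name is hypothetical; 'lxc list' shows the actual ones.
NODE="cluster1-worker-1"
#   lxc stop "$NODE"                              # after a while, the node goes NotReady
#   kubectl --context lxd-cluster1 get nodes      # watch the node status change
#   lxc start "$NODE"                             # bring the node back
echo "would stop and restart container $NODE"
```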

Accessing the cluster

After setting up the master node, the script creates a kubectl ‘context’ to help access the cluster, named lxd-CLUSTERNAME. For example, to access what we just created, where we gave the cluster the name cluster1:

kubectl --context lxd-cluster1 get pod --all-namespaces

So, you would use kubectl as always, just with the added parameter “--context lxd-cluster1”. There are also tools out there that can make switching between configurations/contexts even easier.
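The context naming is mechanical (lxd- plus the cluster name), so a tiny helper can spare some typing; the kubectl commands in the comments need the cluster from above:

```shell
# Map a cluster name to its kubectl context (naming scheme: lxd-<cluster>).
context_for() { printf 'lxd-%s\n' "$1"; }
# Make a context the default so --context can be dropped (requires the cluster):
#   kubectl config use-context "$(context_for cluster1)"
#   kubectl get nodes
context_for cluster1
```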

Multiple clusters

If you need another cluster, there is no need to remove this one. Just start over by creating a master with a different name, and then add some worker node(s):

k8s-on-lxd.sh --name my-second-cluster --master
k8s-on-lxd.sh --name my-second-cluster --worker

If you run an ‘lxc list’ command, you will see multiple containers running, for both ‘cluster1’ and ‘my-second-cluster’.

Start/Stop/Remove

The cluster setup is lightweight in resources thanks to LXD system containers, but what you run inside them might not be. So you will want to start and stop a cluster at will:

k8s-on-lxd.sh --name my-second-cluster --stop
# or
k8s-on-lxd.sh --name my-second-cluster --run

The above commands stop/start the LXD containers that the script finds to be part of the named cluster.
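If you juggle several clusters, stopping them all at the end of the day is just a loop over their names (fill in your own; shown as a dry run, drop the ‘echo’ to actually do it):

```shell
# Stop every local cluster (dry run: remove 'echo' to execute for real).
for name in cluster1 my-second-cluster; do
  echo k8s-on-lxd.sh --name "$name" --stop
done
```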
If you just want to remove a cluster altogether:

k8s-on-lxd.sh --name my-second-cluster --stop
k8s-on-lxd.sh --name my-second-cluster --delete

The “--delete” command will remove the LXD containers.

Addons

When I decided to share the script, I also reworked it a lot so that it has the notion of addons: code that can easily add new functionality. By the way, there is developer-oriented documentation as well, if you want to help :-)

You can find addons to:
- handle your proxy environment
- add Kubernetes Dashboard
- add a local storage class
- add NFS backed storage class

The addons either run whenever specific setup or container start/stop phases happen, or on demand. For example, the ‘proxy’ addon is always hooked into various phases of the script, while the ‘dashboard’ addon is installed/removed only at the user’s request.

Kubernetes versions

I did say you could have multiple Kubernetes versions running, right?

By default, the script uses version 1.18.2, as of this writing. But you can tell it to use a different version with the parameter ‘--k8s-version X.Y.Z’.

You need to specify the version in two of the steps: during setup (--setup), so that it creates the images with the appropriate Kubernetes version, and while starting a master (--master), so that it knows which image to use. It is not required when starting a worker node.

k8s-on-lxd.sh --k8s-version 1.13.12 --setup
k8s-on-lxd.sh --k8s-version 1.13.12 --name cluster2 --master
k8s-on-lxd.sh --name cluster2 --worker
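Each cluster then keeps its own version, which you can confirm per context from the VERSION column of ‘kubectl get nodes’. A sketch, with the contexts from the examples above (the kubectl line needs the clusters running, so it is commented out):

```shell
# Check which Kubernetes version each cluster runs (VERSION column, e.g. v1.13.12).
for ctx in lxd-cluster1 lxd-cluster2; do
  echo "cluster context: $ctx"
  #   kubectl --context "$ctx" get nodes
done
```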

The Kubernetes versions I have used most with this script: 1.13.12, 1.16.2, and 1.18.2.

Final words

Remember: there are multiple ways to run a local Kubernetes cluster, some of which I already mentioned, and there could be others as well, each with its strengths and weaknesses. As you saw, I did not test them all, and some of the comparisons were based only on reading documentation. As always, test on your own and choose whatever is appropriate for your specific needs and workflows :-)

Enjoy!
