Kubernetes multi-node cluster with k3s and multipass

Mattia Peri
Jun 2

When working with Kubernetes, you may find yourself needing a local cluster for development and testing purposes. Of course, Minikube is an option. But what if we need something more powerful without any added complexity? For example, what if we are preparing ourselves for the CKAD: Certified Kubernetes Application Developer certification?

Here is my personal solution on OS X (it should work smoothly on GNU/Linux too) that I’d like to share with you: “multipass” + “k3s”.

Multipass

First of all, we need a virtualization layer in order to run any number of Kubernetes nodes. Quite possibly the easiest way to get an Ubuntu VM on OS X is multipass, created by Canonical Ltd:

It’s a system that orchestrates the creation, management and maintenance of virtual machines and associated Ubuntu images to simplify development.

k3s

Since I want to keep things small and simple, I take advantage of the amazing Kubernetes project created by Rancher: k3s. K3s promises to be a lightweight Kubernetes:

K3s is a Certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

How to create a local Kubernetes cluster

KISS style: just 9 commands in 3 easy steps to set up a basic 3-node k3s cluster.

Step 1: Install multipass

I take it for granted that brew.sh is already installed on your Mac; if not, please follow that link first. After that, it is nothing more than:

$ brew cask install multipass
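
A quick sanity check to confirm the CLI is in place (and, optionally, to see which Ubuntu images are available for launch):

$ multipass version
$ multipass find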

Step 2: Create the Virtual Machines

Let’s assume we would like a Kubernetes cluster with 1 master node (“k3s-master”) and 2 (worker) nodes (“k3s-worker1” and “k3s-worker2”).

$ multipass launch --name k3s-master --cpus 1 --mem 512M --disk 3G
$ multipass launch --name k3s-worker1 --cpus 1 --mem 512M --disk 3G
$ multipass launch --name k3s-worker2 --cpus 1 --mem 512M --disk 3G
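
Before going on, it’s worth making sure the three VMs are actually up and have been assigned an IP address (the addresses will of course differ on your machine):

$ multipass list
$ multipass info k3s-master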

Step 3: Create the k3s cluster

Things become a little trickier here, so each command below carries a short comment explaining what it does:

# Deploy k3s on the master node
$ multipass exec k3s-master -- /bin/bash -c "curl -sfL https://get.k3s.io | sh -"
# Get the IP of the master node
$ K3S_NODEIP_MASTER="https://$(multipass info k3s-master | grep "IPv4" | awk -F' ' '{print $2}'):6443"
# Get the TOKEN from the master node
$ K3S_TOKEN="$(multipass exec k3s-master -- /bin/bash -c "sudo cat /var/lib/rancher/k3s/server/node-token")"
# Deploy k3s on the worker node
$ multipass exec k3s-worker1 -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=${K3S_NODEIP_MASTER} sh -"
# Deploy k3s on the worker node
$ multipass exec k3s-worker2 -- /bin/bash -c "curl -sfL https://get.k3s.io | K3S_TOKEN=${K3S_TOKEN} K3S_URL=${K3S_NODEIP_MASTER} sh -"

Check everything:

$ multipass list
Name           State     IPv4           Release
k3s-worker2    RUNNING   192.168.64.5   Ubuntu 18.04 LTS
k3s-worker1    RUNNING   192.168.64.4   Ubuntu 18.04 LTS
k3s-master     RUNNING   192.168.64.3   Ubuntu 18.04 LTS
$ multipass exec k3s-master -- kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k3s-master    Ready    <none>   48s   v1.14.1-k3s.4
k3s-worker1   Ready    <none>   16s   v1.14.1-k3s.4
k3s-worker2   Ready    <none>   6s    v1.14.1-k3s.4

Note the <none> in the ROLES column: we’ll take care of that in Step 5 below.
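
If you are curious about which labels k3s did set out of the box, you can already peek at them from the master node (kubectl on the host isn’t configured yet; we’ll do that in Step 4):

$ multipass exec k3s-master -- kubectl get nodes --show-labels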

That’s it, the Kubernetes cluster is up and running. Nevertheless, you might want to have a look at the following steps:

How to create a local Kubernetes cluster (that makes sense)

Below I’m going to add some tips to make the cluster actually useful for development and testing purposes:

4. Configure kubectl
5. Configure cluster node roles and taint
6. Helm installation
7. Service type “NodePort”
8. Ingress controller Traefik

Step 4: Configure kubectl

If we want to forget about the multipass CLI, it’s very easy to point the kubectl installed on your host machine directly at the brand new k3s cluster (I’m assuming kubectl is already installed on your machine; otherwise: $ brew install kubernetes-cli). First, we need to retrieve the kubectl config file from the k3s-master node and edit it with the k3s-master IP address, as described hereunder:

# Copy the k3s kubectl config file locally
$ multipass copy-files k3s-master:/etc/rancher/k3s/k3s.yaml ${HOME}/.kube/k3s.yaml
# Edit the kubectl config file with the right IP address
$ sed -ie s,https://localhost:6443,${K3S_NODEIP_MASTER},g ${HOME}/.kube/k3s.yaml
# Check
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml get nodes

Specifying --kubeconfig every time is tedious and, above all, potentially dangerous in case we forget it and run the wrong command against the wrong cluster (anyone? no? really?!). It might be worth merging the k3s.yaml kubectl config file into your current ${HOME}/.kube/config file (a merge sketch follows the two options below) or, since our k3s cluster is not meant to be set in stone, we have a couple of easier options (see the official documentation for details):

# Create a dedicated alias:
$ alias k3sctl="kubectl --kubeconfig=${HOME}/.kube/k3s.yaml"

or

# Use the KUBECONFIG variable
$ export KUBECONFIG=${HOME}/.kube/k3s.yaml
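
If you prefer the merge option mentioned above, one approach is to let kubectl itself merge the two files through the KUBECONFIG variable and flatten the result. A rough sketch (back up your existing config first, and note that the k3s context, cluster and user are all named “default” in k3s.yaml, so you may want to rename them to avoid clashes):

# Back up the current config, then merge and flatten the two files
$ cp ${HOME}/.kube/config ${HOME}/.kube/config.backup
$ KUBECONFIG=${HOME}/.kube/config:${HOME}/.kube/k3s.yaml kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config ${HOME}/.kube/config
# Switch between clusters with contexts
$ kubectl config get-contexts
$ kubectl config use-context default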

Step 5: Configure cluster node roles and taint

As we have previously noticed, the 3 nodes have no roles. That’s because k3s doesn’t apply the usual node-role labels out of the box, and it doesn’t taint the master node to keep regular workloads off it either.

Let’s take care of these two configurations:

# Configure the node roles:
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml label node k3s-master node-role.kubernetes.io/master=""
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml label node k3s-worker1 node-role.kubernetes.io/node=""
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml label node k3s-worker2 node-role.kubernetes.io/node=""
# Configure taint NoSchedule for the k3s-master node
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml taint node k3s-master node-role.kubernetes.io/master=effect:NoSchedule

The nodes roles are now properly configured:

$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml get nodes
NAME          STATUS   ROLES    AGE     VERSION
k3s-master    Ready    master   4h12m   v1.14.1-k3s.4
k3s-worker1   Ready    node     3h57m   v1.14.1-k3s.4
k3s-worker2   Ready    node     3h57m   v1.14.1-k3s.4
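
The NoSchedule taint on the master can be double-checked as well:

$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml describe node k3s-master | grep -i taints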

Finally, we are ready for a deployment. What I’d like to highlight is that the NGINX pods are going to be scheduled only on the k3s-worker1 and k3s-worker2 nodes, thanks to the NoSchedule taint on the master:

$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml run nginx --image=nginx --replicas=3 --expose --port 80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/nginx created
deployment.apps/nginx created
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE          NOMINATED NODE   READINESS GATES
nginx-755464dd6c-6nzvr   1/1     Running   0          3h22m   10.42.1.3   k3s-worker2   <none>           <none>
nginx-755464dd6c-rkd6r   1/1     Running   0          3h22m   10.42.1.4   k3s-worker2   <none>           <none>
nginx-755464dd6c-v5v64   1/1     Running   0          3h22m   10.42.2.3   k3s-worker1   <none>           <none>
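
As a side note on that taint: if you ever want a pod to land on the master anyway, it needs a matching toleration (and, to force it there, a node selector on the label we just added). A minimal sketch, with a hypothetical pod name, could look like this:

$ cat <<EOF > master-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: master-pod
spec:
  # Only schedule on nodes carrying the master role label
  nodeSelector:
    node-role.kubernetes.io/master: ""
  # Tolerate the NoSchedule taint we applied to k3s-master
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx
EOF
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml apply -f master-pod.yaml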

Step 6: Helm installation

Helm combines a template engine and a package manager for Kubernetes. Installing it on k3s is not quite as straightforward as we are used to; indeed, it requires a few extra steps:

# Create the Tiller service account and give it cluster-admin rights
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml -n kube-system create serviceaccount tiller
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
# Initialize Helm with the Tiller service account, then install a test chart
$ helm --kubeconfig=${HOME}/.kube/k3s.yaml init --service-account tiller
$ helm --kubeconfig=${HOME}/.kube/k3s.yaml install stable/mysql
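
Once Tiller is up, a quick check that Helm can talk to the cluster (the release name of the MySQL chart is auto-generated, so yours will differ):

# Tiller should be running in kube-system
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml -n kube-system get pods -l name=tiller
# The freshly installed MySQL release should be listed
$ helm --kubeconfig=${HOME}/.kube/k3s.yaml ls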

If you are interested in learning more about Helm, I shared my experience with Helm chart repositories in the following story: “Create a public Helm chart repository with GitHub Pages”.

Step 7: Service type “NodePort”

Actually, this step is not k3s-specific. I just wanted to highlight that creating NodePort services is as easy as usual, because the network between the host machine and the VMs is transparent to the user:

$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml create deploy nginx2 --image=nginx
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml create svc nodeport nginx2 --tcp=30001:80 --node-port=30001
$ curl -XGET -s -I -o /dev/null -w "%{http_code}\n" http://$(multipass info k3s-master | grep "IPv4" | awk -F' ' '{print $2}'):30001
200
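
Since a NodePort is opened on every node of the cluster, the very same request also works against the workers’ IP addresses, not only the master’s:

$ curl -XGET -s -I -o /dev/null -w "%{http_code}\n" http://$(multipass info k3s-worker1 | grep "IPv4" | awk -F' ' '{print $2}'):30001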

Step 8: Ingress controller Traefik

I didn’t mention it before, but it’s easy to imagine that, in order to keep things light, k3s comes with some compromises, and one of them regards Ingress. For the Ingress resource to work, the cluster must have an Ingress controller running (often NGINX). K3s ships with Traefik for this purpose instead. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. Let’s see a simple example of how to configure an Ingress resource (just edit the YAML file with the right IP address):

$ cat <<EOF > ingress-controller.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: 192.168.64.3.xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx2
          servicePort: 30001
EOF
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml apply -f ingress-controller.yaml
ingress.extensions/ingress created
$ curl -XGET -s -I -o /dev/null -w "%{http_code}\n" http://192.168.64.3.xip.io/
200

xip.io (like the similar nip.io) provides “dead simple wildcard DNS for any IP address”, allowing you to map any IP address to a hostname; that’s why 192.168.64.3.xip.io resolves to the master node’s IP without touching /etc/hosts.
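
To see the Traefik deployment and the service load balancer pods that k3s ships with, have a look at the kube-system namespace (exact names and versions may differ depending on your k3s release):

$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml -n kube-system get svc traefik
$ kubectl --kubeconfig=${HOME}/.kube/k3s.yaml -n kube-system get pods -o wide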

Final step: clean-up everything

It was a nice journey; now it’s time to get rid of everything:

$ multipass stop k3s-master k3s-worker1 k3s-worker2
$ multipass delete k3s-master k3s-worker1 k3s-worker2
$ multipass purge

Conclusions

It’s very easy to create a local multi-node Kubernetes cluster that offers a nice degree of realism without losing Minikube’s simplicity. Moreover, the solution described adds a remarkable level of flexibility thanks to the multipass VM layer, which keeps your Kubernetes tweaking isolated from the host.


I hope you find this information useful; any feedback or suggestions for improvement are very welcome.
