CODEX

Reliable Kubernetes on a Raspberry Pi Cluster: The Foundations

Scott Jones
Jan 14 · 6 min read

In my previous article, I gave you the lowdown on what's in my cluster. Now, let's take a look at what it took to get it in place.

Part 1: Introduction
Part 2: The Foundations
Part 3: Storage
Part 4: Monitoring
Part 5: Security

Hardware

I am running a cluster of 3 RPis — 1 Pi4 and 2 Pi 3B+s. Each of those is connected individually to a power supply and a switch. I don’t have any fancy PoE hats, although that would be a very good improvement for my setup. One key thing to mention at this point is storage — we are going to need centralized storage somewhere in the cluster, and I opted for an external HDD, but any mountable storage will do the trick. At this point, the network diagram is fairly simple.

Cluster network topology

Cluster Architecture

As you can see from the topology diagram, I have a single master node and two additional agents. This does introduce a single point of failure into the cluster — if the master node dies, then nothing will be managing the cluster. I have accepted this risk, given that my HDD is a single point of failure too. I have mitigated it by keeping both failure points on the same node, meaning the other two nodes are nonessential and the cluster can be recovered if either of them fails.

Setting up the Master Node

Before installing the cluster, you need to ensure that any storage attached to the node is mounted on boot. How to do that is out of scope for this article, but there are many good resources around to suit any use case. Additionally, all hostnames should be set up across the cluster before installing K3s; otherwise, you can get in a bit of a pickle. Installing K3s is one simple command:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --no-deploy traefik --disable servicelb' INSTALL_K3S_VERSION="v1.18.9+k3s1" sh

This snippet will install and start K3s without Traefik and the built-in service load balancer. That lets me install the version of Traefik I want manually, with more control, and install MetalLB later. I have also pinned the version so I can install the same one across the cluster. I originally ran a later version, but it caused CPU usage to skyrocket; thanks to my cluster monitoring, I could pinpoint that. At the time of writing, v1.18.9+k3s1 was the latest version I had proven stable on my setup.

Checking that the cluster is up and running is really straightforward. This command will tell you what is up and connected:

sudo kubectl get nodes

If it has worked, it should list your master node with a status of Ready.

Setting up your agents

The first thing you need is the node token from the master. Run the following command to retrieve it:

cat /var/lib/rancher/k3s/server/node-token

Once you have that, run the following on each of your agent nodes:

curl -sfL https://get.k3s.io | K3S_URL=https://<<MASTER-NODE-IP>>:6443 K3S_TOKEN=<<NODE-TOKEN>> INSTALL_K3S_VERSION="v1.18.9+k3s1" sh -
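To make the placeholders concrete, here is a small sketch that builds the join command from variables; the master IP and token below are made-up example values, not real ones:

```shell
# Example values only — substitute your master's LAN IP and the
# token read from /var/lib/rancher/k3s/server/node-token.
MASTER_IP="192.168.1.100"
NODE_TOKEN="K10example::server:token"
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${NODE_TOKEN} INSTALL_K3S_VERSION=\"v1.18.9+k3s1\" sh -"
# Printed rather than executed here; run it on each agent node.
echo "$JOIN_CMD"
```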

Once you have done that on each node, check the nodes again; the output should now list all three.

You now have a three-node cluster! It is, however, of limited external use until you install an ingress controller such as Traefik.

Access outside the cluster

The simplest way is to take a copy of your kubeconfig file (found at /etc/rancher/k3s/k3s.yaml on the master) and use it as the config file for kubectl on another machine. Be aware that you will need to change the server address from 127.0.0.1 to the IP address of your master node.
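As a minimal sketch, assuming the master's LAN IP is 192.168.1.100 (an example address), the address swap is a one-line sed. A stand-in file is created here so the snippet is self-contained; on a real machine you would scp the file from the master instead:

```shell
# On the real machine, fetch the file first:
#   scp pi@192.168.1.100:/etc/rancher/k3s/k3s.yaml ./k3s.yaml
# Stand-in for the copied kubeconfig:
cat > k3s.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
EOF
# Point the client at the master's LAN IP instead of localhost.
sed -i 's/127\.0\.0\.1/192.168.1.100/' k3s.yaml
grep server: k3s.yaml
```

You can then use it with `kubectl --kubeconfig ./k3s.yaml get nodes`, or export it via the KUBECONFIG environment variable.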

Installing Traefik

One of the most important things we need to set up is our ingress controller. Thankfully, that's really simple thanks to a tool called Helm, which we will use to install Traefik into our cluster. You can specify a lot of things in the values, but the main ones to look at are the TLS setup for our HTTPS certificates and the load balancer config. Each of those will be specific to your setup. For mine, I have set a specific load balancer external IP so that I know I will always get the same entry point, rather than relying on auto-assignment. See here for a base set of defaults; you can take a copy and update it for your scenario. When you have that, run the following commands:

sudo kubectl create namespace traefik
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik --namespace=traefik --values=traefik.values.yaml
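For reference, the load-balancer part of my traefik.values.yaml looks roughly like this. This is a sketch assuming the traefik/traefik chart's `service.spec` pass-through, and the IP is an example from the MetalLB range configured later; check the chart's default values file for the exact keys:

```yaml
service:
  spec:
    # Pin the entry-point IP rather than relying on auto-assignment.
    # 192.168.1.200 is an example address from the MetalLB pool.
    loadBalancerIP: 192.168.1.200
```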

That should now set up Traefik for you. To check, run the following to see the status of the pods, and look for a running pod showing 1/1 ready:

sudo kubectl get pods -n traefik

Finally, to allow access we need a working load balancer. A LoadBalancer service will have been created, but because nothing is handing out load balancer IPs yet, its external IP will be stuck at pending:

sudo kubectl get svc -n traefik

Installing MetalLB

This is also fairly simple to install into your cluster. There are premade YAML files that you can apply straight to your cluster, and they do most of the work. For v0.9.x you need to apply the namespace manifest first, and on a first install you also need to create the memberlist secret.

sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
sudo kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

The only thing missing is a configuration specific to your cluster for the load balancing. I created mine in metallb.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: address-pool-1
      protocol: layer2
      addresses:
      - 192.168.1.200-192.168.1.254

And then applied it in the standard way

sudo kubectl apply -f ./metallb.yaml

This gives me a load balancer setup that will dish out IPs in the range 192.168.1.200–192.168.1.254. I also excluded this range from my DHCP pool to ensure I didn't end up with network clashes. Finally, if you look at the running services for Traefik again, you should see that your load balancer now has an external IP that you can use outside the cluster to access it.

Testing it all together

A simple way to test this is to add an ingress route to the Traefik dashboard. If you don't have any DNS entries you can point at your cluster, you can make do with hosts-file entries for now. This should not be left enabled on a production-grade cluster, because the dashboard gives out a lot of information and is not secured by default. The following traefik-dashboard.yaml will set up the route:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboardsecure
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.internal`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: traefik-sso@kubernetescrd
  tls:
    certResolver: cloudflare

And again apply it the usual way

sudo kubectl apply -f ./traefik-dashboard.yaml
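If you are going the hosts-file route, add an entry on your client machine mapping traefik.internal to the load balancer's external IP; 192.168.1.200 is an example address from the MetalLB range above. A scratch file is used here so the sketch runs without root; on a real machine the target is /etc/hosts (via sudo tee -a):

```shell
# Point HOSTS_FILE at /etc/hosts on a real client machine; a scratch
# file is used here so this can run unprivileged.
HOSTS_FILE="./hosts.demo"
# Map the dashboard hostname to the load balancer's example IP.
echo "192.168.1.200 traefik.internal" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```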

You should then be able to access the Traefik dashboard at https://traefik.internal. It should look something like this

Default Traefik dashboard

Congratulations! You now have a working Kubernetes cluster ready to service both internal and external requests! Next time we will look at persistent storage options open to us, so we will be able to deploy apps with storage requirements.
