How I built my Kubernetes cluster with a shared storage on Raspberry Pi using K3s

Victor Vargas
Jul 11, 2023 · 9 min read


Greetings!

I am writing this story to share my local setup, which I use as a base to study and keep me up to date with techniques and technologies.

I wouldn't be here if I didn't have access to so much information shared by all software engineering professionals around the globe, whether through articles, videos, comments, etc. Thank you very much for that, and I hope I can help more people with my contribution.

And last but not least, I thank my father, Telmovargas, who gave me a network switch, and my friend Leandro Preda who gave me some Raspberry Pi units 😊.

Overview

I started from the topology suggested on the K3s website, adding one more server as a storage server to support scenarios that need a shared, persistent file system, like databases.

(Diagram from the K3s docs: single-server setup with an embedded DB)

(Diagram: my setup)

Server, agent-01, and agent-02 follow the roles suggested by the K3s reference topology.

As the load balancer, I used a Raspberry Pi running HAProxy (not detailed here, perhaps in another article).

And finally, to improve my environment, I added one more Raspberry Pi as a storage server, using NFS (the Network File System protocol).

Requirements

Hardware

  • 4x Raspberry Pi (at least a 3 Model B+; a 4 Model B is recommended)
  • 4x SD cards (at least 16 GB)

Software

  • SD Card Formatter
  • Raspberry Pi Imager (with Raspberry Pi OS Lite 64-bit)

Preparing the SD Cards

Prepare all of the SD cards with Raspberry Pi OS, from a proper format to the installation of the OS, following the steps below and setting a different hostname for each card: server, storage, agent-01, and agent-02.

If you are familiar with preparing an SD card with Raspberry Pi OS, you may jump to the next section (Setting common settings).

  • Step 1: Open the SD Card Formatter tool
  • Step 2: Under Select card, choose the correct media and click Format
  • Step 3: Open the Raspberry Pi Imager tool
  • Step 4: Click CHOOSE OS, select Raspberry Pi OS (other), and finally select Raspberry Pi OS Lite (64-bit)
  • Step 5: Click CHOOSE STORAGE and select the correct media
  • Step 6: Click the gear button, and don't let it prefill the Wi-Fi password (if asked)
  • Step 7: Set the hostname accordingly (server, storage, agent-01, or agent-02)
  • Step 8: Enable SSH with password authentication
  • Step 9: Set the username and password (and remember them for later!)
  • Step 10: Set the locale (America/Sao_Paulo in my case) and click SAVE
  • Step 11: Click WRITE, and repeat for all SD cards (changing the hostname accordingly)

Setting common settings

With all SD cards prepared, it is time to insert them and turn the servers on.

For each one, follow the initial (and common) steps to prepare the operating system.

Step 1: Logging in through SSH

vvbvargas:~ $ ssh pi@server.local

Use the username and hostname you set in the Imager (pi@server.local here; repeat for each server).

Step 2: Update the OS

pi@server:~ $ sudo apt update && sudo apt upgrade

Step 3: Enable cgroups

Following the K3s requirement instructions, edit the cmdline.txt file.

pi@server:~ $ sudo vi /boot/cmdline.txt

Add the text below to the end of the line.

... cgroup_memory=1 cgroup_enable=memory

Pay attention to add a space before the text; cmdline.txt must remain a single line.
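Since cmdline.txt must stay a single line, a careless append can break the boot. As a sketch, here is one safe way to add the flags non-interactively. It is demonstrated against a scratch copy with placeholder contents; on the Pi, point CMDLINE at /boot/cmdline.txt and run with sudo.

```shell
# Demonstration on a scratch copy; on the Pi, use CMDLINE=/boot/cmdline.txt (with sudo).
CMDLINE=./cmdline.txt
# Placeholder contents standing in for the real kernel command line.
printf '%s\n' 'console=serial0,115200 console=tty1 root=PARTUUID=00000000-02 rootfstype=ext4 fsck.repair=yes rootwait' > "$CMDLINE"
# Append the flags to the end of the single line, guarding against duplicates.
grep -q 'cgroup_enable=memory' "$CMDLINE" || \
  sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' "$CMDLINE"
cat "$CMDLINE"
```

The grep guard makes the snippet idempotent, so running it twice won't duplicate the flags.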

Step 4: Disable Wi-Fi and IPv6

Edit the config.txt file.

pi@server:~ $ sudo vi /boot/config.txt

Add the line below.

dtoverlay=disable-wifi

Edit the sysctl.conf file.

pi@server:~ $ sudo vi /etc/sysctl.conf

Add the lines below.

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

Step 5: Set the static IP

According to the diagram (in the Overview section), set the correct IP for each one of the servers.

Edit the dhcpcd.conf file.

pi@server:~ $ sudo vi /etc/dhcpcd.conf

Make the changes to the eth0 interface as below.

interface eth0
static ip_address=192.168.0.230/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1 8.8.8.8

According to the overview diagram, remember to set the corresponding IP for each server:

  • server.local: 192.168.0.230
  • agent-01.local: 192.168.0.235
  • agent-02.local: 192.168.0.236
  • storage.local: 192.168.0.240

And replace the router IP (192.168.0.1) with your router's IP, if needed.

Step 6: Install dependencies and NTP

Install the packages below: nfs-kernel-server to work with NFS, git to clone repositories, and ntp to keep the date and time always up to date. (Strictly, only the storage server needs the NFS server side, but installing it everywhere also brings in the NFS client tools.)

pi@server:~ $ sudo apt install nfs-kernel-server git ntp

Step 7: Reboot the server

Make all changes effective by rebooting the server.

pi@server:~ $ sudo reboot

Setting the Storage Server

On the storage server, we have to define a folder to share files among the Kubernetes nodes, including the server node.

Step 1: Create the storage directory

pi@storage:~ $ mkdir /home/pi/storage

Step 2: Configure the export filesystem

Since we have already updated the Raspberry Pi OS and installed nfs-kernel-server, we just have to set a new share in the exports file.

pi@storage:~ $ sudo vi /etc/exports

Add the line.

/home/pi/storage *(rw,all_squash,insecure,async,no_subtree_check,no_root_squash,anonuid=1000,anongid=1000)
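For reference, here is my reading of the export options used above (check exports(5) for the authoritative definitions):

```
/home/pi/storage *(rw,all_squash,insecure,async,no_subtree_check,no_root_squash,anonuid=1000,anongid=1000)
# rw               - clients may read and write
# all_squash       - map every client user to the anonymous user
# anonuid/anongid  - make that anonymous user uid/gid 1000 (the pi user)
# insecure         - accept requests from client ports above 1023
# async            - reply before writes hit the disk (faster, less safe)
# no_subtree_check - skip subtree checking on each request
# no_root_squash   - do not remap root specifically (effectively superseded by all_squash here)
```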

Step 3: Reboot the server

Apply all the changes by rebooting the server.

pi@storage:~ $ sudo reboot

Setting the Server Node

Step 1: Install K3s as server

Use the --prefer-bundled-bin flag to avoid problems with older iptables versions.

pi@server:~ $ sudo curl -sfL https://get.k3s.io | sh -s - --prefer-bundled-bin

Step 2: Collect the server token

The server token will be used when registering the agents (worker nodes).

pi@server:~ $ sudo cat /var/lib/rancher/k3s/server/node-token

Step 3: Get the kubeconfig file

The kubeconfig file will be used to manage Kubernetes remotely. On the server node, it is located at /etc/rancher/k3s/k3s.yaml.

Save it to your local machine; in my case (macOS), the file goes to ~/.kube/config.

Inside the config file, locate the server address and replace it with the correct IP, from:

server: https://127.0.0.1:6443

To:

server: https://192.168.0.230:6443
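This copy-and-rewrite can be scripted. A sketch, assuming you have already fetched k3s.yaml from the server (it is readable only by root, so sudo cat it, or sudo cp it to the pi home first and scp it over); the printf line below just stands in for the fetched file so the rewrite can be demonstrated:

```shell
# On your workstation. Fetch the kubeconfig on a real setup (uncomment):
# scp pi@192.168.0.230:k3s.yaml ./k3s.yaml
printf 'server: https://127.0.0.1:6443\n' > ./k3s.yaml   # stand-in for the fetched file
# Point the config at the server node's LAN IP instead of loopback
# (-i.bak keeps a backup and works with both GNU and BSD sed).
sed -i.bak 's|https://127.0.0.1:6443|https://192.168.0.230:6443|' ./k3s.yaml
cat ./k3s.yaml
```

Then move the result to ~/.kube/config (or merge it into an existing one).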

Step 4: Checking the server node (kubectl)

vvbvargas:~ $ kubectl get nodes

Setting the worker nodes

Step 1: Install K3s as an agent

Replace the K3S_TOKEN with the token obtained in step 2 of the last section (Setting the Server Node).

pi@agent-01:~ $ sudo curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.230:6443 K3S_TOKEN=K10edd6cf996273b853d06778130b527995ffefcd3baa5e703078ab9b5af0268e79::server:0846809b60d6f1e02afc9dcf652173de sh -s - --prefer-bundled-bin

Step 2: Checking all nodes (kubectl)

vvbvargas:~ $ kubectl get nodes -o wide

Testing the load balancing

In this first test, we will deploy an NGINX container with four replicas. Each replica will produce an HTML page showing its own IP.

Create a first-test.yaml file, with the content:

apiVersion: v1
kind: Namespace
metadata:
  name: first-test-ns

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-test-depl
  namespace: first-test-ns
spec:
  replicas: 4
  selector:
    matchLabels:
      app: first-test-app
  template:
    metadata:
      labels:
        app: first-test-app
    spec:
      containers:
        - name: first-test-ctr
          image: nginx
          ports:
            - containerPort: 80
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "mkdir /usr/share/nginx/html/first-test && echo '<html><body><p>My current IP: '$(hostname -I)'</p></body></html>' > /usr/share/nginx/html/first-test/index.html"]

---
apiVersion: v1
kind: Service
metadata:
  name: first-test-svc
  namespace: first-test-ns
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 80
      protocol: TCP
  selector:
    app: first-test-app

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-test-ingress
  namespace: first-test-ns
spec:
  rules:
    - host: load-balancer.local
      http:
        paths:
          - path: /first-test/
            pathType: Prefix
            backend:
              service:
                name: first-test-svc
                port:
                  number: 8080

Replace the host load-balancer.local with your load balancer's hostname. In my case, I am using HAProxy on another Raspberry Pi as the load balancer.

Deploy the yaml file and check the resources.

vvbvargas:~ $ kubectl apply -f first-test.yaml
vvbvargas:~ $ kubectl get all -n first-test-ns

Check the Ingress (it is exposed on the IPs of the three nodes: server, agent-01, and agent-02).

vvbvargas:~ $ kubectl get ingress -n first-test-ns

Lastly, let's check the pods' IPs.

vvbvargas:~ $ kubectl get pods -n first-test-ns -o wide

The test should resolve requests across them (10.42.1.9, 10.42.2.9, 10.42.0.21, and 10.42.0.22, in my case).

And at last, it is time to make a request (or many!) to my load balancer.

vvbvargas:~ $ for i in 1 2 3 4; do curl -s http://load-balancer.local/first-test/; done

Setting the NFS Provisioner

Install the NFS Subdir External Provisioner using Helm (I followed the instructions from the chart's documentation).

vvbvargas:~ $ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

Then install it, using the IP and the directory configured earlier, during the preparation of the storage server.

vvbvargas:~ $ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.0.240 \
--set nfs.path=/home/pi/storage

And check the StorageClass.

vvbvargas:~ $ kubectl get sc

Testing the shared storage

In this last test, we will deploy another NGINX Deployment, with eight replicas, serving a simple webpage from the shared storage.

Create the second-test.yaml file, with the content:

apiVersion: v1
kind: Namespace
metadata:
  name: second-test-ns

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: second-test-pvc
  namespace: second-test-ns
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second-test-depl
  namespace: second-test-ns
spec:
  replicas: 8
  selector:
    matchLabels:
      app: second-test-app
  template:
    metadata:
      labels:
        app: second-test-app
    spec:
      containers:
        - name: second-test-ctr
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html/second-test
              name: second-test-volume
              subPath: second-test
      volumes:
        - name: second-test-volume
          persistentVolumeClaim:
            claimName: second-test-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: second-test-svc
  namespace: second-test-ns
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 80
      protocol: TCP
  selector:
    app: second-test-app

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-test-ingress
  namespace: second-test-ns
spec:
  rules:
    - host: load-balancer.local
      http:
        paths:
          - path: /second-test
            pathType: Prefix
            backend:
              service:
                name: second-test-svc
                port:
                  number: 8080

Deploy the yaml file and check the resources.

vvbvargas:~ $ kubectl apply -f second-test.yaml
vvbvargas:~ $ kubectl get all -n second-test-ns

And check the persistent volume claim.

vvbvargas:~ $ kubectl get pvc -n second-test-ns -o wide

At this point, the first request still fails, since there is no index.html in the shared folder yet.

Now, in order to create an index.html file, access the storage server and locate the folder created by the NFS Provisioner (it is named after the namespace, PVC, and generated PV, e.g. second-test-ns-second-test-pvc-pvc-…). Since the deployment mounts the volume with subPath: second-test, create the file inside a second-test subfolder of that directory.

Create a simple index.html, like below.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
</head>
<body>
  <img src="https://source.unsplash.com/random/200x200" alt="">
</body>
</html>

And finally, the page is working properly.

If you enjoyed it, please share, comment, and give it a clap (👏).

If you loved it, please share, comment, and consider buying me a cup of coffee 😉.
