Multi-Arch Raspberry Pi Kubernetes Cluster

Nikolaos Panagiotou
10 min read · Jul 14, 2024


Introduction

Kubernetes is a standard solution for running containerized applications on computing clusters of different configurations. It transparently handles scaling, load balancing, and the communication between software components such as microservices. At the same time, it provides the building block for developing cloud-native applications. Such applications can later be deployed on various IaaS providers, including AWS and Azure, or run out of the box on a local, private cluster. In short, Kubernetes is a platform for deploying scalable, infrastructure-agnostic containerized apps.

That said, maintaining and managing a Kubernetes cluster is a time-consuming process with associated monetary costs. As a result, AWS and other similar platforms provide fully managed versions of Kubernetes; AWS also offers its own orchestrator flavor, ECS. Since providers such as AWS and Azure offer managed Kubernetes, why bother building a local cluster from heterogeneous nodes?

  • It's a fun process!
  • It lets you experiment with different applications on a cluster you fully control.
  • It is a low-cost cluster that mixes an AMD64 node with ARM nodes. The computing resources are limited, so you will need to try different configurations to ensure scalability.
Figure 1: One of the RPis used for this tutorial.

Requirements

Let's describe all the hardware necessary for this tutorial:

  1. An AMD64 computing node with a Debian-based OS. At least 2 GB of memory is required.
  2. An ARM computing node such as a Raspberry Pi (RPi). A model 4 or newer with 4 GB or more of memory is recommended.
  3. A card reader for preparing the RPi OS.
  4. An SD card with at least 16GB.
  5. A local network with Wi-Fi.

For the writing of this tutorial, we experimented with:

  1. Ubuntu 24.04 Server OS on all the RPis. Ubuntu 24.04 Desktop for the laptop.
  2. One AMD64 laptop with 8GB of memory. This laptop has an SD card reader.
  3. One RPi 400 with 4GB of memory.
  4. One RPi 4 with 8GB of memory.
  5. One RPi 5 with 8GB of memory.

Note: You don't need three Raspberry Pis for this tutorial; your Debian node and a single RPi are enough. In that case, apply the configurations only for node1 and raspberrypi1.

Preparing the RPi

To prepare the SD cards for the RPis we used the Raspberry Pi Imager tool.

Let's prepare the SD Card:

  1. Open RPi imager.
  2. Select your RPi device.
  3. Select the Operating System. Ubuntu Server is listed under ‘Other General-Purpose OS’.
  4. Select the storage device. In other words, select your SD Card.
Figure 2: RPi Imager Interface.

You will see a menu similar to Figure 2. When ready, select “Next” and then “Edit Settings”. In the pop-up: 1. set the hostname of your RPi, 2. configure a username and password, and 3. set up the Wi-Fi network and its password. On the Services tab, enable ‘SSH’ and remember to tick “password authentication”. Start the installation using this configuration.

Figure 3: RPi Imager OS Customization.

Configuring the Network

Connect the RPi to a local monitor and keyboard. Now, let's configure the network of the RPi. On the RPi terminal, edit the YAML file under /etc/netplan/ with your favorite editor and add the following under the wlan0 entry (the address shown is the one planned for RPi-1).

dhcp4: no
optional: true
addresses:
  - 192.168.2.211/24
gateway4: 192.168.2.1
nameservers:
  addresses: [8.8.8.8, 8.8.4.4]
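
For reference, here is a sketch of what the whole file might look like once the snippet is in place. The file name (for example 50-cloud-init.yaml) and the SSID/password are placeholders for your own values:

network:
  version: 2
  wifis:
    wlan0:
      dhcp4: no
      optional: true
      addresses:
        - 192.168.2.211/24
      gateway4: 192.168.2.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      access-points:
        "<your-ssid>":
          password: "<your-wifi-password>"

Recent netplan versions may print a deprecation warning for gateway4; it still works, but a routes entry with “to: default” is the newer syntax.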

Configure the network using:

sudo netplan try
sudo netplan apply
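
If you want to confirm the configuration took effect, a quick check such as the following should show the static address and a reachable gateway (the addresses are the ones used for RPi-1 in this tutorial):

ip -4 addr show wlan0      # should list 192.168.2.211/24
ping -c 3 192.168.2.1      # the gateway should respond
ping -c 3 8.8.8.8          # the internet should be reachable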

For the rest of the tutorial, the network is set as follows:

  1. Laptop: 192.168.2.206
  2. RPi-1: 192.168.2.211
  3. RPi-2: 192.168.2.212
  4. RPi-3: 192.168.2.213

Configure Passwordless SSH

We need to enable passwordless SSH between all the machines. To do so, first generate an SSH key on every machine.

ssh-keygen # Follow the instructions and use default directories

On all the machines, copy the following content to ~/.ssh/config. This configuration tells each machine how to reach the others.

Host raspberry1
    HostName 192.168.2.211
    User <your username>
Host raspberry2
    HostName 192.168.2.212
    User <your username>
Host raspberry3
    HostName 192.168.2.213
    User <your username>
Host node1
    HostName 192.168.2.201
    User <your username>

From each machine, run ssh-copy-id <other machine> for every other machine. For example, on node1:

ssh-copy-id raspberry1
ssh-copy-id raspberry2
ssh-copy-id raspberry3
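
To verify that passwordless SSH works, each of the following should print the remote hostname without asking for a password (run from node1):

ssh raspberry1 hostname
ssh raspberry2 hostname
ssh raspberry3 hostname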

Install Terminator

The following steps must be executed on all the machines. To broadcast input to multiple machines you can use Terminator. It can be configured to automatically SSH into multiple machines via ~/.config/terminator/config. A screenshot of Terminator is shown in Figure 4.

Figure 4: Terminator instance.

The contents for my setup are the following:

[global_config]
[keybindings]
[profiles]
[[default]]
cursor_color = "#aaaaaa"
[[raspberry1]]
cursor_color = "#aaaaaa"
exit_action = restart
use_custom_command = True
custom_command = ssh raspberry1
[[raspberry2]]
cursor_color = "#aaaaaa"
exit_action = restart
use_custom_command = True
custom_command = ssh raspberry2
[[raspberry3]]
cursor_color = "#aaaaaa"
exit_action = restart
use_custom_command = True
custom_command = ssh raspberry3
[layouts]
[[default]]
[[[window0]]]
type = Window
parent = ""
[[[child1]]]
type = Terminal
parent = window0
profile = default
[[all_raspberries]]
[[[child0]]]
type = Window
parent = ""
order = 0
position = 70:27
maximised = True
fullscreen = False
size = 2490, 1536
title = nikos@nikos-IdeaPad-Gaming-3-16ARH7: ~
last_active_term = 5f13f20f-7bc3-49eb-a22b-145fad1e6852
last_active_window = True
[[[child1]]]
type = HPaned
parent = child0
order = 0
position = 857
ratio = 0.3448692152917505
[[[terminal2]]]
type = Terminal
parent = child1
order = 0
profile = raspberry1
uuid = 5f13f20f-7bc3-49eb-a22b-145fad1e6852
[[[child3]]]
type = HPaned
parent = child1
order = 1
position = 929
ratio = 0.5723967960566851
[[[terminal4]]]
type = Terminal
parent = child3
order = 0
profile = raspberry2
uuid = 43b98729-3b5b-4309-931c-ccab1338c845
[[[terminal5]]]
type = Terminal
parent = child3
order = 1
profile = raspberry3
uuid = cb8e8c64-262e-4185-9d7a-63001116675c

Kubernetes Cluster Architecture

In this tutorial, we will build a simple Kubernetes cluster that consists of four nodes. The Kubernetes architecture distinguishes Control-Plane nodes from Worker nodes: the Control-Plane nodes manage the state of the cluster, while the worker nodes run the Pod deployments.

Control-Plane: The manager of the cluster, responsible for its current state. It detects cluster events and makes scheduling decisions. It runs various services such as:

  • API Server: Exposes the Kubernetes API used by clients and cluster components to communicate.
  • Etcd: A highly available key-value store that holds all cluster data.
  • Scheduler: Assigns newly created Pods to nodes.
  • Controller Manager: Runs the controllers that monitor the various resources.
  • Cloud Controller Manager: A Controller Manager that implements provider-specific integrations for clouds such as AWS.

Pod: The smallest Kubernetes object that can be created or deployed. It may contain one or more containers. These containers share the same network namespace and may also share storage volumes.
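
As an illustration only (not part of the cluster setup), a minimal manifest for a Pod with two containers sharing the same network namespace could look like the sketch below; the names and images are arbitrary examples:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # hypothetical example name
spec:
  containers:
    - name: web
      image: nginx                # main container
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox              # helper container in the same Pod
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 30; done"]

Because both containers live in the same Pod, the sidecar can reach the web server simply via localhost.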

Note: For this tutorial, we use the node1 (AMD64) node as the Control-Plane and the three Raspberry Pis as the worker nodes.

Install Docker Engine

Each node needs a container engine. Use Terminator to connect to all the nodes and run the following to install Docker Engine on Ubuntu 24.04. If you have a different OS, follow the official instructions.

# For Ubuntu 24.04 
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Test docker installation
sudo docker run hello-world
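
Since this is a multi-arch cluster, it is also worth confirming that each node reports the architecture you expect:

uname -m                   # x86_64 on node1, aarch64 on the RPis
dpkg --print-architecture  # amd64 on node1, arm64 on the RPis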

Install CRI-Dockerd

CRI-Dockerd is necessary on all the nodes of the cluster. Since we are building a heterogeneous Kubernetes cluster, download the ARM64 build on the worker nodes and the AMD64 build on the Control-Plane node.

# 1. download cri-dockerd using https://github.com/Mirantis/cri-dockerd/releases 
# Worker Node:
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14.arm64.tgz
#Control-Plane Node:
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd-0.3.15.amd64.tgz

# 2. Untar the archive.
tar -xvf cri-dockerd-<version>.<arch>.tgz

# 3. Move the cri-dockerd to /usr/local/bin/
sudo mv ./cri-dockerd/cri-dockerd /usr/local/bin/

# 4. Run cri-dockerd help message.
cri-dockerd --help

# 5. Download cri-docker service and socket
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo mv cri-docker.socket cri-docker.service /etc/systemd/system/
sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service

# 6. Start cri-docker
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket
sudo systemctl status cri-docker.socket
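
Before moving on, it is worth checking that the socket kubeadm will use later actually exists:

sudo systemctl is-active cri-docker.socket
ls -l /var/run/cri-dockerd.sock    # this is the path passed to --cri-socket later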

Note: You must install Docker Engine and CRI-Dockerd on all the nodes.

Kubeadm, Kubectl, and Kubelet

Let's install the Kubernetes software components on the nodes. For this tutorial, we use:

  • kubeadm: A simple CLI tool for bootstrapping the cluster.
  • kubectl: A CLI tool for interacting with a Kubernetes cluster.
  • kubelet: The node agent that runs on every node and communicates with the Control-Plane, starting and stopping containers through the local container runtime.

To install the services use the following commands.

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If /etc/apt/keyrings does not exist yet, create it first: sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet

You can install kubectl only on node1, since it is not necessary on the workers; kubeadm and kubelet, however, are required on every node (the workers use kubeadm to join the cluster).
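
A quick way to confirm the installation on each node is to print the versions:

kubeadm version
kubectl version --client
kubelet --version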

Initialize the Control-Plane

To initialize the Control-Plane on node1, simply run the following command.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock

This command starts all the services necessary for the Control-Plane, such as the API Server and Etcd. The --cri-socket parameter is necessary because we use CRI-Dockerd as the container runtime interface. The --pod-network-cidr parameter is important for the next steps of the tutorial. Note the output of this command: it includes the command you must run on the worker nodes to join the cluster, similar to the following.

sudo kubeadm join 192.168.2.201:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--cri-socket=unix:///var/run/cri-dockerd.sock
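
Before running kubectl on node1 (as we do in the next sections), copy the admin kubeconfig into your home directory. The end of the kubeadm init output prints these same instructions:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config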

Network Plugins

After we initialize the Control-Plane, an important set of Pods is still not running: CoreDNS. CoreDNS is necessary for Pods to communicate with each other by name, for service discovery, and for load balancing. It only becomes ready after we install a Container Network Interface (CNI) plugin. CNI is a specification and set of libraries for configuring network interfaces in Linux containers, and Kubernetes relies on it: during Pod creation, for example, the CNI plugin creates the network interface for the new Pod. Kubernetes does not ship with a specific CNI plugin. For this tutorial, we chose a simple solution that suits beginners: Flannel. The CNI plugin itself also runs as Kubernetes Pods. Use the following command to install Flannel.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

We passed --pod-network-cidr=10.244.0.0/16 to kubeadm init because Flannel uses 10.244.0.0/16 as its default Pod network.

After installation, wait ~5 minutes for the plugin to finish the setup. Then use the command kubectl get pods --all-namespaces and ensure that the output lists the coredns Pods as Running (see Figure 5).

Figure 5: List of the Pods running in all the namespaces.

Worker Nodes

Use Terminator to connect to the worker nodes. On each worker node, run the join command provided by kubeadm init.

sudo kubeadm join 192.168.2.201:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--cri-socket=unix:///var/run/cri-dockerd.sock

If you do not remember the token and the hash, you can generate a new token with kubeadm token create. You can recompute the hash with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
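
Alternatively, kubeadm can print a ready-to-use join command (with a fresh token and the correct hash) in one step; just remember to append the --cri-socket flag yourself, since it is not part of the printed command:

kubeadm token create --print-join-command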

On node1, run kubectl get nodes to list the nodes of the cluster (see Figure 6).

Figure 6: Nodes of the Kubernetes cluster.

Add the label arch=arm to the RPi nodes. This label is useful when selecting images for Pods, since the RPi nodes need ARM images.

kubectl label nodes raspberrypi1 arch=arm
kubectl label nodes raspberrypi2 arch=arm
kubectl label nodes raspberrypi3 arch=arm
kubectl get nodes --show-labels

MariaDB

Run a simple MariaDB Pod on the Kubernetes cluster.

kubectl run mariadb-test-pod --image=arm64v8/mariadb --env="MARIADB_ROOT_PASSWORD=secret"

Note: We must always use ARM64 images for this cluster's workloads, since all the worker nodes are ARM.
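
If you want to be explicit about where such a Pod lands, the arch=arm label we added earlier can be used as a nodeSelector. A minimal sketch (the Pod name mariadb-arm and the password value are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: mariadb-arm
spec:
  nodeSelector:
    arch: arm                    # only schedule on nodes carrying this label
  containers:
    - name: mariadb
      image: arm64v8/mariadb
      env:
        - name: MARIADB_ROOT_PASSWORD
          value: "secret"

Apply it with kubectl apply -f mariadb-arm.yaml and check which node it was scheduled on with kubectl get pod mariadb-arm -o wide.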

Controlling Control Plane From Other Machines

Until now we connected to node1 to run the various kubectl commands. To run kubectl from other computers, first install kubectl on them. Then run the following commands to copy the cluster configuration that kubectl needs:

mkdir -p ~/.kube
scp root@node1:/etc/kubernetes/admin.conf ~/.kube/config

Remember that this requires root SSH access to be allowed on node1.
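
If you already copied admin.conf into ~/.kube/config on node1 (as shown after kubeadm init), you can instead fetch that copy with your regular user and avoid connecting as root:

scp node1:~/.kube/config ~/.kube/config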
