OpenShift + Raspberry Pi

Ben Swinney
15 min read · Jul 1, 2022


Raspberry Pi 4 8GB with SSD

I’ll begin this article with a confession…

I’m a tech geek and huge Red Hat OpenShift fan. I work with it on a daily basis in my role within IBM Client Engineering; it’s the foundational piece of many of the Proof of eXperiences I deliver to customers.

In my HomeLab, I run OKD, the Community Distribution of OpenShift, often upgrading to new versions the moment they drop and breaking it on purpose so I can enjoy the time spent fixing it. OpenShift’s integrations with ArgoCD, KubeVirt, Advanced Cluster Management and StackRox, to name just a few, make it an ideal Kubernetes platform of choice, regardless of where you deploy it, be that Public Cloud, Private Cloud or On-Premises.

I could spend all day drooling over it…

However, despite my huge affection for it, I was always disappointed it couldn’t be run on something like a Raspberry Pi. 😞

Being the tech geek that I am, I love to play with Cloud Native technologies. I’ve previously built Kubernetes clusters running on a couple of Raspberry Pis, and multi-architecture Kubernetes clusters running mixed x86 and ARM compute, just because it was something cool I could do within the safety of my HomeLab. That’s one of the many advantages a HomeLab brings: you can break environments without the risk of bringing down a UK-wide estate agent’s entire Production system, which may or may not have been something I did very early in my IT career. 🤦‍♂

Anyways, back to the Raspberry Pi.

I always hoped there would be a way in which OpenShift could be deployed to a small Edge-like device, such as a Raspberry Pi, and provide nearly the same features and reliability I had come to expect from it.

I was fortunate enough to come across an experimental flavour of OpenShift/OKD called MicroShift, optimised for Far Edge devices.

It’s aimed at the niche market between minimal standalone edge devices running Single Node OpenShift and fully-fledged OpenShift/OKD edge clusters.

Far Edge devices are often deployed in the field, where they pose very different operational, environmental and business challenges from Cloud computing.

Typically, this means trade-offs need to be made when running OpenShift at the far edge vs running it On-Premises or in a hyperscaler’s Cloud datacentre. For example, my Raspberry Pi lives in my HomeLab, in a rack in my garage, with no redundant power supplies, insufficient cooling, and the occasional bump when my son uses the rack as a rebound board for his soccer practice. 😠

MicroShift’s design goals cater for far edge environments; it aims to:

  • make frugal use of resources (CPU, Memory, Network and Storage)
  • tolerate severe networking constraints
  • update safely, securely and seamlessly, without disrupting running workloads
  • build upon edge-optimised OSs like RHEL for Edge and Fedora IoT
  • provide a consistent development and management experience with a standard OpenShift cluster

Running MicroShift on a Raspberry Pi

A Raspberry Pi 4 8GB, to be exact.

Armed with an 8GB Pi, I explored what type of Operating System would be a suitable candidate to run MicroShift on. The project itself recommended RHEL, Fedora or CentOS Stream, and I quickly decided that, since this was a far-edge device, I should investigate an OS suited to those use-cases.

The most obvious choice was RHEL for Edge, but although I work for IBM Client Engineering, and Red Hat is part of IBM, I don’t always have access to some of the more niche Red Hat software, such as, in this case, RHEL for Edge.

Luckily, I could use the Fedora IoT release, which is the upstream project for RHEL for Edge. One of the advantages of Fedora IoT and RHEL for Edge is that they are OSTree-based Linux distros, meaning an immutable OS, which is perfect for this build. 🆒

Feel free to just copy and paste the commands without reading the article; most should just work.

Fedora IoT Installation and Configuration

So, getting down to the good stuff.

Firstly, we need to install the Fedora IoT OS onto the Raspberry Pi. Typically, this is written to an SD card, but in my case I’ll be using a USB-A to 2.5-inch SATA converter and an old 512GB SSD I have lying around.

I initially tried the majority of the steps below on my MacBook using the Raspberry Pi Imager, but had issues with booting the SSD. It was only when I tried the steps on my Fedora 36 Server using the arm-image-installer binary that I had any luck. However, your mileage may vary.

I began by installing arm-image-installer, the binary that will be used to write a bootable image to the SSD that will connect to my Raspberry Pi 4.

$ dnf update
$ dnf install -y arm-image-installer

Download the Fedora IoT Raw image for the Raspberry Pi 4 (aarch64) from getfedora.org/en/iot/download/ and store it in a convenient location.

$ mkdir ~/fedoraiot36
$ cd ~/fedoraiot36
$ wget https://download.fedoraproject.org/pub/alt/iot/36/IoT/aarch64/images/Fedora-IoT-36-20220618.0.aarch64.raw.xz

Copy an existing SSH Public key, or create a new one using the ssh-keygen command, as shown below.

$ cp ~/.ssh/id_rsa.pub ~/fedoraiot36/id_rsa.pub
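
If you don’t already have a key pair, a minimal sketch for generating one (the empty passphrase here is a HomeLab convenience, not a security recommendation):

$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa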

Next, make note of the device name you will use to boot the Fedora IoT OS; in my case I was using an SSD, so the device was /dev/sdd. Be careful not to confuse your target SSD with any other drive you may have installed.

$ lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 222.5G 0 part
└─fedora_virthost01-root 253:0 0 222.4G 0 lvm /
sdb 8:16 0 1.1T 0 disk
└─sdb1 8:17 0 1.1T 0 part
sdc 8:32 0 256M 1 disk
└─sdc1 8:33 0 251M 1 part
sdd 8:48 0 489G 0 disk
├─sdd1 8:49 0 501M 0 part
├─sdd2 8:50 0 1G 0 part
└─sdd3 8:51 0 2.5G 0 part
zram0 252:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 931.5G 0 disk
nvme1n1 259:1 0 931.5G 0 disk
└─nvme1n1p1 259:3 0 931.5G 0 part

Clean off any previous partitions from the SSD. You may not need to do this, but it always leaves me feeling good, knowing nothing can contaminate the new OS when it’s written. 20+ years in IT can make you a little distrustful when re-using drives.

I typically use sfdisk to clean off partitions; feel free to use any tool you’re comfortable with.

$ sfdisk --delete /dev/sdd
The partition table has been altered.
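
If you’d rather be extra thorough, wipefs will also clear any filesystem and RAID signatures that a partition table delete leaves behind (the same warning applies: triple-check the device name):

$ wipefs --all /dev/sdd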

I’m now ready to use the arm-image-installer to create my bootable Fedora IoT OS.

$ cd ~/fedoraiot36
$ arm-image-installer \
--target=rpi4 \
--image=Fedora-IoT-36-20220618.0.aarch64.raw.xz \
--resizefs \
--addkey=id_rsa.pub \
--media=/dev/sdd

You will see output similar to below.

=====================================================
= Selected Image:
= Fedora-IoT-36-20220618.0.aarch64.raw.xz
= Selected Media : /dev/sdd
= U-Boot Target : rpi4
= Root partition will be resized
= SSH Public Key id_rsa.pub will be added.
=====================================================
*****************************************************
*****************************************************
******** WARNING! ALL DATA WILL BE DESTROYED ********
*****************************************************
*****************************************************
Type 'YES' to proceed, anything else to exit now
= Proceed? YES
= Writing:
= Fedora-IoT-36-20220618.0.aarch64.raw.xz
= To: /dev/sdd ....
4293451776 bytes (4.3 GB, 4.0 GiB) copied, 62 s, 69.2 MB/s
0+523216 records in
0+523216 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 121.196 s, 35.4 MB/s
= Writing image complete!
= Resizing /dev/sdd ....
Resizing the filesystem on /dev/sdd3 to 127810690 (4k) blocks.
The filesystem on /dev/sdd3 is now 127810690 (4k) blocks long.
= Raspberry Pi 4 Uboot is already in place, no changes needed.
= Adding SSH key to authorized keys.
= Installation Complete! Insert into the rpi4 and boot.

Before running into the garage and plugging the SSD into the Raspberry Pi, I first had to update the Raspberry Pi to use the latest UEFI firmware.

If you’re using an SD card, you shouldn’t need to complete this step.

Ok, I’ll admit that I did exactly that, and after plugging in the SSD and waiting for the Raspberry Pi logo, I was instead met with a boot error loop and spent a good couple of hours trying to figure out why. It was only after some digging and troubleshooting, which cost me a few reformatted SSDs and many expletives, that I stumbled upon this link, which mentioned booting UEFI firmware via USB.

So, using the latest firmware available, which at the time of writing was v1.33, I extracted it onto the boot partition of the SSD (/dev/sdd1).

$ cd ~/fedoraiot36
$ wget https://github.com/pftf/RPi4/releases/download/v1.33/RPi4_UEFI_Firmware_v1.33.zip
$ mount /dev/sdd1 /mnt
$ cd /mnt
$ unzip ~/fedoraiot36/RPi4_UEFI_Firmware_v1.33.zip
Archive: /root/fedoraiot36/RPi4_UEFI_Firmware_v1.33.zip
inflating: RPI_EFI.fd
replace bcm2711-rpi-4-b.dtb? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
inflating: bcm2711-rpi-4-b.dtb
inflating: bcm2711-rpi-400.dtb
inflating: bcm2711-rpi-cm4.dtb
inflating: config.txt
inflating: fixup4.dat
inflating: start4.elf
inflating: overlays/upstream-pi4.dtbo
inflating: overlays/miniuart-bt.dtbo
inflating: Readme.md
creating: firmware/
inflating: firmware/Readme.txt
creating: firmware/brcm/
inflating: firmware/brcm/brcmfmac43455-sdio.clm_blob
inflating: firmware/brcm/brcmfmac43455-sdio.bin
inflating: firmware/brcm/brcmfmac43455-sdio.Raspberry
inflating: firmware/brcm/brcmfmac43455-sdio.txt
inflating: firmware/LICENCE.txt
$ rm Readme.md
$ cd
$ umount /dev/sdd1

After adding the latest UEFI firmware, I plugged the SSD into the Raspberry Pi and watched as the now glorious Raspberry Pi boot logo appeared.

Before I allowed the Raspberry Pi to move past the boot screen, I’d read that the Raspberry Pi 4 limits the RAM to 3GB by default when using UEFI firmware. This limit exists due to a hardware bug within the Broadcom SoC, the details of which I won’t even begin to try and understand.

Thankfully, this can be disabled with the following steps:

  • Hit the Esc key upon boot
  • Navigate to Device Manager → Raspberry Pi Configuration → Advanced Configuration → Limit RAM to 3GB <Disabled>
  • Hit F10 to Save and then Esc to Exit

The boot process may take a bit of time, but once booted, log in via ssh.

I had already assigned my Raspberry Pi 4’s MAC address in my DHCP server and created a DNS entry (microshift) for the address. I recommend you do the same to simplify things.

If you need to find the DHCP address, you can use the nmap tool, or search your DHCP server’s logs or leases for the IP address assigned to the Raspberry Pi 4.
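
For example, a quick ping scan with nmap (assuming your LAN is 192.168.1.0/24, as mine is, given the addresses later in this article) will show what’s answering:

$ nmap -sn 192.168.1.0/24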

$ ssh root@microshift
Boot Status is GREEN - Health Check SUCCESS

With the system booted, I can log in, and I’m greeted with a Health Check SUCCESS message.

The next step is optional, depending on how you connect your Raspberry Pi to an external network. I have mine connected via an Ethernet cable, which is also providing power to my Raspberry Pi via a PoE+ HAT. If you are planning on using a WiFi network, then you will need to configure the WiFi connection on the Raspberry Pi.

[root@microshift ~]# nmcli device wifi connect <YOUR-SSID> --ask
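
To confirm the connection has come up, nmcli can report the device and connection state:

[root@microshift ~]# nmcli device status
[root@microshift ~]# nmcli connection show --active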

Once connected, update Fedora IoT to the latest packages and reboot to apply them.

[root@microshift ~]# rpm-ostree upgrade
Receiving objects; 95% (1579/1656) 3.0 MB/s 108.8 MB... done
Staging deployment... done
Upgraded:
criu 3.17-2.fc36 -> 3.17-4.fc36
criu-libs 3.17-2.fc36 -> 3.17-4.fc36
dnsmasq 2.86-6.fc36 -> 2.86-9.fc36
glibc 2.35-11.fc36 -> 2.35-12.fc36
glibc-common 2.35-11.fc36 -> 2.35-12.fc36
glibc-minimal-langpack 2.35-11.fc36 -> 2.35-12.fc36
gnutls 3.7.6-1.fc36 -> 3.7.6-3.fc36
kernel 5.18.5-200.fc36 -> 5.18.6-200.fc36
kernel-core 5.18.5-200.fc36 -> 5.18.6-200.fc36
kernel-modules 5.18.5-200.fc36 -> 5.18.6-200.fc36
krb5-libs 1.19.2-9.fc36 -> 1.19.2-11.fc36
mozjs91 91.10.0-1.fc36 -> 91.11.0-1.fc36
nettle 3.7.3-3.fc36 -> 3.8-1.fc36
podman 3:4.1.0-8.fc36 -> 3:4.1.1-1.fc36
podman-plugins 3:4.1.0-8.fc36 -> 3:4.1.1-1.fc36
python3 3.10.4-1.fc36 -> 3.10.5-2.fc36
python3-libs 3.10.4-1.fc36 -> 3.10.5-2.fc36
rpm-ostree 2022.10-2.fc36 -> 2022.10-3.fc36
rpm-ostree-libs 2022.10-2.fc36 -> 2022.10-3.fc36
uboot-images-armv8 2022.04-1.fc36 -> 2022.04-2.fc36
Run "systemctl reboot" to start a reboot[root@microshift ~]# systemctl reboot

As this is a far-edge device, sitting in my unreliable garage within my HomeLab where I often break things, I decided to add an entry to the hosts file to allow MicroShift to perform local hostname lookups.

[root@microshift ~]# hostnamectl hostname microshift.swinney.io
[root@microshift ~]# echo "192.168.1.78 microshift microshift.swinney.io" >> /etc/hosts

Deploy MicroShift

Phew… hopefully you’re still with me at this point?

Deploying MicroShift was quite straightforward, but I did need to do a bit of digging for the correct repos to use. As Fedora IoT uses rpm-ostree rather than dnf, I had to manually add the Fedora Modular, Fedora Updates Modular and, finally, the MicroShift repos.

[root@microshift ~]# curl -L -o /etc/yum.repos.d/fedora-modular.repo https://src.fedoraproject.org/rpms/fedora-repos/raw/rawhide/f/fedora-modular.repo
[root@microshift ~]# curl -L -o /etc/yum.repos.d/fedora-updates-modular.repo https://src.fedoraproject.org/rpms/fedora-repos/raw/rawhide/f/fedora-updates-modular.repo
[root@microshift ~]# curl -L -o /etc/yum.repos.d/group_redhat-et-microshift-fedora-36.repo https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift/repo/fedora-36/group_redhat-et-microshift-fedora-36.repo

Once the repositories have been added, I then enabled and installed cri-o and MicroShift.

[root@microshift ~]# rpm-ostree ex module enable cri-o:1.23
NOTICE: Experimental commands are subject to change.
Checking out tree 180514f... done
Enabled rpm-md repositories: fedora-cisco-openh264 fedora-modular fedora updates
Updating metadata for 'fedora-cisco-openh264'... done
Updating metadata for 'fedora-modular'... done
Updating metadata for 'fedora'... done
Updating metadata for 'updates'... done
Importing rpm-md... done
rpm-md repo 'fedora-cisco-openh264'; generated: 2022-04-07T16:52:38Z solvables: 4
rpm-md repo 'fedora-modular'; generated: 2022-05-04T21:11:12Z solvables: 822
rpm-md repo 'fedora'; generated: 2022-05-04T21:15:55Z solvables: 58687
rpm-md repo 'updates'; generated: 2022-06-28T01:24:42Z solvables: 12408
Resolving dependencies... done
Writing OSTree commit... done
Staging deployment... done
[root@microshift ~]# rpm-ostree install cri-o cri-tools microshift
Checking out tree 180514f... done
Enabled rpm-md repositories: copr:copr.fedorainfracloud.org:group_redhat-et:microshift fedora-cisco-openh264 fedora-modular fedora updates
Updating metadata for 'copr:copr.fedorainfracloud.org:group_redhat-et:microshift'... done
Importing rpm-md... done
rpm-md repo 'copr:copr.fedorainfracloud.org:group_redhat-et:microshift'; generated: 2022-04-21T04:36:37Z solvables: 6
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2022-04-07T16:52:38Z solvables: 4
rpm-md repo 'fedora-modular' (cached); generated: 2022-05-04T21:11:12Z solvables: 822
rpm-md repo 'fedora' (cached); generated: 2022-05-04T21:15:55Z solvables: 58687
rpm-md repo 'updates' (cached); generated: 2022-06-28T01:24:42Z solvables: 12408
Resolving dependencies... done
Will download: 10 packages (64.5 MB)
Downloading from 'copr:copr.fedorainfracloud.org:group_redhat-et:microshift'... done
Downloading from 'fedora-modular'... done
Downloading from 'fedora'... done
Downloading from 'updates'... done
Importing packages... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Freed: 15.8 kB (pkgcache branches: 0)
Added:
conntrack-tools-1.4.6-2.fc36.aarch64
cri-o-1.23.0-1.module_f36+13590+dd749e2f.aarch64
cri-tools-1.22.0-3.module_f36+13590+dd749e2f.aarch64
libnetfilter_cthelper-1.0.0-21.fc36.aarch64
libnetfilter_cttimeout-1.0.0-19.fc36.aarch64
libnetfilter_queue-1.0.5-2.fc36.aarch64
microshift-4.8.0-2022_04_20_141053.fc36.aarch64
microshift-selinux-4.8.0-2022_04_20_141053.fc36.noarch
runc-2:1.1.1-1.fc36.aarch64
socat-1.7.4.2-2.fc36.aarch64
Changes queued for next boot. Run "systemctl reboot" to start a reboot

With MicroShift and cri-o installed, I rebooted the Raspberry Pi 4, and it booted into the new OSTree deployment with the packages installed.

[root@microshift ~]# systemctl reboot

Once the Raspberry Pi 4 had rebooted, I logged in and confirmed the packages were installed.

[root@microshift ~]# rpm-ostree status
State: idle
Deployments:
● fedora-iot:fedora/stable/aarch64/iot
                  Version: 36.20220624.0 (2022-06-24T07:44:03Z)
               BaseCommit: 180514f73f45fe614b6d21b44278187ae02c8ba5c0d81be393183f38dcc52aec
             GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
          LayeredPackages: cri-o cri-tools microshift
           EnabledModules: cri-o:1.23

  fedora-iot:fedora/stable/aarch64/iot
                  Version: 36.20220624.0 (2022-06-24T07:44:03Z)
                   Commit: 180514f73f45fe614b6d21b44278187ae02c8ba5c0d81be393183f38dcc52aec
             GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4

Configure and Start MicroShift

Ok, I’ve installed MicroShift and cri-o, but there are still a few more steps needed to configure the OS before I can start MicroShift. I promise we are nearly there…

I downloaded the OpenShift 4.9 oc and kubectl clients, although it should be noted that MicroShift is based on version 4.8. I suspect 4.10 would be OK to use as well, but I decided to stick to a version closer to 4.8.

[root@microshift ~]# curl -o oc.tar.gz https://mirror.openshift.com/pub/openshift-v4/aarch64/clients/ocp/stable-4.9/openshift-client-linux.tar.gz && \
tar -xzvf oc.tar.gz && \
rm -f oc.tar.gz && \
install -t /usr/local/bin {kubectl,oc} && \
rm -r oc kubectl README.md
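
A quick sanity check that the clients landed where expected:

[root@microshift ~]# oc version --client
[root@microshift ~]# kubectl version --client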

Digging through the MicroShift GitHub site, I noticed a newer MicroShift binary had been released in the nightlies, so I used it to replace the binary supplied by the repos. I didn’t test whether the repo-supplied binary would work out of the box; as I’m a sucker for trying out the latest and greatest, I suspect it would, but I can’t confirm.

[root@microshift ~]# curl -L https://github.com/openshift/microshift/releases/download/nightly/microshift-linux-arm64 > /usr/local/bin/microshift
[root@microshift ~]# chmod +x /usr/local/bin/microshift
[root@microshift ~]# cp /usr/lib/systemd/system/microshift.service /etc/systemd/system/microshift.service
[root@microshift ~]# sed -i "s|/usr/bin|/usr/local/bin|" /etc/systemd/system/microshift.service
[root@microshift ~]# systemctl daemon-reload
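
To confirm systemd will now launch the replacement binary, you can check the ExecStart path in the unit and the version the binary reports:

[root@microshift ~]# systemctl cat microshift | grep ExecStart
[root@microshift ~]# /usr/local/bin/microshift version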

As with any device, regardless of whether it lives in a garage or a datacentre, you should enable the OS firewall.

[root@microshift ~]# systemctl enable firewalld --now
[root@microshift ~]# firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
[root@microshift ~]# firewall-cmd --zone=public --add-port=6443/tcp --permanent
[root@microshift ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
[root@microshift ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent
[root@microshift ~]# firewall-cmd --zone=public --add-port=5353/udp --permanent
[root@microshift ~]# firewall-cmd --reload
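
The 10.42.0.0/16 range trusted above is MicroShift’s default cluster (Pod) network. You can verify the rules took effect with:

[root@microshift ~]# firewall-cmd --zone=public --list-all
[root@microshift ~]# firewall-cmd --zone=trusted --list-sources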

Create a config.yaml file within /etc/microshift. This file will hold information that MicroShift will use when booting the cluster.

[root@microshift ~]# mkdir /etc/microshift
[root@microshift ~]# cat - > /etc/microshift/config.yaml <<EOF
cluster:
  url: https://192.168.1.78:6443
  domain: microshift.swinney.io
EOF

Let’s start cri-o and MicroShift

[root@microshift ~]# systemctl enable --now crio microshift

All being well, within a few minutes I should start to see Pods spinning up and the node eventually turn Ready.

[root@microshift ~]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift ~]# watch "oc get nodes;oc get pods -A"

If you are following along at the same time, you should see output similar to below.

NAME                    STATUS   ROLES    AGE   VERSION
microshift.swinney.io Ready <none> 7m v1.21.0
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-rzzwg 1/1 Running 0 7m
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-sfvzk 1/1 Running 0 7m
openshift-dns dns-default-tfm4m 2/2 Running 0 7m
openshift-dns node-resolver-hhlg9 1/1 Running 0 7m
openshift-ingress router-default-85bcfdd948-nh77j 1/1 Running 0 7m
openshift-service-ca service-ca-7764c85869-xj68s 1/1 Running 0 7m

If you did… well done there, legend. Give yourself a pat on the back. 👍

If you didn’t, don’t despair; this is what learning is all about.

Start by checking the cri-o and MicroShift logs for errors and debug messages.

[root@microshift ~]# journalctl -u crio -f
[root@microshift ~]# journalctl -u microshift -f
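
Since cri-tools was installed alongside cri-o, crictl is also handy for inspecting the container runtime directly when the API server itself is struggling:

[root@microshift ~]# crictl pods
[root@microshift ~]# crictl ps -a
[root@microshift ~]# crictl logs <container-id>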

It will likely be DNS…

Fry from Futurama saying “It’s Always DNS”

or maybe a bug.

Similar to the one I hit, which meant that MicroShift did not honour the domain name I specified within the /etc/microshift/config.yaml file. The bug resulted in the pods within MicroShift not being able to resolve a Service IP.

Thankfully a workaround is available:

[root@microshift ~]# oc -n openshift-ingress set env deployment/router-default ROUTER_SUBDOMAIN="\${name}-\${namespace}.apps.microshift.swinney.io" ROUTER_ALLOW_WILDCARD_ROUTES="true" ROUTER_OVERRIDE_HOSTNAME="true"

Deploy a basic application

So, feeling good about myself, I wanted to deploy a basic Nginx web server as a test to make sure my little Raspberry Pi running MicroShift was working as intended.

I created a namespace and applied an nginx manifest containing a deployment, service and route (a sketch of the manifest follows the output below).

[root@microshift ~]# oc new-project demo-nginx
Now using project "demo-nginx" on server "https://192.168.1.78:6443".
[root@microshift ~]# oc apply -f https://raw.githubusercontent.com/benswinney/microshift-demos/master/nginx/nginx.yaml
deployment.apps/demo-nginx created
service/demo-nginx created
route.route.openshift.io/demo-nginx created
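
For reference, the manifest boils down to roughly the following three objects. This is an abridged sketch, and the image name is illustrative; see the repo link above for the exact version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
      - name: demo-nginx
        image: nginxinc/nginx-unprivileged:latest # illustrative; any arm64-capable nginx image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
spec:
  selector:
    app: demo-nginx
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: demo-nginx
spec:
  to:
    kind: Service
    name: demo-nginx
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect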

Once the deployment, service and route have been created, they can be queried via the oc command and we can test the route via curl.

[root@microshift ~]# oc get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
demo-nginx 1/1 1 1 6m23s
[root@microshift ~]# oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
demo-nginx demo-nginx-demo-nginx.apps.microshift.swinney.io demo-nginx 8080 edge/Redirect None
[root@microshift ~]# curl https://demo-nginx-demo-nginx.apps.microshift.swinney.io --insecure
Hello from MicroShift on Raspberry Pi 4 8Gb

and via my web browser

Wooohoooo! 😄

What Next?

I’m buzzing that it’s now possible to run OpenShift, or as I should refer to it, MicroShift, on a Raspberry Pi 4, and this opens up huge possibilities for far-edge deployments. I’m quite excited to see where Red Hat takes this cool little project, and I will be following it closely.

So, what’s next? I’d like to see how I can integrate MicroShift into a project I’ve been working on with a few colleagues out of IBM Client Engineering, affectionately called “One Touch Provisioning”, or OTP for short. OTP takes the technologies of OpenShift, Red Hat Advanced Cluster Management (RHACM) and ArgoCD to provide Cluster and Application Life-cycling, as well as Governance, all driven by GitOps and Code.

From my reading, MicroShift has been built with the ability to auto-apply manifests at boot, so I’ll also be exploring ways to build a Fedora IoT image with MicroShift, cri-o and RHACM manifests baked in, so that the device automatically registers itself against an RHACM Hub cluster as soon as power is applied. That opens up the possibility of applying Applications, Governance and Cluster Configuration via GitOps, as sketched below.
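
As a rough sketch of what that could look like: at the time of writing, MicroShift scans a manifests directory at startup and applies anything it finds via kustomize. Assuming the RHACM import manifests have been exported as klusterlet-crd.yaml and import.yaml (hypothetical file names here), baking them in would be something like:

[root@microshift ~]# mkdir -p /var/lib/microshift/manifests
[root@microshift ~]# cp klusterlet-crd.yaml import.yaml /var/lib/microshift/manifests/
[root@microshift ~]# cat - > /var/lib/microshift/manifests/kustomization.yaml <<EOF
resources:
  - klusterlet-crd.yaml
  - import.yaml
EOF

Verify the paths against the current MicroShift docs, as the auto-apply location may change between releases.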


Ben Swinney

Chief Architect with IBM Client Engineering. All my opinions are my own and do not represent those of IBM.