Mastering Kubernetes on Edge Devices with K3s

Tom Brovender
Develeap
Published May 1, 2023 · 7 min read

At KubeCon 2023, Rey Lejano from SUSE gave an insightful talk called “Sharpen the Edge with K3s and Containerized Operating Systems.” This article summarizes the talk’s key points, focusing on the challenges of running Kubernetes on edge devices and the solutions offered by K3s and containerized operating systems, and gives you an idea of the content covered and the overall structure.

NOTE: There will be parts of the talk that won’t be covered in-depth in this article.

All images and code shared in this article come from the talk itself; I do not own any of them.

Understanding Edge Devices and Sites

Before diving into the challenges and solutions for Kubernetes on edge devices, it’s crucial to understand what an edge device or site is. Essentially, these are:

  1. Any remote site or device that processes data and is connected by a network.
  2. Any place that is not part of an organization’s core data center, network, or cloud.
  3. Remote devices that can sense, infer, and act by themselves.

Challenges of Edge Devices

When dealing with edge devices, there are several challenges to consider:

  1. Resources are scarce: Edge devices typically have limited computational power, storage capacity, and memory compared to traditional data centers or cloud environments. This means that deploying and running applications on edge devices requires careful consideration of resource constraints and optimization techniques to ensure smooth operation.
  2. Physical sites can be numerous or difficult to reach: Edge devices are often deployed in remote or hard-to-reach locations. This can make it challenging to access them for maintenance, repairs, or upgrades. In some cases, these devices might be spread across a large geographic area, adding to the complexity of managing them.
  3. Provisioning and onboarding the devices: Setting up and configuring edge devices can be a complex and time-consuming process. This includes tasks like installing software, configuring network settings, and ensuring the devices are secure. Additionally, onboarding new devices to an existing infrastructure can be challenging, especially when dealing with a large number of devices.
  4. Patching or upgrading the sites: Ensuring that edge devices are up-to-date with the latest security patches and software updates is critical to maintaining their security and reliability. However, this can be challenging due to the remote nature of the devices, as well as the limited resources available for updates.

The 4 Components for Success

To tackle these challenges, four key components are needed:

  1. Choose the Right Kubernetes Distribution: Selecting an appropriate Kubernetes distribution is essential for edge devices, as they have unique resource constraints and requirements. Opting for a distribution that is lightweight and tailored for resource-constrained environments can help enhance efficiency and performance.
  2. Select a Suitable Operating System: Utilizing an operating system designed for edge devices is important, as it can facilitate easy deployment, management, and updates. The operating system should be lightweight and optimized for running containerized applications, ensuring efficient resource utilization and ease of management.
  3. Enhance Operational Efficiency: Ensuring operational efficiency is critical when managing a large number of edge devices. This involves streamlining processes like device provisioning, configuration, monitoring, and updates. By automating these processes and centralizing management, organizations can reduce the time and effort required to manage their edge infrastructure and improve overall operational efficiency.
  4. Integrate Cloud-Native Technologies with Edge Infrastructure: Integrating edge devices with cloud-native technologies can significantly enhance the scalability, flexibility, and manageability of edge computing infrastructures. By leveraging cloud-native tools and practices, organizations can seamlessly deploy, manage, and scale applications across both edge devices and cloud environments, while benefiting from the scalability and flexibility of the cloud.

Kubernetes Distro

For our chosen distro, we will use K3s, a CNCF sandbox project packaged as a single binary. K3s is easy to install with just one command:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE=0644 sh -
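The one-liner above installs a server (control-plane) node. The same installer script can also join agent nodes via the documented K3S_URL and K3S_TOKEN environment variables. Since running the installer needs a real machine, this sketch only assembles the two variants of the command; the hostname and token are placeholders, not values from the talk:

```shell
# Sketch: the two flavors of the K3s install one-liner.
# K3S_KUBECONFIG_MODE, K3S_URL, and K3S_TOKEN are documented installer
# variables; "my-server" and <node-token> are placeholders.
k3s_install_cmd() {
  case "$1" in
    server)
      # K3S_KUBECONFIG_MODE=0644 makes the kubeconfig world-readable,
      # so non-root users can run kubectl.
      echo 'curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE=0644 sh -' ;;
    agent)
      # Agents point at the server URL and authenticate with the join token.
      echo 'curl -sfL https://get.k3s.io | K3S_URL=https://my-server:6443 K3S_TOKEN=<node-token> sh -' ;;
  esac
}

k3s_install_cmd server
k3s_install_cmd agent
```

On a real server node, the join token for agents can be read from /var/lib/rancher/k3s/server/node-token.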

A Brief Look at the K3s Architecture

A K3s cluster is made up of three main parts:

  1. Control plane: This is the brain of the Kubernetes cluster, responsible for managing all the resources and orchestrating the deployment and scaling of containerized applications.
  2. Datastore: In upstream Kubernetes this is an external etcd cluster, a distributed key-value store holding the cluster’s configuration data and state. K3s instead uses an embedded datastore to store the Kubernetes API objects, so no external etcd cluster is required.
  3. Nodes: These are the worker machines that run the containerized applications. In K3s, the nodes are similar to standard Kubernetes nodes, but with a reduced set of components, such as kubelet and containerd.
[Image from the talk: K3s Architecture]

Containerized OS

Although containerized operating systems are not new, the way we use them needs to be adapted. We will use Linux base images to create a bootable containerized OS with three tools:

  1. cOS — a toolkit to build a containerOS
  2. Luet — a container-based package manager
  3. Elemental — a software stack enabling centralized, full cloud-native OS management with Kubernetes

Elemental offers features such as:

  • Runtime and buildtime framework for booting containers in VMs, Cloud, and Bare Metal
  • Container image can be booted as-is or used to create an installation medium (e.g., iso, raw image, ova, cloud, ipxe, vagrant, qcow2)
  • Additional customizations via cloud-init
  • A/B upgrades and immutable systems
  • K3s embedding capability

Creating the Docker Image

The first step in our process is to create the Docker image. Rey provided an example Dockerfile during his talk:

# Let's copy over luet from the official images.
# This version will be used to bootstrap luet itself and Elemental internal components
ARG LUET_VERSION=0.32.0
FROM quay.io/luet/base:$LUET_VERSION AS luet

FROM registry.suse.com/bci/bci-minimal:15.4
ARG K3S_VERSION=v1.24.10+k3s1
ARG ARCH=amd64
ENV ARCH=${ARCH}
ENV LUET_NOLOCK=true

# Copy the luet config file pointing to the upgrade repository
COPY repositories.yaml /etc/luet/luet.yaml

# Copy luet from the official images
COPY --from=luet /usr/bin/luet /usr/bin/luet

# Install the Elemental toolchain components
RUN luet install -y \
    toolchain/yip \
    toolchain/luet \
    utils/installer \
    system/cos-setup \
    system/immutable-rootfs \
    system/grub2-config \
    system/base-dracut-modules

# Install k3s server/agent
ENV INSTALL_K3S_VERSION=${K3S_VERSION}
RUN curl -sfL https://get.k3s.io > installer.sh && \
    INSTALL_K3S_SKIP_START="true" INSTALL_K3S_SKIP_ENABLE="true" sh installer.sh && \
    INSTALL_K3S_SKIP_START="true" INSTALL_K3S_SKIP_ENABLE="true" sh installer.sh agent && \
    rm -rf installer.sh

## System layout
# Required by k3s etc.
RUN mkdir /usr/libexec && touch /usr/libexec/.keep

# Copy custom files
# COPY files/ /

# Copy cloud-init default configuration
COPY cloud-init.yaml /system/oem/

# Generate initrd
RUN mkinitrd

# OS level configuration
RUN echo "VERSION=999" > /etc/os-release
RUN echo "GRUB_ENTRY_NAME=derivative" >> /etc/os-release
RUN echo "welcome to our derivative" >> /etc/issue.d/01-derivative


NOTE: cloud-init.yaml is not covered in this article, but it is essential to understand that it defines the boot sequence that runs when the device starts. I recommend watching the talk to fully grasp the flow.
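Before moving on to the ISO step, the Dockerfile above has to be built into an image. Since the build needs a Docker daemon, this sketch only assembles the command; the image name, tag, and pinned K3s version are illustrative, not from the talk:

```shell
# Sketch: building the derivative image from the Dockerfile above.
# The image name/tag and build-arg value are examples of our own choosing.
K3S_VERSION="v1.24.10+k3s1"
IMAGE="mylinuxderivative:1.0"

# Assemble (and print) the build command; run it in the directory
# containing the Dockerfile, repositories.yaml, and cloud-init.yaml.
build_cmd="docker build --build-arg K3S_VERSION=${K3S_VERSION} -t ${IMAGE} ."
echo "${build_cmd}"
```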

With the Docker image in hand, we can create a bootable ISO using an iso.yaml definition and Luet:

packages:
  uefi:
  - live/grub2-efi-image
  isoimage:
  - live/grub2
  - live/grub2-efi-image

boot_file: "boot/x86_64/loader/eltorito.img"
boot_catalog: "boot/x86_64/boot.catalog"
isohybrid_mbr: "boot/x86_64/loader/boot_hybrid.img"

initramfs:
  kernel_file: "vmlinuz"
  rootfs_file: "initrd"

image_prefix: "mylinuxderivative-1."
image_date: true
label: "COS_LIVE"

luet:
  repositories:
  - name: Elemental
    enable: true
    urls:
    - quay.io/costoolkit/green
    type: docker

Now we run the cOS toolchain container, pointing luet-makeiso at iso.yaml and at the Docker image we just built, to create the ISO:

docker run -v $PWD:/cOS -v /var/run:/var/run \
  --entrypoint /usr/bin/luet-makeiso -ti --rm \
  quay.io/costoolkit/toolchain ./iso.yaml --image <name_of_image>:<tag>
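Because iso.yaml sets image_prefix together with image_date: true, the resulting ISO name is derived from the prefix plus the build date. A small sketch of that naming (the exact scheme may differ between cOS toolkit versions), followed by the usual way to flash the result:

```shell
# Sketch: the ISO filename implied by image_prefix + image_date in iso.yaml.
# The exact naming may differ between cOS toolkit versions.
image_prefix="mylinuxderivative-1."
iso_name="${image_prefix}$(date +%Y%m%d).iso"
echo "${iso_name}"

# To create a bootable USB stick, replace /dev/sdX with the real device
# (double-check with lsblk first -- dd will overwrite it!):
#   sudo dd if="${iso_name}" of=/dev/sdX bs=4M status=progress && sync
```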

When we boot the device from the resulting installation medium, the boot sequence connects it to our K3s cluster, and we can start using it.

Patching and Upgrades

For upgrades, Rey discussed using the System Upgrade Controller, a Rancher product, which can be found at https://github.com/rancher/system-upgrade-controller.

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: elemental-upgrade
  namespace: system-upgrade
  labels:
    k3s-upgrade: server
spec:
  concurrency: 1
  version: fleet-sample # Image tag
  nodeSelector:
    matchExpressions:
    - {key: k3s.io/hostname, operator: Exists}
  serviceAccountName: system-upgrade
  cordon: true
  # drain:
  #   force: true
  upgrade:
    image: quay.io/costoolkit/mylinuxderivative:1.2 # Image upgrade reference
    command:
    - "/usr/sbin/suc-upgrade"
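For the Plan to do anything, the System Upgrade Controller itself must be running in the cluster first. Since applying manifests needs kubectl access to a live cluster, this sketch only assembles the typical commands; the manifest URL follows the project's release assets at the time of writing (verify it against the repository), and the Plan filename is our own choice:

```shell
# Sketch: deploying the System Upgrade Controller, then applying our Plan.
# Verify the manifest URL against the rancher/system-upgrade-controller repo.
SUC_MANIFEST="https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml"
PLAN_FILE="elemental-upgrade-plan.yaml"   # the Plan above, saved locally

# Print the commands in order: install the controller, apply the Plan,
# then watch the upgrade jobs it spawns per node.
for cmd in \
  "kubectl apply -f ${SUC_MANIFEST}" \
  "kubectl apply -f ${PLAN_FILE}" \
  "kubectl -n system-upgrade get plans,jobs"
do
  echo "${cmd}"
done
```

The controller cordons each node (per `cordon: true`), runs the upgrade container on it, and moves on, never touching more nodes at once than `concurrency` allows.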

Solving the Edge Puzzle

The edge is not as complicated as it seems. We just have to remember the three main layers that run at an edge site:

  1. Hardware (H/W) layer — the hardware on which we are running
  2. OS layer — our Elemental-based Linux image
  3. Kubernetes (K8s) layer — K3s, in this case

In conclusion, Rey Lejano’s presentation at KubeCon offered valuable insights into managing Kubernetes clusters on edge devices, shedding light on the unique challenges and critical components for success. As a personal highlight, the talk turned my vague understanding into a solid grasp of how to implement and manage edge devices and sites effectively.

If you have followed this article, I highly recommend watching Rey’s full talk when it becomes available, as it offers a wealth of information beyond the key points covered here. Doing so will deepen your understanding and help you confidently navigate the world of edge devices and Kubernetes clusters.

Tom Brovender
DevOps Engineer @ Develeap | CKA | GitOps | TF Associate