K3S on Synology — What if it works?

Synology is known as a good NAS manufacturer: their NAS devices ship with many useful services, the most common being SMB, FTP, AFP, and NFS, but they also expose DNS, domain/Active Directory services, and many others.

Synology NAS models come with different hardware architectures, from ARM to x86. The "plus" models like the DS218+ have an Intel Apollo Lake CPU and support Docker.

Unfortunately I don’t have the DS218+ but “only” the DS218 model. This little box has a Realtek RTD1296 ARM 64-bit CPU and 2 GB of RAM, and it doesn’t support Docker out of the box.

I’m a Kubernetes enthusiast: I play with it and work with it every day. Just awesome! While surfing the web (Reddit) I found the K3s project (“5 less than k8s”, as they say), developed by Rancher.

From their github page, K3s is “Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 40mb.”
Pretty awesome: it supports arm64 and is already compiled for that architecture. https://github.com/rancher/k3s

One night, I thought... Maybe I could install K3s on my Synology nas.

At the very beginning I thought it would be pretty simple: it’s a 40MB Go binary, no problem, just execute it and you’re good to go.

I was wrong…

I could not use the automagic script because it requires systemd or OpenRC to set up the application:

curl -sfL https://get.k3s.io | sh -

Synology DSM has neither systemd nor OpenRC, so I needed to skip all those requirements. I downloaded the binary directly from the GitHub releases page and put it in a dedicated share on the Synology.
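The download step can be scripted. A minimal sketch, assuming the asset naming used on the rancher/k3s releases page (the version tag below is a placeholder; pick a real release):

```shell
#!/bin/sh
# asset_for_arch: map a `uname -m` value to the k3s release asset name.
# Asset names here are assumptions based on the rancher/k3s releases page.
asset_for_arch() {
  case "$1" in
    aarch64) echo "k3s-arm64" ;;
    armv7l)  echo "k3s-armhf" ;;
    x86_64)  echo "k3s" ;;
    *)       return 1 ;;
  esac
}

K3S_VERSION="v0.9.1"               # placeholder: pin whatever release you want
ASSET="$(asset_for_arch aarch64)"  # the DS218's RTD1296 is aarch64
echo "https://github.com/rancher/k3s/releases/download/${K3S_VERSION}/${ASSET}"

# On the NAS, fetch it into the dedicated share and make it executable:
#   wget -O /volume1/k3s/k3s-arm64 "<url printed above>"
#   chmod +x /volume1/k3s/k3s-arm64
```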

That 40MB binary is a “self-contained application”, but it still has some requirements to work correctly. The first run was unsuccessful.

./k3s-arm64 server

Starting the server just hangs and complains about missing CNI plugins… and that was only the tip of the iceberg.
I noticed that on startup the application extracts itself into /var/lib/rancher/k3s and then tries to start containerd (https://containerd.io/), but Synology does not ship the overlay snapshotter kernel module, so the startup failed.
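A quick way to confirm this kind of failure (my own check, not something from the K3s docs) is to ask the kernel which filesystems it supports before blaming containerd:

```shell
#!/bin/sh
# fs_supported: check whether a filesystem name appears in a
# /proc/filesystems-style listing (defaults to the real /proc/filesystems).
fs_supported() {
  grep -qw "$1" "${2:-/proc/filesystems}"
}

# On the DS218 this reports no overlay support, which is why containerd's
# default overlayfs snapshotter cannot start:
if fs_supported overlay; then
  echo "overlayfs available"
else
  echo "no overlay support -- use snapshotter = \"native\""
fi
```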

I could not rely on the K3s default startup process because the Synology configuration is totally customized. I needed a different approach.

Divide et Impera:

I started K3s with the -d (data-dir) option and the application extracted itself into my chosen path under /volume1/k3s:

./k3s-arm64 server -d /volume1/k3s

The application creates three main folders: “data”, “server”, and “agent”.
For my customizations I created a folder called “custom”.

I did some digging and found a workaround for the snapshotter issue: adding the snapshotter = "native" option to containerd’s config.toml. https://github.com/rancher/k3s/issues/575#issuecomment-526998741

First things first: start a functional containerd service.
From the automagic installation I picked up all the needed requirements, like setting the PATH env variable, since all the binaries (containerd included) are stored in the bin folder:

bash-4.3# cat /volume1/homes/admin/.profile
export PATH=$PATH:/volume1/k3s/data/<uuid_of_k3s_extracted>/bin

I edited config.toml with the snapshotter workaround:

# config.toml
[plugins.opt]
path = "/volume1/k3s/agent/containerd"

[debug]
level = "debug"

[plugins.cri]
stream_server_address = ""
stream_server_port = "10010"

[plugins.cri.containerd]
snapshotter = "native"

[plugins.cri.cni]
bin_dir = "/volume1/k3s/data/<uuid_of_k3s_extracted>/bin"
conf_dir = "/volume1/k3s/agent/etc/cni/net.d"

To start the containerd service in the foreground (containerd is in the PATH now), use the following command:

containerd -c /volume1/k3s/custom/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /volume1/k3s/agent/containerd

At this point the containerd server was up and running. Just to be sure, I tried to start a pod following https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md and it worked!
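That crictl smoke test can be sketched as follows. The pod and container specs are minimal examples of mine based on the crictl docs linked above, and busybox is just a small test image:

```shell
#!/bin/sh
# Write minimal pod sandbox and container specs for a crictl smoke test.
cat > /tmp/pod.json <<'EOF'
{
  "metadata": { "name": "smoke", "namespace": "default", "uid": "smoke-uid" },
  "log_directory": "/tmp",
  "linux": {}
}
EOF
cat > /tmp/container.json <<'EOF'
{
  "metadata": { "name": "bb" },
  "image": { "image": "docker.io/library/busybox:latest" },
  "command": ["sleep", "3600"],
  "linux": {}
}
EOF

# On the NAS, point crictl at the k3s containerd socket and run the pod:
#   export CONTAINER_RUNTIME_ENDPOINT=unix:///run/k3s/containerd/containerd.sock
#   crictl pull docker.io/library/busybox:latest
#   POD=$(crictl runp /tmp/pod.json)
#   CTR=$(crictl create "$POD" /tmp/container.json /tmp/pod.json)
#   crictl start "$CTR"
#   crictl ps
```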

The next step was to start K3s itself. With containerd running in the foreground in one terminal, in a new one I fired up K3s with the previous command (adding verbose logging): “./k3s-arm64 server -d /volume1/k3s -v 99”. The application was already extracted and containerd was up, so K3s started to create the main application pods like coredns and traefik.

Following the logs, new problems appeared: none of the iptables kernel modules were loaded, and K3s complained about the missing modules.

Cross-Compile, what a show

Synology DSM includes firewall management: a simple iptables service managed by the DSM web UI. From DSM I disabled the firewall management because K3s wants to manage all the iptables chains itself. The big problem here is that the iptables modules are not loaded at boot and the modprobe command is not present, so I had to use insmod instead. I also needed to find the correct load order for all the missing kernel modules.

Some modules were not present in /lib/modules at all, so I needed to compile them from source. A new chapter began:

I downloaded the kernel sources from SourceForge and set up a cross-compile environment (details in the tl;dr). From menuconfig I selected ALL the iptables modules and compiled them.
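For reference, the cross-compile environment boils down to a few variables. The toolchain prefix and the kernel commands below are assumptions of mine for the RTD1296 (aarch64); Synology publishes the exact sources and toolchain for each model on SourceForge:

```shell
#!/bin/sh
# Cross-compile environment sketch for the DSM kernel modules.
# ARCH/CROSS_COMPILE values are assumptions for an aarch64 target.
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-

# Then, inside the kernel source tree matching your DSM build:
#   make menuconfig      # enable the Netfilter/iptables options as modules (M)
#   make modules -j4
#   # copy the resulting .ko files to /lib/modules on the NAS
```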

After some tries and some failures (like kernel panics and a blue blinking LED) I found the correct order and created a sample startup script to load the missing kernel modules at boot.
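The startup script can be sketched as below. The module list and its order are illustrative only (assumptions on my part); the exact set depends on your DSM kernel, and the real list lives in the repo:

```shell
#!/bin/sh
# Sketch of enable_kernel_modules.sh: load netfilter modules with insmod in
# dependency order. The list below is illustrative, not the definitive set.
MODDIR="/lib/modules"
MODULES="x_tables.ko ip_tables.ko iptable_filter.ko nf_conntrack.ko \
nf_defrag_ipv4.ko nf_conntrack_ipv4.ko nf_nat.ko nf_nat_ipv4.ko iptable_nat.ko"

load_modules() {
  for m in $MODULES; do
    if [ -n "$DRY_RUN" ]; then
      echo "insmod $MODDIR/$m"
    elif ! insmod "$MODDIR/$m"; then
      echo "failed to load $m (maybe already loaded?)" >&2
    fi
  done
}

# usage:
#   DRY_RUN=1 sh enable_kernel_modules.sh start   # print what would be loaded
#   sh enable_kernel_modules.sh start             # actually insmod (as root)
if [ "${1:-}" = "start" ]; then load_modules; fi
```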

With a working containerd service and all the iptables modules loaded, I tried to restart the K3s service. No more iptables complaints, but the pods were in crashloop status. Something else was missing…

After some digging I found that a static route was missing.
Traffic from the external network reaches the pods through cni0, so a static route for the pod subnet over cni0 is a requirement (K3s’s default per-node pod subnet is 10.42.0.0/24). I added the route with the ip command:

/sbin/ip r add 10.42.0.0/24 dev cni0

After a few seconds the pods were in running state.

I made it, K3s was running on my little DS218.

Some fixes and recap

After resetting K3s and restarting all the services a few times, I found the infrastructure pretty solid, so I deployed some containers like nginx, a Docker registry, and Bitwarden.
I set up a custom CA with SSL certificates for my lab domain and configured the ingress to access the resources.
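The CA setup itself is not detailed in this post; a minimal sketch with openssl follows. All the names here (the lab-ca CN, nginx.example.nip.io, the nginx-tls secret) are placeholders of mine:

```shell
#!/bin/sh
# Create a throwaway lab CA, then issue a server certificate signed by it.
set -e
cd "$(mktemp -d)"

# 1. lab CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=lab-ca"

# 2. server key + CSR for the ingress host
openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
  -subj "/CN=nginx.example.nip.io"

# 3. sign the CSR with the lab CA
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out tls.crt -days 365

# 4. load it into the cluster so the ingress can serve it (run on the NAS):
#   kubectl create secret tls nginx-tls --cert=tls.crt --key=tls.key
```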

I created a repo on GitHub with the folder tree used in this POC and the instructions for compiling the kernel modules.

After some fixes I was able to reproduce the installation process just by following these steps:


Before setting up K3s, check whether all the iptables modules have been activated. To load all the modules, run:

/volume1/k3s/custom/scripts/enable_kernel_modules.sh start

Check the output and dmesg. If something is missing (and something will be), you need to cross-compile the missing kernel modules; see the README how-to in my GitHub repo.

Once the whole iptables stack is up and running, proceed to the next step.

Create a share on the Synology called k3s and set up the following directory structure:

├── custom
│   ├── config.toml
│   └── scripts
│       ├── enable_cni_route.sh
│       ├── enable_containerd.sh
│       ├── enable_k3s.sh
│       └── enable_kernel_modules.sh
├── default
│   └── .
└── k3s-arm

3 directories, 7 files

I created a default backup copy of the K3s data, just in case I have to reset everything:

./k3s-arm server -d /volume1/k3s/default

Immediately after the data extraction, stop the execution and run the same command again with a different path:

./k3s-arm server -d /volume1/k3s

As in the previous step, I killed the application right after the data extraction.
At this point I had a default backup of the data and the working base data.

Containerd setup

Add the k3s bin dir to the system PATH:

cat /volume1/homes/admin/.profile
export PATH=$PATH:/volume1/k3s/data/<uuid_of_k3s_extracted>/bin

Start the containerd server in the foreground:

containerd -c /volume1/k3s/custom/config.toml  -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /volume1/k3s/agent/containerd

K3s Server

Once containerd is up and running, start the K3s server with the following command (in the foreground, in a new terminal):

/volume1/k3s/k3s-arm64 server -d /volume1/k3s/ --kubelet-arg=eviction-hard=memory.available\<100Mi --kubelet-arg=eviction-hard=nodefs.available\<2Gi --kubelet-arg=eviction-hard=nodefs.inodesFree\<5\% --kubelet-arg=image-gc-high-threshold=100 --kubelet-arg=image-gc-low-threshold=99

I added the kubelet arguments just to disable the eviction mechanisms, because the Synology volume was nearly full.

Execute the following command in a new terminal to check the situation:

kubectl get pods --all-namespaces -o wide

After some time, all the pods should be in Running status.

If the coredns or traefik pod goes into CrashLoopBackOff, check the static route for cni0:

# ip r dev cni0 scope link

If the static route is missing, it must be added manually (10.42.0.0/24 is K3s’s default per-node pod subnet):

/sbin/ip r add 10.42.0.0/24 dev cni0
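This check-and-add logic is what enable_cni_route.sh is for; a sketch of it (10.42.0.0/24 is K3s’s default per-node pod subnet, an assumption you can verify with `ip addr show cni0`):

```shell
#!/bin/sh
# Sketch of enable_cni_route.sh: add the pod-subnet route over cni0 if it
# is not already present. The subnet is K3s's default; verify yours first.
SUBNET="10.42.0.0/24"

route_missing() {
  # reads `ip r` output on stdin; succeeds when no cni0 route is present
  ! grep -q "dev cni0"
}

ensure_route() {
  if ip r | route_missing; then
    /sbin/ip r add "$SUBNET" dev cni0
  fi
}

# usage (as root, after k3s has created the cni0 bridge):
#   ensure_route
```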

Configure Traefik:

Edit the Traefik config to disable SSL verification:

# add skip ssl in traefik
kubectl edit configmap -n kube-system traefik
# add after loglevel
insecureSkipVerify = true
# delete traefik pod to reload config
kubectl delete pod -n kube-system traefik-<id>

Start on Boot:

Copy the bash scripts under custom/scripts into /usr/local/etc/rc.d and mark them executable.
When the Synology boots up, it loads the kernel modules, starts containerd and K3s, and checks that the cni0 static route is present.
There is probably a better implementation of the startup procedure, but for now it’s just a POC.
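An rc.d wrapper can be as simple as the dispatcher below (a sketch of mine; the script names are the ones from the custom/scripts folder above, and DSM invokes rc.d scripts with start on boot and stop on shutdown):

```shell
#!/bin/sh
# Sketch of an /usr/local/etc/rc.d wrapper for the k3s boot scripts.
SCRIPTS="/volume1/k3s/custom/scripts"

dispatch() {
  case "$1" in
    start)
      "$SCRIPTS/enable_kernel_modules.sh" start
      "$SCRIPTS/enable_containerd.sh" start
      "$SCRIPTS/enable_k3s.sh" start
      "$SCRIPTS/enable_cni_route.sh" start
      ;;
    stop)
      : # best effort; containerd and k3s are killed on shutdown anyway
      ;;
    *)
      echo "Usage: $0 {start|stop}" >&2
      return 1
      ;;
  esac
}

# DSM calls this with "start" at boot:
#   dispatch "$@"
```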

Test it!

When all the pods are in running status, try to deploy an nginx and an ingress
(change the DNS host to a record using nip.io):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: nginx.<nas_ip>.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80

Check the status:

bash-4.3# kubectl get pods
nginx-deployment-68c7f5464c-2cvhv 1/1 Running 0 3d23h

Point your browser at the DNS host specified in the ingress to see the nginx welcome page.

That’s it, K3S on Synology — Happy SynoHacking!




Marco Mezzaro
Technology Enthusiast, Debian addicted, Python lover