I recently bought 3 ODROID-HC1 devices to add a dedicated storage cluster to my home Kubernetes cluster. I figured it was a good excuse to spend some time redeploying the cluster. Usually, I would’ve gone with CoreOS, since I’m a big fan of their immutable OS. Unfortunately, that is not an option if you have ARM nodes. So I had to choose between manual provisioning and Ansible. I chose Ansible.
I knew about Kubespray, but I still decided to spend some time looking for alternatives. I found a few other projects: some targeted at ARM boards, some with a “simplified” approach. In the end, I decided to go with Kubespray: partly because it is part of the official k8s tooling, and partly because it started to use kubeadm under the hood.
Kubespray is the oldest project aimed at automating Kubernetes cluster provisioning. There are plenty of good manuals on deploying a simple k8s cluster with Kubespray, so I’m not going to repeat any of that here. Instead, I’ll highlight the steps required to make things work in a multi-arch cluster.
Kubernetes’s support for non-amd64 platforms has improved dramatically over the last two years, thanks to the community and the popularity of cheap ARM boards like the Raspberry Pi. Kubespray does not yet work with a multi-arch setup out of the box, but it’s getting there with every release. As it turned out, it’s not very difficult to make it work. In general, the work can be split into a few categories:
- change some Kubespray configs where the architecture is still hardcoded;
- choose an overlay network that works across architectures and set it up;
- configure arch-specific software to be scheduled onto the corresponding nodes using nodeSelector.
When I started experimenting with Kubespray and multi-arch deployment, around two months ago, I had to change a lot of files: lots of URLs for binaries and docker images had a hardcoded amd64 part. Luckily, at the moment of writing only a few such places are left in the repo.
- One such hardcoded arch is still present in the latest 2.8 release, but it’s already fixed in master, so we need to apply that change ourselves.
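To illustrate the kind of change involved (variable names differ between Kubespray releases, so treat this as a sketch rather than an exact patch): the fix replaces a hardcoded `amd64` in a download URL with a per-host architecture variable, roughly like this in `roles/download/defaults/main.yml`:

```yaml
# Before: the architecture is baked into the binary download URL
# kubeadm_download_url: ".../release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"

# After: resolve the architecture per host
image_arch: "{{ host_architecture | default('amd64') }}"
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
```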
Previously, I had to make such changes all over the place.
- Another thing you have to deal with is the current implementation of the checksums map. Kubespray was not meant to run in a hybrid environment, so it assumes that a binary can only have a single checksum. That is not the case in a multi-arch environment, where you have one binary per architecture.
- This might change in the future; for example, the simple map could be replaced with some sort of multi-level map. But for now, the easiest way is to comment out the sha256 checksums for the binaries you’re going to use.
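In practice that means commenting out a few entries in the download role defaults. The variable names below are a sketch from memory; check your Kubespray release for the exact names:

```yaml
# roles/download/defaults/main.yml
# The checksums assume a single binary per version, which breaks when
# amd64 and arm64 nodes download different binaries for the same version.
# kubeadm_checksum: "..."
# hyperkube_checksum: "..."
```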
- The last thing left in this section is to extend the architecture-groups mapping with the ARM variants.
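If I remember the layout correctly, Kubespray maps `ansible_architecture` values to arch names via an `architecture_groups` dict in the kubespray-defaults role, so extending it looks roughly like this (path and existing entries may differ by release):

```yaml
# roles/kubespray-defaults/defaults/main.yaml
architecture_groups:
  x86_64: amd64
  aarch64: arm64   # added for 64-bit ARM boards
  armv7l: arm      # added for 32-bit ARM boards
```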
The main choice one has to make is what overlay network to use. Here are the main options I considered:
- cilium was my first choice. I had heard a lot of great things about it, and I had wanted to try it on a real system for a long time. Unfortunately, it does not support ARM yet.
- calico is another popular solution, which I haven’t tried yet. But, again, as far as I can tell, it still does not support multi-arch containers, though they are working on it. [UPDATE: the ticket is closed now, and you can use it on amd64, arm64 and ppc64 architectures.]
- flannel: I’ve been using it in most of my k8s clusters for a few years now, and it turns out I’ll be using it for quite a while longer. It is the only CNI that supports all the architectures. Well, maybe not all, but all the architectures I would ever need.
Deploying flannel is very straightforward. Since flannel does not yet support multi-arch containers, you need to add a DaemonSet for each architecture you use and limit the nodes using a nodeSelector. For example, to make it work on ARM64, all you need to do is copy the original manifest and update a few lines.
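The per-arch copy differs from the original amd64 manifest only in the name, the image tag and the nodeSelector. An abridged sketch, based on the flannel 0.10 manifest of that era:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64     # was kube-flannel-ds
  namespace: kube-system
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: arm64   # was amd64 in the original
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64   # was v0.10.0-amd64
```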
When I added all the changes for flannel and tried to provision the cluster, the playbook kept failing with an error.
The failing step did not seem to be related to networking, but the logs of several scheduled containers made it clear that the problem was in the CNI plugins.
After some googling, I found a few issues in the flannel repo mentioning plugins missing from flannel 0.10, in particular the portmap plugin. The workaround is either to downgrade to flannel 0.9.1 or to install the CNI plugins manually. I decided to install the plugins using Ansible. After this, provisioning succeeded.
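The Ansible task looked roughly like this; the version and the per-arch tarball naming follow the containernetworking/plugins releases of that time, so double-check them against the release page:

```yaml
- name: Install CNI plugins (incl. portmap) missing from flannel 0.10
  unarchive:
    src: "https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-{{ host_architecture }}-v0.7.1.tgz"
    dest: /opt/cni/bin
    remote_src: yes
    mode: 0755
```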
When you’re dealing with a multi-arch setup on a daily basis, at some point you’ll get tired of gating deployments: for every deployment you create, you need to add a nodeSelector so it can actually run.
Docker’s manifest lists are still not very popular, and only a few projects use them. As far as I know, only the Docker and GKE registries support them.
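Creating a manifest list is a matter of a few docker CLI calls (still experimental at the time); a sketch, with illustrative image names, assuming the per-arch images have already been pushed:

```shell
# The manifest subcommands were experimental at the time
export DOCKER_CLI_EXPERIMENTAL=enabled

# Create a manifest list pointing at the per-arch images
docker manifest create lwolf/flannel:v0.10.0 \
  lwolf/flannel:v0.10.0-amd64 \
  lwolf/flannel:v0.10.0-arm64

# Annotate the entries with their platform and push the list
docker manifest annotate lwolf/flannel:v0.10.0 \
  lwolf/flannel:v0.10.0-arm64 --os linux --arch arm64
docker manifest push lwolf/flannel:v0.10.0
```

After this, pulling `lwolf/flannel:v0.10.0` resolves to the right image for the node’s architecture, so a single DaemonSet works everywhere.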
Multiarch manifest-based dockerfiles [amd64,arm64,arm] - lwolf/docker-multiarch
So I created a repo on GitHub with a bunch of dockerfiles and build steps to build manifest-based containers for the software I use. The build is automated: it checks for releases daily and runs a build on new ones. Currently, I have flannel, flannel-cni, kubernetes-dashboard, Prometheus and helm there. Thanks to the manifest-based flannel container, I was able to remove my custom one-per-architecture flannel DaemonSets and use the upstream version instead. So, instead of a bunch of DaemonSets, all I need to do is replace a few CNI-related variables:
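In my case that meant overriding the flannel image variables in the cluster group vars. The variable names are as I recall them from the Kubespray defaults; verify them against your release:

```yaml
# group_vars/k8s-cluster.yml: point Kubespray at multi-arch images
flannel_image_repo: "lwolf/flannel"
flannel_cni_image_repo: "lwolf/flannel-cni"
```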
Upgrading K8S versions
In the time I’ve had this setup, a few versions of Kubernetes were released, so I had a good chance to test upgrades between releases. I started with v1.12 and went through the upgrades up to v1.12.5. In general, everything went well; I didn’t have any problems with the core components. The only thing that caused provisioning to fail from time to time was the nginx-ingress-controller provided by Kubespray: for some reason, Kubespray tries to delete and reinstall it on every run. I ended up disabling it, along with the other charts; I wasn’t planning on managing helm charts with Kubespray anyway. I find that workflow a bit strange: Ansible variables -> jinja -> helm templates -> helm install.
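Disabling the Kubespray-managed charts comes down to a couple of variables (names taken from the Kubespray defaults of that era; double-check against your version):

```yaml
# group_vars/k8s-cluster.yml: stop Kubespray from managing charts
helm_enabled: false
ingress_nginx_enabled: false
```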
After this change, updates became stable.
It’s great to see that Kubernetes is getting good support for multi-arch setups. And I’m pleased with how few hacks I had to apply to Kubespray to make it work seamlessly in a hybrid cluster.
I planned to cover the deployment of GlusterFS in this post as well, since it was the reason for this whole redeployment. But GlusterFS, as usual, caused a lot of trouble to deploy and deserves a separate post.
Originally published at blog.lwolf.org on February 2, 2019.