How to Develop for ARM on a Budget

With a Kubernetes cluster based on Raspberry Pis, GitLab, and spare time.

Remco Hendriks
The Startup
Oct 1, 2020 · 29 min read


Photo by Craig Dennis from Pexels

Developing a full computer cluster in one’s bedroom may seem like an exotic or complicated thing to do. However, with the wonderfully versatile Raspberry Pi platform, any interested tinkerer can now easily play with building such a cluster themselves, and on a reasonable budget! While any developer can start some nodes on AWS or Azure at the click of a button, developing your own physical cluster has a satisfaction all its own, and lets you learn things you never would otherwise. At the end of this cookbook, you will have a small, but fairly fast and stable arm64-based Kubernetes cluster, paired with GitLab to use as a build and deployment platform, so the cluster can be used for something real.

While Raspberry Pis are simple and cheap, they are real computers running a real OS, making them an ideal tinkering platform. One of the major differences with other ‘real’ computers is the CPU architecture, but this may soon change as well. Intel and x86 have dominated the server and desktop markets for many years, but there are some major moves happening that hint this landscape is about to shift. Amazon AWS released its 2nd-generation arm-based 64-bit CPU instance type, Canonical released Ubuntu 20 with support for 64-bit arm, and Apple announced ARM-based Apple silicon for its upcoming Mac computers. Raspberry Pis offer probably the cheapest and easiest way to gain some real experience with ARM64 right now!

The moves around ARM ignited my interest in doing something interesting with a couple of Raspberry Pis I had lying around. Along the way I encountered some issues while trying to make everything work, which I’ve tried to document in this story. I used six Raspberry Pis to form a Kubernetes development cluster, which I integrated into my workflow for developing web applications. It fulfills an important part of my personal development pipeline, where I test the apps I make before they are shipped into production. I will touch on various subjects and tools that are put together to make this work. It’s quite involved, so there’s no deep-dive on the architecture or software used. You can use this as a cookbook to replicate the set-up I made. Some steps are abbreviated, and I presume basic knowledge of using Ubuntu Server with the command line, shell usage, and editing files.

Materials used

My experimental setup has the following hardware. Like other home-brew Kubernetes clusters, it uses commodity hardware. I’m particularly interested in how much performance I can buy in comparison with cloud providers, without enterprise-grade security or fail-overs.

List of required hardware:

  • 6x Raspberry Pi 4, 4GB: link
  • 6x USB3 64GB thumb drive for OS: link
  • 5x USB3 128GB thumb drive for data volumes: link
  • 8-port gigabit network switch: link
  • 6x 1ft Cat6 ethernet cable: link
  • 1x micro-SD card 32GB, only used at setup: link
  • 6x USB-C power supply: link
  • Cluster case: link
  • Optional: heat sinks: link

I’ve decided to go with the Raspberry Pi 4 with 4GB of memory, because it has the required power to run the ‘regular’ k8s version of Kubernetes maintained by CNCF and Google. While the ‘lightweight’ version k3s may work as well, with more memory to spare and thus workable on Pis with 2GB memory or less, I opt for the regular version to stick as close as possible to a production-grade cluster.

To power the Pis, I went with the cheapest option: six USB-C wall adapters. While this isn’t aesthetically pleasing, it’s cheaper than using a USB hub with cables, though not as convenient as six power-over-ethernet HATs.

Putting your cluster together is fairly straightforward, and should take a few hours. There are plenty of posts on the internet that assemble a similar set-up, so I won’t go into the details of that. When completed, it feels quite sturdy and is easy to work with. When you’re ready to put it on your bookshelf, get a small desk fan to blow air through it. I use two spare case fans connected to a 5V USB cable, which move just enough air and are completely silent. This way, my Pis rarely get hotter than 50 degrees Celsius.

My home-brew Pi cluster

The one-time cost of this set-up is USD 594.85. While this seems quite expensive upfront, it quickly earns itself back compared with renting a cluster from a cloud provider. I’ll make an overview later in this cookbook.

Let’s put this together.

  • Part 1: Install Ubuntu 20 LTS 64-bit on USB
  • Part 2: Install Kubernetes
  • Part 3: Set up Persistent Volumes with Rook and Ceph
  • Part 4: Set up MetalLB to access your services
  • Part 5: Organize workflow with GitLab
  • Part 6: Comparison and conclusion

Next: Install Ubuntu 20 LTS 64-bit on USB

Part 1: Install Ubuntu 20 LTS 64-bit on USB

I want each Pi to boot from USB, because it is much faster than a micro-SD card. This blog demonstrates that using USB drives increases performance tremendously. It’s also much less prone to data corruption on a sudden reboot, which can happen frequently considering the novelty of the stack I am using. I’m not using SSDs; a regular fast USB 3 thumb drive is cheaper and performant enough for my case.

I’ve followed this thread on the Raspberry Pi forum to make this work.

  1. Flash the micro-SD card with Raspbian OS Lite 32-bit (link), and all the OS thumb drives with Ubuntu Server 20.04 LTS 64-bit preinstalled image (link). I use balenaEtcher to do this (link).
  2. Using only the micro-SD card, boot up a Pi and update the bootloader: link
  3. Without rebooting, plug in the OS drive, find the OS boot partition, and mount it:
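
A quick way to find the drive (on my setup the thumb drive shows up as sda on the Raspbian system, but verify this yourself):

```bash
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```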

Here, the boot partition is /dev/sda1; mount it to your filesystem:
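
The mount point is my own choice here; any empty directory works:

```bash
sudo mkdir -p /mnt/pi-boot
sudo mount /dev/sda1 /mnt/pi-boot
```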

4. Decompress vmlinuz on the boot partition:
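
With the boot partition mounted at /mnt/pi-boot as above, the decompression is a single zcat (run through sudo, as the partition is mounted as root):

```bash
cd /mnt/pi-boot
sudo sh -c 'zcat vmlinuz > vmlinux'
```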

5. Edit config.txt, changing the [pi4] section to:
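
As a sketch of what the edit amounts to, based on the forum thread’s approach (the exact overlay lines may differ for newer images; make sure there is only one [pi4] block in the file afterwards):

```bash
# Appends a [pi4] section; remove or comment out the old [pi4] block by hand.
cat <<'EOF' | sudo tee -a /mnt/pi-boot/config.txt
[pi4]
max_framebuffers=2
dtoverlay=vc4-fkms-v3d
boot_delay
kernel=vmlinux
initramfs initrd.img followkernel
EOF
```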

This is all that is required to boot Ubuntu directly from USB. Shut down the Pi, remove the micro-SD card, and boot. It should present you the usual freshly-installed Ubuntu prompts to set up a password.

6. Enable automatic kernel decompression. This is required because the operating system may download kernel updates, replacing the old decompressed kernel, and the Pi won’t boot without a decompressed kernel. To do this automatically after each update session, add a new script auto_decompress_kernel to the boot partition:
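
The forum thread ships a more thorough script; this simplified sketch captures the idea of checking whether vmlinuz is newer than the decompressed vmlinux and regenerating it if so:

```bash
cat <<'EOF' | sudo tee /boot/firmware/auto_decompress_kernel
#!/bin/bash -e
# Decompress the kernel the Pi 4 firmware boots, if the compressed one changed.
BOOT="/boot/firmware"
if [ ! -f "$BOOT/vmlinux" ] || [ "$BOOT/vmlinuz" -nt "$BOOT/vmlinux" ]; then
    zcat "$BOOT/vmlinuz" > "$BOOT/vmlinux"
    echo "Kernel decompressed to $BOOT/vmlinux"
else
    echo "Kernel already decompressed, nothing to do"
fi
EOF
```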

In /etc/apt/apt.conf.d/, add a file 999_decompress_rpi_kernel and add:
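
The hook simply calls the script after every dpkg run:

```bash
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/999_decompress_rpi_kernel
DPkg::Post-Invoke {"/bin/bash /boot/firmware/auto_decompress_kernel"; };
EOF
```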

Make the script executable:
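
One chmod does it:

```bash
sudo chmod +x /boot/firmware/auto_decompress_kernel
```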

To test that the automation works, run sudo apt-get upgrade. It should mention whether a new kernel was decompressed or not.

Rinse and repeat for all the Pis. You can re-use the micro-SD card each time, you don’t need to flash it again after every use.

Next: Install Kubernetes

Part 2: Install Kubernetes

Getting Kubernetes up and running requires quite some pre-work, which can be tedious to repeat for every single node. So try to use an automation tool like Ansible to save some time on these steps.

I’ve used this post to set up my cluster. I’ve added and changed some instructions to my liking.

  1. Set up static IP addresses for your nodes. This makes debugging easier later. I have reserved the entire 192.168.3.0/24 range for my Pis and, later, for the services they will run. Edit /etc/netplan/00-installer-config.yaml with a snippet like the one below, and apply it with sudo netplan apply.
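
A sketch of what one node’s file can look like; the address, gateway, and interface name are my assumptions and need adjusting per node and per network:

```bash
# Example for the master at 192.168.3.11; change the address for each node.
cat <<'EOF' | sudo tee /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.3.11/24]
      gateway4: 192.168.3.1
      nameservers:
        addresses: [192.168.3.1]
EOF
sudo netplan apply
```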

2. Install Docker. This is pretty straightforward. Apply:
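
One straightforward way on Ubuntu 20.04 is Docker’s convenience script (the docker.io package from the Ubuntu repositories works too), followed by adding your user to the docker group:

```bash
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
```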

Exit the shell using exit and log in again to use the docker command.

Test if the installation works with running the hello-world container:
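
The classic smoke test:

```bash
docker run hello-world
```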

3. Change the control group driver to systemd instead of the default cgroupfs, so Docker and the kubelet use a single cgroup manager. This is what Kubernetes recommends. Change /etc/docker/daemon.json to:
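
This mirrors the daemon.json suggested in the Kubernetes container-runtime documentation:

```bash
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
```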

Additionally, cgroups need to be enabled when the system boots, so add the following values to /boot/firmware/cmdline.txt :

  • cgroup_enable=cpuset
  • cgroup_enable=memory
  • cgroup_memory=1
  • swapaccount=1

My /boot/firmware/cmdline.txt looks like:
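
Roughly like this; the first part is whatever your image shipped with (so it may differ from mine), everything stays on one single line, and only the four flags above are appended:

```bash
$ cat /boot/firmware/cmdline.txt
net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1
```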

4. Set up iptables for correct network routing. Put the following snippet into /etc/sysctl.d/k8s.conf:
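
These are the standard bridge settings from the kubeadm installation docs:

```bash
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```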

Apply with sudo sysctl --system.

5. Assign hostnames to your nodes; Kubernetes will use these as recognizable node names. I use kubernetes-master-1 for the master and kubernetes-worker-[x] for the workers. Do this with the following command:
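
For example, on the master:

```bash
sudo hostnamectl set-hostname kubernetes-master-1   # kubernetes-worker-1, -2, ... on the workers
```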

6. Reboot. Check that the hostname is set by entering hostname, and that the cgroups are set up properly with docker info. It shouldn’t show any warnings in the cgroup section anymore:
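
A quick way to verify the driver:

```bash
docker info | grep -i cgroup
# Cgroup Driver: systemd
```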

7. Set up Kubernetes repository and install packages. Use this one-liner:
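
Something along these lines worked at the time of writing; the apt repository location is the one documented back then, so check the current Kubernetes install docs if it has moved:

```bash
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
```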

If you disabled automatic updates in part 1, you shouldn’t worry about automatic updates of these packages. Otherwise, pin the packages with:
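
Pinning (holding) them is one command:

```bash
sudo apt-mark hold kubelet kubeadm kubectl
```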

8. Initialize the Kubernetes control plane. This is the point where all pre-installation comes together, and your first Pi will become the master node. With the initialization command, I already set the pod network CIDR for use with Flannel, the container network interface:
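
With 10.244.0.0/16 being Flannel’s default pod network:

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```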

If all goes well, it will say that the control-plane has initialized successfully, how to use the cluster, and the command to let other nodes join. It looks like:

I usually use the generated config on the master Pi node and on my development notebook, for convenient tinkering with the cluster. Check if you can access the cluster by listing the nodes:
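
The kubeconfig steps are the ones kubeadm itself prints; afterwards a node listing confirms access:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
```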

9. Install the Container Network Interface (CNI). This is required to make virtual networks between nodes. Flannel is a light-weight solution which has an arm64 implementation, which is suitable for my cluster. Apply it with the one-liner:
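
At the time of writing the manifest lived in the coreos/flannel repository; check the Flannel project for its current location:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```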

Verify that the installation succeeded by checking on pod statuses for coredns and kube-flannel. These should get a Running status after a while, so keep checking with the following command:
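
For example:

```bash
kubectl get pods --all-namespaces   # add -w to keep watching
```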

10. Add the other Pis to join the cluster: Repeat step 1–7 for the other Pis, and run the join command:
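
Using the token and hash printed by your own kubeadm init (placeholders below; the master address is whatever you assigned in step 1):

```bash
sudo kubeadm join 192.168.3.11:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```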

For every node that joins, a flannel pod is created on the new node, and should get a Running status. Check on your master node or development laptop (sample log for 2 Pis):
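
The check itself is just a pod listing; the -o wide flag shows which node each pod landed on:

```bash
kubectl get pods -n kube-system -o wide
```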

11. Install the Kubernetes Dashboard, the proof that the cluster is running OK. For me, this is the indicator that all previous steps are successful. The dashboard provides a useful web user interface to collect information about your cluster, and also allows you to manage the resources. The default installation one-liner works for arm64:
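
The version below is the one that was current when I wrote this; newer releases should work the same way:

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
```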

Check that the dashboard deployed correctly by checking the status of the pods, it should be Running:
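
For example:

```bash
kubectl get pods -n kubernetes-dashboard
```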

12. To access the dashboard, you will need to create a user and get an access token. This official guide details how to do that. I’ll summarize the steps to do so.

Make a file dashboard-sa.yaml, and apply it with kubectl apply -f dashboard-sa.yaml:
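
The official guide uses a service account named admin-user in the kubernetes-dashboard namespace:

```bash
cat <<'EOF' > dashboard-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f dashboard-sa.yaml
```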

Similarly, do so for dashboard-crb.yaml :
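
The ClusterRoleBinding ties that account to the cluster-admin role:

```bash
cat <<'EOF' > dashboard-crb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f dashboard-crb.yaml
```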

Obtain the access token by running:
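
Assuming the admin-user account from above:

```bash
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}")
```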

This will output the details of the service account token, with a long token string in the data part. You will need to copy and save that for usage with the dashboard.

To access the dashboard, run in a separate terminal window:
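
The proxy command is simply:

```bash
kubectl proxy
```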

Leave this running, and open in a web browser:
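
The dashboard is served behind the proxy at:

```
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```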

It should prompt with a login screen:

Login screen, image from the Kubernetes documentation

Select token, paste your access token, press sign in. You are now logged into the Kubernetes dashboard.

13. For the final step, I want the dashboard to display simplified resource usage statistics. This is done by metrics-server, a system to collect CPU and memory usage from pods, and tools to act upon changes of it, for example auto scaling policies. First, download the configuration:
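
At the time of writing, v0.3.7 was the current metrics-server release; adjust the version to whatever is current:

```bash
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
```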

In your favorite editor, open components.yaml, and locate the metrics-server deployment. At spec.template.spec.containers.args, add the following elements to the list:
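
These are the flags usually needed on home clusters, where the kubelet serving certificates aren’t valid for the node IPs; I assume the same two here:

```yaml
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
```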

Next, at spec.template.spec add the following property:
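
I can’t be certain which property is meant here; a common addition for metrics-server on kubeadm clusters is host networking, so this sketch assumes that:

```yaml
      hostNetwork: true
```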

Apply the configuration:
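
That is:

```bash
kubectl apply -f components.yaml
```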

After a few minutes, CPU and memory usage counters and graphs show in the Dashboard, and one of my favorite simple usage commands works:
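
That command being kubectl top:

```bash
kubectl top nodes
```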

I use this a lot to check up on the load of the Pis, especially when running pipelines and building images, which I describe in Part 5.

Next: Set up Persistent Volumes with Rook and Ceph

Part 3: Set up Persistent Volumes with Rook and Ceph

Now that the cluster is set up and running, you are ready to host software on your basic Pi Kubernetes setup. In reality, most production software configurations on Kubernetes use Persistent Volumes to store data, for usage with databases, simple file storage, and more. If you are like me, and want to simulate a production environment as closely as possible, you’re going to need this. Cloud solutions like AWS, Azure and GCP offer Persistent Volumes out of the box, which can be used immediately without caring much about availability and performance. In my case, I use Rook with Ceph to make Persistent Volumes work.

For my setup, I use decently large 128GB drives, because the Ceph cluster replicates data across nodes for availability. I also tend to over-provision volumes. Although I have a total of 640GB to use, if I configure a 30GB volume, 90GB of raw storage will be provisioned to ensure availability, which is useful in case of node failure. It is possible to over-provision your Ceph cluster, allocating much more than the 640GB available, but filling up the volumes may cause the cluster to become unstable.

Setting up Rook with Ceph is straightforward, but requires a few configuration changes to make it work with arm64. I use the Rook with Ceph quickstart documentation.

  1. Plug in the USB thumb drives reserved for the data storage use. This goes into the second USB3 port on each worker node. Ensure it is detected by the operating system by entering:
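
For example:

```bash
lsblk
```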

It shows the boot USB thumb drive as sda, and the data USB thumb drive as sdb. Now, I need to remove the default partition(s) the drive ships with. To do so, enter:
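
wipefs (or sgdisk --zap-all) clears the partition table; double-check the data drive really is /dev/sdb before running this, because it is destructive:

```bash
sudo wipefs --all /dev/sdb
```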

Clearing the thumb drive entirely makes it ready for use with Ceph. If you check again, sdb shows as empty:

2. Clone the repository, change configuration to work with arm64. Enter:
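
I’m assuming the release-1.4 branch, which was current at the time; the example manifests live under cluster/examples/kubernetes/ceph:

```bash
git clone --single-branch --branch release-1.4 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
```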

The default configuration of Rook uses CSI (Container Storage Interface) images hosted on quay.io. These are mostly built for x86, so I need to change these to use arm64 versions. Fortunately, the Raspbernetes repository has images specifically built for Raspberry Pis and arm64. To do so, open operator.yaml, enable unsupported ceph-csi images on line 44:
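
That setting is a key in the rook-ceph-operator-config ConfigMap inside operator.yaml:

```yaml
ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "true"
```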

At line 48, uncomment and change the image paths:
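
Roughly like this; the image names below are placeholders, so check the Raspbernetes registry for the exact repositories and tags matching your Rook release:

```yaml
ROOK_CSI_CEPH_IMAGE: "raspbernetes/ceph-csi:<tag>"
ROOK_CSI_REGISTRAR_IMAGE: "raspbernetes/csi-node-driver-registrar:<tag>"
ROOK_CSI_PROVISIONER_IMAGE: "raspbernetes/csi-provisioner:<tag>"
ROOK_CSI_SNAPSHOTTER_IMAGE: "raspbernetes/csi-snapshotter:<tag>"
ROOK_CSI_ATTACHER_IMAGE: "raspbernetes/csi-attacher:<tag>"
ROOK_CSI_RESIZER_IMAGE: "raspbernetes/csi-resizer:<tag>"
```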

3. Deploy the Rook operator:
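
Following the quickstart for that release (on newer Rook versions there is a separate crds.yaml to apply first):

```bash
kubectl apply -f common.yaml -f operator.yaml
```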

The pods associated with Rook should be in a Running status. Then, deploy the Ceph cluster. The default values in cluster.yaml suffice for clusters with three or more workers. Apply the cluster configuration:
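
That is, from the same directory:

```bash
kubectl apply -f cluster.yaml
```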

After fifteen minutes or so, the csi pods should show up and get a Running status.

If all pods in the rook-ceph namespace are Running (and jobs Completed), you are ready to configure a Storage Class to use the Rook-Ceph pool you just created.

4. Set up the block pool and storage class, and set it as default. From the ceph directory, enter:
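
The example manifest creates both the CephBlockPool and the rook-ceph-block StorageClass:

```bash
kubectl apply -f csi/rbd/storageclass.yaml
```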

The rook-ceph-block storage class should show up in the Kubernetes dashboard. For convenience, set it up as default storage class:
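
The usual annotation patch does this:

```bash
kubectl patch storageclass rook-ceph-block \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```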

This allows you to ignore the storage class definition of your Persistent Volumes, one less configuration item to change between your Pi cluster and your production cluster.

5. Test the setup with the example from the Rook with Ceph documentation. In the cluster/examples/kubernetes folder, open mysql.yaml with your editor, locate the wordpress-mysql deployment. Under spec.template.spec.containers, change the image: mysql:5.6 to image: mariadb. Unfortunately, mysql doesn’t support arm64 yet.

Similarly for wordpress.yaml, change the wordpress deployment image from wordpress:4.6.1-apache to wordpress:5-apache.

Afterwards, apply the configurations:
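
That is:

```bash
kubectl apply -f mysql.yaml
kubectl apply -f wordpress.yaml
```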

Both these apps should make two Persistent Volume Claims, and can be checked by entering:
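
For example:

```bash
kubectl get pvc
```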

It can take a few minutes for the volumes to show up, and get bound by the pods.

Your Pi cluster is now set up for use with Persistent Volumes.

If you are looking for more detailed configuration options, this post covers more information.

Next: Set up MetalLB to access your services

Part 4: Set up MetalLB to access your services

Now that your Pi cluster is set up and able to use Persistent Volumes for data storage, I want to access services running in my cluster from other computers in my local network. Currently, all configured services get a cluster-ip which is only accessible from the Pis themselves. If you still have the WordPress example running from the previous step, you can check this with:
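
For example (the example service is named wordpress):

```bash
kubectl get svc wordpress
```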

The Cluster-IP assigned to the service is a virtual network address, which you cannot reach from the rest of your local network. It also shows a <pending> External-IP, which will never be fulfilled; this is where MetalLB comes in.

MetalLB acts as a ‘virtual’ load balancer that is created automatically when you configure a service as type: LoadBalancer, just like cloud vendors do. With MetalLB, no physical external load balancer is provisioned; the job is done virtually within the cluster itself. It works on arm64, perfect for my development Pi cluster.

MetalLB also supports Load Balancing features using BGP which talks with my UniFi Security Gateway router. This is very exciting to configure and try out, but I’m not interested in configuring a high-performance environment for my development needs.

Setting up MetalLB is easy and requires little configuration. I use snippets from this detailed blog post.

The only requirement for this set-up is that you have a small range of IP addresses to spare on the network of your Pis. In my case, I use 192.168.3.100-192.168.3.199.

  1. Apply the MetalLB Kubernetes manifests. These are ready for use with arm64, no change required:
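
At the time of writing MetalLB was at v0.9.x, installed with two manifests plus a one-time memberlist secret; adjust the version to whatever is current:

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.4/manifests/metallb.yaml
# Only needed on first install:
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"
```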

This will start the MetalLB deployment, but it won’t work until the IP address range is set up in a ConfigMap. Make a file metallb-config.yaml with the following content:
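
A layer-2 pool covering my spare range looks like this:

```bash
cat <<'EOF' > metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.3.100-192.168.3.199
EOF
```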

Apply:
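
That is:

```bash
kubectl apply -f metallb-config.yaml
```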

MetalLB automatically activates, and accepts services with type: LoadBalancer.

If you still have the WordPress example running, it should automatically obtain an external IP now. To check this out, look in the Kubernetes Dashboard or execute:
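
For example:

```bash
kubectl get svc wordpress
```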

Here you go, you should now be able to access the WordPress example site from any machine on the same network with IP address 192.168.3.100.

Well, that was unexpectedly easy to do.

Next: Organize workflow with GitLab

Part 5: Organize workflow with GitLab

I have arrived at the point where my cluster and its tools are ready for use as a development cluster. To summarize, you should have:

  • Six Pis running Ubuntu Server 20.04 LTS booting from USB, with a Kubernetes cluster consisting of one master and five workers
  • Kubernetes dashboard with metrics-server
  • Persistent Volumes with Rook and Ceph, using the other USB thumb drives
  • MetalLB for assigning network IP addresses to services
  • GitLab (private or public) hosting your repository

With this set up, I am going to extend my development pipeline to use the Pi cluster to build and deploy artifact images. I use GitLab to host my private repositories, mainly because it comes with the Pipeline tools I like the most. It allows me to organize how I test and build my software, make artifact images, and deploy them to Kubernetes clusters. I assume basic knowledge of how to use GitLab and how Pipelines work.

For this guide, I am using a full-stack JavaScript web app as example. This consists of a NodeJS API as backend and an Angular.IO app as front-end. Both have their own docker image, which I am building in the pipeline.

Before, a common pipeline of mine would look like:

  • Build image artifact, tag with ‘develop’, push to image registry
  • Deploy ‘develop’ Kubernetes configuration on development cluster
  • Inspect the software on development cluster. If I am satisfied, continue pipeline
  • Add ‘production’ tag to ‘develop’ image, push to image registry
  • Deploy ‘production’ Kubernetes configuration on production cluster

I left out steps for additional things like software testing and hardening. Unless you are planning to run an arm64 production cluster, this flow will not work with my new Pi cluster: I cannot simply re-tag the develop image for production, as images made for my arm64 Pi cluster won’t run on an x86 production cluster. Thus, I need to build images separately for arm64 and x86.

Although the free GitLab runners might support cross-architecture building, I am not exploring this because the buildx feature is experimental, and building images can cost a fair amount of time, which is limited in the free tier. I have six Pis with arm64 quad-cores, so why not try to use those instead?

So the change in the pipeline is simple:

  • Build arm64 image artifact, tag with ‘arm64-develop’, push to image registry.
  • Deploy ‘develop’ Kubernetes configuration on development cluster
  • Inspect the software on development cluster. If I am satisfied, continue pipeline.
  • Build x86 image artifact, tag with ‘x86-production’, push to image registry
  • Deploy ‘production’ Kubernetes configuration on production cluster

Setting aside the additional x86 image build step, this looks pretty straightforward. In reality, it is not quite that simple, as you will find out later on.

To make this work, I need to configure and install gitlab-runner and docker-in-docker on the Pi cluster. For the former, I modify and use the official Helm chart. Here are the steps:

  1. Install Helm (if you haven’t done yet). Look up the documentation if you’d like to choose a method appropriate for your operating system. A simple, universal method is a local installation:
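
One way to do a local install is Helm’s own installer script:

```bash
curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
```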

2. Download and modify the values.yaml configuration file for gitlab-runner. The original chart repository is here; my changed helm values file is here.

I won’t discuss every change; the important ones are:

  • Reduce concurrency from 10 to 2, to not accidentally overburden the cluster.
  • Tag runners with ‘arm64’, to specifically schedule jobs to the Pi cluster.
  • Disable untagged pipelines, so ‘regular’ pipeline jobs keep running on x86.
  • Enable container privileges to allow for docker-in-docker execution.

3. Set up the runner registration token (line 25). You will find this token at Settings -> CI/CD -> Runners, in your repository or group in GitLab.

If you run your own GitLab server, you will need to change the registration URL at line 19.

I specifically do not set up caching at this point. I’ve tried several methods, and settled on a shared docker-in-docker service next to the GitLab runners, which works for docker layer caching. If you want artifact caching between pipeline steps, you can set up an S3 cache using Minio in your cluster.

For all other configuration options, refer to the GitLab runner configuration page.

4. Install gitlab-runner with Helm. First, add the GitLab repository:
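
That is:

```bash
helm repo add gitlab https://charts.gitlab.io
helm repo update
```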

Next, install the chart with your values file:
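
I’m assuming a dedicated gitlab-runner namespace here, matching the namespace mentioned below:

```bash
helm install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner --create-namespace \
  -f values.yaml
```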

It should set up the gitlab-runner namespace with the runner deployment. When the runner pod is running, it should automatically register itself in GitLab, visible in the GitLab Runners settings section:

Two private group runners set-up, one for arm64 and one for x86 (named arm64 here)

5. Set up a service account to allow the runner to manage Kubernetes resources. Although you won’t need this for building images, I use the runners for setting up resources in my cluster as well. In the values.yaml file, on line 271 the service account name is gitlab-sa, which doesn’t exist yet, so create it first:
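
Creating the bare service account is a one-liner (namespace assumed to be gitlab-runner, as above):

```bash
kubectl create serviceaccount gitlab-sa -n gitlab-runner
```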

This is enough to build images, so you can continue with the next step. If you’re interested in managing cluster resources with the runners, you’ll need to attach a role to the service account. The simplest option is to attach a cluster-admin ClusterRoleBinding, which lets the runners manage all resources in the cluster. This is generally a bad idea, as there are no security boundaries on what the runners can do with your cluster, and that extends to anyone who can access your repository on GitLab and run pipelines of their own. Alternatively, you can create a role with just the permissions the runner needs. If you want to apply the cluster-admin role anyway, do:
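
The binding name here is my own choice:

```bash
kubectl create clusterrolebinding gitlab-sa-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=gitlab-runner:gitlab-sa
```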

6. Set up the docker-in-docker service to enable docker layer caching. This is a specific solution to a big problem when building docker images for different architectures. I’ll summarize the differences briefly, to give an idea of the intricacies of adding package dependencies in a Dockerfile.

In my front-end Angular.IO app, I have node-sass as a dependency. Looking at the release artifacts page on GitHub, there are pre-built bindings for x86, but not for arm64. When installing node-sass on an x86 computer, it automatically detects and downloads the right binding. On unlisted architectures, it builds its own binding using node-gyp. The good news is that it builds automatically if the operating system has Python and general build tools installed (e.g. build-essential for Ubuntu, build-base for Alpine). The bad news is that it takes forever to build. Below is an excerpt of a build log.

That’s 12 minutes of installing packages for arm64. In comparison, the same build on x86:

No build required for x86, 12 minutes faster. And it hasn’t started building the Angular.IO artifact yet. For every push, I need to wait 34 minutes to build an image:

That’s more than one coffee of waiting.

I sorely need docker layer caching in the build step, that works on my Pi cluster.

Unfortunately, the issue post on GitLab is old, long, and not really resolved for distributed runners. The directions in the official guide are correct, but do not work for distributed runners either, because the cache gets deleted along with the pod it lives in after each run. A workable solution is mentioned in the middle of the issue post, using a separate docker-in-docker service alongside the gitlab-runner in your cluster, which I am going to use.

Essentially, the docker images are to be built by a perpetually running docker-in-docker service, backed by persistent storage to use as cache. Each pipeline job uses the docker-in-docker service as host, sharing the compute resources, while it can run multiple jobs in parallel.

To set up the docker-in-docker service, apply this gist:
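
I can’t reproduce the gist itself here, but its shape is roughly the sketch below: a single-replica docker:dind Deployment running privileged, with a PersistentVolumeClaim for the layer cache and a Service that the build jobs reach over plain TCP. All names and sizes are placeholders of mine.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind-cache
  namespace: gitlab-runner
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
  namespace: gitlab-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dind
  template:
    metadata:
      labels:
        app: dind
    spec:
      containers:
      - name: dind
        image: docker:19.03-dind
        securityContext:
          privileged: true          # required for docker-in-docker
        env:
        - name: DOCKER_TLS_CERTDIR  # disable TLS so jobs can use plain tcp on port 2375
          value: ""
        volumeMounts:
        - name: cache
          mountPath: /var/lib/docker
      volumes:
      - name: cache
        persistentVolumeClaim:
          claimName: dind-cache
---
apiVersion: v1
kind: Service
metadata:
  name: dind-service
  namespace: gitlab-runner
spec:
  selector:
    app: dind
  ports:
  - port: 2375
    targetPort: 2375
EOF
```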

This starts one pod in the gitlab-runner namespace, next to the runner installed by the Helm chart above. The deployment is not set up to scale; scaling it would split the cache across two pods, with cached layers potentially missing on any given run.

Your runner(s) and cache are set up now. In the next step, I verify that it works, and see the speed-up from the cache service.

7. Set up an arm64 build job and verify it runs. As mentioned earlier, I have two images to build: one for the back-end and one for the front-end. For brevity, I demonstrate only the build job for the front-end, which is the more complex of the two.

My .gitlab-ci.yaml file (left out stages for brevity):
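
Since I can’t reproduce the original file, here is a hedged sketch of what such a front-end build job can look like, pointing the Docker client at the docker-in-docker service from the previous step. The image name and tags are placeholders of mine; the CI_REGISTRY_* variables are GitLab’s built-ins:

```yaml
build-frontend-arm64:
  stage: build
  tags:
    - arm64                                    # run on the Pi cluster runners
  image: docker:19.03
  variables:
    DOCKER_HOST: tcp://dind-service.gitlab-runner.svc.cluster.local:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/frontend:arm64-develop" .
    - docker push "$CI_REGISTRY_IMAGE/frontend:arm64-develop"
```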

The build log:

Much better.

You’re all set with GitLab connected to your Pi cluster to build images.

Next: Comparison and conclusion

Part 6: Comparison and conclusion

To find out how well this setup integrates into my daily workflow, I spent a couple of weeks using it to run several apps simultaneously. To compare performance, I benchmarked the Pi cluster against two AWS EKS clusters, using either arm64 or x86 instances. The goal here was to look at affordability, so I focused on hardware I could get at a fixed price point. The only additional constraint was that the hardware needs at least 4 GB of memory, otherwise the cluster is not usable for the use-cases I looked at.

Benchmark

I test the performance of the cluster using sysbench, measuring the CPU, memory and I/O performance of the environment. This runs in a single pod on ubuntu:latest. It’s a simple test: it doesn’t run on bare metal and only exercises one node. It does not reflect the performance of the entire cluster, but it shows what you can get from a Pi configured to work in a Kubernetes cluster.

The clusters in the test:

  • My Pi cluster as built in this cookbook, using Pi 4 4 GB
  • AWS EKS cluster with m6g.medium arm64 instances, 1 vCPU and 4 GB memory
  • AWS EKS cluster with t3.medium x86 instances, 2 vCPU and 4 GB memory

For the EKS clusters, I choose 3 nodes per cluster, and 64GB root EBS volume.

Although the vCPU count varies across the clusters, I want to simulate budget conditions with a fixed 4 GB memory constraint. The m6g.medium and t3.medium are the minimum viable choices, costing USD 31 and USD 33 per month respectively (on demand, without EBS, eu-west-1).

To fire up a temporary shell, do:
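
A throwaway pod does the job:

```bash
kubectl run -it --rm sysbench --image=ubuntu:latest --restart=Never -- bash
```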

Install sysbench:
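
Inside the pod:

```bash
apt-get update && apt-get install -y sysbench
```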

Run benchmarks:
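
A plausible set of commands (sysbench 1.0 syntax; the exact options I used for thread counts and file-test modes aren’t critical, and the fileio test needs a prepare step first):

```bash
sysbench cpu --threads=4 run
sysbench memory run
sysbench fileio --file-test-mode=rndrw prepare
sysbench fileio --file-test-mode=rndrw run
sysbench fileio --file-test-mode=rndrw cleanup
```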

Results

Looking at the results, the AWS instances are on average faster on all benchmarks. Especially the memory and I/O speed are comparatively outstanding. The CPU performance is noteworthy, but only because the Pi can use all four cores simultaneously in the test.

The big differentiator in this comparison is the quality of the hardware. My Pi cluster is made of inexpensive and power-efficient components, while the AWS instances are enterprise-grade. This is especially true for the hard disks, the cheap USB thumb drives are at least five times slower than the high-speed SSD EBS drives of AWS.

I am happy with the CPU performance. Granted, the Pi has twice as many cores as the t3.medium and four times as many as the m6g.medium, but it is nearly twice as fast. Not bad for a budget cluster.

Cost

Owning a development cluster can be very costly if run on a commercial cloud provider. For exploring Kubernetes and the occasional tinkering, a set-up with 24/7 availability is remarkably expensive for the needs I have. I compare the run costs of my Pi cluster with a budget offering of AWS. Details of the comparison:

  • I use AWS pricing based on the eu-west-1 region.
  • AWS EKS has a flat-fee of USD 0.10 per hour per cluster, but doesn’t require a dedicated instance to operate as cluster master.
  • I add 64 GB of EBS storage as root volume per worker node, but no upfront storage for Persistent Volumes, as these work on-demand. No snapshots.
  • No Elastic IPs or Load Balancers. This is not required for the GitLab setup to build images.
  • No Data Transfer cost. This should be negligible.
  • No other AWS-specific EKS features enabled.
  • On-demand pricing.

Total cost of AWS EKS:

At USD 246.14 per month, the AWS EKS cluster has enough power and features to use reliably as a development cluster. For a solo developer, though, this is obviously not an economical choice; running a home-brew Pi cluster at a one-off price of USD 594.85 may be much more worthwhile.

Other observations

Changing package dependencies to work with arm64 is cumbersome, as some do not have prebuilt binaries. The usual fix is to add the required build tools such as gcc or Python. However, building both x86 and arm64 images from one Dockerfile means installing tools that are unnecessary for one platform or the other. Splitting the Dockerfile into per-platform versions may work, but duplicates maintenance.

While building an image with a large number of dependencies, the node became unresponsive, showing as NotReady with the kubectl get nodes command. I couldn’t reach it using ssh, and waited for 15 minutes to find it responsive again. Nothing to worry about.

To test for a more severe situation, I pulled the power plug out of one of the worker nodes. Like the situation described above, it reported as NotReady until I plugged it in again. The image building job expectedly failed, needing a manual restart. After about 5 minutes, everything was working again.

Conclusion and what’s next

I’m very happy with how this Pi Kubernetes cluster worked out for me. The biggest take-away was how reliable this setup ended up being. Once the cluster was up and running, I could confidently use it during long development sessions, making dozens of artifacts on the cluster without worrying that it would fail. After going through the process of building this cluster several times while writing this article and actively using it for a few weeks, I’m fully ready to recommend it as a great tool to work and play with.

After installing Kubernetes and setting up the resources, there are plenty of interesting things to try next, including topics I didn’t touch on in this cookbook.

Thanks for reading, please let me know what you think of it!


Thanks to Shabaz Sultan for reviewing
