Kubernetes: a story about custom images, packer and sysdig
We have just updated our Kubernetes cluster (QA for now) from 1.5.2 to 1.9.1. There have been a lot of changes in Kubernetes since 1.5.2, but the most notable one, at least for me, has been the authorization switch from ABAC (Attribute-Based Access Control) to RBAC (Role-Based Access Control). However, I'm not going to write about Kubernetes authorization today; I'll keep that for some other time. Today it's about deploying Kubernetes to GCE with a custom OS image.
The default Kubernetes node OS in 1.5.2 was Debian, but at some point it switched to Container-Optimized OS (COS). After solving the authorization issues mentioned above and a few other minor things, I had our clusters running on COS without issues, except for one thing: sysdig. As described in the GCP documentation, sysdig is only supported on Ubuntu right now. One of the problems with sysdig on COS is that COS doesn't provide the kernel headers needed to build the sysdig kernel module. When that happens, sysdig tries to download a pre-compiled module instead, but that didn't work either.
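You can see the header problem quickly on a node. A minimal check (these are the standard locations an out-of-tree module build looks for, nothing sysdig-specific, and they are typically missing on COS):

# If neither of these exists, kernel modules like the sysdig probe can't be compiled on the node:
ls /lib/modules/$(uname -r)/build
ls /usr/src/linux-headers-$(uname -r)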
At that point I realized I had to use Ubuntu instead of COS, so I used the following Kubernetes environment variables:
export KUBE_OS_DISTRIBUTION=ubuntu
export KUBE_GCE_MASTER_PROJECT=ubuntu-os-cloud
export KUBE_GCE_MASTER_IMAGE=ubuntu-1604-xenial-v20180109
export KUBE_GCE_NODE_PROJECT=ubuntu-os-cloud
export KUBE_GCE_NODE_IMAGE=ubuntu-1604-xenial-v20180109
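For context, we deploy with the upstream cluster scripts, so these variables get picked up when bringing the cluster up, roughly like this (a sketch, assuming an extracted Kubernetes 1.9 release tree):

# With the variables above exported, from the root of the Kubernetes release:
export KUBERNETES_PROVIDER=gce
cluster/kube-up.sh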
Unfortunately, that didn't get very far and the Kubernetes deployment failed. I confirmed it by sshing into the Kubernetes master node, which greeted me with:
Broken (or in progress) Kubernetes node setup!
Check the cluster initialization status using the following commands.
Master instance:
- sudo systemctl status kube-master-installation
- sudo systemctl status kube-master-configuration
When running sudo systemctl status kube-master-installation I saw that Python's yaml package was missing. So now what? What if I tried a different OS? That's what I did: I changed the Kubernetes variables mentioned above to use CoreOS (now called Container Linux), but that also went badly. The problem with Container Linux was not a missing Python package but worse: the Container Linux scripts were no longer supported. Interestingly, I just checked and support for Container Linux was removed 10 hours ago.
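If you want to see the missing dependency for yourself, a quick check on the master node looks roughly like this (a sketch; python-yaml is the Ubuntu 16.04 package that provides PyYAML for Python 2):

# Fails on the stock GCE Ubuntu image because PyYAML is not installed:
python -c 'import yaml'
# Installing it by hand, which is what the custom image ends up automating:
sudo apt-get update && sudo apt-get install -y python-yaml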
So, what's wrong with the GCE Ubuntu image? To test one more thing, I tried launching a GKE cluster with Ubuntu, and that worked. That's when I realized that the GCE Ubuntu image must be different from the GKE one (it seems obvious now). The GCE image is basically a vanilla Ubuntu with some GCP-specific bits, like the kernel.
The next question was: how do I install the missing dependencies before Kubernetes launches? My initial gut feeling, since we deploy Kubernetes manually, was to add that to the Kubernetes startup scripts, but things were getting hairy and it didn't feel very clean. So I asked in the GCP Slack channel and someone (mickej) suggested building my own custom image. I had no idea how to do that, but GCP has pretty good documentation on creating custom images (actually, it has pretty good documentation for everything). I went ahead and wrote a script to create an image based on an existing one and install all the dependencies Kubernetes was missing.
The script creates a new instance based on an existing GCE Ubuntu image, uploads another script that performs the installation step, then stops the instance, creates a new image from the instance's disk and finally deletes the instance.
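In essence it boils down to a handful of gcloud calls along these lines (a sketch: the instance name, zone, target image name and install-deps.sh are placeholders, not the real values):

#!/bin/bash
# create-image.sh (sketch): build a custom Ubuntu image for Kubernetes nodes.
set -euo pipefail

BASE_IMAGE=ubuntu-1604-xenial-v20180109   # stock GCE Ubuntu image
INSTANCE=image-builder                    # temporary build instance (placeholder)
NEW_IMAGE=ubuntu-1604-k8s-v1              # resulting image name (placeholder)
ZONE=europe-west1-b                       # placeholder zone

# 1. Boot a temporary instance from the stock GCE Ubuntu image.
gcloud compute instances create "$INSTANCE" --zone "$ZONE" \
  --image "$BASE_IMAGE" --image-project ubuntu-os-cloud

# 2. Upload and run the script that installs the missing dependencies
#    (e.g. python-yaml); install-deps.sh is a hypothetical name.
gcloud compute scp install-deps.sh "$INSTANCE":/tmp/ --zone "$ZONE"
gcloud compute ssh "$INSTANCE" --zone "$ZONE" --command "sudo bash /tmp/install-deps.sh"

# 3. Stop the instance and create an image from its boot disk
#    (by default the boot disk has the same name as the instance).
gcloud compute instances stop "$INSTANCE" --zone "$ZONE"
gcloud compute images create "$NEW_IMAGE" --source-disk "$INSTANCE" --source-disk-zone "$ZONE"

# 4. Clean up the temporary instance.
gcloud compute instances delete "$INSTANCE" --zone "$ZONE" --quiet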
So far so good. At this point my shiny new Kubernetes 1.9.1 cluster was up and running except, again, for sysdig. So, now what? I checked sysdig's pod logs: the driver was being built, but for some reason it was not being loaded into the kernel. Running dmesg on a Kubernetes minion showed me why:
[ 507.078542] sysdigcloud_probe: failed to find page_fault_user tracepoint
[ 507.142649] sysdigcloud_probe: driver loading, sysdigcloud-probe 0.75.0
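In hindsight, a quick way to see what the probe is complaining about is to list the tracepoints the running kernel actually exposes (a sketch; tracefs is usually mounted under /sys/kernel/debug/tracing):

# On x86 the page fault tracepoints normally show up as
# exceptions:page_fault_user and exceptions:page_fault_kernel;
# if the grep comes back empty, the probe has nothing to attach to.
sudo grep page_fault /sys/kernel/debug/tracing/available_events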
My reaction was: what is this? What are these tracepoints? Now what? I cloned the sysdig repo from GitHub and looked for page_fault_user. The code in master was newer and handled those messages properly. So I looked at the history and voilà! The commit was only a few hours old and addressed a very recent change that went into Ubuntu. But how did I end up hitting that? Well, remember the name of my base Ubuntu image: ubuntu-1604-xenial-v20180109. I was just caught in the middle of very recent changes. The good news is that the sysdig folks were incredibly fast, and in a couple of hours sysdig agent 0.76.0 was available. sysdig is now a happy running citizen of our cluster.
Wait, didn't you mention packer in the title? Ah, right. Well, after the sysdig findings it was already 1:30am and I was already thinking about writing an article about all this. One thing still felt awkward though: maybe there's a better way to create the custom image. I fell asleep.
This morning, a colleague reviewing my merge request said something like: this is all cool, but have you considered using packer or something similar? Shame on me, but I didn't know anything about packer. A quick search and I found exactly what I needed: the Google Compute builder. In about 30 minutes and a few reviews I replaced the create-image.sh script with a call to packer and a template file.
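A minimal template for packer's googlecompute builder looks roughly like this (a sketch; the project, zone, image names and install-deps.sh are placeholders), written out and built from the shell:

# Write a minimal packer template for the Google Compute builder and run it.
cat > ubuntu-k8s-image.json <<'EOF'
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my-gcp-project",
      "source_image": "ubuntu-1604-xenial-v20180109",
      "source_image_project_id": "ubuntu-os-cloud",
      "zone": "europe-west1-b",
      "ssh_username": "packer",
      "image_name": "ubuntu-1604-k8s-v1"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "install-deps.sh",
      "execute_command": "{{ .Vars }} sudo -E bash '{{ .Path }}'"
    }
  ]
}
EOF
packer build ubuntu-k8s-image.json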
It all ended up pretty well: I learned the manual steps for creating custom images, what Linux kernel tracepoints are, about packer, and a few other things. And that's the end of the story.