Kubernetes the illumos way, part two

Tony Norlin
8 min read · Dec 16, 2021


This is the second part of the proof of concept on creating a functional illumos-based control plane, inspired by Kelsey Hightower's excellent official guide, Kubernetes the Hard Way. We are now at chapter 08: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md

The complete illumos control plane

Bootstrapping the control plane components

Previously we bootstrapped etcd, which runs nicely as an external etcd alongside an existing, ordinary Kubernetes cluster. At least it has done so for me for a couple of months without issues.

Bootstrap the kube-apiserver

As with the etcd component, we need to adapt this section too.

zadm create -b pkgsrc kube-apiserver < kube-apiserver.json
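For reference, a minimal sketch of what kube-apiserver.json could look like, following the zone configurations from part one (the VNIC name, global NIC, default router and zonepath below are placeholders; adjust them to your environment):

{
  "autoboot": "true",
  "brand": "pkgsrc",
  "ip-type": "exclusive",
  "net": [
    {
      "allowed-address": "192.168.200.1/24",
      "defrouter": "192.168.200.254",
      "global-nic": "igb0",
      "physical": "kubeapi0"
    }
  ],
  "zonename": "kube-apiserver",
  "zonepath": "/zones/kube-apiserver"
}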

Boot the zone

zadm boot kube-apiserver

As it is a pkgsrc zone we are not required to log on through a console; just run

zlogin kube-apiserver

Copy the kube-apiserver binary from the build zone into /opt/local/bin, and copy the ca.pem, ca-key.pem, service-account.pem, service-account-key.pem, encryption-config.yaml, kubernetes.pem, kubernetes-key.pem into /var/lib/kubernetes.

If you, like me, opt not to run SSH in the zones, then a simple way to transfer the files is to create a base64 blob on the host where the keys were generated:

tar -czf - ca.pem ca-key.pem service-account.pem service-account-key.pem encryption-config.yaml kubernetes.pem kubernetes-key.pem  |base64

then, in the kube-apiserver zone, paste the blob into the following pipeline (terminate the input with Ctrl-D):

mkdir -p /var/lib/kubernetes; base64 -d |(cd /var/lib/kubernetes; gtar -xzf -)

The kube-apiserver (or your DNS) needs to be able to resolve the cluster nodes:

cat <<EOF>>/etc/hosts
192.168.200.1 kube-apiserver
192.168.200.2 etcd
192.168.200.3 kube-ctrlmgr
192.168.200.4 kube-sched
192.168.200.5 worker0
192.168.200.6 worker1
192.168.200.7 worker2
EOF
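A quick check that the names resolve as intended:

getent hosts worker0

which should print 192.168.200.5 worker0.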

Declare the variables that will be expanded into the SMF method script below

INTERNAL_IP=192.168.200.1 # $(ipadm show-addr)
ETCD_IP=192.168.200.2

Create the SMF method

cat <<EOF | sudo tee /lib/svc/method/kube-apiserver
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"
#
# Start/Stop the kube-apiserver service
#
. /lib/svc/share/smf_include.sh
case "\$1" in
'start')
exec /opt/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://${ETCD_IP}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://${INTERNAL_IP}:6443 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2 > /var/log/kube-apiserver.log 2>&1 &
;;
'stop')
exec /usr/bin/pkill kube-apiserver
;;
*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF

Create the SMF Manifest for kube-apiserver

cat <<EOF | sudo tee /lib/svc/manifest/kube-apiserver.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-apiserver" >
<service name="application/apiserver" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/kube-apiserver start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >Kube Apiserver v1.23.0</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF

Enable the kube-apiserver

chmod +x /lib/svc/method/kube-apiserver
svccfg import /lib/svc/manifest/kube-apiserver.xml
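Since the manifest has create_default_instance enabled, the import also starts the service. A quick sanity check (this assumes curl is installed in the zone, e.g. via pkgin install curl; anonymous access to /healthz is permitted by the default RBAC bootstrap policy):

svcs -p application/apiserver
curl --cacert /var/lib/kubernetes/ca.pem https://192.168.200.1:6443/healthz

The last command should print ok.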

Bootstrap the kube-controller-manager

zadm create -b pkgsrc kube-ctrlmgr < kube-ctrlmgr.json

Boot the zone

zadm boot kube-ctrlmgr

As it is a pkgsrc zone we are not required to log on through a console; just run

zlogin kube-ctrlmgr

Copy the kube-controller-manager binary into /opt/local/bin, and copy the ca.pem, ca-key.pem, service-account.pem, service-account-key.pem, encryption-config.yaml, kubernetes.pem, kubernetes-key.pem and kube-controller-manager.kubeconfig into /var/lib/kubernetes.

As in the previous section, issue a base64 blob

tar -czf - ca.pem ca-key.pem service-account.pem service-account-key.pem encryption-config.yaml kubernetes.pem kubernetes-key.pem kube-controller-manager.kubeconfig  |base64

then, in the kube-ctrlmgr zone, transfer it as follows:

mkdir -p /var/lib/kubernetes; base64 -d |(cd /var/lib/kubernetes; gtar -xzf -)

Define the CLUSTER_CIDR, which the method script below will expand

CLUSTER_CIDR=10.200.0.0/16

Create the SMF method script for kube-controller-manager

cat <<EOF | sudo tee /lib/svc/method/kube-controller-manager
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"
#
# Start/Stop the kube-controller-manager service
#
. /lib/svc/share/smf_include.sh
case "\$1" in
'start')
exec /opt/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2 > /var/log/kube-controller-manager.log 2>&1 &
;;
'stop')
exec /usr/bin/pkill kube-controller-manager
;;
*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF

Create the SMF Manifest as follows

cat <<EOF | sudo tee /lib/svc/manifest/kube-controller-manager.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-controller-manager" >
<service name="application/controller-manager" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/kube-controller-manager start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >Kubernetes Controller Manager v1.23.0</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF

Enable the Kubernetes Controller Manager

chmod +x /lib/svc/method/kube-controller-manager
svccfg import /lib/svc/manifest/kube-controller-manager.xml
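As before, a quick check that the service came up and stays online; the log should show the controllers starting and leader election succeeding:

svcs -p application/controller-manager
tail /var/log/kube-controller-manager.log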

Bootstrap the kube-scheduler

zadm create -b pkgsrc kube-sched < kube-scheduler.json

Boot the zone

zadm boot kube-sched

As it is a pkgsrc zone we are not required to log on through a console; just run

zlogin kube-sched

Copy the kube-scheduler binary into /opt/local/bin. The scheduler needs no configuration files beyond kube-scheduler.yaml (copy that file into /etc/kubernetes/config/kube-scheduler.yaml) and kube-scheduler.kubeconfig (copy it into /var/lib/kubernetes/kube-scheduler.kubeconfig).

As in the previous section, issue a base64 blob

tar -czf - ca.pem ca-key.pem service-account.pem service-account-key.pem encryption-config.yaml kubernetes.pem kubernetes-key.pem kube-scheduler.kubeconfig  |base64

then, in the kube-sched zone, transfer it as follows:

mkdir -p /var/lib/kubernetes /etc/kubernetes/config; base64 -d |(cd /var/lib/kubernetes; gtar -xzf -)

Configure the Kubernetes Scheduler

Create the kube-scheduler.yaml configuration file:

cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

Create the SMF Method for kube-scheduler

cat <<EOF | sudo tee /lib/svc/method/kube-scheduler
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"
#
# Start/Stop the kube-scheduler service
#
. /lib/svc/share/smf_include.sh
case "\$1" in
'start')
exec /opt/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2 > /var/log/kube-scheduler.log 2>&1 &
;;
'stop')
exec /usr/bin/pkill kube-scheduler
;;
*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF

Create the SMF Manifest for kube-scheduler

cat <<EOF | sudo tee /lib/svc/manifest/kube-scheduler.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-scheduler" >
<service name="application/kube-scheduler" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/kube-scheduler start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >Kube Scheduler v1.23.0</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF

Enable the kube-scheduler

chmod +x /lib/svc/method/kube-scheduler
svccfg import /lib/svc/manifest/kube-scheduler.xml
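And the same sanity check here; the log should show the scheduler acquiring the leader-election lease:

svcs application/kube-scheduler
grep -i lease /var/log/kube-scheduler.log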

Verification

Through verification you should now see that the cluster is responding, although we cannot utilize the readiness or liveness probes (probing is now SMF's responsibility).

kubectl cluster-info --kubeconfig admin.kubeconfig

output

Kubernetes control plane is running at https://192.168.200.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
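For a closer look at the API server's own health checks, you can also query the readyz endpoint; verbose mode lists each individual check:

kubectl get --raw='/readyz?verbose' --kubeconfig admin.kubeconfig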

Skip the rest of the chapter, as we will not be putting a load balancer in front of the cluster (at least I am not, for now). Should you wish to, it is just a matter of creating a pkgsrc zone, assigning it a dedicated IP and installing haproxy. Most importantly, you would have to go back and regenerate the API server certificate so that it covers the HA endpoint, as sketched below.
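Purely as an illustration, a minimal haproxy.cfg for such a zone could look like this (192.168.200.10 is a made-up VIP for the hypothetical load-balancer zone, and with a single control-plane zone there is only one backend server to point at):

global
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend k8s-api
    bind 192.168.200.10:6443
    default_backend k8s-api

backend k8s-api
    server apiserver0 192.168.200.1:6443 check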

RBAC for Kubelet Authorization

This section is also identical to the official guide:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
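A quick check that both objects were created:

kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig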

The Control Plane is alive!

As of now, there should be a functional control plane. But how would you know, beyond asking simple questions with kubectl? There is no workload visible in the cluster at all yet.

Bootstrapping the Workers (on bhyve)

Next, I'll show how I bootstrapped my worker nodes and managed to create a functional cluster that is reachable externally. Click here to continue to the last part, and let me know if you want to hear more about illumos or Kubernetes in the homelab.
