Build a managed Kubernetes cluster from scratch — part 2

Tony Norlin
15 min read · May 12, 2022


It's been a while since part one of this series, where I set out to describe how one can build a managed cluster from scratch. It all started when I tried to build etcd on illumos and realized that not only could executable binaries be built, but etcd was also more responsive (at least in my homelab environment, as I could run it directly on bare metal instead of in a VM guest). So I proceeded with getting the Kubernetes control plane components running on illumos, and while I first had issues with communication, I found a highly desired feature that felt too good to be true, and it almost turned out to be too good to realize, at least back then.

Cilium released an impressive beta of their upcoming sidecarless service mesh with Kubernetes Ingress built in, so I installed the beta and suddenly had an Ingress working smoothly, and I changed direction from the hassle of a separately deployed Ingress controller to this one instead. I got hooked immediately, but there turned out to be one tiny obstacle: after 15–16 seconds (which felt random at the time) the Ingress for Hubble UI stopped working, while the Ingress for Grafana "mostly" worked as long as a dashboard was loaded, and an Ingress for the Longhorn frontend UI could be loaded maybe 4 times out of 5. The issue felt very intermittent, and I naturally tried to find faults within my own setup, especially as this is kind of a unicorn installation.

When I couldn't find an issue within 1–2 hours of troubleshooting I lost a bit of traction (this is mostly my late-evening hobby) and other daily matters started to claim time from the project. I saw some issues with ping against the Ingress IP externally and thought the problem could be related to BGP, routing, anti-spoof protection or networking in general, so I made some attempts to tear various components down and up again, even a 2-node 100% Linux setup on bare metal, but that turned out to be a known issue with ping in the MetalLB design and unrelated to my problem.

Anyway, I recently dropped a line about my issue and the Cilium team quickly responded that they had identified it as related to configuration settings of the underlying Envoy proxy and suggested a workaround. I built an image of the cilium-operator with the suggested change and my Ingress finally worked as intended, in this almost too-good-to-be-true state: a sidecarless Ferrari. The change is now merged into 1.12.0-rc2 and I felt it was time to move on with the writing.

Managed Cluster

This series intends to describe how to form a managed Kubernetes cluster with a separation between the Control Plane and the Worker Plane, where the Control Plane components are managed outside of Kubernetes. In the previous part I described how the certificates are generated, and in this part it is time to bootstrap the components that form the Control Plane.

Forming the Control Plane

Bootstrapping the control plane components

This will almost be a repeat of the Kubernetes the hard (illumos) way series, but pay attention to the details. The platform doesn't really matter here, and I expect it to work just as well if all components run on Linux (or Darwin, or another operating system the components have been ported to; at least the runtimes start and apparently work in Oracle Solaris, but as I don't have a license it doesn't inspire me to test it deeper at the moment). I'll describe the steps I've done on illumos, more specifically OmniOS with the excellent tool zadm, which makes the creation of zones a breeze. I welcome reader feedback on whether it also works in Oracle Solaris.

Likewise, the zone brand doesn't really matter; pick the one you are comfortable with. I like the pkgsrc brand as it is rather slim but still has a package catalogue with over 20,000 packages, but for the sake of this guide I used the sparse brand.

Bootstrap the nodes

etcd:

etcd really only needs to be exposed to the kube-apiserver, so it doesn't necessarily have to be on the same subnet.

for instance in {1..3}; do 
cat <<EOF | sudo tee /var/tmp/etcd${instance}.json
{
"autoboot" : "true",
"bootargs" : "",
"brand" : "sparse",
"cpu-shares" : "1",
"dns-domain" : "cloud.mylocal",
"fs-allowed" : "",
"hostid" : "",
"ip-type" : "exclusive",
"limitpriv" : "default",
"net" : [
{
"allowed-address" : "10.100.0.1${instance}/26",
"defrouter" : "10.100.0.62",
"global-nic" : "aggr0",
"physical" : "etcd${instance}",
"vlan-id" : "100"
}
],
"pool" : "",
"resolvers" : [
"1.1.1.1",
"9.9.9.9"
],
"scheduling-class" : "",
"zonename" : "etcd${instance}",
"zonepath" : "/zones/etcd${instance}"
}
EOF
zadm create -b sparse \
etcd${instance} < /var/tmp/etcd${instance}.json
zadm boot etcd${instance}
done

kube-apiserver:

For the sake of redundancy there should also be at least three apiserver instances, but let's keep that for a later exercise and for now settle for a single kube-apiserver. The apiserver needs to be reachable from (at least) the client, the kube-scheduler and the kube-controller-manager, and it needs to reach etcd, webhook controllers and the kubelets. Keep that in mind when considering firewall rules and routing.
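As a quick sanity check of those paths, something along these lines can be run once the zones exist; a minimal sketch, assuming nc (netcat) is available and that the addresses match the IP plan used in this guide (10.100.0.11-13 for etcd, 10.100.0.1 for the apiserver):

# From the kube-apiserver zone: verify the etcd client port on each etcd node.
for ip in 10.100.0.11 10.100.0.12 10.100.0.13; do
  nc -z -w 2 ${ip} 2379 && echo "etcd ${ip}:2379 reachable" || echo "etcd ${ip}:2379 NOT reachable"
done
# From a client machine (and from kube-scheduler/kube-controller-manager): the apiserver port.
nc -z -w 2 10.100.0.1 6443 && echo "apiserver 10.100.0.1:6443 reachable"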

cat <<EOF | sudo tee /var/tmp/kube-apiserver.json
{
"autoboot" : "true",
"bootargs" : "",
"brand" : "sparse",
"cpu-shares" : "1",
"dns-domain" : "cloud.mylocal",
"fs-allowed" : "",
"hostid" : "",
"ip-type" : "exclusive",
"limitpriv" : "default",
"net" : [
{
"allowed-address" : "10.100.0.1/26",
"defrouter" : "10.100.0.62",
"global-nic" : "aggr0",
"physical" : "kubeapisrv0",
"vlan-id" : "100"
}
],
"pool" : "",
"resolvers" : [
"1.1.1.1",
"9.9.9.9"
],
"scheduling-class" : "",
"zonename" : "kube-apiserver",
"zonepath" : "/zones/kube-apiserver"
}
EOF
zadm create -b sparse \
kube-apiserver < /var/tmp/kube-apiserver.json
zadm boot kube-apiserver

kube-controller-manager:

The kube-controller-manager talks only to the apiserver, so it could sit on an isolated etherstub.
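If you want to go that route, a sketch of what that could look like in the global zone follows; the etherstub name cpstub0 is just a placeholder, not something the zone configuration below assumes:

# Create an isolated internal switch (etherstub); the zone's "global-nic"
# could then point at cpstub0 instead of the physical aggr0.
dladm create-etherstub cpstub0
dladm show-etherstub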

cat <<EOF | sudo tee /var/tmp/kube-ctrlmgr.json
{
"autoboot" : "true",
"bootargs" : "",
"brand" : "sparse",
"cpu-shares" : "1",
"dns-domain" : "cloud.mylocal",
"fs-allowed" : "",
"hostid" : "",
"ip-type" : "exclusive",
"limitpriv" : "default",
"net" : [
{
"allowed-address" : "10.100.0.4/26",
"defrouter" : "10.100.0.62",
"global-nic" : "aggr0",
"physical" : "kubectrlmgr0",
"vlan-id" : "100"
}
],
"pool" : "",
"resolvers" : [
"1.1.1.1",
"9.9.9.9"
],
"scheduling-class" : "",
"zonename" : "kube-ctrlmgr",
"zonepath" : "/zones/kube-ctrlmgr"
}
EOF
zadm create -b sparse \
kube-ctrlmgr < /var/tmp/kube-ctrlmgr.json
zadm boot kube-ctrlmgr

kube-scheduler:

The same principle applies to the kube-scheduler.

cat <<EOF | sudo tee /var/tmp/kube-sched.json
{
"autoboot" : "true",
"bootargs" : "",
"brand" : "sparse",
"cpu-shares" : "1",
"dns-domain" : "cloud.mylocal",
"fs-allowed" : "",
"hostid" : "",
"ip-type" : "exclusive",
"limitpriv" : "default",
"net" : [
{
"allowed-address" : "10.100.0.7/26",
"defrouter" : "10.100.0.62",
"global-nic" : "aggr0",
"physical" : "kubesched0",
"vlan-id" : "100"
}
],
"pool" : "",
"resolvers" : [
"1.1.1.1",
"9.9.9.9"
],
"scheduling-class" : "",
"zonename" : "kube-sched",
"zonepath" : "/zones/kube-sched"
}
EOF
zadm create -b sparse \
kube-sched < /var/tmp/kube-sched.json
zadm boot kube-sched

Distribution of certificates

This guide expects that you have generated the certificates in the prescribed steps, but don't let that hinder you from bringing in your own external certificates from another established Certificate Authority of your choice. Just remember that the certificates need to be of type client, peer or server, and sometimes a combination of the different profiles. Look into part 1 for reference.

Also, no worries if the certs happen to be close to their expiry time; just go ahead and run the cfssl commands again to regenerate the certificates with a new expiry date for each service. There is no need to recreate the authorities, as they have an end date roughly 3 years ahead (unless you are reading this story in, say, 2024 and it happens to still be relevant).
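To quickly check how much time is left on a certificate, something like the following works; a sketch assuming the file layout from part 1 (the paths are examples):

# Print the notAfter date of the CA and a leaf certificate (paths are examples).
openssl x509 -noout -enddate -in ./etcd-ca/etcd-ca.pem
openssl x509 -noout -enddate -in etcd1.pem
# cfssl can show the same information:
cfssl certinfo -cert etcd1.pem | grep not_after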

The corresponding certificates for each node need to be distributed, and while I describe a way to do it manually for the sake of this guide, it would make sense to use the automation tool of your choice for this task (just as with your favourite CA in the previous part).

Open a terminal window at the location where the certificates were generated, hereafter referred to as terminal1.

etcd:

In terminal1, enter the directory where the etcd certificates were generated and execute the following to verify that the correct files (./etcd-ca/etcd-ca.pem, etcd1-server-key.pem, etcd1-server.pem, etcd1-peer.pem, etcd1-peer-key.pem, etcd-healthcheck-client-key.pem, etcd-healthcheck-client.pem) are indeed "selected" and renamed:

instance=1

(tar --transform=s,./etcd-ca/etcd-,, --transform=s,-key.pem,.key,\
--transform=s,pem,crt, --transform=s,etcd-,, --transform=s,etcd${instance}-,, \
--transform=s,etcd${instance},server, \
-czf - ./etcd-ca/etcd-ca.pem etcd${instance}-key.pem \
etcd${instance}.pem etcd${instance}-peer.pem \
etcd${instance}-peer-key.pem etcd-healthcheck-client-key.pem \
etcd-healthcheck-client.pem|gtar -tzf -)

Output:

-rw-rw-r-- tnorlin/tnorlin 1830 2022-01-01 00:03 ca.crt
-rw------- tnorlin/tnorlin 1675 2022-01-01 11:43 server.key
-rw-rw-r-- tnorlin/tnorlin 1631 2022-01-01 11:43 server.crt
-rw-rw-r-- tnorlin/tnorlin 1631 2022-01-01 11:15 peer.crt
-rw------- tnorlin/tnorlin 1679 2022-01-01 11:15 peer.key
-rw------- tnorlin/tnorlin 1679 2022-01-01 00:05 healthcheck-client.key
-rw-rw-r-- tnorlin/tnorlin 1562 2022-01-01 00:05 healthcheck-client.crt

Now that we know the certificates are output in the correct format, open a new terminal, terminal2, get into the etcd node and type the following command:

(mkdir -p /etc/kubernetes/pki/etcd; cd /etc/kubernetes/pki/etcd; \
base64 -d | gtar -xzf -)

The terminal2 now waits for input…

Run the command below in terminal1 and copy the base64-encoded output it prints (protip: pipe the output to something like xclip to transfer the data), paste it into terminal2, press enter to get a new empty line and issue an EOF (^d). Voila: if everything went as planned, all relevant certificates are now in place.

instance=1
(tar --transform=s,./etcd-ca/etcd-,, --transform=s,-key.pem,.key,\
--transform=s,pem,crt, --transform=s,etcd-,, --transform=s,etcd${instance}-,, \
--transform=s,etcd${instance},server, \
-czf - ./etcd-ca/etcd-ca.pem etcd${instance}-key.pem \
etcd${instance}.pem etcd${instance}-peer.pem \
etcd${instance}-peer-key.pem etcd-healthcheck-client-key.pem \
etcd-healthcheck-client.pem|base64)
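A quick sanity check in terminal2 after the extraction, assuming everything was pasted correctly, is to list the target directory and compare it against the tar listing above:

ls -l /etc/kubernetes/pki/etcd
# Expect ca.crt, server.crt/server.key, peer.crt/peer.key and
# healthcheck-client.crt/healthcheck-client.key to be present.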

Repeat for etcd2 and etcd3.

kube-controller-manager:

In a new terminal, get into the kube-ctrlmgr node and type the following command:

(mkdir -p /etc/kubernetes/pki; cd /etc/kubernetes/pki; \
base64 -d | gtar -xzf -)

In terminal1 execute the following to collect the correct certificates (ca.crt, ca.key and sa.key):

(tar --transform='s,./kubernetes-ca/kubernetes-,,' \
--transform='s,-key.pem,.key,' --transform='s,pem,crt,'\
-czf - ./kubernetes-ca/kubernetes-ca.pem \
./kubernetes-ca/kubernetes-ca-key.pem sa.key |base64)

Copy the base64 encoded output from terminal1 and paste it into the terminal, end with a new line and issue the EOF.

In terminal1 execute the following to collect the client configuration file:

base64 < controller-manager.conf

In a new terminal, get into the kube-ctrlmgr node and type the following command:

(base64 -d > /etc/kubernetes/controller-manager.conf)

Copy the base64 encoded output from terminal1 and paste it into the terminal, end with a new line and issue the EOF.

kube-scheduler:

In terminal1 execute the following:

base64 < scheduler.conf

In a new terminal, get into the kube-sched node and type the following command:

(mkdir -p /etc/kubernetes/config; \
base64 -d > /etc/kubernetes/scheduler.conf)

Copy the base64 encoded output from terminal1 and paste it into the terminal, end with a new line and issue the EOF.

kube-apiserver:

In a new terminal, get into the kube-apiserver node and type the following command:

(mkdir -p /etc/kubernetes/pki/etcd; cd /etc/kubernetes/pki; \
base64 -d | gtar -xzf -)

Note: I apparently missed one line in part 1; the apiserver certificate needs to be generated as well, so in terminal1 execute the following:

cfssl gencert -ca=kubernetes-ca/kubernetes-ca.pem -ca-key=kubernetes-ca/kubernetes-ca-key.pem --config=kubernetes-ca/kubernetes-ca-config.json -profile=www apiserver-csr.json | cfssljson -bare apiserver

In terminal1 execute the following:

(tar --transform=s,./etcd-ca/etcd-ca.pem,etcd/ca.crt, \
--transform=s,./kubernetes-ca/kubernetes-ca.pem,ca.crt, \
--transform=s,./kubernetes-front-proxy-ca/kubernetes-front-proxy-ca.pem,front-proxy-ca.crt, \
--transform=s,-key.pem,.key, --transform=s,.pem,.crt, \
-czf - ./etcd-ca/etcd-ca.pem apiserver.pem apiserver-key.pem \
sa.key sa.pub \
./kubernetes-front-proxy-ca/kubernetes-front-proxy-ca.pem \
front-proxy-client.pem front-proxy-client-key.pem \
apiserver-kubelet-client.pem apiserver-kubelet-client-key.pem \
./kubernetes-ca/kubernetes-ca.pem apiserver-etcd-client.pem \
apiserver-etcd-client-key.pem | base64 )

Copy the base64 encoded output from terminal1 and paste it into the terminal, end with a new line and issue the EOF.

Installation of etcd cluster

Either copy the binaries from my github repository or build etcd version 3.5.3 and copy the binaries (etcdctl, etcdutl, etcd) into /opt/local/bin and make them executable:

(cd /opt/local/bin; ls -l etcd etcdctl etcdutl)
-rwxr-xr-x 1 root root 31498230 May 6 15:11 etcd
-rwxr-xr-x 1 root root 24153193 May 6 15:11 etcdctl
-rwxr-xr-x 1 root root 20117614 May 6 15:11 etcdutl
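For reference, getting them into place could look something like this; a sketch that assumes the binaries were downloaded or built into the current working directory:

# Copy the etcd binaries into place and make them executable.
cp etcd etcdctl etcdutl /opt/local/bin/
chmod +x /opt/local/bin/etcd /opt/local/bin/etcdctl /opt/local/bin/etcdutl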

Create the SMF Method script:

APISERVER_PUBLIC_IP=10.100.0.1
CURRENT_ETCD_IP=10.100.0.11 # change for each of etcd1, etcd2, etcd3
CURRENT_ETCD_NAME=etcd1 # change for each of etcd1, etcd2, etcd3
ETCD1_IP=10.100.0.11
ETCD2_IP=10.100.0.12
ETCD3_IP=10.100.0.13
ETCD1_NAME=etcd1
ETCD2_NAME=etcd2
ETCD3_NAME=etcd3
cat << EOF > /lib/svc/method/etcd
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"

#
# Start/Stop client LDAP service
#

. /lib/svc/share/smf_include.sh

case "\$1" in
'start')
exec /opt/local/bin/etcd \
--advertise-client-urls=https://${CURRENT_ETCD_IP}:2379 \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--client-cert-auth=true --data-dir=/var/lib/etcd \
--initial-advertise-peer-urls=https://${CURRENT_ETCD_IP}:2380 \
--initial-cluster=${ETCD1_NAME}=https://${ETCD1_IP}:2380,\
${ETCD2_NAME}=https://${ETCD2_IP}:2380,${ETCD3_NAME}=https://${ETCD3_IP}:2380 \
--initial-cluster-state=new \
--key-file=/etc/kubernetes/pki/etcd/server.key \
--listen-client-urls=https://${CURRENT_ETCD_IP}:2379 \
--listen-metrics-urls=http://127.0.0.1:2381 \
--listen-peer-urls=https://${CURRENT_ETCD_IP}:2380 \
--name=${CURRENT_ETCD_NAME} \
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--peer-client-cert-auth=true \
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--snapshot-count=10000 \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt > /var/log/etcd.log\
2>&1 &
;;

'stop')
exec /usr/bin/pkill etcd
;;

*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF
chmod +x /lib/svc/method/etcd

Create (and import) the SMF Manifest:

cat << EOF > /lib/svc/manifest/etcd.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-etcd" >
<service name="application/etcd" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/etcd start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >ETCD</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF
svccfg import /lib/svc/manifest/etcd.xml

Repeat for etcd2 and etcd3.

Verify that the cluster is up and running.

/opt/local/bin/etcdctl --cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints ${ETCD1_IP}:2379 member list --write-out=table

Sample output:

+------------------+---------+-------+--------------------------+--------------------------+------------+
|        ID        | STATUS  | NAME  |        PEER ADDRS        |       CLIENT ADDRS       | IS LEARNER |
+------------------+---------+-------+--------------------------+--------------------------+------------+
| 7a19edc04c6ca232 | started | etcd2 | https://10.100.0.12:2380 | https://10.100.0.12:2379 |      false |
| 8913811d4fdc6b6c | started | etcd1 | https://10.100.0.11:2380 | https://10.100.0.11:2379 |      false |
| 9fcda4af677ca0d2 | started | etcd3 | https://10.100.0.13:2380 | https://10.100.0.13:2379 |      false |
+------------------+---------+-------+--------------------------+--------------------------+------------+
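Optionally, the health of each member can be checked as well; a sketch using the same certificates and the IP variables defined above, via etcdctl's endpoint health subcommand:

/opt/local/bin/etcdctl --cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints ${ETCD1_IP}:2379,${ETCD2_IP}:2379,${ETCD3_IP}:2379 \
endpoint health --write-out=table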

Installation of Kubernetes Control Plane

kube-apiserver:

First we generate an encryption config, used by the apiserver to encrypt secrets at rest in etcd:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
mkdir -p /etc/kubernetes
cat > /etc/kubernetes/encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Either copy the latest binaries from my github repository or build Kubernetes version 1.24.0, then copy the binaries (kube-apiserver and, optionally, kubectl) into /opt/local/bin and make them executable.
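As with etcd, that could look roughly like this; a sketch that assumes the binaries are in the current working directory:

# Install the control plane binaries and make them executable.
cp kube-apiserver kubectl /opt/local/bin/
chmod +x /opt/local/bin/kube-apiserver /opt/local/bin/kubectl
ls -l /opt/local/bin/kube-apiserver /opt/local/bin/kubectl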

Then, create the SMF Method script:

APISERVER_PUBLIC_IP=10.100.0.1
ETCD1_IP=10.100.0.11
ETCD2_IP=10.100.0.12
ETCD3_IP=10.100.0.13
cat << EOF > /lib/svc/method/kube-apiserver
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"

#
# Start/Stop client LDAP service
#

. /lib/svc/share/smf_include.sh

case "\$1" in
'start')
exec /opt/local/bin/kube-apiserver --advertise-address=${APISERVER_PUBLIC_IP} --allow-privileged=true --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://${ETCD1_IP}:2379,https://${ETCD2_IP}:2379,https://${ETCD3_IP}:2379 --event-ttl=1h --encryption-provider-config=/etc/kubernetes/encryption-config.yaml --kubelet-preferred-address-types=Hostname,InternalIP,ExternalIP,Hostname --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-account-issuer=https://kubernetes.default.svc.cluster.local:6443 --service-cluster-ip-range=10.96.0.0/12 --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --v=0 > /var/log/kube-apiserver.log 2>&1 &
;;

'stop')
exec /usr/bin/pkill kube-apiserver
;;

*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF
chmod +x /lib/svc/method/kube-apiserver

Create (and import) the SMF Manifest:

cat << EOF > /lib/svc/manifest/kube-apiserver.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-apiserver" >
<service name="application/apiserver" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/kube-apiserver start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >Kubernetes Apiserver</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF
svccfg import /lib/svc/manifest/kube-apiserver.xml
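At this point the apiserver should come up; a quick check that the SMF service is online and that the process started cleanly (the log path comes from the method script above):

svcs application/apiserver
tail /var/log/kube-apiserver.log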

kube-controller-manager:

Create the SMF Method script:

cat << EOF > /lib/svc/method/kube-controller-manager
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"

#
# Start/Stop client LDAP service
#

. /lib/svc/share/smf_include.sh

case "\$1" in
'start')
exec /opt/local/bin/kube-controller-manager --bind-address=0.0.0.0 --cluster-name=infracluster --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true --v=2 > /var/log/kube-controller-manager.log 2>&1 &
;;
'stop')
exec /usr/bin/pkill kube-controller-manager
;;

*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF
chmod +x /lib/svc/method/kube-controller-manager

Create (and import) the SMF Manifest:

cat << EOF > /lib/svc/manifest/kube-controller-manager.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-controller-manager" >
<service name="application/controller-manager" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/kube-controller-manager start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >Kubernetes Controller Manager</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF
svccfg import /lib/svc/manifest/kube-controller-manager.xml

kube-scheduler:

Create the Kubernetes Scheduler configuration file:

cat > /etc/kubernetes/config/kube-scheduler.yaml <<EOF
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/scheduler.conf"
leaderElection:
  leaderElect: true
EOF

Create the SMF Method script:

cat << EOF > /lib/svc/method/kube-scheduler
#!/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "%Z%%M% %I% %E% SMI"

#
# Start/Stop client LDAP service
#

. /lib/svc/share/smf_include.sh

case "\$1" in
'start')
exec /opt/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2 > /var/log/kube-scheduler.log 2>&1 &
;;
'stop')
exec /usr/bin/pkill kube-scheduler
;;

*)
echo "Usage: \$0 { start | stop }"
exit 1
;;
esac
EOF
chmod +x /lib/svc/method/kube-scheduler

Create (and import) the SMF Manifest:

cat << EOF > /lib/svc/manifest/kube-scheduler.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
Manifest automatically generated by smfgen.
-->
<service_bundle type="manifest" name="application-scheduler" >
<service name="application/kube-scheduler" type="service" version="2" >
<create_default_instance enabled="true" />
<dependency name="dep0" grouping="require_all" restart_on="error" type="service" >
<service_fmri value="svc:/milestone/multi-user:default" />
</dependency>
<exec_method type="method" name="start" exec="/lib/svc/method/kube-scheduler start" timeout_seconds="30" />
<exec_method type="method" name="stop" exec=":kill" timeout_seconds="30" />
<template >
<common_name >
<loctext xml:lang="C" >Kube Scheduler</loctext>
</common_name>
</template>
</service>
</service_bundle>
EOF
svccfg import /lib/svc/manifest/kube-scheduler.xml

done!

Verify the cluster

Check the respective logs in /var/log (as stated by the SMF method scripts) and verify that no errors are reported. If everything went okay, copy admin.conf to a location of your choice (I go with the default location, /etc/kubernetes/admin.conf) and try the kubectl command.

export KUBECONFIG=/etc/kubernetes/admin.conf
/opt/local/bin/kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8h
/opt/local/bin/kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.0-1+9872372681f707", GitCommit:"9872372681f707db8fd1885bdf5f26f1f34afa29", GitTreeState:"clean", BuildDate:"2022-05-03T21:13:37Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"illumos/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.0-1+9872372681f707", GitCommit:"9872372681f707db8fd1885bdf5f26f1f34afa29", GitTreeState:"clean", BuildDate:"2022-05-03T21:14:02Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"illumos/amd64"}
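Beyond the version handshake, the apiserver's aggregated health endpoints give a more thorough signal that the control plane components can reach each other; a sketch (the componentstatuses API is deprecated but still present in 1.24):

/opt/local/bin/kubectl get --raw='/readyz?verbose'
/opt/local/bin/kubectl get componentstatuses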

To be continued…
