Kubernetes the hard way on bare metal/VMs — Setting up the workers

Part of the Kubernetes the hard way on bare metal/VMs series. This is designed for beginners.

Drew Viles
Dec 14, 2021

Introduction

This guide is part of the Kubernetes the hard way on bare metal/VMs series. On its own it may still be useful to you; however, since it's tailored for the series, it may not be completely suited to your needs.

Configure the workers

You should switch over to the worker nodes now. These commands should not be run on any controller nodes unless you're running a single-node setup where the controller and worker are the same machine.

Again, use tmux/screen as described earlier to run the commands on all workers in parallel.

Getting more binaries

Get fetching those binaries again (ignore kubectl if single node as you should already have it):

wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.3/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.5.8/containerd-1.5.8-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.23.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.23.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.23.0/bin/linux/amd64/kubelet

Create some dirs

sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes

and now get it all installed

mkdir containerd
tar -xvf crictl-v1.22.0-linux-amd64.tar.gz
tar -xvf containerd-1.5.8-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
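As a quick, entirely optional sanity check, each of the installed binaries can report its version, so the following should all print something sensible before you go any further:

runc --version
crictl --version
containerd --version
kubelet --version
kube-proxy --version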

CNI networking

This is the part that trips some people up, as it can cause a little confusion.
The POD_CIDR for THIS worker node is the range that pods will use for networking on THIS worker node only. It's different for each node and is carved out of the main POD_CIDR that was defined at the top of the article.
Remember that? It's 10.200.0.0/16.

Now, when you follow Kelsey’s guide, you’ll see that when he creates the instances in GCE he does:

for i in 0 1 2; do
  ...
  --metadata pod-cidr=10.200.${i}.0/24 \
  ...
done

From this you can see that --metadata pod-cidr is set to a 10.200.x.0/24 network. Since you set 10.200.0.0/16 as the main POD_CIDR at the top of the article, you can use ANY subnet within 10.200.0.0/16 for each worker node, such as 10.200.1.0/24.

So for multiple worker nodes, set this to 10.200.0.0/24, 10.200.1.0/24, 10.200.2.0/24 and so on, one per worker node. This range doesn't need to sit within the 192.168.0.0/24 network that the nodes themselves use, because it's an internal network used only by the cluster.

So, now that’s sorted, let’s set that environment variable per node.

POD_CIDR=10.200.x.0/24
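Typing this by hand on each node works fine, but if your workers follow a predictable naming scheme you can derive it instead. The sketch below assumes hostnames like worker-0, worker-1 and worker-2, so adjust it to your own naming:

# Pull the trailing number out of the hostname and use it as the third octet
i=$(hostname | grep -o '[0-9]*$')
POD_CIDR=10.200.${i}.0/24
echo ${POD_CIDR}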

Config files

Now you can create the CNI bridge network config file.

cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.4.0",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
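If POD_CIDR wasn't set in this shell, the heredoc above will silently write an empty subnet, so it's worth confirming the substitution actually happened:

grep subnet /etc/cni/net.d/10-bridge.conf

You should see the 10.200.x.0/24 range you set for this node. If the value is empty, re-set POD_CIDR and regenerate the file.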

Now the loopback config….

cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.4.0",
    "name": "lo",
    "type": "loopback"
}
EOF

Containerd setup

Of course you can use Docker if you wish; it'll just take some minor alterations. But why bother, when containerd works, is lighter-weight and is what Docker uses under the hood?

Create the Containerd configuration file:

sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
EOF
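Note that this uses containerd's older config layout, which v1.5.8 still accepts. If you're running a newer containerd release that complains about it, a rough equivalent in the version 2 layout (a sketch, not part of the original guide) looks like this:

version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"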

Create the containerd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF

Configure kubelet

The kubelet sits on the worker node and is what makes pods “real”. It receives an instruction to make a pod and in turn creates one within Containerd.

Copy/Move files

sudo cp ~/${HOSTNAME}-key.pem ~/${HOSTNAME}.pem /var/lib/kubelet/
sudo cp ~/${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo cp ~/ca.pem /var/lib/kubernetes/

Generate config

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
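Like the CNI config, this heredoc bakes in ${HOSTNAME} and ${POD_CIDR}, so it's worth confirming both expanded properly:

grep -E 'podCIDR|tlsCertFile' /var/lib/kubelet/kubelet-config.yaml

You should see this node's pod range and its certificate path, not empty quotes.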

Generate the SystemD service

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Configure kube-proxy

The kube-proxy service is much like the kubelet, but for the networking side of things.

It's what allows NodePorts and the like to work. It receives instructions describing what a Service requires and makes them “real” via iptables or IPVS rules.
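Once kube-proxy is up and running (you'll start it at the end of this article), you can see those rules for yourself. Assuming iptables mode, which is what's configured below, something like this will show the chain kube-proxy maintains for Services:

sudo iptables -t nat -L KUBE-SERVICES -n | head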

Generate config

sudo cp kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

Generate the SystemD Service

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Now enable and start it all

**Before you do, make sure swap is off by running sudo swapoff -a if you have swap space.**

You shouldn't have created the server with swap in the first place.
If you do have swap, swapoff -a will only disable it until your next reboot. To disable it permanently, comment the swap entry out of `/etc/fstab`.
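A common one-liner for that, assuming your swap entry lives in /etc/fstab rather than somewhere like a systemd swap unit, is shown below; check the file afterwards to be sure only the swap line was touched.

# Comment out any fstab line containing a swap entry
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab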

Confirm SWAP is off with:

free -h

##Results
              total        used        free      shared  buff/cache   available
Mem:           7.7G        692M        339M        104M        6.7G        6.6G
Swap:            0B          0B          0B

Now you can continue by enabling and starting the services.

sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
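Before jumping back to the controllers, it's worth checking on the workers that all three services actually came up:

sudo systemctl status containerd kubelet kube-proxy --no-pager

If anything is failing, the kubelet logs are usually the most informative place to start:

sudo journalctl -u kubelet -f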

Verify it all

Jump back over to the controllers, since that's where you currently have the admin.kubeconfig stored, and run:

kubectl get nodes --kubeconfig admin.kubeconfig

##Results
NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   52s   v1.23.0
worker-1   Ready    <none>   52s   v1.23.0
worker-2   Ready    <none>   52s   v1.23.0

Conclusion

You’ve configured the services required for the workers.

Next: Setting up remote access
