Kubernetes the hard way on bare metal/VMs — Setting up the workers

Part of the Kubernetes the hard way on bare metal/VMs series

Drew Viles
4 min read · Dec 14, 2018

Introduction

This guide is part of the Kubernetes the hard way on bare metal/VMs series. On its own it may still be useful to you; however, since it's tailored for the series, it may not be completely suited to your needs.

Configure the workers

You should switch over to the worker nodes now. These commands should not be run on any controller nodes unless you're running a single-node setup where the controller and worker are the same machine.

Again, use tmux/screen as described before to run in parallel.
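If you're using tmux, synchronised panes make it easy to run the same commands on every worker at once. A minimal sketch, assuming you already have one pane SSH'd into each worker:

# Toggle synchronised input on: every keystroke now goes to all panes in the window
tmux set-window-option synchronize-panes on

# ...run the worker setup commands below...

# Toggle it back off when you need to run per-node commands (e.g. setting POD_CIDR)
tmux set-window-option synchronize-panes off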

Getting more binaries

Get fetching those binaries again (ignore kubectl if single node as you should already have it):

wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet

Create some dirs

sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

And now get it all installed:

sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
sudo mv runc.amd64 runc
chmod +x kubectl kube-proxy kubelet runc runsc
sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
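
As a quick sanity check, confirm the binaries landed on the PATH and are executable (the versions reported will simply reflect whatever you downloaded above):

runc --version
runsc --version
crictl --version
containerd --version
kubectl version --client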

CNI networking

This is the part where some people fall over, because it can cause a little confusion.
The POD_CIDR for THIS worker node is the range that the pods will use for networking on THIS worker node. It is different per node and is sliced out of the main POD_CIDR that was defined at the top of the article.
Remember that? It’s 10.200.0.0/16.

Now, when you follow Kelsey’s guide, you’ll see that when he creates the instances in GCE he does:

for i in 0 1 2; do
  ...
  --metadata pod-cidr=10.200.${i}.0/24
  ...
done

From this you can see that --metadata pod-cidr is set to a 10.200.x.0/24 network. Since you set 10.200.0.0/16 as the main POD_CIDR at the top of the article, you can specify ANY subnet within 10.200.0.0/16 for each worker node, such as 10.200.1.0/24.

If you have several worker nodes, set this to 10.200.1.0/24, 10.200.2.0/24, 10.200.3.0/24 and so on, one subnet per worker. It doesn't need to be within the 192.168.0.0/24 network that the nodes themselves sit on, because it is an internal network used only by the cluster.

So, now that’s sorted, let’s set that environment variable per node.

POD_CIDR=10.200.x.0/24
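
Replace the x with this node's number. If your workers are named worker-1, worker-2 and so on, a small optional sketch like this derives the value from the hostname instead; it assumes the hostname ends in that number:

# Hypothetical helper: pull the trailing digit(s) from a hostname such as worker-1
i=$(hostname | grep -o '[0-9]*$')
POD_CIDR=10.200.${i}.0/24
echo ${POD_CIDR}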

Config files

Now you can create the CNI bridge network config file.

cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
            [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

Now the loopback config….

cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
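
Before moving on, it's worth confirming that ${POD_CIDR} was actually expanded into the bridge config; a literal "${POD_CIDR}" left in the file is a common reason pods fail to get an IP later:

cat /etc/cni/net.d/10-bridge.conf
# The "subnet" field should show this node's range (e.g. 10.200.1.0/24),
# not the literal string ${POD_CIDR}.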

Containerd setup

Of course you can use Docker if you wish; it'll just take some minor alterations… but why bother when this works (and is slightly lighter weight)!

Create the containerd configuration file:

sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
    [plugins.cri.containerd.gvisor]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
EOF

Untrusted workloads will be run using the gVisor (runsc) runtime.
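
With that config in place, a pod opts into the runsc runtime via the io.kubernetes.cri.untrusted-workload annotation, which the containerd CRI plugin of this era understands. A minimal illustrative manifest to try once the cluster is up (the name and image are just examples):

cat <<EOF > untrusted-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: nginx
EOF
# Then, from wherever you have a working admin kubeconfig:
# kubectl apply -f untrusted-pod.yaml --kubeconfig admin.kubeconfig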

Create the containerd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF

Configure kubelet

Copy/Move files

sudo cp ~/${HOSTNAME}-key.pem ~/${HOSTNAME}.pem /var/lib/kubelet/
sudo cp ~/${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo cp ~/ca.pem /var/lib/kubernetes/

Generate configs

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
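
As with the CNI config, a quick grep confirms the heredoc substituted ${HOSTNAME} and ${POD_CIDR} rather than writing them out literally:

grep -E 'podCIDR|tlsCertFile|tlsPrivateKeyFile' /var/lib/kubelet/kubelet-config.yaml
# Expect this node's pod range and its own cert/key paths in the output.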

Now the service!

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

kube-proxy configs and service

sudo cp kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Now enable and start it all

**Before you do, make sure swap is off with sudo swapoff -a if you have swap space**

Confirm with:

free -h

##Results
       total   used   free   shared  buff/cache  available
Mem:    7.7G   692M   339M     104M        6.7G       6.6G
Swap:     0B     0B     0B

If you do have swap, this will only disable it until your next reboot. To disable it permanently, comment the swap entry out of /etc/fstab.
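
If you'd rather not edit /etc/fstab by hand, a one-liner along these lines comments out any swap entries (check the result afterwards; it assumes a fairly standard fstab layout):

# Keeps a backup of the original as /etc/fstab.bak
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab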

Now you can continue by enabling and starting the services.

sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
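
If anything fails to come up, the unit status and the kubelet journal are the usual first ports of call:

sudo systemctl status containerd kubelet kube-proxy --no-pager
sudo journalctl -u kubelet --no-pager | tail -n 50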

Verify it all

Jump back over to the controllers, since this is where you currently have admin.kubeconfig stored, and run:

kubectl get nodes --kubeconfig admin.kubeconfig

##Results
NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   52s   v1.12.0
worker-1   Ready    <none>   52s   v1.12.0
worker-2   Ready    <none>   52s   v1.12.0
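
If a node shows NotReady instead, describing it from the controller usually points at the cause (missing CNI config, cert problems, swap still enabled and so on):

kubectl describe node worker-0 --kubeconfig admin.kubeconfig
# Check the Conditions and Events sections near the bottom of the output.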

Conclusion

You’ve configured the services required for the workers.

Next: Setting up remote access
