Lowest Price for Your Own Kubernetes Using Hetzner Cloud (incl. Storage Provisioner)

Thoren Lederer
8 min readMar 15, 2024

In my day job as a software architect and developer, I use Kubernetes every day. It is amazing software and helps a lot, not only for scaling highly available workloads but also for quickly testing and deploying Docker containers in a cloud environment.

But when it comes to my home lab, all managed Kubernetes options are very expensive (Scaleway is about $150 for a “basic” home lab with some power, and DigitalOcean is much more expensive).

10 vCPU, 24 GB RAM, 100 GB disk for $50 / month

So I decided to try self-hosting it in the Hetzner cloud environment.

Here is a step-by-step tutorial on how you can build your own Kubernetes home lab with more than 30 GB for no more than $60 per month.

1. Create a Hetzner Cloud Account

You need to create a Hetzner cloud account.

Create a project named “homelab”. The goal is to create three different nodes inside the project so that in the end it looks like this:

2. Create each server (Master, Worker and Persistence)

We choose Helsinki because it's the cheapest location.

For kube-master and kube-worker1, select a CPX31 so that you have enough power.

For the master and the worker, select the latest Ubuntu version available on Hetzner Cloud.

For kube-persistence, select the CX31 flavor and CentOS.

Also create a volume with 100 GB for later.

Finally, your servers should look similar to this:
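
If you prefer the CLI over the web console, the same setup can be created with the hcloud CLI. This is only a sketch; the server names, the SSH key name, and the exact image names are assumptions and may differ in your account:

your-local-env: ~#

hcloud server create --name kube-master --type cpx31 --image ubuntu-22.04 --location hel1 --ssh-key mykey
hcloud server create --name kube-worker1 --type cpx31 --image ubuntu-22.04 --location hel1 --ssh-key mykey
hcloud server create --name kube-persistence --type cx31 --image centos-stream-9 --location hel1 --ssh-key mykey
hcloud volume create --name persistence-data --size 100 --server kube-persistence --automount --format ext4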

3. Private Network Configuration

Create a private network and add all nodes to this private network.
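
With the hcloud CLI this could look roughly like the following sketch (the network name and IP ranges are assumptions; adjust them to your project):

your-local-env: ~#

hcloud network create --name homelab-net --ip-range 10.0.0.0/16
hcloud network add-subnet homelab-net --network-zone eu-central --type cloud --ip-range 10.0.0.0/24
hcloud server attach-to-network kube-master --network homelab-net --ip 10.0.0.4
hcloud server attach-to-network kube-worker1 --network homelab-net --ip 10.0.0.2
hcloud server attach-to-network kube-persistence --network homelab-net --ip 10.0.0.3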

4. Prepare Everything

To make it easier to follow, we have three nodes:

  1. kube-master — 10.0.0.4
  2. kube-worker1 — 10.0.0.2
  3. kube-persistence — 10.0.0.3

You can also have more than three nodes; the steps above can be repeated for each of them. Or you can start with these three and later create an image from your kube-worker1 to add more workers.

Update all packages on all machines to the newest versions.

kube-master:~#
kube-worker1:~#

sudo apt-get update
sudo apt-get upgrade -y

kube-persistence:~#

sudo yum update -y

Create an ssh key pair on the master and save it in your password tool. You will need it later.

kube-master:~#

sudo ssh-keygen -t rsa -b 4096
sudo cat /root/.ssh/id_rsa.pub

Next, go to kube-master, kube-worker1, and kube-persistence and add the public key to the file /root/.ssh/authorized_keys.

kube-worker1:~# 
&
kube-master:~#

echo "$yourpublickey" >> /root/.ssh/authoirzed_keys
# restart ssh service
sudo systemctl restart ssh.service
kube-persistence:~#

echo "$yourpublickey" >> /root/.ssh/authoirzed_keys
# restart ssh service
service sshd restart

Now let's validate that you can connect to the other nodes from the master.

kube-master1:~#

ssh root@10.0.0.3
# you should now be connected from the master to persistence and worker node
[root@kube-persistence ~]#

5. Install Kubernetes with Kubespray and Ansible

kube-master1:~#

sudo apt install python3-pip -y
sudo apt install python3-virtualenv -y

git clone https://github.com/kubernetes-sigs/kubespray.git

VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
ANSIBLE_VERSION=2.12
virtualenv --python=$(which python3) $VENVDIR
source $VENVDIR/bin/activate
(kubespray-venv) kube-master1:~# 
cd $KUBESPRAYDIR

pip install -U -r requirements.txt

cp -rfp inventory/sample inventory/mycluster

Next, declare the IP addresses before starting kubespray.

(kubespray-venv) kube-master1:~#

declare -a IPS=(10.0.0.4 10.0.0.2)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
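
The generated inventory/mycluster/hosts.yaml should look roughly like this; node1 (10.0.0.4) becomes the control plane and etcd node, node2 (10.0.0.2) a worker:

all:
  hosts:
    node1:
      ansible_host: 10.0.0.4
      ip: 10.0.0.4
      access_ip: 10.0.0.4
    node2:
      ansible_host: 10.0.0.2
      ip: 10.0.0.2
      access_ip: 10.0.0.2
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}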

Now check that “calico” is configured as the network plugin.

(kubespray-venv) kube-master1:~#

nano inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# ----
# find this Line:
## Choose the network plugin (cilium, calico, kube-ovn, weave, or flann...
# ...
kube_network_plugin: calico

Finally, run the Ansible script and install Kubernetes.

(kubespray-venv) kube-master1:~#

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

Now wait 10–20 minutes until the installation has finished.

Error Handling

TASK [bootstrap-os : Fetch /etc/os-release] **************************************************************************************************************
fatal: [node2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.0.0.4' (ED25519) to the list of known hosts.\r\nroot@10.0.0.4: Permission denied (publickey,password).", "unreachable": true}

If you see the error above, it means you missed adding your public key (created in the preparation step) to the /root/.ssh/authorized_keys file on that node.

Let's validate the installation

If you see something like this:

...
container-engine/containerd : Containerd | Unpack containerd archive ------------------------------------------ 3.53s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources --------------------------------------------------- 3.44s
container-engine/crictl : Extract_file | Unpacking archive ---------------------------------------------------- 3.37s

That means your installation is finished. Now let's execute some commands to check.

(kubespray-venv) kube-master1:~#

kubectl get pods -A

Hooray!

6. Download the config and test with a client

Now you can view and download the kubeconfig file.

(kubespray-venv) kube-master1:~#

cat /root/.kube/config

# Copy the output of this to your local ~/.kube/config

# You only need to change this line:
# server: https://127.0.0.1:6443
# to: https://YOUR_PUBLIC_MASTER_IP:6443
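
Instead of copying and pasting, you can also fetch and patch the file from your local machine; this is just a sketch, and the target filename is an example:

your-local-env: ~#

scp root@YOUR_PUBLIC_MASTER_IP:/root/.kube/config ~/.kube/config-hetzner
# on macOS use: sed -i '' ...
sed -i 's#https://127.0.0.1:6443#https://YOUR_PUBLIC_MASTER_IP:6443#' ~/.kube/config-hetzner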

You can also merge multiple Kubernetes clusters into one config file by combining the “clusters”, “contexts”, and “users” sections.
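
kubectl can do this merge for you: point KUBECONFIG at both files and flatten the result into one file. A sketch, assuming the filename from above:

your-local-env: ~#

KUBECONFIG=~/.kube/config:~/.kube/config-hetzner kubectl config view --flatten > ~/.kube/config-merged
mv ~/.kube/config-merged ~/.kube/config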

Now let's see if we can connect to the cloud.

your-local-env: ~#

kubectl get pods -A

After running kubectl get pods you should see all your pods from the Hetzner cloud.

7. (Optional) Use Aptakube to get a good dashboard for your cluster

When you switch between different Kubernetes clusters, it becomes very annoying to change your config file every time before connecting. With the “Aptakube” application it becomes very easy to connect, see all your relevant information, or open shells.

8. Install the Metrics Server

When you use Aptakube, you will find that the “nodes” tab does not show any CPU or memory usage. To change this, you need to install the metrics server in Kubernetes.

Missing the metrics of the nodes.
(kubespray-venv) kube-master1:~#

curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

nano components.yaml

Now add this line to components.yaml to allow self-signed kubelet certificates.

- --kubelet-insecure-tls
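
The flag belongs in the args list of the metrics-server container inside the Deployment. In recent releases that section should end up roughly like this (the other flags may differ slightly between versions):

    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls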

Next, run the kubectl apply command.

kubectl apply -f components.yaml

Check whether the metrics-server pod is running. If you then list the nodes with kubectl or look at Aptakube, it should look like this:

Hooray! We see some performance metrics from our nodes.

Finally getting some metrics.
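
You can also verify this from the command line on the master (kubectl top only returns data once the metrics server has been scraping for a minute or so):

(kubespray-venv) kube-master1:~#

kubectl get pods -n kube-system | grep metrics-server
kubectl top nodes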

9. Install a Persistence Server

After installing Kubernetes, you need to add persistent storage to your cluster, because otherwise you cannot launch pods that require persistent volume claims. For that you need to install a storage class.

To keep it simple, create an NFS server with a shared directory and add it to Kubernetes as a storage class. Kubernetes then creates a directory for each persistent volume and saves the data inside it. For more flexibility, put the share on a Hetzner volume so you can extend the volume as needed.
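
For illustration, a pod that needs storage will later simply reference a PersistentVolumeClaim like the following; the storage class named managed-nfs-storage is the one we create further below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi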

Connect with the shell to your persistence machine.

kube-persistence:~#

sudo yum install -y nfs-utils
systemctl start nfs-server rpcbind
systemctl enable nfs-server rpcbind

Next, go into your mounted volume at /mnt/HC_Volume_XXXX/ and create a directory.

kube-persistence:~#

cd /mnt/HC_Volume_X/
mkdir nfsshare

chmod 777 nfsshare

In the next step, add the directory to the NFS exports.

kube-persistence:~#

nano /etc/exports

# Add this entry:
/mnt/HC_Volume_XXXXX/nfsshare 10.0.0.0/24(rw,sync,no_root_squash)

# save the file

Now reload the exports.

kube-persistence:~#

exportfs -r

Let's check the exported directory from the master and worker machines.

kube-master1:~# 
+
kube-worker1:~#

apt install nfs-common

showmount -e 10.0.0.3

This command should then show you:

Export list for 10.0.0.3:
/mnt/HC_Volume_XXXX/nfsshare 10.0.0.0/24

Create a namespace for the provisioner.

kubectl create namespace k8s-nfs-storage

Next, check out the repository for the NFS provisioner.

git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator 

cd kubernetes-incubator/nfs-client/

sed -i'' "s/namespace:.*/namespace: k8s-nfs-storage/g" ./deploy/rbac.yaml
sed -i'' "s/namespace:.*/namespace: k8s-nfs-storage/g" ./deploy/deployment.yaml

kubectl create -f ./deploy/rbac.yaml

Now edit the provisioner yaml.

nano ./deploy/deployment.yaml

Modify the highlighted parts of the original file: the provisioner name, the NFS server address, and the export path.
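
Concretely, the container env and the NFS volume in deploy/deployment.yaml should end up pointing at our NFS server, roughly like this (the provisioner name shown is the repository default; you can keep it or pick your own):

          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.3
            - name: NFS_PATH
              value: /mnt/HC_Volume_XXXX/nfsshare
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.3
            path: /mnt/HC_Volume_XXXX/nfsshare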

Then modify the storage class and set the provisioner name to the same identifier you used above.

nano deploy/class.yaml
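
The file is small; the important part is that the provisioner value matches the PROVISIONER_NAME from the deployment (shown here with the repository defaults):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # must match PROVISIONER_NAME in deployment.yaml
parameters:
  archiveOnDelete: "false"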

Next create the provisioner and the storage class.

kubectl create -f deploy/class.yaml
kubectl create -f deploy/deployment.yaml

After that you should see the storage class in your cluster. Set this storage class as the default.

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Hooray, here is our new storage class.
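
You can double-check that it is now marked as the default:

kubectl get storageclass

# the storage class should now show "(default)" next to its name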

10. Let's test with a PostgreSQL Database (Optional)

If you want to test your Kubernetes cluster, use Helm to deploy a PostgreSQL database that creates a persistent volume.

The command deploys a Helm chart with everything needed for a Postgres database.

helm install postgresdb1 oci://registry-1.docker.io/bitnamicharts/postgresql

After that, you will see some persistent volume claims and persistent volumes in your cluster.
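
You can list them like this:

your-local-env: ~#

kubectl get pvc
kubectl get pv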

And if you look into the shared mount on kube-persistence, you will see a directory like this:

You made it! Congrats!
