Setting up K8s on a dedicated Hetzner “Robot” Server


I am documenting my first experience with Hetzner, a dedicated server provider that offers great service at great value. Compared to what I would have done in the past on AWS, GCP, Azure, or even DigitalOcean, Hetzner offers much cheaper options, even if it requires a bit more DIY.

In my case I want to set up a Kubernetes cluster. Compared to AWS EKS it's a little more work, but thanks to Hetzner's continuously improving offering and open source projects like CAPH it's quite manageable, at least to get up and running.

In this article I will describe the quick setup I followed to try it out. It does not cover all the production-grade steps you would need to take, but we will go from zero to a running cluster and serve a web page over HTTPS using a Let's Encrypt issuer.

This will be step one of putting Percolate (P8s) on K8s on Hetzner.

P8s on K8s on Hetzner

[I] Creating the Hetzner account and server

First create an account on Hetzner.

To create an account you may need to pass some security checks and identify yourself; I did so using my passport.

Once you have done this, there are different server options, but in this article I'll describe what I did with a dedicated bare metal server. There was a one-time setup fee of $44 and I got this server for $66 per month for now.

You will get an email once the server is set up; it provides the IP address so you can SSH in.
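For example (the IP below is a placeholder; use the one from the setup email) -

# placeholder IP from the setup email; dedicated servers arrive with root access
ssh root@203.0.113.10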

The menu at the top right of Hetzner's console has a few different areas, including Cloud and Robot. In Cloud you can create a project, from which you can add SSH keys and API keys. You can add a new web user in the Robot console. The layout is a little counterintuitive to me when switching between Robot and Cloud, and even after I did this once or twice I forgot where the menus were, so I'm adding pictures. In the Robot console you can find settings and then add a web service user.

The username is auto-generated and the password is what you choose. If you do this and add an API key and SSH key, you should be able to fill in the following environment variables. Some of these we will need later:

export HETZNER_CLOUD_TOKEN=YOUR_TOKEN # duplicate of HCLOUD_TOKEN below; can't recall if both are needed
export HCLOUD_TOKEN=YOUR_TOKEN
export HETZNER_ROBOT_USER=USER_FROM_ABOVE
export HETZNER_ROBOT_PASSWORD=PASSWORD_FROM_ABOVE
export SSH_KEY_NAME=YOUR_KEY_NAME
export HETZNER_SSH_PUB_PATH="~/.ssh/YOUR_KEY_NAME.pub"
export HETZNER_SSH_PRIV_PATH="~/.ssh/YOUR_KEY_NAME"
# note: machine types refer to cloud VMs, which are separate from the bare metal servers
export HCLOUD_CONTROL_PLANE_MACHINE_TYPE="cpx31" # example VM type for the control plane
export HCLOUD_WORKER_MACHINE_TYPE="cpx31" # example
# you can mix bare metal with VMs, e.g. bare metal for heavy workloads and VMs for control planes
export HCLOUD_REGION=hel1 # I set this one up in Helsinki
export KUBERNETES_VERSION="1.30.5"
export WORKER_MACHINE_COUNT=3 # example setting

For the Kubernetes part we can use the Cluster API for Hetzner. I wrote about Cluster API last year on Medium here, and I found it quite interesting at the time. It's a great way to manage clusters in the same dialect you use to manage all your other Kubernetes resources. Their logo shows that it's turtles (K8s) all the way down.

Cluster API: it's K8s all the way down

[II] Setting up the K8s Cluster

Cluster API requires that you set up a management cluster. You can bootstrap locally with kind or use any other cluster you already have. Let's use kind. I'm on a Mac, but you can follow their getting started guide for your platform; it's easy to set up.

brew install kind
brew install clusterctl
# create a management cluster
kind create cluster --name kind-control
# I'm also going to install Hetzner's CLI
brew install hcloud
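A quick sanity check that the management cluster is reachable (kind prefixes context names with kind-, so the context below follows from the cluster name above) -

# confirm the management cluster exists and responds
kind get clusters
kubectl cluster-info --context kind-kind-control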

For the Hetzner flavour we use a specific Cluster API project, CAPH. There are two guides I followed when trying this out; neither was crystal clear, but they were good enough to get started. It's probably best to get a general idea first from the Syself docs. In my case I'm going the bare metal route, which they discuss in this section, but they also contributed an article here.

In their notes they seem to assume a default setting for the infrastructure provider; throughout, you may need to add `--infrastructure hetzner` to the `clusterctl` commands.

With Cluster API, keep in mind that you have a management cluster (kind, in our case) from which you generate the actual workload clusters.

We will use the environment variables we already set up above in the following sections.

# Initialize cluster api
clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure hetzner

We are going to create some K8s secrets from our environment variables:

kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN --from-literal=robot-user=$HETZNER_ROBOT_USER --from-literal=robot-password=$HETZNER_ROBOT_PASSWORD

kubectl create secret generic robot-ssh --from-literal=sshkey-name=$SSH_KEY_NAME \
--from-file=ssh-privatekey=$HETZNER_SSH_PRIV_PATH \
--from-file=ssh-publickey=$HETZNER_SSH_PUB_PATH

Patch them with the move label so clusterctl can carry them over to target clusters, as per the guide:

kubectl patch secret hetzner -p '{"metadata":{"labels":{"clusterctl.cluster.x-k8s.io/move":""}}}'
kubectl patch secret robot-ssh -p '{"metadata":{"labels":{"clusterctl.cluster.x-k8s.io/move":""}}}'

Kubernetes has kinds (object types) for different resources. One of these is a machine type that CAPH provides: HetznerBareMetalHost.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HetznerBareMetalHost
metadata:
  name: baremetal-1
  annotations:
    capi.syself.com/wipe-disk: all
spec:
  description: My first bare metal machine
  serverID: # if you go to Robot in Hetzner and list servers, this is the number part of the ID
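Assuming you save the manifest above as baremetal-1.yaml (the filename is arbitrary), creating the host is a plain apply, after which you can list the CAPH host objects -

# hypothetical filename; fill in serverID before applying
kubectl apply -f baremetal-1.yaml
kubectl get hetznerbaremetalhosts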

With the host resource created, you can generate a manifest to create new workload clusters as below:

clusterctl generate cluster my-cluster --infrastructure hetzner --flavor hetzner-hcloud-control-planes | kubectl apply -f -

Once you have created a cluster via apply, you can use the following to get the kubeconfig to connect:

clusterctl get kubeconfig my-cluster > hetzner-cluster-kubeconfig.yaml
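If the kubeconfig is not available yet, the cluster is probably still provisioning; you can watch progress from the management cluster while you wait -

# status tree for the workload cluster and its machines
clusterctl describe cluster my-cluster
kubectl get machines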

Carefully add the contents of this config to your existing kubeconfig, or use it in its place; for example, below in a terminal session we just use it directly. We are going to add some essentials to our cluster:

  • the CCM (Hetzner's), which manages, among other things, load balancers
  • a CNI, which is important (we use Flannel)
  • an ingress controller

export KUBECONFIG=hetzner-cluster-kubeconfig.yaml

# Install Hetzner CCM
helm repo add syself https://charts.syself.com/
helm repo update syself
helm install ccm syself/ccm-hetzner -n kube-system

# Install Flannel CNI - You can use your preferred CNI instead, e.g. Cilium
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Ingress controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
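As an optional sanity check that the controller came up -

# the controller pod should be Running; its service is what the CCM will back with a load balancer
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx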

After we first install the CCM with the command just above, we need to edit its deployment (I have set VS Code as my editor) because we need to provide the secrets that we set up for our Robot user (as per the screenshots earlier). Be careful with indentation when adding the elements.

# we are editing the deployment after the fact to add the secrets
kubectl edit deployment ccm-ccm-hetzner -n kube-system

# add these under the container's env section (watch the indentation):
        - name: HCLOUD_TOKEN
          valueFrom:
            secretKeyRef:
              key: hcloud
              name: hetzner
        - name: ROBOT_USER
          valueFrom:
            secretKeyRef:
              key: robot-user
              name: hetzner
        - name: ROBOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: robot-password
              name: hetzner
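To sanity-check that the variables landed (assuming the deployment name from above), you can list the deployment's environment and confirm the rollout -

# list env vars on the CCM deployment and wait for the rollout to finish
kubectl set env deployment/ccm-ccm-hetzner -n kube-system --list
kubectl rollout status deployment ccm-ccm-hetzner -n kube-system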

When we create Ingress artifacts in K8s later, we will just need one extra thing for Hetzner: an annotation that says where to place things physically. For example, you can annotate after the fact with the command below; in this case I'm using one of their sites in Helsinki, hel1. The CCM auto-provisions load balancers in the correct locations when you have configured things properly, and when you add an ingress with the right annotation, a load balancer will be created with a public IP.

kubectl annotate svc ingress-nginx-controller -n ingress-nginx load-balancer.hetzner.cloud/location=hel1
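After annotating, watch the service until the CCM swaps the pending address for a public one -

# EXTERNAL-IP should move from <pending> to a real address
kubectl get svc ingress-nginx-controller -n ingress-nginx -w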

We will add a service with ingress in a moment.

Note that if you need to check that the CCM is configured with secrets, or re-add them, these commands might be handy:

# create this secret if it does not exist
kubectl create secret generic hcloud --from-literal=token=<HCLOUD_TOKEN_ENV> -n kube-system
kubectl get secrets -n kube-system | grep hcloud
# restart the CCM (the deployment name depends on how it was installed)
kubectl rollout restart deployment hcloud-cloud-controller-manager -n kube-system

At this stage you have reached an important milestone:

  • you set up a local kind management cluster and generated artifacts to create a new workload cluster
  • you made sure the CCM, CNI, and ingress controller were installed on this new cluster

If you want to deploy a dummy service, you can use an nginx Docker image. You can ask GPT to generate something standard like the example I placed in the notes. You will need a deployment, a service, and an ingress. Two points:

  • Make sure your service is set to ClusterIP and not LoadBalancer so that the ingress controller can be used
  • Do not add any host at first unless you have configured DNS; we will do that in the last section

When you deploy this deployment-service-ingress manifest, you should be able to check the ingress's public IP:

kubectl get ingress # check the IP and curl or browse
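For example, assuming the ingress reports 203.0.113.20 (a placeholder) as its address, a plain curl should return the nginx welcome page -

# placeholder IP; substitute the ADDRESS from kubectl get ingress
curl -i http://203.0.113.20/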

[III] Setting up SSL and your domain

The last thing to do is configure your domain so that you can browse over HTTPS. We will use Let's Encrypt to provide the SSL certificate.

  1. In the Hetzner cloud console I double-checked that I can see my load balancer in the Load Balancers section and confirmed the IP matches what my test ingress showed for its external IP.
  2. I have a GoDaddy domain, so I updated the A records to point to this IP. Hetzner can manage DNS too, but I did not mess with that for now.

Now install and check cert-manager:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
#check it - should be three pods happily running
kubectl get pods -n cert-manager

Three pods should be running.

Now apply a ClusterIssuer; I have added an example YAML in the notes below. Change the email address and use it as is.

You can also use the staging rather than the production version of this for testing, as sketched below.
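As a sketch, a staging issuer differs from the production example in the notes only in its name and the ACME server URL -

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: your-email@example.com # Change this!
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx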

Next, optionally apply a Certificate; I have also added an example of this in the notes. This step is optional if you instead add the cluster-issuer annotation on the ingress, as we do below, but I added it for reference anyway.

This process creates a whole bunch of objects and you can use kubectl to keep an eye on them: `kubectl get X`, where X is cert, order, secret, and later challenge, etc.
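Concretely, these are cert-manager's own resource kinds worth watching; my-tls-cert matches the Certificate example in the notes -

# certificates plus the ACME objects cert-manager creates along the way
kubectl get certificate,certificaterequest,order,challenge -A
kubectl describe certificate my-tls-cert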

Ingress update

We need another annotation for our ingress. Add this second one under annotations; the issuer matches the name in our ClusterIssuer YAML (see the full Ingress example in the notes near the bottom):

  annotations:
    load-balancer.hetzner.cloud/location: hel1 # Adjust to your Hetzner region
    cert-manager.io/cluster-issuer: letsencrypt-prod

Also add hosts (now that we are ready to link our domain name) and use the secret from our cert manifest (as per the example in the notes). You add the TLS section as below, but also add the host to the regular route; see the example.

spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    - www.example.com
    secretName: my-tls-secret

Update your manifest by applying the ingress, and keep an eye on the cert-manager logs:

kubectl logs -n cert-manager deploy/cert-manager

Below are some checks to run on your DNS (that it maps to your load balancer IP) and on the ACME challenge path:

nslookup site.com
# check the ACME challenge path responds as expected
curl -I http://percolationlabs.ai/.well-known/acme-challenge/test

Browse to your site in your preferred browser and make sure you get no security warnings; you should see the nginx test site or whatever you are running.

In my case I set up a simple landing page for PercolationLabs as part of the Percolate project, which you can read more about in my other articles. You can check out this minimal site here, assuming I'm not messing with it at the time. Please don't judge my design skills.

Summary

That was not so painful. I have done this sort of thing on, say, EKS many times in the past, and even though I called this "DIY" by comparison, it honestly went smoothly, and the price certainly hurts less than AWS. I might even go so far as to say it was fun.

If you made it this far, please clap for the article and see you next time!

Commands and manifests

hcloud context create [YOUR CONTEXT]
# find a node's provider ID
kubectl describe node
# ProviderID: hcloud://PID
hcloud load-balancer list
hcloud load-balancer describe NAME
# manually add a server target
hcloud load-balancer add-target LB_ID --server PID
# list server types
hcloud server-type list
# create a new load balancer - should not be needed as the CCM adds them
hcloud load-balancer create --name test --type lb11 --location hel1
# manually add targets by label selector - should also not be needed
hcloud load-balancer add-target test --label-selector "node-role.kubernetes.io/worker"
# annotate the ingress service if not already done, for example:
kubectl annotate svc ingress-nginx-controller -n ingress-nginx load-balancer.hetzner.cloud/location=hel1
# OR
kubectl annotate svc ingress-nginx-controller -n ingress-nginx load-balancer.hetzner.cloud/network-zone=eu-central
# big picture on a new cluster
kubectl get all --all-namespaces

# check cert-manager logs
kubectl logs -n cert-manager deploy/cert-manager

Cluster issuer example -

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: your-email@example.com # Change this!
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx

Cert request example -

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-tls-cert
  namespace: default # Change to your service's namespace
spec:
  secretName: my-tls-secret # Will store the TLS certificate
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - example.com # Change this to your domain
  - www.example.com

Complete deployment-service-ingress example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    load-balancer.hetzner.cloud/location: hel1 # Adjust to your Hetzner region
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx
  namespace: default
  annotations:
    load-balancer.hetzner.cloud/location: hel1 # Adjust to your Hetzner region
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - test.com
    secretName: test-secret
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service # Replace with your actual service name
            port:
              number: 80

Notes

It's useful to set up a DNS challenge using this guide; be sure to add the base64-encoded DNS API token, which you can request from the Hetzner cloud console. I thought the secret should be added in the cert-manager namespace, but it may be looked up in the namespace where your cert is. I also added RBAC to allow the Hetzner cert-manager webhook to list secrets in the namespace.
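As a sketch of that secret, written the YAML way since the token must be base64-encoded in the data field; the name and key here are hypothetical, so check the webhook's chart for what it actually expects -

apiVersion: v1
kind: Secret
metadata:
  name: hetzner-dns-token # hypothetical name; match what the webhook expects
  namespace: cert-manager
data:
  api-token: <base64 token> # hypothetical key; e.g. echo -n "$TOKEN" | base64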
