There’s no place like K3d continued

Haggai Philip Zagury · Published in Israeli Tech Radar
4 min read · May 18, 2024

This is my paraphrase of “there’s no place like home”, for the second time around, in the context of Kubernetes.

As part of my journey as a DevOps Engineer and Architect at Tikal, I often explain the internals of Kubernetes. In my recent post series (parts 1, 2, and 3), I discussed various aspects of Kubernetes.
In this post, I’d like to introduce several tools that can help developers and DevOps engineers set up a local Kubernetes environment using K3d, Docker, mkcert, hostctl, and other tools on a Mac.
The setup includes reloader, reflector, ingress-nginx, and uses go-task, which acts as the glue for all these components.

DALL-E | image of k8s via k3d with ingress nginx mkcerts and ssl termination

Let’s get started 🏁

Please note this was tested on macOS and may require some adjustments for other operating systems …

A quick dip using Taskfile. See it in action (I ran task all -- yes):

Asciinema | another useful tool that can help capture the terminal experience

So what did we just launch?

A k8s cluster based on k3d | configured + deployed 4 workloads

Let’s walk through this:

  1. Local K3d Cluster
    To resemble a production-grade cluster, I created a K3d cluster with one server (master) and two agent nodes. K3d also deploys a container that simulates a load balancer and configures port forwarding for ports 80 and 443. Running docker ps should reveal the load-balancer container and the forwarded ports:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
edb6ffde042e ghcr.io/k3d-io/k3d-tools:5.6.3 "/app/k3d-tools noop" 35 seconds ago Up 34 seconds k3d-demo-tools
de23199f3639 ghcr.io/k3d-io/k3d-proxy:5.6.3 "/bin/sh -c nginx-pr…" 35 seconds ago Up 27 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:6445->6443/tcp k3d-demo-serverlb
439040b5e4bb rancher/k3s:v1.29.4-k3s1 "/bin/k3d-entrypoint…" 35 seconds ago Up 30 seconds k3d-demo-agent-1
24b0ab39e687 rancher/k3s:v1.29.4-k3s1 "/bin/k3d-entrypoint…" 35 seconds ago Up 30 seconds k3d-demo-agent-0
dc6e2974975b rancher/k3s:v1.29.4-k3s1 "/bin/k3d-entrypoint…" 35 seconds ago Up 33 seconds k3d-demo-server-0
eeee4177497f registry:2 "/entrypoint.sh /etc…" 36 seconds ago Up 27 seconds 0.0.0.0:5002->5000/tcp registry.localhost
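For reference, a cluster with this shape can also be described declaratively. The following k3d config file is an illustrative sketch that mirrors the output above (one server, two agents, 80/443 forwarded through the load balancer, a local registry on 5002); the cluster name and exact layout in the project's repo may differ:

```yaml
# Illustrative k3d config (v1alpha5 "Simple" schema) matching the topology above.
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: demo
servers: 1
agents: 2
ports:
  - port: 80:80        # host port -> load balancer
    nodeFilters:
      - loadbalancer
  - port: 443:443
    nodeFilters:
      - loadbalancer
registries:
  create:
    name: registry.localhost
    hostPort: "5002"
```

A config like this is applied with k3d cluster create --config k3d.yaml.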

2. Hostctl for Localhost Aliases
I used hostctl to add aliases for localhost (IP 127.0.0.1), in our case whoami.k8s.localhost. Once we’ve done that, we should be able to:

ping -c2 whoami.k8s.localhost
PING whoami.k8s.localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.054 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
--- whoami.k8s.localhost ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.054/0.115/0.177/0.062 ms
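Under the hood, hostctl simply manages a named profile block inside /etc/hosts. The sketch below emulates, with plain printf, roughly what sudo hostctl add domains k3d whoami.k8s.localhost would write (the profile name k3d and the exact marker comments are illustrative; the real tool formats its block slightly differently and can remove it cleanly later):

```shell
# Emulate the /etc/hosts block hostctl manages for a profile -- generated
# by hand here so it runs without hostctl or sudo (purely illustrative).
profile="k3d"
domain="whoami.k8s.localhost"
printf '# Profile: %s\n127.0.0.1 %s\n# End\n' "$profile" "$domain"
```

The benefit over editing /etc/hosts by hand is that hostctl can enable, disable, or remove the whole profile in one command.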

3. Mkcert for SSL Certificates
I used mkcert to generate an SSL certificate that our browser will accept. The certificate and key were added to the cluster as a Kubernetes secret. The certificate is generated for the *.k8s.localhost domain. The task that creates the certs can be found here:

  certs:
    desc: Creates and uploads local certificates to the cluster as tls secrets
    dir: .config/tls
    generates:
      - .config/tls/cert.pem
      - .config/tls/key.pem
    cmds:
      - cp -r {{.ROOT_DIR}}/config/tls/* {{.ROOT_DIR}}/.config/tls
      - mkcert -install
      - mkcert -cert-file cert.pem -key-file key.pem -p12-file p12.pem "*.k8s.localhost" k8s.localhost "*.localhost" ::1 127.0.0.1 localhost 127.0.0.1 "*.internal.localhost" "*.local" 2> /dev/null
      - echo -e "Creating certificate secrets on Kubernetes for local TLS enabled by default\n"
      - kubectl config set-context --current --namespace=kube-system --cluster=k3d-{{.CLUSTER_NAME}}
      - kubectl create secret tls tls-secret --cert=cert.pem --key=key.pem --dry-run=client -o yaml >base/tls-secret.yaml
      - kubectl apply -k ./
      - echo -e "\nCertificate resources have been created.\n"
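To sanity-check what such a task produces, the certificate's subject alternative names (the part the browser actually validates) can be inspected with openssl. The sketch below generates a throwaway cert with a plain openssl equivalent of the mkcert call, then prints its SANs; mkcert's real value-add, which openssl alone does not give you, is installing a locally trusted CA:

```shell
# Rough openssl stand-in for the mkcert step above
# (requires OpenSSL >= 1.1.1 for -addext; purely illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=k8s.localhost" \
  -addext "subjectAltName=DNS:*.k8s.localhost,DNS:k8s.localhost,IP:127.0.0.1"

# Inspect the SANs -- the same check works on mkcert's cert.pem:
openssl x509 -in cert.pem -noout -ext subjectAltName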

4. Ingress-nginx

I used ingress-nginx, which creates the ingress class named nginx and routes traffic from outside the cluster to services inside it. Ingress-nginx watches for changes in ingress resources, such as the ingress resource of the whoami workload:


# kubectl get ingress -n whoami -oyaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/instance: whoami
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: whoami
    app.kubernetes.io/version: 1.10.1
    helm.sh/chart: whoami-5.1.0
  name: whoami
  namespace: whoami
spec:
  ingressClassName: nginx
  rules:
    - host: whoami.k8s.localhost
      http:
        paths:
          - backend:
              service:
                name: whoami
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - whoami.k8s.localhost
      secretName: tls-secret
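With the hosts alias, the TLS secret, and the ingress in place, the route can be exercised end to end. A small guarded check (it degrades gracefully when the cluster isn't running; because mkcert installed its CA into the local trust store, curl should not need -k):

```shell
# End-to-end smoke test of the ingress route; the guard keeps the
# script usable even when the local cluster is down.
HOST="whoami.k8s.localhost"
if curl -fsS --max-time 5 "https://${HOST}" >/dev/null 2>&1; then
  echo "ingress is routing https://${HOST}"
else
  echo "cluster not reachable at https://${HOST}"
fi
```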

5. Reflector and Reloader

As mentioned in my previous blog post on secret management and config rotation, these kinds of add-ons are required for configuration to be applied and present in the namespace we are using.
I installed reflector and reloader. Reflector copies the tls-secret to the whoami namespace, based on annotations that enable reflection. Reloader watches for changes in the secret and performs a rolling update on the workload consuming it:

# kubectl get secrets tls-secret -oyaml
kind: Secret
metadata:
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: ""
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
...
# kubectl get secrets tls-secret -oyaml
kind: Secret
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
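Put together, the split of responsibilities looks roughly like this (illustrative manifests, not the project's actual ones; note that in reloader's documentation the auto annotation sits on the workload that consumes the secret, while reflector's annotations sit on the source secret):

```yaml
# Source secret in kube-system: reflector copies it to allowed namespaces.
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: kube-system
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "whoami"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
---
# Consuming workload: reloader rolls it when the reflected secret changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: whoami
  annotations:
    reloader.stakater.com/auto: "true"
```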

6. Whoami Helm Chart
Finally, I installed whoami, a Helm chart that provides the resources required to run whoami on our local laptop behind our self-signed certificate for *.k8s.localhost:

# helm values for our local setup
whoami:
  ingress:
    enabled: true
    ingressClassName: nginx
    pathType: ImplementationSpecific
    annotations:
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    hosts:
      - host: &host whoami.k8s.localhost
        paths:
          - /
    tls:
      - secretName: tls-secret
        hosts:
          - *host
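With those values saved to a file, the release is installed the usual Helm way. The chart reference and values path below are placeholders, not the project's real ones (the Taskfile drives the actual invocation):

```shell
# Hypothetical invocation -- chart repo/name and values file are placeholders.
CHART="my-repo/whoami"
helm upgrade --install whoami "${CHART}" \
  --namespace whoami --create-namespace \
  --values whoami-values.yaml \
  || echo "helm install skipped (no cluster reachable)"
```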

Conclusion

This setup is straightforward and should feel familiar to anyone who wants to test things locally. Debugging applications remotely can be challenging, and often we need to bring them down to our local environment. This setup is one of the most mature options available.

As always, I would love to hear your feedback. Collaboration on this project can be done here.

Sincerely, Haggai Philip
