Managing my Home with Kubernetes, Traefik, and Raspberry Pis

Chris Evans
Jan 16, 2018


I recently bought some Wi-Fi enabled smart switches to allow me to control some table lamps at home. They’re great: super cheap in comparison to some other offerings, simple to set up, and they work well with Google Assistant and my Google Home. The problem is that they depend on an external server to work properly, and the reliability of that service has been pretty poor.

Handily there’s an open source project providing custom firmware for these devices, so I decided to dust off my soldering iron and flash the devices (this warrants a whole article in itself — one for the future 😉). With the devices under my control I needed somewhere to run the software stack to manage them, so I set to work on a couple of spare Raspberry Pis I had to hand.

I had a few requirements:

  • I wanted to run a home automation platform with Android App support, and had chosen (semi-arbitrarily) Home Assistant.
  • I wanted the web interface to be accessible outside of my home, so I could check and manage devices while away.
  • I wanted my Google Home to be able to control the devices, which required me to have an external HTTPS endpoint.
  • I needed to manage dynamic DNS, since I don’t have a static IP.

Whilst I could have run all of these natively on a single Raspberry Pi, I decided instead to use Docker to keep things well isolated and easy to reason about. Moreover, I didn’t want to spend hours working over SSH, so I settled on Kubernetes for a more API-driven deployment approach.

In this post, I’ll walk through the steps I took to set up the core platform. Specifically, this includes:

  • Setting up a master + single-node Kubernetes cluster
  • Deploying my DNS updater as a Kubernetes CronJob object
  • Deploying Traefik as a Kubernetes Ingress Controller, and configuring it to manage SSL with Let’s Encrypt

Setting up a Pi Kubernetes Cluster

I claim no credit for this part; I simply followed the excellent guide written by Alex Ellis here. I stuck to it almost to the letter, using kubeadm to initialise a cluster on the master and then join my single node to it.
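In rough outline, the bootstrap boils down to two commands. The flags below are abbreviated from memory of Alex’s guide (the join token and CA hash are printed by kubeadm init), so treat this as a sketch rather than a copy-paste recipe:

$ # On the master
$ sudo kubeadm init --token-ttl=0

$ # On the node, using the values printed by kubeadm init
$ sudo kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>

Once both commands have succeeded, the nodes report as Ready: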

$ kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    2d        v1.9.1
k8s-node-1   Ready     <none>    2d        v1.9.1

Big thanks to Alex for sharing his guide 👏

DNS and Routing

To satisfy the need for external access and provide a stable endpoint for my dynamic IP, I needed something to periodically update a DNS record to the right address. I use Cloudflare to manage DNS for my website, so decided to use a subdomain for things running at home.

I wrote a simple (read: hacky) bash script to do the Cloudflare update, deployed it into Kubernetes as a ConfigMap, and mounted it as a volume in a CronJob object set to run every 15 minutes:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dns-update
  namespace: k8s-home
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: dns-update
            image: evns/rpi-utilities
            command: [ "/bin/sh", "-c", "chmod +x /scripts/update.sh && /scripts/update.sh" ]
            env:
            - name: RECORD_NAME
              valueFrom:
                secretKeyRef:
                  name: cloudflare
                  key: record_name
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: cloudflare
                  key: api_key
            ...
            volumeMounts:
            - name: config-volume
              mountPath: /scripts
          volumes:
          - name: config-volume
            configMap:
              name: update-script
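For completeness, here’s what the ConfigMap wrapping the script could look like. My actual update.sh isn’t reproduced in this post, so treat the body below as a hypothetical sketch against Cloudflare’s v4 API; ZONE_ID and RECORD_ID are placeholders the real script would resolve, and AUTH_EMAIL would come from the Secret alongside API_KEY:

apiVersion: v1
kind: ConfigMap
metadata:
  name: update-script
  namespace: k8s-home
data:
  update.sh: |
    #!/bin/sh
    # Discover the current public IP (api.ipify.org returns it as plain text)
    IP=$(curl -s https://api.ipify.org)
    # Update the A record via Cloudflare's v4 API; ZONE_ID and RECORD_ID
    # are hypothetical placeholders, resolved elsewhere in the real script
    curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
      -H "X-Auth-Email: ${AUTH_EMAIL}" \
      -H "X-Auth-Key: ${API_KEY}" \
      -H "Content-Type: application/json" \
      --data "{\"type\":\"A\",\"name\":\"${RECORD_NAME}\",\"content\":\"${IP}\"}"

The cloudflare Secret the CronJob references is created in the usual way (the values here are examples):

$ kubectl -n k8s-home create secret generic cloudflare \
    --from-literal=record_name=home.evns.io \
    --from-literal=api_key=<cloudflare-api-key>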

At this point I had a DNS entry *.home.evns.io pointing at my dynamic public IP 👌

To pave the way for running web applications, I knew I’d need to open at least ports 80 and 443 on my router’s firewall, and since the destination would be a container running on my Kubernetes node I configured the ports to forward to that IP. Obviously, at this stage there was nothing running there so I wasn’t expecting any kind of response.

Traefik and Let’s Encrypt

With a functioning cluster and the networking setup complete, the next task was to deploy a reverse proxy to manage the application routing. In Kubernetes this is the job of an Ingress Controller: an implementation of a reverse proxy which listens for changes to Kubernetes Ingress resources and updates its configuration accordingly. I’ve previously used Nginx for this purpose, but having heard good things about Traefik I decided to give it a whirl.

Traefik provides detailed instructions for getting its implementation running, but I had to customise them slightly to get things working with my setup.

First, since I wanted all communication over HTTPS, I modified the config to force anything on HTTP over to the secure entrypoint. This was as simple as adding the following to the config file:

# Force HTTPS
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

To support the HTTPS endpoints, I also needed to configure Let’s Encrypt so Traefik could automatically fetch certificates. Having Let’s Encrypt as a built-in capability is one of the areas where Traefik excels: in previous Kubernetes configurations I’ve had to deploy a separate application like kube-lego to handle the certificate generation, so having it built in is a big positive. Setting Cloudflare as the DNS provider configures Traefik to validate domain ownership via DNS records rather than HTTP challenges.

# Let's Encrypt configuration
[acme]
email = "<my-email>"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
acmeLogging = true
onDemand = true
onHostRule = true
dnsProvider = "cloudflare"

Next I updated the Deployment to use the new config, and also to store the Let’s Encrypt state on the host so it persists when the container is replaced. One issue I hit here was that the file created for Traefik had the wrong permissions: Kubernetes creates it on the host with 755, which Traefik deems overly permissive, expecting 600. In the interest of getting things working, I’ve deferred investigating this and simply changed the permissions manually on the host (see the note after the manifest below). The final change was to provide the Cloudflare credentials to enable control of my DNS zone.

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.5
        name: traefik-ingress-lb
        env:
        - name: CLOUDFLARE_EMAIL
          valueFrom:
            secretKeyRef:
              name: cloudflare
              key: email
        - name: CLOUDFLARE_API_KEY
          valueFrom:
            secretKeyRef:
              name: cloudflare
              key: api_key
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        - mountPath: "/etc/traefik/acme.json"
          name: "acme"
        args:
        - --configfile=/config/traefik.toml
      volumes:
      - name: config
        configMap:
          name: traefik-config
      - name: acme
        hostPath:
          type: FileOrCreate
          path: /etc/traefik/acme.json
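As a stop-gap for the permissions issue mentioned above, tightening the file directly on the node is enough to keep Traefik happy:

$ # Run on the Kubernetes node hosting the Traefik Pod
$ sudo chmod 600 /etc/traefik/acme.json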

The last piece was to edit the Traefik Service to allow external access into the cluster. By default it exposes ports 80 and 443 only to other services in the cluster, as is the nature of a service of type ClusterIP. In a cloud installation it’s common to change the service type to LoadBalancer, which provisions an external load balancer with a stable IP/DNS address, but since I have no load balancers in my Pi setup and only one node, I instead updated the service definition to include an externalIP. This essentially binds ports 80 and 443 on the host interface, ensuring all traffic on them is routed to Pods running Traefik.

kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  externalIPs:
  - 192.168.0.101 # This is the node address

With all this in place, I deployed a Service and an Ingress for the Traefik dashboard, and sat back while it automatically configured the routing and SSL certs 😎 The result is a secure site, accessible on the internet.
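I haven’t reproduced my exact dashboard manifests here, but a minimal sketch looks something like the below. It assumes the dashboard entrypoint ([web], port 8080) is enabled in traefik.toml, and the hostname is just an illustrative choice under my wildcard record:

apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik.home.evns.io
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: 80

With onHostRule=true in the ACME config, Traefik requests a certificate for the host as soon as the Ingress appears; no further configuration is needed.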

PiVPN

Whilst not an original requirement, I also decided to deploy a VPN solution so I could manage everything whilst away from home. I used PiVPN to deploy an OpenVPN server on the master. The instructions here provided everything I needed to get it working.
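Installation is as simple as PiVPN advertises: a curl-to-bash one-liner from pivpn.io (with the usual caveats about piping remote scripts into a shell):

$ curl -L https://install.pivpn.io | bash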

Bringing it all Together

My home setup now looks roughly like this: the router forwards ports 80 and 443 to the Kubernetes node, where Traefik terminates TLS and routes requests to the right Pods; a CronJob keeps the Cloudflare DNS record pointed at my public IP; and PiVPN gives me remote access to everything else.

I have a cluster which I can manage from anywhere, and the ability to run and expose applications over HTTPS using only Kubernetes resources. Is it over-engineered for a home solution? Probably. But I’ve now got something super flexible which I can use for running apps in my home, and more importantly I got to play with some interesting tech along the way.

What’s next?

The astute among you will notice I haven’t discussed the Home Automation side of things. I’ll post something on this over the next few weeks.
