Kubernetes Hardware-Accelerated Home Streamer

Zac Pollack
May 2, 2024 · 6 min read


TL;DR

A Helm chart that manages a GPU-enabled k3s deployment for a self-hosted streaming service. Great if you have a digitized DVD or Blu-ray collection you want to share with friends and family! It may be a little overkill if you aren't also trying to practice your IaC skills, though (:

Tech Stack

  • Kubernetes → I used k3s with two nodes; any Kubernetes cluster will work, however.
  • Helm → Chart located here.
  • Jellyfin → FOSS media server, but Plex and Emby should also work.
  • NFS → Synology NAS for media storage; local storage also works.
  • Intel GPU → Required for hardware acceleration. The device plugin used here is Intel-specific.

Configuration

Enable GPU for Kubernetes Pods

Install the Intel GPU plugin for Kubernetes

Verify that the plugin installed correctly by confirming the node advertises an allocatable i915 resource:

$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\n'}{' i915: '}{.status.allocatable.gpu\.intel\.com/i915}{'\n'}"
master
i915: 1

Once Jellyfin has been deployed, we'll do further validation to confirm the GPU is available from inside the container.
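Once the plugin advertises the resource, a pod consumes the GPU through its resource limits. A minimal sketch (the pod name and image are placeholders, not part of the chart):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check          # hypothetical name
  namespace: media-server
spec:
  containers:
    - name: app
      image: jellyfin/jellyfin
      resources:
        limits:
          gpu.intel.com/i915: 1   # request one GPU slot from the Intel device plugin
```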

Create Persistent Volume(s)

For each drive containing media that your server must access, a corresponding Persistent Volume (PV) and Persistent Volume Claim (PVC) must be created.

The PVCs will likely be fairly standard Kubernetes PVC files. Be sure that the spec.volumeName of the PVC matches the metadata.name of the PV.

If the target drive is on a NAS, read/write access requires the spec.mountOptions and spec.nfs fields shown in the sample below.

PV + PVC with NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: external-drive1-media-server-pv
spec:
  storageClassName: ""
  capacity:
    storage: 930Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    server: NAS.SERVER.LOCAL.IP
    path: "/volumeUSB1/usbshare"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: external-drive1-pvc
  namespace: media-server
spec:
  storageClassName: ""
  volumeName: external-drive1-media-server-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 930Gi

In addition to the NAS media drives, you will need access to a local drive for Jellyfin and other services to write to. According to Jellyfin documentation, the database should be run on a local drive and not over NFS. From personal experience, most services with a database do not play nice with NFS, so this PV will be handy across deployments.

Local Drive PV + PVC

apiVersion: v1
kind: PersistentVolume
metadata:
  name: config-media-server-pv
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/config"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-pvc
  namespace: media-server
spec:
  storageClassName: ""
  volumeName: config-media-server-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

For a multi-node Kubernetes cluster, the above PV + PVC set will need to be converted to Longhorn volumes. For now, I have provided the simple solution, as horizontal scaling within Jellyfin requires additional configuration. See here for more information on horizontally scaling transcoding.

Finally, you will need access to the local GPU to perform transcodes and leverage hardware acceleration (required for external playback).

GPU PV + PVC

apiVersion: v1
kind: PersistentVolume
metadata:
  name: render-device-pv
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /dev/dri/renderD128
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: card-device-pv
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /dev/dri/card0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: render-device-pvc
  namespace: media-server
spec:
  storageClassName: ""
  volumeName: render-device-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: card-device-pvc
  namespace: media-server
spec:
  storageClassName: ""
  volumeName: card-device-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Deploy the Helm Chart

Prior to installing the Helm chart, update all appropriate details in values.yaml, particularly the volume mounts (see above), node affinities, and ingress fields (see below).

git clone https://github.com/zep283/helm-media-server-stack.git
cd helm-media-server-stack
helm upgrade --install mss . --values values.yaml --namespace media-server

Node affinity

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: kubernetes.io/node.type
              operator: In
              values:
                - high-perf

high-perf can be replaced with any label appropriate to your environment, or the affinity block can be removed entirely if a service can run on any node in the cluster.

If you use the local config PV/PVC setup from this article, each service is tied to the node its config folder lives on, so you may need affinity rules to avoid improper scheduling.
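If you need a hard guarantee rather than a preference, the preferred rule shown earlier can be swapped for a required one (same hypothetical high-perf label, which you would attach to the node holding the config folder):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/node.type
              operator: In
              values:
                - high-perf   # pods will only schedule onto nodes carrying this label
```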

Configure Ingress and External Access

Enabling access to the Jellyfin server and associated services will require setting up Ingress and a few additional components.

The most basic way to achieve this would actually be a NodePort Service, but I prefer having a shorthand name rather than an IP for services I regularly use. Feel free to skip this section if you don't mind the IP:PORT format.
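For reference, the NodePort route would look roughly like this (the service name, labels, and node port are assumptions; 8096 is Jellyfin's default HTTP port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jellyfin-nodeport   # hypothetical name
  namespace: media-server
spec:
  type: NodePort
  selector:
    app: jellyfin           # must match your deployment's pod labels
  ports:
    - port: 8096            # Jellyfin's default HTTP port
      targetPort: 8096
      nodePort: 30096       # then reachable at NODE_IP:30096
```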

Intranet Custom Domain Access

For local intranet access to the deployed services, I used dnsmasq hosted on a Pi4. This will allow custom names like jellyfin.home.cloud.org to send us to Jellyfin.

Installing dnsmasq is distro-specific, but should be included in most package managers.

dnsmasq requires two files to be edited: /etc/hosts and /etc/dnsmasq.conf

/etc/hosts gets a line of the form LOAD_BALANCER_IP CUSTOM_DOMAIN_NAME; e.g. 10.0.0.230 home.cloud.org sends all local traffic for home.cloud.org addresses to the server at 10.0.0.230.

/etc/dnsmasq.conf consists of server and address fields: server points to an upstream DNS server (in this case, Google's), and address contains the redirect details for our services.

server=8.8.8.8  
server=8.8.4.4
address=/home.cloud.org/10.0.0.230

Inside the Helm chart, the Ingress section will look like this for Jellyfin to be available at jellyfin.home.cloud.org:

ingress:
  enabled: true
  annotations: {}
  hosts:
    - host: jellyfin.home.cloud.org
      paths:
        - path: /
          pathType: Prefix

Finally, you’ll need to use the custom DNS configured above. At a device level, this is typically in the advanced network settings.

On iOS: Wi-Fi → network name → Configure DNS → Manual, then add the IP of the dnsmasq server.

External Access

For setting up external access, I primarily followed this guide: https://opensource.com/article/20/3/ssl-letsencrypt-k3s

If not using a Raspberry Pi to host the cert-manager services, you can use these commands to install the helm chart instead of the ones in the guide:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --values helm/cert-manager/values.yaml --set installCRDs=true
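The ingress annotations below reference a ClusterIssuer named letsencrypt-staging. A minimal sketch of that resource follows (the email is a placeholder; the server URL is Let's Encrypt's published staging endpoint; the guide linked above covers this in more detail):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder: your contact email
    privateKeySecretRef:
      name: letsencrypt-staging     # secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: traefik
```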

The ingress for jellyfin.home.cloud.org would be as below:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: letsencrypt-staging
    acme.cert-manager.io/http01-edit-in-place: "true"
  hosts:
    - host: jellyfin.home.cloud.org
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - jellyfin.home.cloud.org
      secretName: jf-tls

You should end up with:

  • Cloudflare Account
  • Cloudflare DNS A Record
  • cert-manager Namespace, services, and secret
  • Accessible external domain

If your ISP provides you with a Dynamic IP address (very common), you may want to set up some automation to ensure the Cloudflare DNS A Record stays accurate.

Placing scripts on your server at /etc/network/if-up.d will cause them to run on successful network connection, which should be the only time your IP is being updated.

Sample DNS correction script:

#!/bin/sh

# Start a fresh log for this run and send all output to it
rm -f /tmp/dns-log.txt
exec >> /tmp/dns-log.txt

IP=$(curl -s https://ipinfo.io/ip)
echo "Current IP is $IP"

TOKEN="YOUR_TOKEN"
ZONE_ID="ZONE_ID"
RECORD_ID="RECORD_ID"

API="https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID"

# Fetch the current A record and pull out its "content" field (the recorded IP)
CF_RAW=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" "$API")
CF_IP=$(echo "$CF_RAW" | grep -o '"content":"[^"]*' | grep -o '[^"]*$')
echo "DNS record currently points at $CF_IP"

if [ "$IP" != "$CF_IP" ]
then
  echo "Current IP does not match DNS record!"
  PAYLOAD=$(printf '{"type":"A","name":"jf","content":"%s"}' "$IP")
  echo "$PAYLOAD"
  CF_UPDATE=$(curl -s -X PATCH -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -H "X-Auth-Email: zac927@live.com" "$API" --data "$PAYLOAD")
  echo "$CF_UPDATE"
else
  echo "Current IP matches DNS record"
  exit 0
fi
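The grep pipeline in the script can be sanity-checked offline with a canned response (the JSON below is a hypothetical, trimmed-down Cloudflare reply, not real output):

```shell
#!/bin/sh
# Trimmed-down stand-in for the Cloudflare API response (hypothetical values)
CF_RAW='{"result":{"id":"abc123","type":"A","name":"jf","content":"203.0.113.7"}}'

# Same extraction as the if-up.d script: grab the value of "content"
CF_IP=$(echo "$CF_RAW" | grep -o '"content":"[^"]*' | grep -o '[^"]*$')
echo "$CF_IP"   # prints 203.0.113.7
```

If the response shape ever changes, swapping the grep pipeline for jq (e.g. `jq -r .result.content`) would be more robust.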

Configuring Jellyfin

For basic configuration options, the official Jellyfin documentation should be sufficient.

To enable Hardware Acceleration (GPU transcoding), you can use UI -> Dashboard -> Playback and select VAAPI. QSV may also work but is untested with my configuration.

To verify GPU access, use kubectl exec to get a shell in the Jellyfin container, then run:

/usr/lib/jellyfin-ffmpeg/vainfo

Keep in mind that the above command may run with different permissions than the Jellyfin server: the server process gets its User and Group from the server.env values, whereas a shell opened in the pod derives its User and Group from the Kubernetes securityContext.
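To keep the two aligned, the pod's securityContext can be set to the same IDs the server runs as. A sketch (the 1000/1000 IDs are placeholders for your server.env values; the render group's GID varies by distro, so check it with getent group render on the host):

```yaml
securityContext:
  runAsUser: 1000        # placeholder: match the User from server.env
  runAsGroup: 1000       # placeholder: match the Group from server.env
  supplementalGroups:
    - 109                # placeholder: host's render group GID, needed for /dev/dri access
```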

For additional troubleshooting, please see the official Hardware Acceleration documentation page: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel#configure-with-linux-virtualization
