Creating an NFS server within Kubernetes

Aron Fyodor Asor
May 8, 2018


As part of our organization’s move to Kubernetes (and GCE) as our main deployment platform, we had to find a way for all the app containers (which handle file uploads) to share the uploaded files with our NGINX proxies, which serve our static assets.

This wasn’t a problem previously. We had one beefy Rackspace server running the database, the application, and the NGINX proxy, so everything could communicate over localhost and share the same filesystem.

With our move to ephemeral containers that each handle a single process, the interfaces between those processes had to be hashed out. That includes how they should share files across a variable number of application servers running on different hosts.

The solution I arrived at was to create another container running an NFS server that hosts all the media files.

Implementation

“Easy enough,” I thought. Kubernetes has native support for mounting NFS volumes, and NFS is one of the few volume types that supports read/write access from multiple containers at once (ReadWriteMany), so it was almost a no-brainer.
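For context, this is roughly what mounting an NFS export straight into a pod looks like; every name and address below is a placeholder, not our actual config:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-example
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: media
          mountPath: /media
  volumes:
    - name: media
      nfs:                  # the built-in NFS volume type
        server: 10.0.0.10   # placeholder; we'll pin down a real address below
        path: /exports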

I started out with this Dockerfile, based on cpuguy83’s image:

FROM cpuguy83/nfs-server:latest

# Org-specific directories go here

COPY nfs-server-entrypoint.sh /
ENTRYPOINT ["/nfs-server-entrypoint.sh"]

And the entrypoint script, nfs-server-entrypoint.sh:

#!/bin/bash

set -e

EXPORTS_DIR=/exports

# Mount the content storage directory here

# Run the original NFS server entrypoint
/usr/local/bin/nfs_setup $EXPORTS_DIR
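For completeness, the NFS server itself can run as a single-replica Deployment along these lines. This is a sketch rather than our exact manifest (the image name is a placeholder); the one non-negotiable detail is the privileged security context, since nfsd runs inside the host kernel:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: content-server
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: content-server
  template:
    metadata:
      labels:
        tier: content-server
    spec:
      containers:
        - name: nfs-server
          image: our-registry/nfs-server:latest # placeholder for the image built above
          securityContext:
            privileged: true # the kernel NFS daemon won't start unprivileged
          ports:
            - name: nfs
              containerPort: 2049
            - name: rpcbind
              containerPort: 111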

However, I encountered errors starting up the pod:

* Starting NFS kernel daemon
rpc.nfsd: unable to resolve ANYADDR:nfs to inet address: Servname not supported for ai_socktype
rpc.nfsd: unable to resolve ANYADDR:nfs to inet6 address: Servname not supported for ai_socktype
rpc.nfsd: unable to set any sockets for nfsd

To solve this, I had to add these two lines to the Dockerfile:

# Allow ports to be readable by NFS. See https://github.com/cpuguy83/docker-nfs-server/pull/11
RUN echo "nfs 2049/tcp" >> /etc/services
RUN echo "nfs 111/udp" >> /etc/services

It turns out rpc.nfsd resolves its ports by service name through /etc/services rather than by explicit port number. TIL!

That led to a successful start for our pod:

File system has been successfully mounted.
fgetty: could not chown/chmod tty device
  • Exporting directories for NFS kernel daemon…
...done.
  • Starting NFS kernel daemon
...done.
Setting up watches.
Watches established.

However, it looks like the nfs volume definition expects an explicit IP address and won’t accept a Kubernetes Service name, presumably because the mount is performed by the node itself, which can’t resolve cluster DNS names. To work around that, I pinned the Service to a known address by adding a clusterIP field to my Service definition:

apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-{{ .Values.content_server.service.name }}"
  labels:
    app: "{{ .Chart.Name }}"
    release: "{{ .Release.Name }}"
    tier: content-server
spec:
  type: "{{ .Values.content_server.service.type }}"
  clusterIP: "10.119.241.195" # <--- IP within the cluster here
  selector:
    app: "{{ .Chart.Name }}"
    release: "{{ .Release.Name }}"
    tier: content-server
  ports:
    {{- range .Values.content_server.ports }}
    - name: {{ .name }}
      port: {{ .port }}
      targetPort: {{ .port }}
    {{- end }}
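For reference, the values that template ranges over might look something like this; the exact port list is illustrative (NFSv4 mostly needs 2049, plus 111 for rpcbind):

content_server:
  service:
    name: content-server
    type: ClusterIP
  ports:
    - name: nfs
      port: 2049
    - name: rpcbind
      port: 111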

That allows us to point our PersistentVolume definition at that IP address:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "{{ .Release.Name }}-nfs-content-server"
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: "10.119.241.195" # <---- IP address from our NFS server's clusterIP
    path: "/exports"
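From there, the app and NGINX pods can claim the volume and mount it like any other. A minimal sketch, with an illustrative claim name; the empty storageClassName keeps the default provisioner from intercepting the claim, so it binds to our static PersistentVolume:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ .Release.Name }}-nfs-content-claim"
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi

And in the pod spec of each consumer:

volumes:
  - name: content
    persistentVolumeClaim:
      claimName: "{{ .Release.Name }}-nfs-content-claim"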
