Google Kubernetes Engine (GKE) — Persistent Volume with Persistent Disks (NFS) on Multiple Nodes (ReadWriteMany)

Athul RAVINDRAN
Nov 6
Mounting NFS on Multiple Pods (Single Cluster)

If you are building a microservices application on Kubernetes that needs a shared file system underneath to share files and information across multiple services, then you have likely faced the same challenge that I and many others did.

Here is how I solved it after reading many posts and several rounds of trial and error.

According to the Google Cloud documentation, only a few volume plugins support ReadWriteMany, and one of them is NFS.

What is ReadWriteMany? It is an access mode in which a volume can be mounted read-write on multiple nodes, so all the pods that use it have write access.

NFS stands for Network File System: a shared file system that can be accessed over the network. Data stored on NFS persists independently of its consumers, which means your pods and containers can be restarted any number of times, you can add new nodes, or even share the data across clusters. NFS can be mounted on multiple nodes, pods, and containers at once, and the data can be shared among them.

Following are the steps to create the NFS share and mount it on Kubernetes pods.

gcloud filestore instances create nfs-server --project=my-test-project --zone=us-central1-c --tier=STANDARD --file-share=name=myVolume,capacity=1TB --network=name="default",reserved-ip-range="10.0.0.0/29"

The above command creates an NFS share in Google Cloud Filestore. You can also create the share from the Google Cloud Console.
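Once the instance is up, you will need its IP address for the Persistent Volume later. A quick way to look it up (a sketch; the instance name and zone match the create command above, and the `--format` path assumes the standard Filestore describe output):

```shell
# Print the IP address of the Filestore instance's NFS endpoint.
gcloud filestore instances describe nfs-server \
  --project=my-test-project \
  --zone=us-central1-c \
  --format='value(networks[0].ipAddresses[0])'
```

With the reserved range 10.0.0.0/29 used above, this should print an address from that range, such as 10.0.0.2.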

In GKE, a container has its own file system, but the data in it is lost when the container is destroyed.

Similarly, a pod, which can run multiple containers, also has a file system, but the data is lost when the pod is destroyed.

GKE has a solution to this, and that solution is called “Persistent Volumes”. A persistent volume has a slightly longer life span: it is a resource in a cluster, and it lasts as long as the cluster is alive.

What does that mean? An NFS server is one of the volume plugins you can use to create a Persistent Volume. The created volume is a resource of the cluster in which the PV was created. If the cluster is destroyed, the PV resource of that cluster is destroyed with it; however, the NFS storage stays alive and the data remains safe.

Let’s discuss a use case:

Cluster 1 (Primary) and Cluster 2 (DR) are created, and both have the same NFS server plugged in as a PV. When Cluster 1 goes down completely due to a disaster, any information that was saved to NFS remains intact, and Cluster 2 still has access to it. Cluster 2 can now become primary and resume the work, or Cluster 1 can be brought back up; in either case, the data saved to NFS is not destroyed.

Back to work. After the NFS share is created, the next step is to create a Persistent Volume and specify the NFS path on it. The PV should also declare the type of access a pod can request (ReadWriteOnce / ReadOnlyMany / ReadWriteMany), and the yaml looks like this:

Note: the storage is 1T (1 TB), the access mode is ReadWriteMany, and the NFS path is the share created with the gcloud command in step 1. The server field is the IP address of the Filestore instance.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-file-server
spec:
  capacity:
    storage: 1T
  accessModes:
    - ReadWriteMany
  nfs:
    path: /myVolume
    server: 10.0.0.2

In order for a pod to use the volume, GKE has a concept called a “Persistent Volume Claim”. A persistent volume claim is a request for storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""

Notice that the storage request here is only 1 Gi, although we created the NFS share and PV with 1 TB. A PVC does not have to claim the whole disk; it can request just a slice of it.
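With both manifests saved, you can apply them and check that the claim binds to the volume (a sketch; the file names pv.yaml and pvc.yaml are assumptions):

```shell
# Create the Persistent Volume and the Persistent Volume Claim.
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# The empty storageClassName in the PVC makes it bind to a
# statically provisioned PV with a matching access mode.
kubectl get pv my-file-server
kubectl get pvc my-volume-claim
```

Both resources should report a STATUS of Bound before you move on to the Deployment.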

Now the infrastructure is ready. The next step is to use the PVC in a Deployment or Pod creation yaml. Below is my Deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-vol-service
spec:
  selector:
    matchLabels:
      app: my-vol-service
  replicas: 5
  template:
    metadata:
      labels:
        app: my-vol-service
    spec:
      containers:
        - name: my-vol-service
          image: gcr.io/my-service/my-vol-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8181
          volumeMounts:
            - name: files-storage
              mountPath: /tmp/files
      volumes:
        - name: files-storage
          persistentVolumeClaim:
            claimName: my-volume-claim

Note that the volumes: section claims the storage using claimName: my-volume-claim and gives it the reference name files-storage. That name is then used in the volumeMounts: section of the container to mount the volume on a path.
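To confirm that ReadWriteMany really works across replicas, you can write a file from one pod and read it from another (a sketch; the placeholders <pod-1> and <pod-2> stand for any two pod names from the listing):

```shell
# List the pods created by the deployment.
kubectl get pods -l app=my-vol-service

# Write a file to the shared mount from one pod ...
kubectl exec <pod-1> -- sh -c 'echo hello > /tmp/files/shared.txt'

# ... and read it back from a different pod, possibly on another node.
kubectl exec <pod-2> -- cat /tmp/files/shared.txt
```

If the second command prints "hello", all replicas are sharing the same NFS-backed volume.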

Hope this helps and works best !!!

Clap if you like this !!
