ReadWriteMany Persistent Volumes in Google Kubernetes Engine

Sushil Kumar
Jul 20, 2019


Persistent Volumes in GKE are backed by GCP Persistent Disks (both SSDs and spinning disks). The problem with these disks is that they only support the ReadWriteOnce (RWO) (the volume can be mounted as read-write by a single node) and ReadOnlyMany (ROX) (the volume can be mounted read-only by many nodes) access modes. If you try to create a ReadWriteMany (RWX) (the volume can be mounted as read-write by many nodes) PV backed by a Persistent Disk, it will never reach a successful state. Also, if a pod tries to attach a ReadWriteOnce volume that is already mounted on another node, you’ll get the following error.

FailedMount Failed to attach volume "pv0001" on node "xyz" with: googleapi: Error 400: The disk resource 'abc' is already being used by 'xyz'

Rightfully so, because a GCP Persistent Disk can only be attached to one node in read-write mode. To address this, Google offers Cloud Filestore, which is GCP’s NAS offering. You can mount Filestore on Compute Engine and Kubernetes Engine instances. However, Filestore is designed with large file storage systems in mind and has a minimum capacity of 1 TB, which is expensive for small use cases.

One really inexpensive way to solve this is to set up an NFS server in your cluster backed by a ReadWriteOnce PV, and then create NFS-based PVs (which support ReadWriteMany) using this NFS server. In this post I’m going to walk you through how to do that.

Let’s get started.

Provision GCP Persistent Disk

gcloud compute disks create --size=2GB --zone=europe-west1-b nfs-disk

I’m using the gcloud command line here; you can also use the Web Console or Terraform to provision the disk.

GCP Persistent Disk

Set up an NFS Server in GKE

This section assumes you have a running GKE cluster and kubectl installed with your cluster credentials set up.

gcloud container clusters get-credentials standard-cluster-1 --zone europe-west1-b --project just-landing-231706
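
Here is a minimal sketch of what nfs-server-deployment.yaml could look like. The image, the port numbers and the /exports path are assumptions taken from the stock Kubernetes NFS server example, and the pdName matches the nfs-disk we just created; adjust these for your own setup.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        # Stock NFS server image from the Kubernetes examples repo
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        # Directory exported by the NFS server, backed by the persistent disk
        - name: nfs-disk
          mountPath: /exports
      volumes:
      # The ReadWriteOnce GCP persistent disk created in the previous step
      - name: nfs-disk
        gcePersistentDisk:
          pdName: nfs-disk
          fsType: ext4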

Add this resource to your cluster using the following command.

kubectl create -f nfs-server-deployment.yaml

You can check that it is deployed successfully by making sure the pods are running fine.

kubectl get pods

You can describe the pod and check whether the volume is correctly mounted.

kubectl describe pod nfs-server-6d99db46c8-f6bb2

You’ll see a section like this.

Volume attached to Pod

Now, to make this NFS server accessible at a fixed IP/DNS name even across pod restarts, we need to create a ClusterIP service.
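
Here is a minimal sketch of what nfs-clusterip-service.yaml could look like; the ports mirror the ones assumed in the deployment sketch above.

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  # ClusterIP is the default service type, so this line is optional
  type: ClusterIP
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    # Matches the label on the NFS server pods from the deployment above
    role: nfs-server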

Create this resource as well.

kubectl create -f nfs-clusterip-service.yaml

Check if the service got created.

kubectl get svc

You’ll see something like the following screenshot.

ClusterIP Service

You’ll see the default Kubernetes ClusterIP service there as well. Now your NFS server pods are accessible either at the IP 10.0.13.140 (note yours from the service output) or via the name nfs-server.default.svc.cluster.local. By default, every service is addressable via the name <service-name>.<namespace>.svc.cluster.local.

Create NFS backed PV and PVC

Now that we have set up the NFS server and its associated service, we can create a PersistentVolume backed by this NFS server.
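
Here is a minimal sketch of what nfs-pv-pvc.yaml could look like. The PV name nfs-pv matches the output below, while the claim name nfs-pvc and the root export path are assumptions; make sure the path matches whatever your NFS server exports.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    # Smaller than the 2GB disk backing the NFS server
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # The ClusterIP of the nfs-server service works here too
    server: nfs-server.default.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  # Empty storageClassName so the claim binds to the manually created PV
  # instead of triggering dynamic provisioning
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi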

As you can see, although my persistent disk is 2GB, I have used only 1GB in my PV. You can make your PVs as big as you want; just make sure you do not exceed the size of the original disk that you created.

Go ahead and create this resource as well.

kubectl create -f nfs-pv-pvc.yaml

NFS backed PV and PVC Set 1

You can see that the PVC status is Bound and the volume it is bound to is nfs-pv. There is a one-to-one mapping between a PV and a PVC. Once a PV is bound to a PVC, that PV can’t be used to serve any other claim.

Now let’s create another set of PV and PVC using the same NFS server and then mount both of them in a pod.
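
The second set is identical to the first except for the names; nfs-pv-2 and nfs-pvc-2 below are just placeholder names for illustration.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-2   # placeholder name for the second PV
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-2   # placeholder name for the second PVC
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi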

NFS backed PV and PVC Set 2

As you can see, we created 2 PVs and in turn 2 PVCs backed by the same NFS server. We can create as many PVs and PVCs as we want backed by the same NFS server, as long as we do not exceed the size of the original disk attached to the NFS server.

Mount the PVC in a Pod

Now let us create a Pod and attach both of our PVCs to it.

We’ll use the busybox image, which is kind of a hello world image for Docker, in our example.
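
Here is a minimal sketch of what pod-nfs-pv.yaml could look like. The pod name, the mount paths and the short ls command are illustrative; the claim names must match the PVCs created above.

apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs-pv   # placeholder name, taken from the manifest file name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # A short command, so the pod ends up in the Completed state
    command: ["sh", "-c", "ls /mnt/nfs-1 /mnt/nfs-2"]
    volumeMounts:
    - name: nfs-vol-1
      mountPath: /mnt/nfs-1
    - name: nfs-vol-2
      mountPath: /mnt/nfs-2
  volumes:
  # Both volumes point to the NFS-backed PVCs created earlier
  - name: nfs-vol-1
    persistentVolumeClaim:
      claimName: nfs-pvc
  - name: nfs-vol-2
    persistentVolumeClaim:
      claimName: nfs-pvc-2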

Create the pod.

kubectl create -f pod-nfs-pv.yaml

Now let’s inspect the pod and see whether both of our volumes are attached correctly. Note that the pod will be in the Completed state, as the busybox image does not run any long-running application.

Both NFS volumes attached

And there you have it. A simple way to have ReadWriteMany volumes in your GKE cluster.

In case you find any bug in the code, do let me know in the comments.

Till then happy coding :)
