NFS is a 36-year-old filesystem, but it is still extremely popular, even in this day and age of Cloud-Native-Everything. It’s simple, reliable and extremely useful when you need an RWX (ReadWriteMany) filesystem for your Pods. Microsoft Azure recently announced preview support for the latest incarnation of the protocol, NFSv4, for Azure Files shared volumes, and I wanted to see how easy it would be to use it with Azure Kubernetes Service.
Now we need to create an NFS file share and a Private Endpoint to reach the Storage Account. Since I’m not entirely sure how to do this reliably via the CLI (I believe NFSv4 shares are not supported in the CLI yet), I resorted to using the trusty Azure Portal; follow this guide for the share and this one for the Private Endpoint in the storage subnet.
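For reference, newer versions of the Azure CLI do expose an --enabled-protocols flag on az storage share-rm create, so a CLI-based sketch might look like the following (the share name myshare is hypothetical; the storage account name nfs4aks matches the DNS name used later in this article):

```shell
# Hedged sketch: create an NFS file share via the CLI, assuming your
# az version supports --enabled-protocols. The share name "myshare"
# is illustrative; $RG comes from earlier in the walkthrough.
az storage share-rm create \
  -g $RG \
  --storage-account nfs4aks \
  --name myshare \
  --quota 1024 \
  --enabled-protocols NFS
```

If your CLI version rejects the flag, the Portal route described above remains the safe option.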
Note that the chosen capacity for the NFS share also influences its performance; I ran some quick performance tests at the end of this article.
You’ll notice that the Private Endpoint also creates a Private DNS zone in your resource group; we need to add the peered AKS VNet to the list of VNets that can resolve DNS names in that zone, so the AKS nodes can mount the NFS share using its DNS name (in our case, nfs4aks.privatelink.file.core.windows.net). The command is:
az network private-dns link vnet create -n dnslink \
-g $RG -v aksnet -z privatelink.file.core.windows.net -e false
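To verify the link works, you can resolve the storage account’s public FQDN from inside the cluster; it should CNAME into the privatelink zone and return the Private Endpoint’s private IP (the pod name and busybox image here are illustrative):

```shell
# Quick check (illustrative): resolve the storage account FQDN from a
# throwaway pod running inside the AKS cluster.
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup nfs4aks.file.core.windows.net
```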
Azure Kubernetes Service
Lastly, let’s create a simple AKS cluster in the peered VNET:
az aks create \
-g $RG \
-s Standard_B4ms -c 1 -n nfs4aks \
-l $REGION \
--vnet-subnet-id /subscriptions/$SUBSCRIPTION/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/aksnet/subnets/aks
az aks get-credentials -g $RG -n nfs4aks
The final architecture should look like this:
Time to provision some persistent storage. Here we could have used the azure-file CSI driver, but I wanted to keep things very simple and use the nfs-client provisioner deployed via Helm (set nfs.path to the name of the NFS share you created before):
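The nfs-client provisioner chart now lives in the nfs-subdir-external-provisioner repository, so the installation looks roughly like this sketch. The share name myshare is hypothetical (Azure Files exports NFS shares at /&lt;storage-account&gt;/&lt;share-name&gt;), and the release name nfs-client is my choice:

```shell
# Hedged sketch: deploy the NFS client provisioner via Helm.
# nfs.server is the storage account FQDN (resolved via the private zone);
# nfs.path follows Azure's /<storage-account>/<share-name> export layout,
# with "myshare" standing in for the share you created earlier.
helm repo add nfs-subdir \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-client nfs-subdir/nfs-subdir-external-provisioner \
  --set nfs.server=nfs4aks.file.core.windows.net \
  --set nfs.path=/nfs4aks/myshare
```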
Note that NFS doesn’t do any authentication and relies entirely on network security (that is, the Private Endpoint being reachable only from the peered VNet).
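To exercise the provisioner, a minimal RWX PersistentVolumeClaim could look like this sketch (the storage class name nfs-client is the chart’s default; the claim name is illustrative):

```shell
# Illustrative: request an RWX volume from the NFS provisioner's
# default storage class.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
EOF
```

Any number of Pods can then mount this claim simultaneously, which is the whole point of an RWX filesystem.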
You can look at the logs of the dbench pod to get the summary of the benchmark:
kubectl logs -l job-name=dbench -f
After each test you’ll need to clean up, like so:
kubectl delete job dbench
kubectl delete pvc dbench
As you can see, IOPS seem to scale linearly with the provisioned capacity (while bandwidth appears to be capped by network throughput):
I hope this helps and let me know if you plan to use NFSv4!