Cloud Native DevOps 10A: Kubernetes NFS-Client Provisioner

Persistent Volumes: NFS

Jody Wan
7 min read · Jul 14, 2020

Updated on 09 August 2020:
Storage Performance Benchmarking with FIO-PLOT

Understanding Kubernetes Persistent Volumes

In this post, we’ll see how to dynamically provision NFS volumes using the Kubernetes NFS-Client Provisioner and take a look at the advantages of using it.

NFS can support:

  • ReadWriteOnce — the volume can be mounted as read-write by a single node
  • ReadOnlyMany — the volume can be mounted read-only by many nodes
  • ReadWriteMany — the volume can be mounted as read-write by many nodes
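
For reference, a statically defined NFS PersistentVolume that allows ReadWriteMany might look like the minimal sketch below (the server address, export path and capacity are placeholder values from my home lab; with the NFS-Client Provisioner described in this post, objects like this are created for you automatically):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # many nodes can mount this volume read-write
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.12.102   # NFS server address
    path: /nerv-system/k8s   # exported path on the server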

ReadWriteMany Horizontal Scale-out Architecture

Strategies to Manage Persistent Data (Static vs Dynamic Content)

Static content consists of files that don’t change based on user input, such as source code, images and so on.

Dynamic content requires processing by an application server and is typically produced through backend services in response to user input, for example uploaded files.

What Are Suitable Use Cases?

Static: build a Docker image that contains the static content. The primary benefit over NFS is performance: local disks (SSD/NVMe) offer higher IOPS, higher throughput and lower latency than remote storage systems.

Bear in mind that when a container is deleted, any dynamic content that was not written to a mounted volume is lost.

Dynamic: many nodes can read and write to the NFS storage volume simultaneously. The disadvantage of using NFS as shared storage is that it is slower and more limited than local disk.
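
As a sketch of the dynamic case, a Deployment with several replicas can share one ReadWriteMany claim; the names, image and mount path below are just examples:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: upload-handler
spec:
  replicas: 3                  # replicas may land on different nodes
  selector:
    matchLabels:
      app: upload-handler
  template:
    metadata:
      labels:
        app: upload-handler
    spec:
      containers:
        - name: app
          image: nginx:1.19
          volumeMounts:
            - name: uploads
              mountPath: /usr/share/nginx/html/uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: uploads-pvc   # a ReadWriteMany claim, e.g. backed by NFS

Every replica, regardless of which node it runs on, reads and writes the same files through the shared volume.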

Prerequisites

To install the nfs-client-provisioner, I assume that you already have an NFS server and know the correct ACLs, IP/hostname and exported path. Alternatively, follow my home lab example; I'm using the following setup:

Testing Environment

NFS
QNAP TS-451 8GB RAM
4 x Samsung 860 EVO SSD 250GB -> 550/520 MB/s (sequential R/W speed)

LOCAL STORAGE
ADATA SX6000 Lite NVMe -> 1800/1200 MB/s (sequential R/W speed)

Configuring Linux NFS Storage with QNAP (Optional)

How to enable and set up host access for NFS connections

Configure the NFS client mount from the Linux console

Use this procedure to manually mount NFS on a Linux client, or use ansible-role-nfs.

(1) Install the NFS client.

$ sudo yum install nfs-utils

(2) Enable and start the rpcbind and nfs services:

$ sudo systemctl enable rpcbind
$ sudo systemctl enable nfs
$ sudo systemctl start rpcbind
$ sudo systemctl start nfs
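
To confirm both services came up, check their status:

$ systemctl status rpcbind nfs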

(3) Edit /etc/fstab

adding an entry:

<server>:<remote/export> <local/directory> <nfs-type> <options> 0 0

Example:

192.168.12.102:/nerv-system/k8s /data/nas-02/nfs nfs vers=4.1,rsize=32768,wsize=32768,noatime,intr

(4) Mount everything in the updated /etc/fstab:

$ mount -va

(5) Test NFS access from the client and server.
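
For example, from the client you can list the server's exports, confirm the mount is active, and try a test write (adjust the server address and mount point to your own setup):

$ showmount -e 192.168.12.102
$ df -hT /data/nas-02/nfs
$ touch /data/nas-02/nfs/test-file && rm /data/nas-02/nfs/test-file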

Set up an NFS client provisioner in Rancher

Launching a Catalog App

From the Global view, open the main navigation bar and choose Tools > Catalogs.

Enable/Add Catalog

From the Global view, open the Cluster that you want to deploy an app to.

From the main navigation bar, choose Apps.

Click Launch

Search “nfs-client”

Configuration Options

Click PREVIEW, select the template file “nfs-client-provisioner/values.yaml”, copy the code and paste it into any text editor.

Modify the code; here is an example of my modified values:

nfs:
  server: 192.168.12.101
  path: /k8s-dev
  mountOptions: [ "vers=4.1", "rsize=32768", "wsize=32768", "noatime", "intr" ]

storageClass:
  # Set a StorageClass name
  # Ignored if storageClass.create is false
  name: nfs-01

resources:
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 512Mi

Then save it as values.yaml, go back to the Rancher UI, choose Edit as YAML, click Read from a file, upload your values.yaml file and launch it. Simple!
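
If you prefer the Helm CLI over the Rancher catalog, the equivalent install is roughly the following, assuming the nfs-client-provisioner chart is available from a repository you have added (for example the old Helm stable repository):

$ helm install nfs-client-provisioner stable/nfs-client-provisioner -f values.yaml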

To test it, launch any app and select the nfs-01 storage class.
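
For example, a claim requesting the new storage class could look like this (the claim name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  storageClassName: nfs-01     # the StorageClass created by the provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Once the claim binds, the provisioner creates a corresponding subdirectory under the exported path on the NFS server.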

Storage Performance Benchmarking with FIO

What is FIO?

fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user. The typical use of fio is to write a job file matching the I/O load one wants to simulate.
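
As a minimal illustration (not the exact template that bench_fio uses below), a fio job file is a small INI-style description of the workload:

[global]
ioengine=libaio    ; asynchronous I/O engine
direct=1           ; bypass the page cache
directory=/mnt     ; where the test files are created
size=5g
runtime=60
time_based

[randread-4k]
rw=randread        ; random reads
bs=4k              ; 4 KiB block size
iodepth=16
numjobs=1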

To make graphs from fio benchmark data, I’ve used fio-plot, which is created by louwrentius. To perform benchmarking analysis on Kubernetes/Docker storage, I have packaged the louwrentius/fio-plot project as a Docker image and Helm chart.

To generate charts from fio-plot against Kubernetes storage, follow the process below:

Install helm-fio-plot on Kubernetes with Helm 3

  1. git clone https://github.com/jodykpw/helm-fio-plot.git
  2. Modify values.yaml (see the sketch after this list):
  3. Change the service type to NodePort, exposing it on a node port.
  4. Uncomment and change the storageClass name to <my-local-storage-storage-class-name>.
  5. Save and exit.
  6. Run:
  7. helm install fio-plot ./
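
The exact keys depend on the chart, but the edits from steps 3 and 4 typically end up looking something like this in values.yaml (the field names here follow common chart conventions and may differ in helm-fio-plot):

service:
  type: NodePort
persistence:
  storageClass: <my-local-storage-storage-class-name>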

Generate fio storage benchmark data

kubectl get pods
kubectl exec -it fio-plot-874b5b479-n2cqq -- /bin/bash

Executing a benchmark script in a container

  • Test random reads 4k
/app/benchmark_script/bench_fio -j /app/benchmark_script/fio-job-template.fio -d /mnt -t directory -s 5g --mode randread -o /mnt/benchmarks --iodepth 1 2 4 8 16 32 64 --numjobs 1  --block-size 4k
  • Test random reads 32k
/app/benchmark_script/bench_fio -j /app/benchmark_script/fio-job-template.fio -d /mnt -t directory -s 5g --mode randread -o /mnt/benchmarks --iodepth 1 2 4 8 16 32 64 --numjobs 1  --block-size 32k
  • Test random reads 64k
/app/benchmark_script/bench_fio -j /app/benchmark_script/fio-job-template.fio -d /mnt -t directory -s 5g --mode randread -o /mnt/benchmarks --iodepth 1 2 4 8 16 32 64 --numjobs 1  --block-size 64k
  • Test random writes 4k
/app/benchmark_script/bench_fio -j /app/benchmark_script/fio-job-template.fio -d /mnt -t directory -s 5g --mode randwrite -o /mnt/benchmarks --iodepth 1 2 4 8 16 32 64 --numjobs 1  --block-size 4k
  • Test random writes 32k
/app/benchmark_script/bench_fio -j /app/benchmark_script/fio-job-template.fio -d /mnt -t directory -s 5g --mode randwrite -o /mnt/benchmarks --iodepth 1 2 4 8 16 32 64 --numjobs 1  --block-size 32k
  • Test random writes 64k
/app/benchmark_script/bench_fio -j /app/benchmark_script/fio-job-template.fio -d /mnt -t directory -s 5g --mode randwrite -o /mnt/benchmarks --iodepth 1 2 4 8 16 32 64 --numjobs 1  --block-size 64k

Creating a 2D Bar Chart based on randread data and numjobs = 1.

Execute the following commands in the container; to output the charts to the web-served directory, first navigate to this folder:

cd /mnt/benchmarks/mnt

Example Usage

/app/fio_plot/fio_plot -i <benchmark_data_folder> -T "Title" -s https://louwrentius.com -l -n 1 -r randread

Full Usage

To view the graphs in a web browser:

http://<ip>:<nodeport>/benchmarks/mnt/

To find out the nodeport:

kubectl get svc

Example benchmark results

  • Random read 4k
/app/fio_plot/fio_plot -i /mnt/benchmarks/mnt/4k -T "Samsung SSD 860 250GB BLOCK_SIZE 4K On K8S QNAP NFS" -s https://louwrentius.com -l -n 1 -r randread
  • Random read 32k
/app/fio_plot/fio_plot -i /mnt/benchmarks/mnt/32k -T "Samsung SSD 860 250GB BLOCK_SIZE 32K On K8S QNAP NFS" -s https://louwrentius.com -l -n 1 -r randread
  • Random read 64k
/app/fio_plot/fio_plot -i /mnt/benchmarks/mnt/64k -T "Samsung SSD 860 250GB BLOCK_SIZE 64K On K8S QNAP NFS" -s https://louwrentius.com -l -n 1 -r randread
  • Random write 4k
/app/fio_plot/fio_plot -i /mnt/benchmarks/mnt/4k -T "Samsung SSD 860 250GB BLOCK_SIZE 4K On K8S QNAP NFS" -s https://louwrentius.com -l -n 1 -r randwrite
  • Random write 32k
/app/fio_plot/fio_plot -i /mnt/benchmarks/mnt/32k -T "Samsung SSD 860 250GB BLOCK_SIZE 32K On K8S QNAP NFS" -s https://louwrentius.com -l -n 1 -r randwrite
  • Random write 64k
/app/fio_plot/fio_plot -i /mnt/benchmarks/mnt/64k -T "Samsung SSD 860 250GB BLOCK_SIZE 64K On K8S QNAP NFS" -s https://louwrentius.com -l -n 1 -r randwrite

Benchmark Result

Unfortunately, the bottleneck in my network environment is the 1 Gigabit LAN, so I couldn’t make the best use of the SSDs.

There are alternative persistent volume providers such as Rook, OpenEBS, AWS Elastic File System and many more. Stay tuned for how to set them up on a Kubernetes cluster :)
