Auto Provisioning NFS as a Persistent Volume in Kubernetes using Storage Classes, Ansible, and Terraform
Introduction
The Kubernetes ecosystem is highly dynamic and involves many moving components that interact with each other. Storage is managed as a separate concern, through the concepts of Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). In this guide, we’ll explore how to auto-provision NFS (Network File System) as a Persistent Volume in Kubernetes using Storage Classes. For the automation, we’ll use Ansible for the NFS server setup and Terraform along with Helm for the NFS client setup.
Prerequisites
- A running Kubernetes cluster.
- Helm installed on your local machine or within the cluster.
- Ansible installed on your local machine.
- Terraform installed on your local machine.
Setting Up an NFS Server with Ansible
1. Create an Ansible playbook for the NFS server setup. Here’s an example of what this might look like in YAML format:
---
- hosts: nfs-server
  become: yes
  tasks:
    - name: Update the apt package cache
      apt:
        update_cache: yes

    - name: Install NFS server package
      apt:
        name: nfs-kernel-server
        state: present

    - name: Create a directory to share
      file:
        path: /var/nfs_share
        state: directory
        mode: '0777'

    - name: Modify ownership & permissions of the shared directory
      file:
        path: /var/nfs_share
        owner: nobody
        group: nogroup
        mode: '0777'

    - name: Add the directory to the NFS exports file
      lineinfile:
        path: /etc/exports
        line: '/var/nfs_share *(rw,sync,no_subtree_check,no_root_squash)'

    # The command module does not support shell operators like &&,
    # so exporting and restarting are split into two tasks.
    - name: Export the shared directory
      command: exportfs -a

    - name: Restart the NFS service
      service:
        name: nfs-kernel-server
        state: restarted
2. Run the playbook with Ansible. Here’s an example of what this might look like in your shell:
ansible-playbook -i inventory.ini nfs_server.yaml
This playbook updates the apt cache, installs the NFS server package on the targeted host, creates and configures the shared directory, exports it, and restarts the NFS service.
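The playbook targets a host group named nfs-server, so your inventory file needs to define it. Here’s a minimal sketch of inventory.ini; the address and SSH user are placeholders you should replace with your own:
[nfs-server]
; placeholder entry: replace the IP and user with your NFS server's address and SSH user
192.0.2.10 ansible_user=ubuntu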
Setting Up the NFS Client with Terraform and Helm
Terraform can be used to deploy Helm charts on a Kubernetes cluster. Here is an example of how you can define a Terraform configuration for installing the NFS client using the nfs-subdir-external-provisioner Helm chart.
1. Configure the Helm provider for Terraform:
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
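Optionally, you can pin the provider so Terraform fetches a known-good version. The version constraint below is only an illustrative assumption; any 2.x release of the provider works with the configuration in this guide:
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.12" # illustrative version constraint
    }
  }
}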
2. Install the NFS client provisioner. Version 2.0 of the Helm provider removed the helm_repository data source, so the chart repository URL is passed directly to helm_release instead. Replace the <nfs-server> and <nfs-path> placeholders with your NFS server IP and exported NFS directory path:
resource "helm_release" "nfs_client_provisioner" {
name = "nfs-client-provisioner"
repository = data.helm_repository.nfs.metadata[0].name
chart = "nfs-subdir-external-provisioner"
set {
name = "nfs.server"
value = "<nfs-server>"
}
set {
name = "nfs.path"
value = "<nfs-path>"
}
}
3. Apply the Terraform configuration. Here’s an example of what this might look like in your shell:
terraform init
terraform apply
Terraform will initialize the Helm provider and install the NFS client provisioner from the nfs-subdir-external-provisioner chart repository onto your Kubernetes cluster.
Verifying the Installation
1. Check if the NFS client provisioner pod is running:
kubectl get pods
You should see a pod named nfs-client-provisioner-... in the Running state.
2. Check if the storage class has been created:
kubectl get sc
You should see a storage class named nfs-client (or the name you provided during installation) in the list.
Now, when you create a Persistent Volume Claim with the storage class nfs-client, Kubernetes will dynamically provision a Persistent Volume using the NFS server and path you've provided.
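For example, a claim like the following would be bound to a dynamically created NFS-backed volume; the claim name and size here are purely illustrative:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim            # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteMany           # NFS allows shared read-write access across pods
  storageClassName: nfs-client
  resources:
    requests:
      storage: 100Mi          # illustrative size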
Installing PostgreSQL with Our Provisioner
First, we need to declare the helm_release for PostgreSQL:
data "helm_repository" "bitnami" {
name = "bitnami"
url = "https://charts.bitnami.com/bitnami"
}
resource "helm_release" "postgresql" {
name = "postgresql"
chart = "bitnami/postgresql"
repository = data.helm_repository.bitnami.metadata[0].name
set {
name = "persistence.storageClass"
value = "nfs-client"
}
set {
name = "persistence.size"
value = "1Gi"
}
}
This Helm chart deployment creates a PostgreSQL database that uses a PersistentVolumeClaim for data storage. The PVC uses the nfs-client storage class, which means it is backed by the NFS server and path we defined in our previous Terraform configuration.
Once you’ve added the above to your Terraform file, you can apply the configuration:
terraform apply
After the Terraform run has completed, verify that your PostgreSQL workload and service are running. Note that the Bitnami chart deploys PostgreSQL as a StatefulSet rather than a Deployment:
kubectl get statefulsets
kubectl get svc
You should see a StatefulSet and a service for PostgreSQL.
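To connect to the database you’ll need the password generated at install time. A minimal sketch of retrieving it, assuming the release’s default secret name postgresql and the postgres-password key used by recent Bitnami chart versions (older versions stored it under postgresql-password):
kubectl get secret postgresql -o jsonpath="{.data.postgres-password}" | base64 -d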
To further verify that the PostgreSQL data is indeed stored on the NFS server, you can check the pod’s mount points:
kubectl describe pod <postgresql-pod-name>
You’ll see a mount for /bitnami/postgresql, and its PersistentVolumeClaim should be using the NFS storage class.
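You can also confirm on the server side that the data landed on the NFS export. The nfs-subdir-external-provisioner creates one subdirectory per claim, named after the namespace, PVC, and PV, so listing the share on the NFS server should show a directory for the PostgreSQL claim:
# Run on the NFS server itself
ls /var/nfs_share
# Expect a subdirectory named like <namespace>-<pvc-name>-<pv-name>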
Conclusion
In this guide, we’ve shown how to auto-provision NFS as a Persistent Volume in Kubernetes using Storage Classes, with Ansible handling the NFS server setup and Terraform with Helm deploying the NFS client provisioner, and we put the result to work backing a PostgreSQL installation.
Author: Moeid Heidari
Linkedin: linkedin.com/in/moeidheidari