Volume Snapshots in GKE Using CRDs

Bhakareashutosh
Google Cloud - Community
4 min read · Feb 24, 2024

When working with Persistent Volume Claims (PVCs) and dynamic volume provisioning in GKE, we sometimes need to back up a PVC. This can be done at the persistent disk (GCP storage) level, but that requires a GCP IAM role and access to the GCP Console or SDK.

Alternatively, we can take snapshots at the Kubernetes level itself in two ways:

  1. Container storage interface (CSI) volume snapshot
  2. Backup for GKE

In this article we are going to explore the CSI way of taking snapshots and restoring them. (CSI drivers reference: https://kubernetes-csi.github.io/docs/drivers.html)

In GKE the VolumeSnapshot CRDs come preinstalled, which can be verified with:

kubectl get crd

While taking a snapshot we should understand three objects:

  1. VolumeSnapshotClass: points to the CSI driver and defines a deletion policy.
  2. VolumeSnapshot: points to the PVC we are snapshotting.
  3. VolumeSnapshotContents: once the VolumeSnapshot is created, GKE creates a VolumeSnapshotContents object representing the actual snapshot on the storage backend.

Let's explore it practically. All YAML files can be found here -> link

kubectl apply -f mypvc.yml

Because we are using dynamic provisioning, pvc1 will stay in the Pending state: the standard-rwo storage class has a volumeBindingMode of "WaitForFirstConsumer". The deployment will map the /var/lib/mysql directory to pvc1.
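A minimal sketch of what mypvc.yml might contain (the 10Gi size is an assumption; pvc1 and standard-rwo come from the article):

```yaml
# mypvc.yml -- PVC using the standard-rwo storage class (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo   # volumeBindingMode: WaitForFirstConsumer
  resources:
    requests:
      storage: 10Gi                # size is an assumption; adjust to your needs
```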

Let's create the MySQL deployment:

kubectl apply -f mysql.yml
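The relevant portion of mysql.yml might look like the sketch below — the image and environment values are assumptions; the /var/lib/mysql mount and the pvc1 claim come from the article:

```yaml
# mysql.yml -- Deployment excerpt (sketch; image/env are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: wordpress-mysql
  template:
    metadata:
      labels:
        app: wordpress-mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme        # use a Secret in real deployments
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql   # MySQL data directory, backed by pvc1
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: pvc1
```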

We can see that the PV got created automatically because of our storage class — this is dynamic volume provisioning in GKE. Now let's create some records in MySQL. For that we exec into the pod and run CREATE DATABASE, CREATE TABLE, and INSERT queries.

kubectl exec -it wordpress-mysql-847dbc7dfc-2t7xc -- bash
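Inside the pod, after connecting with the mysql client, the test data can be created with queries along these lines (the table and column names are illustrative; the database name gke comes from the restore test later in the article):

```sql
-- run via: mysql -u root -p (inside the pod)
CREATE DATABASE gke;
USE gke;
CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50));
INSERT INTO users VALUES (1, 'ashutosh');
```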

We create a snapshot class pointing to our CSI driver; once that is created, we create the actual snapshot, which points to our pvc1:

kubectl apply -f snapclass.yml
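A minimal snapclass.yml might look like this — the class name is an assumption; pd.csi.storage.gke.io is GKE's Compute Engine persistent disk CSI driver:

```yaml
# snapclass.yml -- VolumeSnapshotClass pointing at the GKE PD CSI driver (sketch)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapclass
driver: pd.csi.storage.gke.io   # CSI driver that will cut the snapshots
deletionPolicy: Delete          # delete the backing snapshot when the CR is deleted
```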

kubectl apply -f snap.yml
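And snap.yml ties the snapshot to pvc1 — the snapshot name and the class name my-snapclass are assumptions:

```yaml
# snap.yml -- VolumeSnapshot of pvc1 (sketch; names are assumptions)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: my-snapclass
  source:
    persistentVolumeClaimName: pvc1   # the PVC we are backing up
```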

This creates the VolumeSnapshotContents object, which is nothing but our backup of pvc1.

To test the VolumeSnapshot, we will delete the gke database with the drop database command — drop database gke; — and then restore it using the snapshot we took. From the snapshot we will create a new PVC and map it to our deployment.

To restore the snapshot, we create a new PVC:

kubectl apply -f restore.yml
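The key part of restore.yml is the dataSource field, which points the new PVC at the snapshot — the snapshot name my-snapshot and the 10Gi size are assumptions:

```yaml
# restore.yml -- new PVC provisioned from the snapshot (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore
spec:
  dataSource:
    name: my-snapshot             # the VolumeSnapshot taken earlier (name assumed)
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 10Gi               # must be >= the source PVC's size
```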

Once the new PVC called restore is created, we point our MySQL deployment at claimName: restore and update the deployment with:
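The change amounts to editing the volumes section of the Deployment, along these lines:

```yaml
# mysql.yml -- updated volumes section (fragment)
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: restore    # was pvc1; now the PVC restored from the snapshot
```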

kubectl apply -f mysql.yml

Once the container is in the Running state, we can see both PVCs with kubectl get pvc.

If we exec into the new pod, we should be able to see our deleted database back!

In this way we were able to successfully test volume snapshots at the GKE level.
