OKE Disaster Recovery: Notes on Velero and OKE — part 3: Stateful Pods with Persistent Volumes and File Storage

Fernando Harris
Published in Oracle Developers
7 min read · Mar 15, 2024

Welcome to the third and final part of this series. In part one we learned how to use Velero for a simple backup of a stateless application running in an OKE Kubernetes cluster in Frankfurt to object storage in Frankfurt, and then restore the application in an OKE Kubernetes cluster in London. The second part took us on a journey to achieve the same result, but with a stateful application that depends on a persistent Block Volume. The third part, which you are about to read, continues from where part two left off. However, this time, instead of depending on Block Storage, the application will depend on File Storage.

Before starting to play with File Storage and OKE, we recommend visiting the OCI File Storage documentation here to learn about the network configuration and the specific policies you might need to set up in advance.
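
As a quick orientation (confirm the exact requirements in the documentation above, as they may change): NFS traffic between the worker nodes and the mount target must be allowed, which typically means security rules along these lines on the subnets involved:

# Typical rules between the worker node and mount target subnets,
# in both directions (ingress and egress):
TCP ports 111, 2048, 2049, 2050
UDP ports 111, 2048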

Let’s start. Confirm that your kubectl client is targeting Frankfurt.

kubectl get nodes
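
If the nodes listed are not the Frankfurt ones, switch contexts first. The context name below is a placeholder; pick the one from your own kubeconfig that points at the Frankfurt cluster:

kubectl config get-contexts
kubectl config use-context <your-frankfurt-context>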

The first thing you will need to do is create a storage class in Region 1, Frankfurt. Create a file called fss-dyn-storage-frankfurt.yaml with the manifest below. For simplicity, the File Storage mount target will be placed in the same subnet as the worker nodes (learn more here), which is why the value of mountTargetSubnetOcid is the OCID of the Kubernetes cluster's worker node subnet. For production, check what is recommended here.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fss-dyn-storage
provisioner: fss.csi.oraclecloud.com
parameters:
  availabilityDomain: EU-FRANKFURT-1-AD-3
  mountTargetSubnetOcid: ocid1.subnet.oc1.eu-frankfurt-1.aaaaa...
  encryptInTransit: "false"

Run the following command to create the storage class:

kubectl apply -f fss-dyn-storage-frankfurt.yaml
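
You can verify that the storage class now exists and, if you are unsure which availability domain name to put in the manifest, list the ones visible to your tenancy (the compartment OCID is a placeholder):

kubectl get storageclass fss-dyn-storage
oci iam availability-domain list --compartment-id <your-compartment-ocid>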

Let’s now create a namespace for the exercise:

kubectl create ns nginx-fss

Once the namespace is created, create the file fss-dyn-claim.yaml with the PVC manifest content below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-dynamic-claim
  namespace: nginx-fss
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "fss-dyn-storage"
  resources:
    requests:
      storage: 50Gi

Run the command to apply and create it in the namespace nginx-fss:

kubectl apply -f fss-dyn-claim.yaml

After a few seconds, the result should be a PVC and a PV, created and bound. Run the following command to validate that:

kubectl get pvc -n nginx-fss

The fss-dynamic-claim PVC should have Status=Bound and be assigned to a csi-fss-(…) volume with the fss-dyn-storage storage class. You can also confirm this visually in the OCI File Storage console, where you can see the file system that was created, the mount target, and its IP within the worker node subnet.
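
If you prefer the CLI to the console, the following commands should list the file system and mount target that the CSI driver provisioned (the compartment OCID is a placeholder):

oci fs file-system list --compartment-id <your-compartment-ocid> \
  --availability-domain EU-FRANKFURT-1-AD-3
oci fs mount-target list --compartment-id <your-compartment-ocid> \
  --availability-domain EU-FRANKFURT-1-AD-3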

Next, let’s create the stateful application with a persistent-storage volume mounted at /usr/share/nginx/html. Create a file called fss-pod-nginx.yaml with the pod manifest below:

apiVersion: v1
kind: Pod
metadata:
  name: fss-dynamic-app
  namespace: nginx-fss
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
      volumeMounts:
        - name: persistent-storage
          mountPath: /usr/share/nginx/html
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: fss-dynamic-claim

Then, run the following command:

kubectl apply -f fss-pod-nginx.yaml

Wait for the pod to be running without errors:

kubectl get pod -n nginx-fss
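
Alternatively, instead of polling, you can block until the pod reports Ready:

kubectl -n nginx-fss wait --for=condition=Ready \
  pod/fss-dynamic-app --timeout=180s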

Once it is running, use the following command to open a terminal in the fss-dynamic-app pod container:

kubectl -n nginx-fss exec -it fss-dynamic-app -- bash

Once inside the container, run the following command to create a new file called Hello.World inside the folder mounted on the volume:

echo "Hello World" > /usr/share/nginx/html/Hello.World;

To prepare for the backup, we first need to annotate the pod:

kubectl -n nginx-fss annotate pod/fss-dynamic-app \
backup.velero.io/backup-volumes=persistent-storage

Here, persistent-storage is the name of the volume to back up.
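
You can double-check that the annotation landed on the pod:

kubectl -n nginx-fss describe pod fss-dynamic-app | grep velero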

All set to start the backup creation:

./velero backup create nginx-backup-fss \
--include-namespaces nginx-fss --default-volumes-to-fs-backup

Describe the backup to confirm that it completed successfully:

./velero backup describe nginx-backup-fss
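
For more detail, including the per-volume file-system backups, add the --details flag, or inspect the backup logs:

./velero backup describe nginx-backup-fss --details
./velero backup logs nginx-backup-fss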

If you wish, also take a look at the OCI Object Storage console to visually confirm that the backup was created successfully. You should be able to see an nginx-backup-fss folder inside the backups folder, and the namespace volume data in the nginx-fss folder inside the kopia folder.

The last thing to do in Frankfurt is to copy the object storage structure and content to London (if you have any doubts about this process, revisit part 1 of this series):

./oci-copy-objects-to-region.sh
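
If you no longer have the script from part 1, below is a minimal sketch of the idea using the OCI CLI's server-side cross-region copy. The bucket names are assumptions, jq must be installed, and oci os object copy is asynchronous, so give the resulting work requests time to finish before moving on:

#!/bin/bash
# Sketch: copy every object from the Frankfurt Velero bucket to London.
SRC_BUCKET="velero-frankfurt"   # assumption: your source bucket name
DST_BUCKET="velero-london"      # assumption: your destination bucket name
oci os object list --bucket-name "$SRC_BUCKET" --all --query 'data[].name' \
  | jq -r '.[]' | while read -r obj; do
    oci os object copy --bucket-name "$SRC_BUCKET" \
      --source-object-name "$obj" \
      --destination-region uk-london-1 \
      --destination-bucket "$DST_BUCKET"
done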

With the work done in Frankfurt, let's move to London.

First, confirm that the object storage structure from the bucket in Frankfurt was copied to London. You can do this visually by visiting the OCI Object Storage console and checking for the existence of the same folders: backups/nginx-backup-fss and kopia/nginx-fss.
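
The same check from the command line could look like this (the bucket name is an assumption):

oci os object list --bucket-name velero-london --region uk-london-1 \
  --prefix backups/nginx-backup-fss --query 'data[].name'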

Then, we need to replicate some of the same steps we did previously in Frankfurt.

Make sure your kubectl client is pointing to the London OKE cluster.

The first step is to create a File Storage storage class in the London cluster. Create a file named fss-dyn-storage-london.yaml with the content of the manifest below. Of course, you need to use the OCID of your own worker node subnet:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fss-dyn-storage
provisioner: fss.csi.oraclecloud.com
parameters:
  availabilityDomain: UK-LONDON-1-AD-1
  mountTargetSubnetOcid: ocid1.subnet.oc1.uk-london-1.aaaa...
  encryptInTransit: "false"

Run the command to create it:

kubectl apply -f fss-dyn-storage-london.yaml

Now, let's ask Velero to tell us which backups are available to restore in London:

./velero backup get

Of course, the one we are interested in is nginx-backup-fss, which we just copied from Frankfurt Object Storage into London Object Storage.

Let's try to restore it with the following command:

./velero restore create --from-backup nginx-backup-fss
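
Velero generates the restore name by appending a timestamp to the backup name; list the restores to pick up the generated name used in the next commands:

./velero restore get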

Describe it to validate that it completed:

./velero restore describe nginx-backup-fss-20231113120226

Checking the logs and searching for Persistent Volume-related information will show that all restore tasks were completed:

./velero restore logs nginx-backup-fss-20231113120226 | grep -in volume

And now, let's inspect our pod with the following command to prove that the application was restored with its volumes and content as they were created in Frankfurt:

kubectl -n nginx-fss exec -it fss-dynamic-app \
-- cat /usr/share/nginx/html/Hello.World
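
If everything worked, the command prints the Hello World content we created in Frankfurt, proving that the application, its Persistent Volume Claim, and the data stored on File Storage all survived the trip between regions.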
