Disaster Recovery — Notes on Velero and OKE, Part 2: Stateful Pods with Persistent Volumes and Block Volume

Fernando Harris · Oracle Developers · Jan 23, 2024

If you are new to Velero and OKE, please visit the first part of this blog here, which explains step by step how to set up Velero to back up a stateless application in OKE and covers the basics of Velero.

We will build from there and repeat the exercise, this time with a stateful application that depends on Persistent Volumes and Block Volume. This requires a small change to our Velero installation: to support volume backups, Velero needs a node agent installed in the cluster.

Confirm your kubectl client is targeting Frankfurt, our Region 1:

kubectl get nodes

You should see your nodes listed.
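If kubectl is pointing at a different cluster, switch contexts first (context names vary per setup; list yours with the first command, then pick your Frankfurt context):

kubectl config get-contexts
kubectl config use-context <your-frankfurt-context>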

Now, run the following command to reinstall Velero with the node agent in Region 1 (Frankfurt):

./velero install \
--provider aws \
--bucket bucket-velero \
--secret-file ./velero-credentials \
--plugins velero/velero-plugin-for-aws:v1.8.0 \
--use-volume-snapshots=false \
--backup-location-config region=eu-frankfurt-1,s3ForcePathStyle="true",\
s3Url=https://<<namespace>>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com \
--use-node-agent

You should now have a daemon set running in the velero namespace. Run the following command to validate that:

kubectl -n velero get daemonset/node-agent
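The output should look similar to the following (a three-node cluster is assumed here; your counts will match your node count):

NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-agent   3         3         3       3            3           <none>          1m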

If the READY and AVAILABLE counts match the number of nodes in your cluster, the Velero node-agent was successfully installed and is running as expected.

Velero should now be prepared to support the backup of volumes bound to pods.

Time to prepare our stateful application for backup. First, create the nginx-bv namespace with the following command:

kubectl create ns nginx-bv

Now, create a file called mynginxclaim.yaml with the mynginxclaim PVC manifest below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynginxclaim
  namespace: nginx-bv
spec:
  storageClassName: "oci-bv"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Run the following command to create the PVC in the namespace nginx-bv:

kubectl apply -f mynginxclaim.yaml
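If you query the claim at this point, its status will be Pending. To understand why, inspect the storage class binding mode (assuming the standard oci-bv storage class that ships with OKE):

kubectl get storageclass oci-bv -o jsonpath='{.volumeBindingMode}'

This prints WaitForFirstConsumer, meaning the block volume is only provisioned once a pod actually consumes the claim.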

This is why the PVC will remain in Pending status until our next step, which is to deploy a naked nginx pod that mounts a volume called data backed by the PersistentVolumeClaim mynginxclaim. Create a file called mynginxpod-bv.yaml with the content below:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: nginx-bv
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - name: http
          containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mynginxclaim

Run the following command to deploy the pod:

kubectl apply -f mynginxpod-bv.yaml

Confirm a Persistent Volume was created with the following command:

kubectl get pv

Or, if you prefer, confirm visually in the OCI Console.

Check that the PVC mynginxclaim status is now Bound and no longer Pending with the following command:

kubectl -n nginx-bv get pvc 
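You should see something similar to this (the volume name is generated and will differ):

NAME           STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mynginxclaim   Bound    csi-<generated-id>   50Gi       RWO            oci-bv         2m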

Check that the pod is running without errors and that the volume is properly mounted:

kubectl -n nginx-bv describe pod nginx

In the output, the volume data should appear under Mounts at /usr/share/nginx/html, and the events should include a successful volume attach (SuccessfulAttachVolume).

If everything is OK, run the following command to create a file called temp.txt, containing the text “Hello World”, inside the nginx container under /usr/share/nginx/html/. We expect this file to be persisted in the block volume:

kubectl -n nginx-bv exec nginx -- bash -c 'echo "Hello World" > \
/usr/share/nginx/html/temp.txt; cat /usr/share/nginx/html/temp.txt'

The command should print Hello World back, confirming the write succeeded.

Next, annotate the nginx pod, as described in the Velero documentation here, with the following command, where data is the name of the volume:

kubectl -n nginx-bv annotate pod/nginx backup.velero.io/backup-volumes=data

Optionally, check that the annotation was created:

kubectl -n nginx-bv describe pod nginx | grep -in annotation

Finally, we are ready to create the Frankfurt backup with Velero. One note: since we pass --default-volumes-to-fs-backup below, Velero uses its file-system backup for all pod volumes by default, so the opt-in annotation above is not strictly required, but it does no harm:

./velero backup create nginx-backup-bv \
--include-namespaces nginx-bv \
--default-volumes-to-fs-backup
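Because this is a file-system backup, you can also watch its progress through the PodVolumeBackup custom resources Velero creates (the label selector below assumes the backup name used above):

kubectl -n velero get podvolumebackups -l velero.io/backup-name=nginx-backup-bv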

You may describe the backup to confirm its creation (add the --details flag to also see the status of the individual pod volume backups):

./velero backup describe nginx-backup-bv

Work is almost done in Frankfurt. We just need to copy the Object Storage bucket bucket-velero from Frankfurt to London (revisit part 1 of this blog if you have any doubts about this script):

./oci-copy-objects-to-region.sh
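If you no longer have the script from part 1 at hand, a minimal sketch of what it does could look like this (assumes the OCI CLI is configured and jq is installed; the bucket and region names are the ones used in this walkthrough):

#!/bin/bash
# Copy every object in bucket-velero from Frankfurt to London.
SRC_REGION=eu-frankfurt-1
DST_REGION=uk-london-1
BUCKET=bucket-velero

for obj in $(oci os object list --region "$SRC_REGION" --bucket-name "$BUCKET" --all | jq -r '.data[].name'); do
  oci os object copy --region "$SRC_REGION" --bucket-name "$BUCKET" \
    --source-object-name "$obj" \
    --destination-region "$DST_REGION" --destination-bucket "$BUCKET"
done

Note that oci os object copy is asynchronous (it creates a work request per object), so give the copies a moment to complete before moving on.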

When the copy of the objects in the bucket has finished, work in Frankfurt is done and it is time to move to our Region 2, London.

In London:

The first thing we are going to do is confirm visually that the object storage copy from Frankfurt in the previous step was successful:

You can see that we have an nginx-backup-bv folder inside the main backups folder and an nginx-bv folder inside the main kopia folder. It looks fine.

Make sure your kubectl client is now targeting London:

kubectl get nodes

Next, we need to reinstall Velero with the node agent in the OKE London cluster, just as we did in Frankfurt:

./velero install \
--provider aws \
--bucket bucket-velero \
--secret-file ./velero-credentials \
--plugins velero/velero-plugin-for-aws:v1.8.0 \
--use-volume-snapshots=false \
--backup-location-config region=uk-london-1,s3ForcePathStyle="true",\
s3Url=https://<<namespace>>.compat.objectstorage.uk-london-1.oraclecloud.com \
--use-node-agent

When finished, confirm the node agent is running:

kubectl -n velero get daemonset/node-agent

Let's ask Velero to show us which backups are available to restore in London:

./velero backup get
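The output should resemble the following (timestamps will differ):

NAME              STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
nginx-backup-bv   Completed   0        0          2023-11-10 10:30:00 +0000 UTC   29d       default            <none>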

Thanks to our copy of objects, we can see the backup nginx-backup-bv is available. Let’s try to restore it:

./velero restore create --from-backup nginx-backup-bv

Check the status. Velero names the restore after the backup plus a creation timestamp; run ./velero restore get if you need to look up the exact name:

./velero restore describe nginx-backup-bv-20231110104118

When it has completed, check the logs if you wish, just to confirm that everything went normally and the restore finished successfully:

./velero restore logs nginx-backup-bv-20231110104118

Let’s validate that our application is running:

kubectl -n nginx-bv get pod

Check that PVs and PVC mynginxclaim were created and bound:

kubectl get pv;
kubectl get pvc -n nginx-bv;

Confirm visually in the OCI Block Volume console if you prefer.

As the nginx pod is now running in its namespace in London, and all the related assets were successfully recreated, let us test it with the following command to confirm that the data previously persisted in Frankfurt was restored together with the application:

kubectl -n nginx-bv exec -it nginx -- cat /usr/share/nginx/html/temp.txt

If it prints Hello World, the block volume contents made the trip from Frankfurt to London along with the application.
