Scaleway Kapsule and Velero — Backup Kubernetes Cluster

Lionel Daubichon
Published in alter way
Sep 10, 2020

Today I am going to show you how to back up a full namespace of a Kubernetes Kapsule cluster to Scaleway Object Storage and how to restore it after deletion.

Scaleway offers a fully managed Kubernetes cluster with autoscaling and auto-healing, named Kapsule. Scaleway also offers an S3-compatible object storage. Velero, formerly known as Heptio Ark, is a tool you can use to back up your Kubernetes cluster and restore it entirely, or only some namespaces and persistent volumes.

Velero

Velero consists of a server installed in the cluster and a local CLI. The Velero documentation can be found here: https://velero.io/docs/v1.3.0/

You can run velero help to list all the available commands.

Bucket creation:

We are using Scaleway Object Storage to store our cluster backups. First, we create a bucket that will receive them:
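
If you prefer the command line to the Scaleway console, the S3-compatible endpoint means you can also create the bucket with the AWS CLI. A minimal sketch, assuming the bucket name velero-alter-test used later in this article and the fr-par region:

# Create the bucket on Scaleway Object Storage through its S3-compatible endpoint
aws s3 mb s3://velero-alter-test \
  --endpoint-url https://s3.fr-par.scw.cloud \
  --region fr-par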

CLI installation:

You will need the Velero CLI to interact with the Velero server. You can find all the information about installing the CLI on your computer here: https://velero.io/docs/main/basic-install/
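
As a minimal sketch for Linux, you can download a release tarball from GitHub and put the binary in your PATH (the version below is only an example, pick the release that matches your setup):

# Example: install the Velero CLI from a GitHub release (adjust the version)
VELERO_VERSION=v1.3.0
curl -fsSL -o velero.tar.gz https://github.com/vmware-tanzu/velero/releases/download/${VELERO_VERSION}/velero-${VELERO_VERSION}-linux-amd64.tar.gz
tar -xzf velero.tar.gz
sudo mv velero-${VELERO_VERSION}-linux-amd64/velero /usr/local/bin/
velero version --client-only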

Token Creation:

Now that the Velero CLI is working, we need to authorize communication between our Kapsule cluster and the Scaleway bucket, which is private. We have to create an API token:

Your token consists of an access key and a secret key. We will use this information in a credentials file; remember that Scaleway Object Storage is S3 compatible:

[default]
aws_access_key_id=SCALEWAY_KEY
aws_secret_access_key=SCALEWAY_SECRET

Now that we have our token, we can install the Velero server in the cluster:

velero install \
--provider velero.io/aws \
--bucket velero-alter-test \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--backup-location-config s3Url=https://s3.fr-par.scw.cloud,region=fr-par \
--use-volume-snapshots=false \
--secret-file=./credentials \
--use-restic

Please note that we reference our credentials file with the "secret-file" argument. During the installation you can also specify a specific kubeconfig file.
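
Once the installation has run, a quick sanity check is to make sure the Velero pods are running and that the backup location points at our bucket:

# The velero deployment (and the restic daemonset pods, since we used --use-restic) should be Running
kubectl get pods -n velero
# The backup location should list the bucket we configured
velero backup-location get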

Save and Restore

Backup creation for a single namespace:

velero backup create monitoring-backup --include-namespaces monitoring

Backup of all namespaces in the cluster:

velero backup create full-backup --include-namespaces '*'

Scheduled Backup:

velero schedule create SCHEDULENAME --schedule="@every 24h" --ttl 48h0m0s --selector app=APPNAME
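
A schedule produces regular backups named after the schedule with a timestamp suffix, so you can list them with the usual commands:

velero schedule get
velero backup get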

When you create a backup with Velero, you will see that Velero populates the bucket with your backups; the same applies to your restores:
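
For example, you can list the bucket through the S3-compatible endpoint and you should see the backups/ and restores/ prefixes created by Velero (a sketch, reusing the bucket and region from the install command):

aws s3 ls s3://velero-alter-test/ --recursive \
  --endpoint-url https://s3.fr-par.scw.cloud --region fr-par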

Monitoring namespace example:

First, I create a backup named "monitoring-backup" for the monitoring namespace:

velero backup create monitoring-backup --include-namespaces monitoring

In order to see my backup, I will use this command:

> velero backup get
NAME                STATUS       ERRORS   WARNINGS   CREATED                          EXPIRES   STORAGE LOCATION   SELECTOR
monitoring-backup   InProgress   0        0          2020-08-25 15:52:28 +0200 CEST   29d       default            <none>

We can see that the backup is InProgress. When the backup is Completed, you will see a folder with the name of your backup in the bucket:
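
If you want more detail while waiting, the backup can be inspected with the standard Velero commands:

velero backup describe monitoring-backup
velero backup logs monitoring-backup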

Namespace destruction:

I delete my monitoring namespace:

> on  lionel_velero [?] at ☸️  noodle@velerocluster ➜ k delete namespace monitoring  namespace "monitoring" deleted

Then I check that everything is deleted:

➜ k get ns
NAME STATUS AGE
cattle-system Active 84d
cert-manager Active 38d
default Active 84d
ingress Active 43d
kube-node-lease Active 84d
kube-public Active 84d
kube-system Active 84d
velero Active 24m

There are no more resources in my monitoring namespace; now I will try to restore all the resources in this namespace.

Restoring the monitoring namespace:

We have to create a restore; the process is the same as creating a backup:

➜ velero restore create monitoring-restore --from-backup monitoring-backup
Restore request "monitoring-restore" submitted successfully.
Run `velero restore describe monitoring-restore` or `velero restore logs monitoring-restore` for more details.

We can check our restore by running a describe on it:

➜ velero restore describe monitoring-restore
Name:         monitoring-restore
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  InProgress

Backup:  monitoring-backup

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto

When the Phase is marked as Completed, we can check that resources are recreated:

➜ k get ns
NAME STATUS AGE
cattle-system Active 84d
cert-manager Active 38d
default Active 84d
ingress Active 43d
kube-node-lease Active 84d
kube-public Active 84d
kube-system Active 84d
velero Active 24m
monitoring Active 28s

We can also check our pods:

➜ k get po -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-alertmanager-0 2/2 Running 0 33s
cm-acme-http-solver-v9n7p 1/1 Running 0 17s
prometheus-kube-state-metrics-5f35514d956-qxdf4l 1/1 Running 0 33s
prometheus-prometheus-node-exporter-8wqrcp4 1/1 Running 0 33s
prometheus-prometheus-node-exporter-g231rtww 1/1 Running 0 33s
prometheus-prometheus-node-exporter-qnar14x 1/1 Running 0 33s
prometheus-prometheus-operator-695568578d-dr1rr3wx7 2/2 Running 0 33s
prometheus-0 3/3 Running 1 33s

We have seen how to back up and restore a namespace using Scaleway Object Storage and Velero. You can find more options in the Velero documentation to back up specific namespaces, PVs, PVCs or other resources. Please note that you can also label resources with velero.io/exclude-from-backup to exclude them from backups.
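
As a hypothetical example (the secret name is made up), excluding a resource from future backups is just a matter of labelling it:

kubectl label secret my-secret -n monitoring velero.io/exclude-from-backup=true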
