OpenShift Container-native Virtualization (CNV) with Ceph Storage: Exporting and Importing Virtual Machines

Emirsah Kursatoglu
Sahibinden Technology
8 min read · Sep 1, 2021

A new perspective on virtualization has arrived with the Red Hat OpenShift Container Platform: virtual servers now run as containers.

In past years, a big step was taken from bare-metal servers to virtual servers. Now it is time for the next big step, from virtual servers to virtual servers running as containers.

Combined with Ceph, one of today's best and most reliable open-source software-defined storage technologies, this stack delivers a hyper-converged architecture that is a step ahead of traditional virtualization.

I will try to convey what I have learned and experienced with OpenShift + CNV + Ceph, section by section.

Export Virtual Machine

Exporting is indispensable in virtualization in general; we never know when we will need it, so it is an important topic. Although importing is easy with this technology, exporting requires more effort.

How can we export a virtual server with OpenShift + CNV + Ceph?

First of all, we need to find the CSI volume name on Ceph for the virtual server we want to export.

The virtual server we want to export is testvm01:

[kni@bastionvm ~]$ oc get vmi -A | grep testvm01

project-test testvm01 43d Running 10.10.10.1 worker01.ocp.example.com

The PersistentVolumeClaim of the testvm01 server points at the CSI volume information we need.

[kni@bastionvm ~]$ oc get pvc -A | grep testvm01
project-test   testvm01-rootdisk   Bound   pvc-f6c1cfd4-16a0-4a82-ade0-3ef83bbd4ab5   20Gi   RWX   ocs-storagecluster-ceph-rbd   113d

We can find the CSI volume name in the YAML output of the PersistentVolume bound to that claim:

[kni@bastionvm ~]$ oc get pv pvc-f6c1cfd4-16a0-4a82-ade0-3ef83bbd4ab5 -o yaml | grep csi-vol-
    imageName: csi-vol-75cfa350-979a-11eb-9882-0a580a82200a

This information tells us what to look for on Ceph. Now we can proceed with the Ceph toolbox for the export.
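For scripting, the two lookups above can be chained. The sketch below is a hypothetical helper, not from the article: `csi_vol_for_vm` assumes the `<vm>-rootdisk` PVC naming convention seen here and requires a cluster login, while `image_name_from_pv_yaml` is a pure parser that pulls the `imageName` field out of a PV YAML dump.

```shell
# Pure parser: extract the csi-vol image name from `oc get pv ... -o yaml` output.
image_name_from_pv_yaml() {
  grep -oE 'imageName: csi-vol-[0-9a-f-]+' | awk '{print $2}'
}

# Hypothetical wrapper (needs `oc` and a cluster login; not run here).
# Assumes the root disk PVC is named <vm>-rootdisk, as in this article.
csi_vol_for_vm() {
  local ns="$1" vm="$2" pv
  pv=$(oc get pvc "${vm}-rootdisk" -n "$ns" -o jsonpath='{.spec.volumeName}')
  oc get pv "$pv" -o yaml | image_name_from_pv_yaml
}
```

Usage would be `csi_vol_for_vm project-test testvm01`, printing the `csi-vol-…` name to pass to the Ceph commands below.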

Let’s open an rsh connection to the Ceph toolbox pod.

[kni@bastionvm ~]$ oc get pods -A | grep rook-ceph-tool
openshift-storage   rook-ceph-tools-2aaa6775643-l881a   1/1   Running   0   35d
[kni@bastionvm ~]$ oc rsh rook-ceph-tools-2aaa6775643-l881a
sh-4.4#

We check for the CSI volume we want to export in the Ceph pool ocs-storagecluster-cephblockpool.

sh-4.4# rbd ls -p ocs-storagecluster-cephblockpool | grep csi-vol-75cfa350-979a-11eb-9882-0a580a82200a
csi-vol-75cfa350-979a-11eb-9882-0a580a82200a

I highly recommend taking a snapshot of the current volume before exporting the volume.

sh-4.4# rbd snap create ocs-storagecluster-cephblockpool/csi-vol-75cfa350-979a-11eb-9882-0a580a82200a --snap export-vmtest01-snapshot

We can now export the snapshot we just created.

sh-4.4# rbd export ocs-storagecluster-cephblockpool/csi-vol-75cfa350-979a-11eb-9882-0a580a82200a@export-vmtest01-snapshot /home/export-vmtest01-snapshot.raw

Exporting image: 100% complete…done.
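If you need to export several machines, the snapshot and export steps above can be wrapped in a small function. This is a hypothetical sketch for use inside the toolbox pod; the function and argument names are my own, and the `rbd` calls are not executed here.

```shell
# Build the pool/image@snap spec that `rbd export` expects.
rbd_snap_spec() { printf '%s/%s@%s\n' "$1" "$2" "$3"; }

# Hypothetical wrapper: snapshot an RBD image and export that snapshot to a file.
# Mirrors the two manual commands above; requires the Ceph toolbox environment.
export_rbd_snapshot() {
  local pool="$1" image="$2" snap="$3" dest="$4"
  rbd snap create "${pool}/${image}" --snap "$snap"
  rbd export "$(rbd_snap_spec "$pool" "$image" "$snap")" "$dest"
}
```

Called as `export_rbd_snapshot ocs-storagecluster-cephblockpool csi-vol-… export-vmtest01-snapshot /home/export-vmtest01-snapshot.raw`, it reproduces the manual steps in one go.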

The disk image we exported is located in the /home directory inside the rook-ceph-tools-2aaa6775643-l881a pod.

sh-4.4# ls -lah /home | grep export
-rw-r--r--. 1 root root 20G Jul 30 09:30 export-vmtest01-snapshot.raw

We can now use the oc rsync command to transfer this image file to our own server.

[kni@bastionvm ~]$ oc rsync rook-ceph-tools-2aaa6775643-l881a:/home/export-vmtest01-snapshot.raw /home/kni/
[kni@bastionvm ~]$ ls -lah /home/kni/ | grep export
-rw-r--r--. 1 kni kni 20G Jul 30 12:30 export-vmtest01-snapshot.raw
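Before trusting a 20 GiB transfer, it is worth comparing checksums. This is my own suggestion rather than part of the original steps: compute a checksum inside the toolbox pod before the transfer and compare it on the bastion afterwards (the paths below are the ones used in this article).

```shell
# Print only the SHA-256 digest of a file (sha256sum is in coreutils).
checksum_of() { sha256sum "$1" | awk '{print $1}'; }

# Inside the toolbox pod:
#   checksum_of /home/export-vmtest01-snapshot.raw
# On the bastion after oc rsync:
#   checksum_of /home/kni/export-vmtest01-snapshot.raw
# The two digests must match, otherwise repeat the transfer.
```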

The export process is complete; we can now convert the disk image to whatever format we need.

Import Virtual Machine

After exporting from OpenShift, we will import the exported disk image as a virtual server.

I convert our image from .raw format to qcow2 format.

[kni@bastionvm ~]$ qemu-img convert export-vmtest01-snapshot.raw -O qcow2 export-vmtest01-snapshot.qcow2
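When converting several exports, the target filename can be derived from the source. A minimal sketch, with names of my own choosing: the filename helper is pure shell, while `convert_raw_to_qcow2` assumes qemu-img is installed (being explicit with `-f raw` avoids format auto-detection).

```shell
# Derive the .qcow2 target name from a .raw export filename.
qcow2_target() { printf '%s\n' "${1%.raw}.qcow2"; }

# Hypothetical wrapper around the manual qemu-img command above.
convert_raw_to_qcow2() {
  local src="$1" dst
  dst=$(qcow2_target "$src")
  qemu-img convert -f raw -O qcow2 "$src" "$dst"
}
```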

If you want to upload an image to OpenShift, you need to create a DataVolume and upload the image into it.

Let’s create a project for the import.

[kni@bastionvm ~]$ oc new-project vmimporttest
Now using project "vmimporttest" on server "https://api.ocp.example.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

Let’s create a DataVolume.

[kni@bastionvm ~]$ cat dv-import-vmtest01.yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: import-vmtest01
spec:
  source:
    upload: {}
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 20Gi
    storageClassName: ocs-storagecluster-ceph-rbd
    volumeMode: Block
[kni@bastionvm ~]$ oc create -f dv-import-vmtest01.yaml
datavolume.cdi.kubevirt.io/import-vmtest01 created

In order to upload to the data volume we created, we need to wait for the PHASE status to be UploadReady.

[kni@bastionvm ~]$ oc get dv
NAME              PHASE         PROGRESS   RESTARTS   AGE
import-vmtest01   UploadReady   N/A                   28s
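Rather than re-running `oc get dv` by hand, scripts can poll for the phase. The generic helper below is my own sketch; the `oc` usage at the bottom assumes a cluster login and is shown only as a comment.

```shell
# Retry a command until its output equals the wanted value, or give up.
# poll_until <wanted-output> <max-tries> <command...>
poll_until() {
  local want="$1" tries="$2"; shift 2
  local out
  for _ in $(seq "$tries"); do
    out=$("$@") && [ "$out" = "$want" ] && return 0
    sleep 1
  done
  return 1
}

# Example with a cluster (not run here):
# poll_until UploadReady 60 oc get dv import-vmtest01 -o jsonpath='{.status.phase}'
```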

We load the export-vmtest01-snapshot.qcow2 disk image into the import-vmtest01 data volume in the vmimporttest project.

[kni@bastionvm ~]$ virtctl image-upload dv import-vmtest01 --size=20Gi --image-path=/home/kni/export-vmtest01-snapshot.qcow2 --namespace vmimporttest
Using existing PVC vmimporttest/import-vmtest01
Uploading data to https://cdi-uploadproxy-openshift-cnv.apps.ocpprod.tmll.sahibindenlocal.net
600.62 MiB / 2.9 GiB [=====>---------------------------] 20.02% 02m10s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /home/kni/export-vmtest01-snapshot.qcow2 completed successfully

Our image is loaded into import-vmtest01 data volume. We can create a template using this image. I created the vmimporttest project for testing purposes.

With OCP 4.7, creating a virtual server directly from an image has been removed; we need to create a template that uses the image first.

Click on the “Virtualization” item under the “Workloads” tab in the OpenShift console.

When we click the “Create” button and choose “Template” → “With Wizard”, we can create a template from the image we uploaded.

In the “Create Virtual Machine Template” section, we give the template a name. Since we prepared the image ourselves, we can use any name we like and fill in the Provider and Support fields accordingly.

You can choose the OS version of the installed server in the Operating System section. (Red Hat does not list a version for Debian, so I choose Fedora.)

As the “Boot Source”, I will use a clone of the PVC we uploaded.

Since I uploaded the image to the vmimporttest project, I select the relevant project name in the Persistent Volume Claims section.

I select import-vmtest01 as the Persistent Volume Claim to clone.

After specifying the template resources with Flavor and choosing the “Workload type”, I go to the next step with the “Next” button.

In this step, Pod Networking is selected by default. If you want, you can edit the network settings according to your current structure. I will choose Pod Networking by default and continue with “Next”.

The important point here is that the image we uploaded is selected as the boot disk. If no boot disk is selected, the server cannot find a partition to boot from. A cloudinitdisk is added by default; you can use it when needed.

If cloud-init is installed in the uploaded image, you can set the user and password in the field below. I suggest reviewing cloud-init for the details. I continue with “Next”.
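For reference, the cloud-init field accepts standard user-data. The fragment below is an illustrative example, not taken from the article; the user name, password, and key are placeholders you would replace with your own values.

```yaml
#cloud-config
# Illustrative cloud-init user-data for the wizard's cloud-init field.
user: fedora
password: changeme        # placeholder; set your own password
chpasswd:
  expire: false
ssh_authorized_keys:
  - ssh-ed25519 AAAA...   # placeholder public key
```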

Finally, we complete the template creation steps by reviewing everything in the Review section.

We created a template from the image we successfully imported.

Let’s create a server using the template we created.

Let’s create our server with “With Wizard” by clicking the “Create” button on the right.

Click on the template we created and continue with “Next”.

We create the server in the vmimporttest project.

We were able to successfully create a virtual server using a template.

We can see that the server we created is starting from the “Overview” tab, and we can watch it boot from the “Console” tab.

As you can see, the server has booted and we can log in from the login screen. If cloud-init is installed in the image, the “Guest Login Information” section lets us specify parameters such as user and password and view that login information.

Using Cloud-init for automation and installations will provide a serious advantage.
