OpenShift Virtualization 2.4: A declarative coexistence of virtual machines and containers.

Vishal Anand
AI+ Enterprise Engineering
Sep 3, 2020

Author: Vishal Anand, Sr. Inventor, Developer Advocate, Chief Architect & Solutions Leader — Cloud Strategy, Solutioning and Transformation.

Running virtual machines (VMs) on one platform and containers on another is, technically, a relic of bygone times. How about having both on the same platform? And what if that comes with proven open source technologies from the leader itself? Hear me out: dream no more. If you live in the modern world of 2020 and are a fan of Kubernetes and the Red Hat OpenShift Container Platform, you have probably already heard about OpenShift Virtualization 2.4. If so, rightly so.

It is brand new, with general availability announced on 17 August 2020, and IT professionals at all levels have a strong appetite for it. The curiosity is all about the declarative coexistence of VMs and containers on OpenShift, made possible through the smart engineering behind OpenShift Virtualization 2.4, all for good business reasons. Here, declarative means fully automated deployment through operators at a full lifecycle management maturity level.

Note that KubeVirt (one of the backbone components), its adjacent capabilities (for example, the containerized data importer (CDI) and network add-ons), and its technical preview features have been available for quite some time. You can now run Windows guest VMs, Linux guest VMs, containers, and serverless workloads together, all leveraging the common, converged OpenShift ecosystem through its certified, conformant Kubernetes platform capabilities. Yes, you heard me right.

To start with, I reviewed every single asset available in the public domain (I am serious) across YouTube, kubevirt.io, docs.openshift.com, github.com, and several other blogs, articles, and streaming sites. It was a good start, but it was all theory for me. I craved wisdom rather than just knowledge. So, is seeing believing? Probably, but not always, and not for me, coming from a science and engineering background.

Hence, I decided to give it a personal touch: feel it, see it, and experience it myself, all to develop my own unique viewpoint to assist my clients at all levels (from hands-on professionals up to senior leadership executives). As an avid user of IBM Cloud, I selected an OpenShift Container Platform 4.4 cluster configuration on IBM Cloud with 3 bare metal workers, each with 4 cores, 32 GB of RAM, Red Hat Enterprise Linux (RHEL), 2 TB HDD primary and secondary disks, and a 10 Gbps network (the basic configuration per bare metal node).

With a click of a button, I was able to provision my cluster in a few hours, since I used bare metal workers this time. I remember that it took approximately half an hour when I provisioned the same 4.4 version with 3 virtual instance workers. Note that OpenShift Virtualization 2.4 is supported for use on OpenShift Container Platform 4.5. It was promoted to a high touch beta for production readiness efforts on OpenShift Container Platform 4.4 in the first half of 2020. There are some minor console and GUI changes with the 4.5 platform, such as a Virtualization sidebar menu with VMs and VM templates as tabs.

Okay, let’s get started.

Philosophy

Let me first start with the philosophy of the technology, as shown in the following diagram.

In Scenario 1, the VMs require a whole separate stack of infrastructure and control plane. In Scenario 2, the containers require a whole separate stack of infrastructure and control plane. In Scenario 3 (the topic of this post), by contrast, the VMs and containers leverage common infrastructure and control plane layers.

You gain multidimensional advantages with Scenario 3 when you think in terms of skills, tools, declarative automation, development, ecosystem, convergence, integrations, web consoles, CLIs, configurations (YAML), CI/CD, Infrastructure as Code, ChatOps, observability, portability, consistent user experience, and more.

Concept

Now, I’ll explain the conceptual view, as shown in the following diagram.

The middle layer, which shows the VMs and containers side by side, is the whole new paradigm. You can run different types of workloads, such as the VMs, containers, and serverless workloads shown in the top layer. The third layer from the top is the OpenShift Container Platform, where the VMs and containers can be created and managed on, and from, a single platform. These workloads can now leverage all of the common platform capabilities, including certified, conformant Kubernetes, as well as monitoring, alerting, networking, logging, application services, developer services, operators, storage, scaling, runtimes, APIs, controllers, and the registry. That is the engineering magic of OpenShift Virtualization.

Architecture

The following diagram shows the architectural deep dive view.

Note: In this diagram, the abbreviation VM refers to virtual machine, L refers to libvirtd, and Q-K refers to qemu-kvm. Each node has persistent volume claims (PVCs), persistent volumes (PVs), or DataVolumes (DVs), and networking. Pods and VMs can be of the same or different sizes.

As shown in the diagram, bare metal (RHEL) workers are required, as recommended. Each worker node runs a virt-handler pod (a daemon set). Each VM runs inside a virt-launcher pod, which runs the libvirtd and qemu-kvm processes and controls that virtual machine instance. The virt-handler pod on a node manages all such VMs through their virt-launcher pods on the same node. Container technology has been out for several years now, so I am not spending time on containers per se. The kubelet can manage VM pods and conventional container pods as Kubernetes objects, both as first-class citizens.
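To make the first-class citizen point concrete, here is a minimal sketch of a KubeVirt VirtualMachineInstance manifest. The name, image, and sizing are illustrative placeholders, and the API version shown matches the KubeVirt generation of that era (newer releases use kubevirt.io/v1). Applying a manifest like this is what ultimately results in a virt-launcher pod appearing on a worker node:

```yaml
# Minimal VirtualMachineInstance sketch (illustrative names and sizing).
# KubeVirt schedules a virt-launcher pod that runs this VM via libvirtd and qemu-kvm.
apiVersion: kubevirt.io/v1alpha3   # kubevirt.io/v1 in newer releases
kind: VirtualMachineInstance
metadata:
  name: demo-vmi                   # hypothetical name
spec:
  domain:
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
    - name: rootdisk
      containerDisk:
        image: kubevirt/fedora-cloud-container-disk-demo   # demo image from the KubeVirt project
```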

Further, the virt-handler pod also communicates with and coordinates control plane operations alongside virt-api, virt-controller, and other lifecycle services and pods, such as the containerized data importer, the network add-ons, and so on.

The OpenShift Virtualization control plane (a term I made up for conceptual purposes) resides in the namespace called openshift-cnv. The components of this namespace communicate with the OpenShift cluster control plane components, such as the Kubernetes API server. The components are pods, deployments, services, config maps, daemon sets, replica sets, secrets, a route, and the required custom resource definitions (CRDs), which are all brought up after the OpenShift Virtualization Operator and its instances are installed and running.
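For reference, the operator instance mentioned above is a HyperConverged custom resource in the openshift-cnv namespace, and creating it is what triggers the rollout of these components. A minimal, hedged sketch follows; the API version and exact spec fields vary between releases, so treat it as illustrative only:

```yaml
# Sketch of the HyperConverged instance that brings up the openshift-cnv components
# (virt-api, virt-controller, virt-handler, CDI, network add-ons, and so on).
apiVersion: hco.kubevirt.io/v1alpha1   # later releases use hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}                               # defaults; some releases expect extra fields here
```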

Operator

The operator view is shown in the following diagram.

OpenShift Virtualization brings a hyperconverged operator, which in turn installs KubeVirt and its adjacent operators, such as the containerized data importer (CDI), scheduling, scale, and performance (SSP), cluster network add-ons, and node maintenance operators. Before you install this hyperconverged operator, the workload menu in the OpenShift console shows pods. After you install it, the workload menu shows VMs and VM templates on OpenShift 4.4. Note that there is a minor change to the console and GUI with OpenShift 4.5, where Virtualization appears as a sidebar menu item with VMs and VM templates as two tabs within it.
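For readers who prefer YAML over the console, the same installation can be expressed, roughly, as an OperatorGroup plus a Subscription that the Operator Lifecycle Manager acts upon. The channel value below is an assumption based on the 2.4 release naming, so check it against the operator catalog in your cluster:

```yaml
# Hedged sketch of installing the hyperconverged operator through OLM,
# assuming the openshift-cnv namespace already exists.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  channel: "2.4"                       # assumed channel name; verify in your catalog
```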

Control plane

Here is a view of the OpenShift Virtualization control plane that I described earlier.

There are 16 deployments, 39 pods, 1 route, 88 secrets, 11 services, 26 config maps, 6 daemon sets, and 16 replica sets, all in the openshift-cnv namespace. Most of these objects run on multiple nodes. There are 22 related CRDs, but they are not directly in the namespace (CRDs are cluster-scoped). Note that the number of pods may change depending upon the number of worker nodes.

I also jotted down the observed resource consumption of the entire namespace, which was surprisingly efficient in my view: CPU usage was 824.9 millicores (less than 1 core), memory was 1.28 GB, and filesystem usage was 75.88 MB, which I think is pretty light. Okay, so at this point it was vital for me to perform some hands-on tests. To help you understand the functional workflow that I experienced, I will now show and explain how the workflows execute.

Functional workflow

The functional workflow view diagram shows how VMs are created.

A DV is created per image, depending upon how you want to upload or import the image. This DV, based on the StorageClass definition, results in an upload or import pod and a PVC. The claim then provisions a PV. If you use a disk-based source, a similar, second scratch PVC and PV are created. Once the image is uploaded, the scratch PVC, scratch PV, and upload pod disappear. This is a sign of a successful execution so far. It may take a while, so be patient.
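As a hedged illustration of the import path just described, a DataVolume with an HTTP source asks the CDI to pull the image into a newly created PVC. The URL, names, size, and storage class below are placeholders, and the API version may differ by release:

```yaml
# DataVolume sketch: CDI brings up an importer pod, and a PVC/PV pair is created
# to receive the fetched image. All values below are placeholders.
apiVersion: cdi.kubevirt.io/v1alpha1   # cdi.kubevirt.io/v1beta1 in newer releases
kind: DataVolume
metadata:
  name: rhel-dv                        # hypothetical name
  namespace: my-vms                    # hypothetical namespace
spec:
  source:
    http:
      url: "https://example.com/images/rhel-78.qcow2"   # placeholder image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 30Gi
    storageClassName: my-block-storage-class            # placeholder StorageClass
```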

Now you can create a VM using the created PVC (through which the image was uploaded) from the menu-driven functionality, either directly or through a template. Once a VM is created, you will notice that its virt-launcher pod is running and that the VM instance is running with its own Internet Protocol (IP) address on a node. You can access this VM either through the virtual network computing (VNC) and serial consoles inside the VM's Console tab, or externally over the network with a protocol such as Secure Shell (SSH), Remote Desktop Protocol (RDP), and so on.
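Behind that menu-driven flow, the end result is a VirtualMachine object along the lines of the sketch below, which boots from the disk made available through the PVC created earlier. The names and sizing are again placeholders:

```yaml
# VirtualMachine sketch that boots from the previously imported PVC.
# With running: true, virt-controller creates a VirtualMachineInstance and its virt-launcher pod.
apiVersion: kubevirt.io/v1alpha3       # kubevirt.io/v1 in newer releases
kind: VirtualMachine
metadata:
  name: rhel-78-vm                     # hypothetical name
  namespace: my-vms                    # hypothetical namespace
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: rhel-78-vm
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: rhel-dv         # PVC created by the DataVolume sketch above
```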

Note: In the functional workflow view diagram, the abbreviation PVC refers to persistent volume claim, CDI refers to containerized data importer, and VM refers to virtual machine. The objects shown in a light blue color (for example, the scratch ones described) and their flows shown with dotted lines are temporary; they disappear after a successful completion (virtctl uploads). A scratch claim and volume are not created when you use the URL source (import) method. Other available methods are Preboot Execution Environment (PXE) and container image based. If the VM is created through a VM template, the template remains as is.

Results

When I created my first Windows Server 2012 R2 and RHEL Server 7.8 VMs using OpenShift Virtualization 2.4 on a Red Hat OpenShift 4.4 cluster, the much-awaited pictures came out as shown in the following screen captures. I call them my pictures of the month, August 2020.

The following screen capture shows the picture of Windows Server 2012 R2 inside the OpenShift virtual machine console tab.

The next screen capture shows the Red Hat Enterprise Linux Server 7.8 command prompt inside the OpenShift virtual machine console tab.

I created a total of 3 large-sized VMs, Windows Server 2012 R2, RHEL Server 7.8, and Fedora 32, on my OpenShift cluster on IBM Cloud.

It was vital that I observe the resource utilization of my cluster, such as the cluster insights for CPU and memory. As I shared earlier in this post, I provisioned a cluster with 3 bare metal RHEL workers, each comprising 4 cores and 32 GB of RAM. The freshly provisioned cluster had 4.8% of CPU and 12.1% of memory used. After I installed the related operators and the 3 VMs described earlier, total CPU usage was 9.7% and total memory usage was 25.9%.

Summary

This testing was one of my most memorable learning experiences. Of course, I faced several technical challenges initially, but I solved them through logical thinking, wisdom, and patience. Now, I can spin up a Windows Server 2012 VM from scratch in about an hour (the image upload takes approximately 34 minutes) and a Linux VM in about 15 minutes, again on Red Hat OpenShift (I love to repeat it). Having said that, I have only begun to work with OpenShift Container Platform in terms of virtualization. There are numerous opportunities and capabilities that organizations can leverage from the Kubernetes-native OpenShift capabilities, all for good business reasons and consistent user experiences. I have no doubt that this technology marks a new era in the IT industry and has the potential to bring about a real paradigm shift for enterprise organizations.

To try OpenShift Virtualization 2.4 yourself, I recommend that you first learn the functional aspects of the containerized data importer and DataVolumes, persistent volume claims, persistent volumes, storage classes, access modes, image curation, image imports, YAML, operators, and so on. This learning is in addition to the required knowledge of Kubernetes and OpenShift.


Vishal Anand is a Global Chief Technologist, Executive Architect, Master Inventor, Fellow of BCS, and Certified Distinguished Architect with AI+ Enterprise Engineering.