Running Legacy VMs in Kubernetes

Kunal Kushwaha
nttlabs
Mar 13, 2019

This is Kunal from the NTT OSS Center. At the NTT OSS Center, we contribute to various container projects such as Kubernetes, containerd, and Podman to make them more stable and suitable for production use. We carry out this development in cooperation with the open-source communities.

With increasing enterprise adoption, containerized platforms like Kubernetes have become the primary platform for managing the lifecycle of new applications. Older applications that are still in active development are also being migrated, or have already been migrated, to Kubernetes.

Yet many applications still run on older infrastructure, i.e. virtual machines and bare metal.

Fragmented Infrastructure

This has created fragmentation, where both kinds of infrastructure must be managed in parallel.

A couple of reasons why not all old applications can be moved to Kubernetes:

  • The application is designed for a custom kernel.
  • The application needs specific kernel parameters.
  • Lack of knowledge, or the application is too complex to migrate into containers.
  • The application is near the end of its life.

An ideal platform.

An ideal platform would be one where virtual machines and containers could co-exist without losing their behavior. You might be thinking that it is very much possible to run VMs in Kubernetes with CRI runtimes like Kata Containers.

But the important condition here is that “VMs and containers should co-exist without losing their behavior.” Solutions like Kata Containers provide VM-level isolation to applications, but their behavior is that of containers, or rather application containers, whereas traditionally there is much more than just an application running inside a VM.

VMs and containers on Kubernetes through KubeVirt

KubeVirt is one such project that aims to solve this issue. It enables VMs to run alongside containers on existing Kubernetes nodes.

What is KubeVirt?

KubeVirt extends Kubernetes by adding resource types for VMs through the Kubernetes Custom Resource Definitions (CRD) API. KubeVirt VMs run inside regular Kubernetes pods, where they have access to standard pod networking and storage, and they are managed using standard Kubernetes tools such as kubectl.

However, to communicate with VMs directly for operations like kubectl exec, accessing the console/VNC, or publishing VM ports, we need one more CLI, virtctl, which is part of every KubeVirt release.
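For example (the VMI name testvm is hypothetical), typical virtctl operations look like this:

$ virtctl console testvm     # attach to the serial console
$ virtctl vnc testvm         # open the graphical console via a local VNC client
$ virtctl expose vmi testvm --name=testvm-http --port=80    # publish a VM port as a service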

CRDs also enable KubeVirt to be installed in and removed from any existing Kubernetes cluster without rebuilding the cluster from scratch.

KubeVirt introduces two resources, VirtualMachine and VirtualMachineInstance, which define the properties of a VM such as machine and CPU type, RAM size, CPU count, and NIC type and count. A minimal VirtualMachine manifest is sketched below.
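As a rough sketch (the resource name and demo disk image are illustrative; the exact schema depends on the KubeVirt version, with kubevirt.io/v1alpha3 matching the v0.15 era), a VirtualMachine object looks roughly like this:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false              # set to true (or use virtctl start) to boot the VM
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 512Mi     # RAM size for the guest
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:      # boot disk shipped as a container image
            image: kubevirt/cirros-container-disk-demo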

KubeVirt Components

  • virt-api-server: Serves as the entry point for all virtualization-related flows and validates the VM and VMI resources in each user request coming through the API server.
  • virt-controller: A Kubernetes operator that watches all VMI resources and tries to converge them to their defined specs. It creates the pods in which the VMs run. Once a VM is scheduled on a node, virt-controller updates the VM object with the node name. Further operations are handled by virt-handler.
  • virt-handler: Runs as a DaemonSet on every node where KubeVirt is allowed to create VMs. It watches for new VMI objects or changes to existing ones and updates the VMs accordingly. For new VMI objects, it creates the VM in a pod with the help of libvirtd.
  • virt-launcher: For each VM object a pod is created, and the primary container of that pod runs virt-launcher, which provides the cgroups and namespaces that host the VM. virt-handler signals virt-launcher to start the VM by passing it the VM CRD object; virt-launcher then uses a local libvirtd instance within its container to start the VM.
  • libvirtd: An instance of libvirtd is present in every VM pod. virt-launcher uses libvirtd to manage the lifecycle of the VM process.

Problems it solves.

  • It reduces fragmentation, as hard-to-containerize apps can be deployed in Kubernetes as VMs.
  • It lowers the barrier to entry for migrating applications to Kubernetes: there is no need to containerize an app before migrating it.
  • New apps can interact with the VMs, which makes the decomposition of a legacy application easier.

KubeVirt Setup in Kubernetes cluster

Installing KubeVirt is as smooth and easy as installing any CRD. Since KubeVirt is still in active development and v1.0 has not yet been released, releases are currently managed on GitHub only.

Prerequisite:

Hardware virtualization support is recommended; otherwise, performance with software emulation is very poor. This can be verified with virt-host-validate:

$ virt-host-validate qemu
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
...

KubeVirt installation involves setting up various components in the cluster, such as virt-api, virt-controller, and virt-handler. All of these components run as pods in the Kubernetes cluster.

The kubevirt.yaml file sets up the CRDs and components of KubeVirt.

The KubeVirt component images for a given release are published at docker.io/kubevirt/, and the CRD YAML file is available on GitHub. So all we need to install KubeVirt on a given cluster is a single kubectl command:

$ export VERSION=v0.15.0
$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt.yaml

Note: If you are behind a corporate proxy, make sure no proxy settings are present in the environment variables.

On successful installation, you can verify the setup by listing everything in the kubevirt namespace.

Also, communication between virt-api-server and the CLI can be verified with the virtctl version command.
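For example:

$ kubectl get all -n kubevirt    # all KubeVirt components should be Running
$ virtctl version                # prints both the client and server versions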

Migrate VMs into KubeVirt.

Let’s see how to move a VM to Kubernetes. For this example, I will use a VirtualBox VM that runs CentOS 7 and has the project-management software Trac installed.

Migrate the disk image: To move this VM, we need to use its disk image as the bootable image for the KubeVirt VM. KubeVirt supports various storage options. We will use this disk as a containerDisk, which means we can create multiple VM instances from the same image.

To create the containerDisk, we will convert the VM disk image into a container-registry-compatible image format using the kubevirt/container-disk-v1alpha base image, in two steps (sketched after the list):

  • Convert the VDI image to qcow2 format.
  • Build a container image that encapsulates the qcow2 file and push the image to a container registry.
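A rough sketch of these two steps (the disk file names are illustrative; /disk is the path where the container-disk-v1alpha base image expects the disk):

$ qemu-img convert -f vdi -O qcow2 trac-centos.vdi trac-centos.qcow2

$ cat Dockerfile
FROM kubevirt/container-disk-v1alpha
ADD trac-centos.qcow2 /disk/

$ docker build -t kunalkushwaha/track-centos:latest .
$ docker push kunalkushwaha/track-centos:latest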

Prepare the VMI spec: There is no tool to prepare a VMI spec by parsing an existing VM configuration, but KubeVirt sets defaults in case any settings are missing from the VMI spec.

For this example, only the settings important to the application are specified in this VMI, i.e. memory, disk, and network:

  • The disk attached to the VM is a containerDisk from the image kunalkushwaha/track-centos:latest.
  • An extra emptyDisk of 2GiB is attached to the VM for temporary storage.
  • One NIC is attached to the VM. The application is exposed on port 80, and port 22 is exposed for SSH access. A sketch of such a VMI spec follows.
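A minimal sketch of the VMI spec (the resource name and memory request are illustrative; the schema shown matches the kubevirt.io/v1alpha3 API of the v0.15 era):

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: trac-centos
spec:
  domain:
    resources:
      requests:
        memory: 1Gi             # illustrative; size to the application's needs
    devices:
      disks:
        - name: containerdisk   # boot disk built in the previous step
          disk:
            bus: virtio
        - name: emptydisk       # temporary storage
          disk:
            bus: virtio
      interfaces:
        - name: default
          bridge: {}
          ports:
            - port: 80          # Trac web UI
            - port: 22          # SSH
  networks:
    - name: default
      pod: {}                   # standard pod networking
  volumes:
    - name: containerdisk
      containerDisk:
        image: kunalkushwaha/track-centos:latest
    - name: emptydisk
      emptyDisk:
        capacity: 2Gi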

Create the VirtualMachineInstance: Creating the VMI is simple, using the kubectl apply command.

Commands like kubectl describe work for VMI resources too, so they can be used for debugging and for understanding what is going on inside the VM.
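For example (the spec file name is illustrative):

$ kubectl apply -f trac-vmi.yaml
$ kubectl get vmi
$ kubectl describe vmi trac-centos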

Exposing service.

Finally, the application needs to be accessible from the outside world, and here the Kubernetes service abstraction works for VMIs too. For simplicity, I have created a NodePort service. Other methods of exposing a service, like LoadBalancer and Ingress, will also work for VMIs.
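A sketch of exposing the VMI with virtctl (the service name is illustrative):

$ virtctl expose vmi trac-centos --name=trac-http --port=80 --type=NodePort
$ kubectl get svc trac-http    # shows the node port allocated for the service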

Issues I faced during initial evaluation & my contribution wish list

During the evaluation, I faced a couple of issues, which I have listed below. I would also like to work on them to begin my contribution to the project.

  • The memory overhead of running a single VM currently appears too high. After digging into this, it seems the virtual memory usage is huge while the RSS, i.e. the actual memory usage, is not much. This could still cause issues such as falsely triggering low-memory conditions in the kubelet’s Out Of Resource handling or in third-party monitoring tools.
  • virtctl console does not work for every VM, though virtctl vnc works.
  • A tool to suggest a VMI spec for a given VM configuration would be helpful. A direct translation may not be possible, but suggesting storage and networking options would be useful for new users.
  • VM templates: VM spec templates help in defining VMI specs, but they are currently available only with OpenShift.
  • Probes are supported for VMIs, though I was not able to use them successfully; my attempts resulted in the VM shutting down.

I am looking forward to beginning my contribution by fixing the above issues and contributing to the stability of the KubeVirt project.

Conclusion: KubeVirt is promising and has the potential to lower the barrier to entry for migrating to Kubernetes, especially for applications that are not easy to containerize. It is still in active development and has not yet reached v1.0, so there are rough edges, but it is stable enough to evaluate a migration.

Further evaluation is required to understand the details of KubeVirt and how VMs running on traditional architectures, such as multi-network VMs and HA-based configurations, can be migrated to Kubernetes.

NTT is looking for engineers who work in open-source communities such as the Kubernetes and Docker projects. If you wish to work on such projects, please visit our recruitment page. To learn more about NTT’s contributions to open-source projects, please visit our Software Innovation Center page.
