Kubernetes and Virtualization: KubeVirt will let you spawn virtual machines on your cluster!

Alessandro Arrichiello
Feb 15, 2018


KubeVirt logo

Imagine you have a Kubernetes cluster built on top of physical hosts. Now imagine that alongside your great, ready-to-use, cloud-native applications running in containers, you also have a brownfield world made up of several virtual machines that you cannot easily port to containers.

Are you stuck keeping a separate virtualization layer while working to interconnect your legacy world with this brand new container-based one?

The answer is: NO! That’s KubeVirt’s job!

Just to quote the project description:

The high level goal of the project is to build a Kubernetes add-on to enable management of virtual machines (KVM), via libvirt.

The intent is that over the long term this would enable application container workloads, and virtual machines, to be managed from a single place via the Kubernetes native API and objects. This will

  • Provide a migration path to Kubernetes for existing applications deployed in virtual machines, allowing them to more seamlessly integrate with application containers and take advantage of Kubernetes concepts
  • Provide a converged API and object model for tenant users needing to manage both application containers and full OS image virtual machines
  • Provide converged infrastructure for administrators wishing to support both application containers and full machine virtualization concurrently
  • Facilitate the creation of virtual compute nodes to use KVM to strongly isolate application container pods belonging to tenant users with differing trust levels

In this article we’ll see how to easily create a Kubernetes cluster on top of virtual machines and then run KubeVirt on top of it to spawn new virtual machines (using nested virtualization). For this reason, make sure nested virtualization is enabled before proceeding. For Fedora take a look here: https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM
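If you're unsure whether it's already enabled, a quick check on an Intel laptop looks roughly like this (a sketch for the kvm_intel module; on AMD hardware substitute kvm_amd, and note that the parameter may print 1/0 instead of Y/N depending on the kernel version):

[alex@laptop ~]$ cat /sys/module/kvm_intel/parameters/nested
Y
[alex@laptop ~]$ # if it prints N, enable it and reload the module (with no VMs running):
[alex@laptop ~]$ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
[alex@laptop ~]$ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel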

Please note that KubeVirt is under heavy development, so expect changes. DON’T use it in production!

Creating your virtual-machine-based Kubernetes cluster

I’ve tried a lot of how-tos, projects, Ansible playbooks, etc. available on the web. The easiest and fastest way to set up a working Kubernetes cluster on your laptop is “kubespray”: https://github.com/kubernetes-incubator/kubespray

Please note: I preferred setting up a multi-node Kubernetes cluster instead of a single-node one (e.g. minikube) in order to test VM portability across nodes!

I chose kubespray also because it lets you create your Kubernetes cluster through the Vagrant tool, which really simplifies the process of creating the virtual machines, sizing them and installing the base operating system.

So, let’s start creating our kube cluster:

Get a copy of the kubespray GitHub repository

At the time of writing, the kubespray GitHub repository is missing support for vagrant-libvirt images, so I made a fork and opened a pull request to fix it.

While they review and eventually approve it, you can use my forked repo (BTW, in case you want to use a different virtualization engine, you can clone the original repo):

[alex@laptop gitprojects]$ git clone https://github.com/alezzandro/kubespray

Set up your Vagrant environment

Then we should edit the Vagrantfile to instruct it to use the centos-libvirt image, and also change the default network if it overlaps with any of your notebook's active networks (in my case it overlapped with the Docker network).

[alex@laptop gitprojects]$ cd kubespray
[alex@laptop kubespray]$ vi Vagrantfile
...
$subnet = "172.172.1"
$os = "centos-libvirt"
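While you're in the Vagrantfile, it also exposes a few sizing knobs; the variable names below are the ones I remember from my copy of the repo, so double-check yours before relying on them:

$num_instances = 3
$vm_memory = 2048
$vm_cpus = 1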

Then we're ready to let Vagrant bring up our virtual-machine-based Kubernetes cluster, made up of three VMs.

[alex@laptop kubespray]$ vagrant up

After setting up the VMs, Vagrant will start kubespray's Ansible provisioner, which will take care of all the Kubernetes cluster configuration for us!

Once finished, if you used vagrant-libvirt and the CentOS image, you have to disable the check for swap memory.

Kubelet won't run unless swap is disabled, so to work around that we can disable this check in kubelet's environment variables:

[alex@laptop kubespray]$ ansible -s -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -m shell -a "sed -i 's/--fail-swap-on=True/--fail-swap-on=False/g' /etc/kubernetes/kubelet.env" all
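The kubelet on each node also needs a restart to pick up the changed flag; reusing the same inventory, something like this should do it:

[alex@laptop kubespray]$ ansible -s -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -m shell -a "systemctl restart kubelet" all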

Get the Kubernetes admin credentials to connect to the cluster from your laptop

We need to connect to one of the masters to grab the admin kubeconfig and copy it into our local kube config:

[alex@laptop kubespray]$  vagrant ssh k8s-02
Last login: Mon Feb 12 15:36:39 2018 from 192.168.121.1
[vagrant@k8s-02 ~]$ sudo -i
[root@k8s-02 ~]# cat /etc/kubernetes/admin.conf
...

Now copy that content into ~/.kube/config on your laptop!
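If you prefer a one-liner over copy-pasting, something along these lines should work too (it overwrites any existing config, so back yours up first):

[alex@laptop kubespray]$ vagrant ssh k8s-02 -c "sudo cat /etc/kubernetes/admin.conf" > ~/.kube/config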

It's time to set up/download the kubectl client if you haven't already. On Fedora/RHEL systems you can install the "kubernetes-client" package.
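On Fedora, for example, that's just:

[alex@laptop ~]$ sudo dnf install kubernetes-client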

Check that the Kubernetes cluster is working properly

If everything went OK, you'll see something similar to the following, reporting that you now have three Kubernetes nodes ready to be used:

[alex@laptop ~]$ kubectl get nodes
NAME      STATUS    ROLES          AGE       VERSION
k8s-01    Ready     master,node    21m       v1.9.0+coreos.0
k8s-02    Ready     master,node    21m       v1.9.0+coreos.0
k8s-03    Ready     node           21m       v1.9.0+coreos.0
[alex@laptop ~]$ kubectl get ns
NAME          STATUS    AGE
default       Active    22m
kube-public   Active    22m
kube-system   Active    22m

Set up KubeVirt

I prepared some examples (in addition to the ones available from the KubeVirt team), and for that reason I forked the original kubevirt-demo repository, so clone my repo first:

[alex@laptop gitprojects]$ git clone https://github.com/alezzandro/demo

Then setup kubevirt in your kubernetes cluster with the following command:

[alex@laptop gitprojects]$ cd demo
[alex@laptop demo]$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/v0.2.0/kubevirt.yaml -f manifests/demo-pv.yaml
clusterrole "kubevirt-infra" created
serviceaccount "kubevirt-infra" created
serviceaccount "kubevirt-admin" created
clusterrolebinding "kubevirt-infra" created
clusterrolebinding "kubevirt-infra-cluster-admin" created
customresourcedefinition "virtualmachines.kubevirt.io" created
customresourcedefinition "migrations.kubevirt.io" created
customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created
deployment "virt-controller" created
daemonset "virt-handler" created
daemonset "libvirt" created
service "iscsi-demo-target" created
deployment "iscsi-demo-target-tgtd" created
persistentvolumeclaim "disk-alpine" created
persistentvolume "iscsi-disk-alpine" created

With the previous command we create all the resources needed to bring up the KubeVirt components inside our Kubernetes cluster. The demo-pv.yaml manifest includes everything necessary to set up an iSCSI target container serving pre-built disks and ISOs to our future virtual machines.
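To give an idea of what's in there, an iSCSI-backed PersistentVolume of this kind looks roughly like the sketch below; the target, IQN and size mirror values appearing elsewhere in this article, while the LUN is my assumption, so check manifests/demo-pv.yaml for the real manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-disk-alpine
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  iscsi:
    targetPortal: iscsi-demo-target.default.svc.cluster.local
    iqn: iqn.2017-01.io.kubevirt:sn.42
    lun: 2 # assumption: the LUN serving the Alpine disk, as in the testvm template below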

We can check the status of the iscsi pod with the following commands:

[alex@laptop demo]$ kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
iscsi-disk-alpine   1Gi        RWO            Retain           Bound    default/disk-alpine                           25m
[alex@laptop demo]$ kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
iscsi-demo-target-tgtd-5674b4f6fd-hb2j5   1/1       Running   0          26m

As you can see from the previous commands, a running iSCSI pod with a bound persistent volume was just created.
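We can also take a quick look at the Service that exposes the target inside the cluster; its name is what the targetPortal field in the VM templates will point at later on:

[alex@laptop demo]$ kubectl get svc iscsi-demo-target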

After that, check that everything is working properly (it will take some time to finish!):

[alex@laptop demo]$ kubectl get --namespace kube-system pods | grep virt
libvirt-2cvqb                     2/2       Running   0          9m
libvirt-rmxg5                     2/2       Running   0          9m
libvirt-w8xgc                     2/2       Running   0          9m
virt-controller-b7b8fd9b7-f8vkf   1/1       Running   0          9m
virt-controller-b7b8fd9b7-q67bk   0/1       Running   0          9m
virt-handler-htv8c                1/1       Running   4          9m
virt-handler-lq4nf                1/1       Running   5          9m
virt-handler-xqwzk                1/1       Running   0          9m

Don't worry about the not-ready virt-controller pod (the one showing 0/1 in the previous example); I've already filed an issue for the KubeVirt project reporting it.

https://github.com/kubevirt/kubevirt/issues/727

It seems that the container is deployed on role=master Kubernetes nodes; in our case we have two masters, so it tries to deploy virt-controller twice.
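If you want to check where the scheduler is allowed to place it, you can inspect the deployment's node selector with plain kubectl (nothing KubeVirt-specific here):

[alex@laptop demo]$ kubectl -n kube-system get deployment virt-controller -o jsonpath='{.spec.template.spec.nodeSelector}'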

Spawn a test virtual machine

We'll create a virtual machine using a simple YAML template:

[alex@laptop demo]$ cat manifests/vm.yaml 
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  terminationGracePeriodSeconds: 0
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: mydisk
        volumeName: myvolume
        disk:
          dev: vda
  volumes:
  - name: myvolume
    iscsi:
      iqn: iqn.2017-01.io.kubevirt:sn.42
      lun: 2
      targetPortal: iscsi-demo-target.default.svc.cluster.local
[alex@laptop demo]$ kubectl create -f manifests/vm.yaml
virtualmachine "testvm" created
[alex@laptop demo]$ kubectl get vm
NAME AGE
testvm 7s
[alex@laptop demo]$ kubectl describe vm testvm | tail -n8
Status:
  Node Name:  k8s-01
  Phase:      Running
Events:
  Type    Reason   Age  From                  Message
  ----    ------   ---  ----                  -------
  Normal  Created  6m   virt-handler, k8s-01  VM defined.
  Normal  Started  6m   virt-handler, k8s-01  VM started.

Yeah, this might sound cool: we've just started our first virtual machine on this Kubernetes cluster. But how do we interact with it?

First we have to download the virtctl binary (sorry Mac users, try searching for it in Homebrew). This tool gives quick access to the serial and graphical consoles of a VM:

[alex@laptop demo]$ mkdir -p ~/bin
[alex@laptop demo]$ export VERSION=v0.2.0
[alex@laptop demo]$ curl -L -o ~/bin/virtctl https://github.com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64
[alex@laptop demo]$ chmod a+x ~/bin/virtctl

Then we'll set up a proxy connection to the Kubernetes cluster:

[alex@laptop demo]$ kubectl proxy --disable-filter=true &
Starting to serve on 127.0.0.1:8001
[alex@laptop demo]$ KUBEAPI=http://127.0.0.1:8001

Finally, we can connect to the newly created VM:

[alex@laptop demo]$ virtctl console -s $KUBEAPI testvm
Escape sequence is ^]
Welcome to Alpine Linux 3.5
Kernel 4.4.45-0-virtgrsec on an x86_64 (/dev/ttyS0)
localhost login: root
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
localhost:~# uname -a
Linux localhost 4.4.45-0-virtgrsec #1-Alpine SMP Thu Jan 26 14:32:43 GMT 2017 x86_64 Linux
localhost:~# whoami
root

As you can see from the previous example, right after running the command we jump into the serial console of the VM.

Logging in as root lets us enter the Alpine Linux distribution that the VM is running.

OK, the VM is there, up and running, but where? On which node? Let's explore the Kubernetes pod events:

[alex@laptop demo]$ kubectl get pods|grep virt
virt-launcher-testvm-----m5hqt 1/1 Running 0 24m
[alex@laptop demo]$ kubectl describe pod virt-launcher-testvm-----m5hqt | tail -n10
Type    Reason                 Age   From               Message
----    ------                 ----  ----               -------
Normal  Scheduled              25m   default-scheduler  Successfully assigned virt-launcher-testvm-----m5hqt to k8s-01
Normal  SuccessfulMountVolume  25m   kubelet, k8s-01    MountVolume.SetUp succeeded for volume "virt-share-dir"
Normal  SuccessfulMountVolume  25m   kubelet, k8s-01    MountVolume.SetUp succeeded for volume "virt-private-dir"
Normal  SuccessfulMountVolume  25m   kubelet, k8s-01    MountVolume.SetUp succeeded for volume "default-token-zg28w"
Normal  Pulling                25m   kubelet, k8s-01    pulling image "kubevirt/virt-launcher:v0.2.0"
Normal  Pulled                 25m   kubelet, k8s-01    Successfully pulled image "kubevirt/virt-launcher:v0.2.0"
Normal  Created                25m   kubelet, k8s-01    Created container
Normal  Started                25m   kubelet, k8s-01    Started container
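By the way, a quicker way to answer the "which node?" question is the wide output of kubectl get pods, which adds a NODE column:

[alex@laptop demo]$ kubectl get pods -o wide | grep virt-launcher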

Fine, let's connect to node k8s-01:

[alex@laptop demo]$ cd ../kubespray/
[alex@laptop kubespray]$ vagrant ssh k8s-01
Last login: Mon Feb 12 13:12:18 2018 from 192.168.121.1
[vagrant@k8s-01 ~]$ sudo -i
[root@k8s-01 ~]# ps ax | grep qemu
10714 ? Ssl 0:06 /virt-handler -v 3 --libvirt-uri qemu:///system --hostname-override k8s-01
13706 pts/2 S+ 0:00 grep --color=auto qemu
17461 ? Ssl 0:01 /virt-launcher --qemu-timeout 5m --name testvm --namespace default --kubevirt-share-dir /var/run/kubevirt --readiness-file /tmp/healthy --grace-period-seconds 15
17533 ? Sl 1:41 /usr/bin/qemu-system-x86_64 -name guest=default_testvm,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-default_testvm/master-key.aes -machine pc-i440fx-2.10,accel=tcg,usb=off,dump-guest-core=off -m 62 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid baa06c70-187f-4ea8-9343-1ff88fe04155 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-default_testvm/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=iscsi://10.233.37.0:3260/iqn.2017-01.io.kubevirt%3Asn.42/2,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:3d:92:32,bus=pci.0,addr=0x3 -chardev socket,id=charserial0,path=/var/run/kubevirt-private/default/testvm/virt-serial0,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -vnc vnc=unix:/var/run/kubevirt-private/default/testvm/virt-vnc -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on

And here we go! On this node we have a containerized qemu-system-x86_64 process running the virtual machine we just created.

Explore the iSCSI container: a Cirros Virtual Machine

But let's also explore the running iSCSI pod, to see what other kinds of VMs we can start (i.e. which disks we can use):

[alex@laptop kubespray]$ kubectl get pods | grep scsi
iscsi-demo-target-tgtd-5674b4f6fd-hb2j5 1/1 Running 0 1h
[alex@laptop kubespray]$ kubectl exec iscsi-demo-target-tgtd-5674b4f6fd-hb2j5 -ti /bin/bash
[root@iscsi-demo-target-tgtd-5674b4f6fd-hb2j5 /]# ls volume/
0-custom.img  alpine.iso  cirros.raw

So, ready to test the Cirros RAW image?

I've already prepared a template that uses the third disk; you'll find it in manifests/cirrosvm.yaml:

[alex@laptop kubespray]$ cd ../demo/
[alex@laptop demo]$ cat manifests/cirrosvm.yaml
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
metadata:
  name: cirrosvm
spec:
  terminationGracePeriodSeconds: 0
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: mydisk
        volumeName: myvolume
        disk:
          dev: vda
      - name: cloudinitdisk
        volumeName: cloudinitvolume
        disk:
          dev: vdb
  volumes:
  - name: myvolume
    iscsi:
      iqn: iqn.2017-01.io.kubevirt:sn.42
      lun: 3
      targetPortal: iscsi-demo-target.default.svc.cluster.local
  - name: cloudinitvolume
    cloudInitNoCloud:
      userDataBase64: I2Nsb3VkLWNvbmZpZwoK

As you can see from the previous template, I've just changed the LUN id to "3" to select the Cirros image, and I've also added a cloud-init volume to speed up the boot sequence: otherwise the Cirros VM would keep waiting for cloud-init instructions over the network (until reaching a timeout).

You can easily decode the Base64 data; it's nothing more than:

#cloud-config
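You can check it yourself:

[alex@laptop demo]$ echo "I2Nsb3VkLWNvbmZpZwoK" | base64 -d
#cloud-config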

Then create the cirros VM:

[alex@laptop demo]$ kubectl create -f manifests/cirrosvm.yaml 
virtualmachine "cirrosvm" created
[alex@laptop demo]$ kubectl get vms
NAME AGE
cirrosvm 31s
testvm 1h
[alex@laptop demo]$ kubectl describe vm cirrosvm | tail -n8
Status:
  Node Name:  k8s-02
  Phase:      Running
Events:
  Type    Reason   Age  From                  Message
  ----    ------   ---  ----                  -------
  Normal  Created  1m   virt-handler, k8s-02  VM defined.
  Normal  Started  1m   virt-handler, k8s-02  VM started.

Let’s connect to the running cirrosvm as done previously:

[alex@laptop demo]$ virtctl console -s $KUBEAPI cirrosvm
Escape sequence is ^]
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ sudo -i
#
# uname -a
Linux cirros 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
# mount
rootfs on / type rootfs (rw)
/dev on /dev type devtmpfs (rw,relatime,size=20764k,nr_inodes=5191,mode=755)
/dev/vda1 on / type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
/proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=200k,mode=755)

Now let's try to store some data and see if it survives a VM crash; we'll create a file inside our Cirros VM:

# echo "This is only an example of data" > /root/mydata.txt
# ls
mydata.txt

We’re now ready to delete the Virtual Machine and recreate it:

[alex@laptop demo]$ kubectl delete vm cirrosvm
virtualmachine "cirrosvm" deleted
[alex@laptop demo]$ kubectl create -f manifests/cirrosvm.yaml
virtualmachine "cirrosvm" created
[alex@laptop demo]$ kubectl describe vm cirrosvm | tail -n8
Status:
  Node Name:  k8s-01
  Phase:      Running
Events:
  Type    Reason   Age  From                  Message
  ----    ------   ---  ----                  -------
  Normal  Created  9s   virt-handler, k8s-01  VM defined.
  Normal  Started  8s   virt-handler, k8s-01  VM started.

And there we go! The cirrosvm is now running on a different node; let's connect to it to see if the data survived the "crash":

[alex@laptop demo]$ virtctl console -s $KUBEAPI cirrosvm
Escape sequence is ^]
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ sudo -i
# cat /root/mydata.txt
This is only an example of data
#

Data is there, it persisted thanks to the iSCSI volume!

Other tests and hacks

I've also tested some other features, like the Migration object; you'll find an example in the file "manifests/migrate-testvm.yaml".
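For reference, a Migration object in this API version looked roughly like the sketch below; I'm writing it from memory, so treat the field names as assumptions and check manifests/migrate-testvm.yaml for the exact content:

apiVersion: kubevirt.io/v1alpha1
kind: Migration
metadata:
  name: testvm-migration # hypothetical name
spec:
  selector:
    name: testvm # the VM to migrate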

It works correctly, but unfortunately it hangs if the VM has a cloud-init volume attached. I've also reported this bug: https://github.com/kubevirt/kubevirt/issues/665

That’s all for now!

Keep watching the project, it’s very active!

About Alessandro

Alessandro Arrichiello is a Solution Architect for Red Hat Inc. He has a passion for GNU/Linux systems, which began at age 14 and continues today. He has worked with tools for automating Enterprise IT. He's now working on distributed cloud environments involving PaaS (OpenShift), IaaS (OpenStack), process management (CloudForms) and containers.
