Using KubeVirt in Azure Kubernetes Service — part 3: Windows VM and CSI volume cloning

Alessandro Vozza
Oct 2, 2020 · 3 min read

We’ll look at how to install a Windows Server 2019 VM inside an Azure Kubernetes Service cluster running KubeVirt.

We introduced KubeVirt in parts 1 and 2; now it’s time to get a Windows VM up and running.

Start by downloading the Windows Server 2019 ISO from Microsoft (evaluation version). We will package this ISO in a Docker image and upload it to Docker Hub (you can use any other registry, of course) by leveraging the container disk format (a regular Docker image that carries the ISO in its /disk folder).
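A minimal Dockerfile for this follows the containerDisk convention (the ISO filename is a placeholder; rename yours to match):

```dockerfile
# containerDisk convention: KubeVirt expects the payload under /disk
FROM scratch
ADD windows_server_2019_eval.iso /disk/
```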

Now you can just build and push the image:

docker build -t <registry>/<org>/w2k9_iso:sep2020 .
docker push <registry>/<org>/w2k9_iso:sep2020

The container image will be big and will take some time to upload, but now a Windows Server ISO is available to any AKS cluster that needs it by simply pulling that image. (Note: you could also have uploaded the image directly to the cluster using the Containerized Data Importer project; that will be the subject of another blog post.)

We will now create a PersistentVolumeClaim to hold our installation:

Note the use of the managed-premium storage class for performance.
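The original manifest was embedded as a gist; a sketch consistent with the text (the winhd name matches the VM description below, the 40Gi size is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: winhd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium   # premium disks for performance
  resources:
    requests:
      storage: 40Gi                   # assumed size; pick what Windows needs
```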

Let’s look now at the VM itself:

Note that the install VM has three disks attached:

  • The PVC winhd as target to install Windows
  • The windows iso container disk created before
  • The virtio driver disk

Virtio is the paravirtualized device model that libvirt exposes to VMs in KubeVirt; the standard Windows ISO lacks virtio drivers, so we need to provide them at installation time. Creating the VM object will trigger the creation of a VMI (since the spec sets running: true), which will spin up a virt-launcher Pod to hold the actual libvirt runtime and domain. It will take some time for the VMI object to go from Scheduling to Running because the host must first pull the large ISO image.
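The VM manifest was also a gist; a sketch showing the three disks described above (CPU, memory and the virtio-container-disk image tag are assumptions; adjust to your own build):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm1
spec:
  running: true                         # creates the VMI immediately
  template:
    metadata:
      labels:
        kubevirt.io/domain: vm1
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: winhd               # the PVC, install target
              disk:
                bus: virtio
            - name: winiso              # the Windows Server 2019 ISO
              cdrom:
                bus: sata
            - name: virtio-drivers      # virtio drivers for Windows
              cdrom:
                bus: sata
      volumes:
        - name: winhd
          persistentVolumeClaim:
            claimName: winhd
        - name: winiso
          containerDisk:
            image: <registry>/<org>/w2k9_iso:sep2020
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
```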

Once running, you can connect to the VNC console for the VM using the command (you’ll need either virtctl or the virt plugin via Krew):

kubectl virt vnc vm1

(You can use the free VcXsrv with WSL2 on Windows by disabling access control and exporting the DISPLAY environment variable.)

You’ll be prompted with the Windows installation screen. Proceed by selecting “Windows Server 2019 Standard (Desktop Experience)” until you reach the disk selection screen, which shows a discouraging empty list. At that point, click Load driver and browse to the virtio disk. You’ll need to load both the network driver (from the \NetKVM\2k19\amd64 folder) and the storage driver (from \viostor\2k19\amd64).

Now you’ll be able to see the persistent volume to install Windows to. Complete the installation and when the VM reboots, choose a password and enable RDP in the Local Server Manager.

You can now delete the VM with:

kubectl delete vm vm1

The PersistentVolume just created will, well, persist, and you can now create a new VM without the ISO disks; we also create a LoadBalancer service to expose the RDP port:
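The Service manifest did not survive as a gist; a sketch (the selector assumes the kubevirt.io/domain label that KubeVirt sets on the virt-launcher Pod of a VM named vm1):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vm1-rdp
spec:
  type: LoadBalancer       # Azure provisions a public IP for this
  selector:
    kubevirt.io/domain: vm1
  ports:
    - name: rdp
      protocol: TCP
      port: 3389
      targetPort: 3389
```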

You should now be able to connect to the VM via RDP over the LoadBalancer IP.

Extra tip: if you are using the new CSI volume provisioner for azure-disk, you can clone the volume where you installed Windows before spinning up a new VM. First install the CSI driver with snapshot support enabled:

curl -skSL https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/install-driver.sh | bash -s master snapshot --

Then create a new StorageClass:
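A sketch of such a StorageClass (the name ultra-csi is an assumption; UltraSSD disks also require host caching to be disabled):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-csi
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS   # swap for Premium_LRS, StandardSSD_LRS, etc.
  cachingMode: None       # required for UltraSSD
reclaimPolicy: Delete
allowVolumeExpansion: true
```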

You can choose a different disk type here too (the above uses the UltraSSD type). Now create the winhd PVC referring to this new StorageClass:
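Along these lines (assuming the new StorageClass is named ultra-csi and keeping the 40Gi size assumption):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: winhd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ultra-csi   # the CSI StorageClass created above
  resources:
    requests:
      storage: 40Gi
```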

Now go ahead and install Windows on the winhd PVC. Delete the VM and, before creating a new one, create a new PVC using the first PVC as its source:
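A sketch of the clone PVC, using the dataSource field for CSI volume cloning (names and size assumed as before):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: winhd-clone
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ultra-csi
  resources:
    requests:
      storage: 40Gi           # must be at least the source PVC's size
  dataSource:
    kind: PersistentVolumeClaim
    name: winhd               # the PVC holding the installed Windows disk
```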

and refer to winhd-clone now when you create a new VM.

Cooking with Azure

All things Azure