Using KubeVirt in Azure Kubernetes Service — part 2: Installation

Alessandro Vozza
Published in Cooking with Azure
3 min read · Sep 20, 2020

After the introduction to KubeVirt in part 1, we continue in this post to get to know KubeVirt in AKS by installing it and running our first VirtualMachine.

We looked at how KubeVirt implements the objects necessary to run VirtualMachine workloads in Kubernetes in part 1; now we'll get practical by deploying those objects to an Azure Kubernetes Service cluster. First, let's create a basic 1-node cluster:

az aks create \
--resource-group k8s \
--network-policy calico \
--network-plugin kubenet \
--node-vm-size Standard_B4ms \
--node-count 1 \
--name kubevirt

Add a second nodepool with a VM size that has the Intel virtualization extensions (VT-x) enabled (all sizes from the Ds_v3 series have them):

az aks nodepool add \
--resource-group k8s \
--cluster-name kubevirt \
--name nested \
--node-vm-size Standard_D4s_v3 \
--labels nested=true \
--node-count 1

Note how we’re attaching a label to the nodepool; this way, scaling the pool ensures that all new instances get the same label (which comes in handy later when we want to schedule our VMs on nodes with native CPU acceleration).

Retrieve the credentials for the cluster and check the nodes:

az aks get-credentials -g k8s -n kubevirt
kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nested-72121939-vmss000000      Ready    agent   25m   v1.17.9
aks-nodepool1-72121939-vmss000000   Ready    agent   27m   v1.17.9

(you can use the kubectl --show-labels option to confirm the label is attached to the second pool).

Time to install KubeVirt! First, let’s install the handy virtctl tool as a krew plugin for kubectl:

kubectl krew install virt

And install the KubeVirt operator and a single instance of the KubeVirt custom resource:
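The snippet embedded in the original post isn’t reproduced here; a minimal sketch using the standard upstream release manifests follows (the pinned version is only an example — check the KubeVirt releases page for the current one):

```shell
# Pick a KubeVirt release (example version; use the latest from the releases page)
export VERSION=v0.34.0

# Deploy the virt-operator...
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml

# ...and a KubeVirt custom resource, which triggers the actual installation
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
```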

You’ll notice in the kubevirt namespace the virt-operator pod, which will in turn create the virt-api and virt-controller deployments (with replicas=2) and a DaemonSet for the virt-handler.

When all pods are up, we can create a VirtualMachine object that will in turn create a VirtualMachineInstance object (the actual running VM). First, I’d like to save the cloud-config as a secret in Kubernetes (in my case, I’m not injecting a password but simply my public SSH key, but it’s good practice to keep the configuration separate from the actual VM object):
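A sketch of how that could look (the secret name vm1-cloudinit and the placeholder key are assumptions; KubeVirt expects the cloud-init user data under the userdata key of the secret):

```shell
# Write a cloud-config that only injects a public SSH key (placeholder key shown)
cat <<'EOF' > cloud-init.cfg
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA...yourkey... you@example.com
EOF

# Store it as a secret; the VM object will reference it as its cloud-init userdata
kubectl create secret generic vm1-cloudinit --from-file=userdata=cloud-init.cfg
```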

And finally create the VM object:
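The embedded manifest isn’t reproduced here; below is a sketch consistent with the points that follow (the names, the kubevirt.io/vm label, and the vm1-cloudinit secret reference are assumptions):

```shell
kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm1
  labels:
    kubevirt.io/vm: vm1
spec:
  running: true                  # a VirtualMachineInstance is created immediately
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm1
    spec:
      nodeSelector:
        nested: "true"           # land on the nodepool with nested virtualization
      domain:
        resources:
          requests:
            memory: 1G
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            secretRef:
              name: vm1-cloudinit   # the cloud-config secret created earlier
EOF
```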

A few things to notice:

  • the nodeSelector, for making sure the pod gets scheduled on a node with nested virtualization
  • the request for 1 GB of RAM
  • the use of a containerDisk to automatically download the latest Fedora image; containerDisks are full cloud images packaged into a Docker image (you can find all available versions on DockerHub).
  • the cloud-init script referenced from a Kubernetes secret.

Creating the object of type VirtualMachine will create a VirtualMachineInstance (because of the spec.running: true property), which will cause the virt-handler on the node to run a pod with libvirt to actually run the virtual machine (a domain in libvirt parlance). You can check:

kubectl get po,vm,vmi
NAME                          READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vm1-g96j6   2/2     Running   0          16m

NAME                             AGE   VOLUME
virtualmachine.kubevirt.io/vm1   19m

NAME                                     AGE   PHASE     IP           NODENAME
virtualmachineinstance.kubevirt.io/vm1   16m   Running   10.244.1.8   aks-nested-72121939-vmss000000

And now, for the grand finale, it’s time to expose the VM so we can access it: we need a Service to connect to. This can be done with the virtctl expose command or simply by creating a Service object (note how the usual label/selector rules apply: the service endpoint list will be populated by all VirtualMachines with labels matching the selector, so it’s possible to reach multiple VMs through a single Service).
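A sketch of both options (the service name vm1ssh matches the ssh one-liner further down; the kubevirt.io/vm: vm1 selector is an assumption about how the VM template was labeled):

```shell
# Option 1: let virtctl create the service for us
kubectl virt expose vmi vm1 --name=vm1ssh --port=22 --type=LoadBalancer

# Option 2: the equivalent Service object, selecting on the VM template's label
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: vm1ssh
spec:
  type: LoadBalancer
  ports:
    - name: ssh
      port: 22
      targetPort: 22
  selector:
    kubevirt.io/vm: vm1
EOF
```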

Wait for the public IP to show up on the load balancer then SSH to the VM:

ssh fedora@`kubectl get svc  vm1ssh -o custom-columns=":.status.loadBalancer.ingress[0].ip" --no-headers=true`

You’re in! It’s a simple VM but feels like a great achievement. Coming up in the next articles: storage, running Windows and more.
