Using KubeVirt in Azure Kubernetes Service — part 2: Installation
After the introduction to KubeVirt in part 1, we continue in this post getting to know KubeVirt in AKS by installing it and running our first VirtualMachine.
We looked at how KubeVirt implements the objects necessary to run VirtualMachine workloads in Kubernetes in part 1; now we'll get practical and deploy those objects to an Azure Kubernetes Service cluster. First, let's create a basic 1-node cluster:
az aks create \
--resource-group k8s \
--name kubevirt \
--network-policy calico \
--network-plugin kubenet \
--node-vm-size Standard_B4ms \
--node-count 1
Add a second nodepool with a VM size that has the Intel virtualization extensions (VT-x) enabled (all sizes in the Ds_v3 series have them):
az aks nodepool add \
--resource-group k8s \
--cluster-name kubevirt \
--name nested \
--node-vm-size Standard_D4s_v3 \
--labels nested=true
Note how we're attaching a label to the nodepool; this way, scaling the pool will make sure the new instances all get the same label (which comes in handy later, when we'll want to schedule our VMs on nodes with native CPU acceleration).
Retrieve the credentials for the cluster and check the nodes:
az aks get-credentials -g k8s -n kubevirt
kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nested-72121939-vmss000000      Ready    agent   25m   v1.17.9
aks-nodepool1-72121939-vmss000000   Ready    agent   27m   v1.17.9
(You can use the --show-labels kubectl option to confirm the label is attached to the second pool.)
Time to install KubeVirt! First, let's install the handy virtctl tool as a krew plugin for kubectl:

kubectl krew install virt
Then install the KubeVirt operator and a single instance of the KubeVirt custom resource:
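The exact manifests weren't embedded here; a sketch of the install from the upstream release artifacts (the version pinned below is illustrative — check the KubeVirt releases page for the current one):

```shell
# Pick a KubeVirt release (illustrative version; check the releases page)
export KUBEVIRT_VERSION=v0.35.0

# Deploy the operator, then the KubeVirt custom resource it watches
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml"
```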
You'll notice in the kubevirt namespace the virt-operator pod, which will in turn create the virt-api and virt-controller deployments (each with replicas=2) and a DaemonSet for the virt-handler.
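To watch the components come up (the CR name and the "Deployed" phase below assume the default install):

```shell
# Watch the control-plane pods roll out
kubectl get pods -n kubevirt

# The KubeVirt custom resource reports "Deployed" once the rollout completes
kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.status.phase}'
```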
When all pods are up, we can create a VM object that will in turn create a VirtualMachineInstance object (the actual running VM). First, I'd like to save the cloud-config as a secret in Kubernetes (in my case, I'm not injecting a password but simply my public SSH key; it's good practice to keep the configuration separated from the actual VM object):
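A minimal sketch of that step (the secret name vm1-cloudinit and the file layout are my assumptions; swap in your own public key):

```shell
# cloud-init user data that only authorizes an SSH key (replace with yours)
cat > cloud-init.yaml <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... fedora@example
EOF

# KubeVirt's cloudInitNoCloud secretRef expects the data under the "userdata" key
kubectl create secret generic vm1-cloudinit --from-file=userdata=cloud-init.yaml
```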
And finally create the VM object:
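The manifest itself wasn't embedded here; a sketch of what it could look like, assuming a VM named vm1 and a cloud-init secret named vm1-cloudinit (both names, the label, and the image tag are illustrative):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm1
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: vm1
    spec:
      nodeSelector:
        nested: "true"          # schedule on the nested-virtualization pool
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            secretRef:
              name: vm1-cloudinit
```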
A few things to notice:

- the nodeSelector, making sure the pod gets scheduled on a node with nested virtualization;
- the request for 1Gi of RAM;
- the use of containerDisk to automatically download the latest Fedora image; containerDisks are full cloud images packaged into a Docker image (you can find all available versions on Docker Hub);
- the cloud-init script referenced from a Kubernetes secret.
Creating the object of type VirtualMachine will create a VirtualMachineInstance (because of the spec.running: true property), which will force the virt-handler on the node to run a pod with libvirt to actually run the virtual machine (a domain in libvirt parlance). You can check:
kubectl get po,vm,vmi

NAME                          READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vm1-g96j6   2/2     Running   0          16m

NAME                             AGE   VOLUME
virtualmachine.kubevirt.io/vm1   16m

NAME                                     AGE   PHASE     IP           NODENAME
virtualmachineinstance.kubevirt.io/vm1   16m   Running   10.244.1.8   aks-nested-72121939-vmss000000
And now, for the grand finale, it's time to expose that VM so we can access it: let's create a service to connect to. It can be done with the virtctl expose command or simply by creating a Service object (note how the same rules for labels/selectors apply: the service endpoint list will be populated by all VirtualMachines with labels matching the selector, so it's possible to access multiple VMs through a single Service).
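A sketch of such a Service, assuming the VM's pod template carries a kubevirt.io/domain: vm1 label (the label key and the vm1ssh name are my assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vm1ssh
spec:
  type: LoadBalancer        # Azure provisions a public IP for this
  selector:
    kubevirt.io/domain: vm1 # must match the labels on the VM's pod template
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```

Roughly the equivalent of kubectl virt expose vm vm1 --name vm1ssh --port 22 --type LoadBalancer.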
Wait for the public IP to show up on the load balancer then SSH to the VM:
ssh fedora@`kubectl get svc vm1ssh -o custom-columns=":.status.loadBalancer.ingress.ip" --no-headers=true`
You’re in! It’s a simple VM but feels like a great achievement. Coming up in the next articles: storage, running Windows and more.