Deploying VMs into multiple clusters using KubeStellar and KubeVirt
Introduction
Distributing workloads across multiple Kubernetes clusters can be a tedious and complex task. In particular, when we want to use KubeVirt to manage VMs on Kubernetes clusters, we need to deploy KubeVirt in each of these clusters, and then connect to each cluster in order to create and manage the VMs.
In this post we will show how users can easily and transparently deploy VMs into multiple Kubernetes clusters through KubeStellar, avoiding the need to interact with each cluster individually.
Table of Contents
- Background
- Set up the KubeStellar environment
- Deploy KubeVirt into multiple clusters
- Deploying VMs into multiple clusters
- Summary
Background
KubeStellar
KubeStellar supports multi-cluster deployment of Kubernetes objects, controlled by simple binding policies, with objects deployed in their native format. It uses OCM (Open Cluster Management) as its transport, with standard OCM agents (Klusterlet).
In a single-cluster setup, developers typically access the cluster and deploy Kubernetes objects directly. Without KubeStellar, multiple clusters are usually deployed and configured individually, which can be time-consuming and complex.
KubeStellar simplifies this process by allowing developers to define binding policies between clusters and Kubernetes objects. You can then use your regular single-cluster tooling to deploy and configure native Kubernetes objects into the Workload Execution Clusters (WECs) based on these binding policies, making multi-cluster operations as straightforward as managing a single cluster. This approach makes KubeStellar a valuable tool in a multi-cluster Kubernetes environment. In order to deploy a workload into one or more WECs, the user simply needs to:
- Deploy the workload into the Workload Description Space (WDS)
- Create a BindingPolicy that specifies what objects should be deployed and where they should be deployed to (i.e., which WECs)
Both the What and the Where can be defined using selection criteria (e.g., object names, label selectors, etc.). We will elaborate on these steps later through the example.
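To make the What/Where pairing concrete, here is an illustrative BindingPolicy sketch (the name and the workload label below are placeholders; the concrete policies used in this post appear later in the walkthrough):

```yaml
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: example-bindingpolicy         # placeholder name
spec:
  # Where: select the target WECs by label
  clusterSelectors:
  - matchLabels: {"location-group":"edge"}
  # What: select workload objects in the WDS by label
  downsync:
  - objectSelectors:
    - matchLabels: {"app":"example"}  # placeholder workload label
```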
KubeVirt
KubeVirt is an open-source project that enhances Kubernetes by integrating virtual machine management within the same framework used for container orchestration. It enables both VMs and containerized applications to coexist and be managed through the same Kubernetes interface, using a unified set of tools and APIs. This is particularly beneficial for organizations that are transitioning from traditional server environments to container-based deployments but still need to maintain legacy systems that require virtual machines.
KubeVirt utilizes the same foundational principles of Kubernetes. It introduces additional resource types into the Kubernetes ecosystem through Custom Resource Definitions (CRDs). These new types allow the Kubernetes cluster to manage VMs and Virtual Machine Instances (VMIs) as if they were native Kubernetes objects. In addition to CRDs KubeVirt also adds controllers that observe the Kubernetes API for changes to VMI objects and orchestrate the necessary operations to align the virtual machines’ actual state with the desired state specified in the VMI definitions. It also adds Daemons (e.g., virt-handler) that are running on each node; these daemons work alongside the kubelet to handle node-specific tasks, such as launching and configuring VMIs to ensure they reach and maintain their required state.
Both controllers and daemons operate within the Kubernetes cluster, deployed as Pods themselves, ensuring that they leverage the same underlying Kubernetes infrastructure without requiring side installations. This integration allows users to manage VMs through familiar Kubernetes operations and tools, enhancing usability and leveraging existing knowledge and infrastructure. The entire architecture enables users to seamlessly create and manage VMIs through Kubernetes.
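As a concrete illustration of the CRD-based model, a minimal VirtualMachine manifest looks roughly like the following (a sketch modeled on the public KubeVirt examples; the name and image are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                 # illustrative name
spec:
  running: false                   # desired state; the controllers reconcile toward it
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo   # demo image from the KubeVirt examples
```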
Set up the KubeStellar environment
Deploy the prerequisites
KubeStellar has a few prerequisites; you can find their description and installation instructions at https://docs.kubestellar.io/release-0.21.2/direct/pre-reqs/#kubestellar-prerequisites
You can also use the KubeStellar check_pre_req.sh utility script to validate the prerequisites:
check_pre_req.sh --assert docker go helm kflex kind ko kubectl make ocm
Install and set up KubeStellar
We will be using a KubeStellar utility script to install it; you can get the script by cloning the KubeStellar GitHub repo:
git clone https://github.com/kubestellar/kubestellar
cd kubestellar
./test/e2e/common/setup-kubestellar.sh
After the setup, you should have the following:
kubectl config get-contexts
We will use three KIND clusters:
- kind-kubeflex: The hub cluster (the KubeStellar management cluster)
- cluster1, cluster2: Two workload execution clusters to which workloads will be distributed
We also use two internal spaces**:
- its1: The Inventory and Transport Space (ITS), used for the KubeStellar transport (OCM)
- wds1: The Workload Description Space (WDS) that the user interacts with to distribute workloads to the WECs
**Space: KubeStellar relies on the concept of spaces. A Space is an abstraction to represent an API service that behaves like a Kubernetes kube-apiserver (including the persistent storage behind it) and the subset of controllers in the kube-controller-manager that are concerned with API machinery generalities (not management of containerized workloads). In KubeStellar we use KubeFlex ControlPlanes as our spaces. (See: https://github.com/kubestellar/kubeflex)
Deploy KubeVirt into multiple WECs
KubeStellar can be used to easily deploy KubeVirt itself to multiple WECs, avoiding the need to explicitly install it into each cluster.
Deploy KubeVirt to WDS
We use the regular KubeVirt deployment procedure (unchanged), targeting the WDS by specifying the wds1 context.
kubectl --context wds1 create -f https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/kubevirt-operator.yaml
kubectl --context wds1 create -f https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/kubevirt-cr.yaml
Label all the deployed objects
We label all objects related to KubeVirt so we can easily point to them when defining the BindingPolicy. In this example we set the label to deploy=kubevirt.
kubectl --context wds1 label -f https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/kubevirt-operator.yaml deploy=kubevirt
kubectl --context wds1 label -f https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/kubevirt-cr.yaml deploy=kubevirt
Add RBAC permissions for the priorityclasses resource
*Note: This step can be skipped once KubeStellar is updated to use the new version of OCM.
kubectl --context wds1 apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    deploy: kubevirt
  name: klusterlet-priorityclasses-access
rules:
- apiGroups: ["scheduling.k8s.io"]
  resources: ["priorityclasses"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    deploy: kubevirt
  name: klusterlet-priorityclasses-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: klusterlet-priorityclasses-access
subjects:
- kind: ServiceAccount
  name: klusterlet-work-sa
  namespace: open-cluster-management-agent
EOF
Create BindingPolicy
Create a BindingPolicy that matches the KubeVirt-related objects to the target WECs. We use the deploy=kubevirt label (applied in the previous step) to select all the KubeVirt-related objects. To select the WECs we use the label location-group=edge, which KubeStellar sets by default on all WEC objects; using this label simply selects all WECs.
kubectl --context wds1 apply -f - <<EOF
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: kubevirt-deploy-bindingpolicy
spec:
  clusterSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - objectSelectors:
    - matchLabels: {"deploy":"kubevirt"}
EOF
Verify the KubeVirt installation on the WECs
Wait for the KubeVirt object status to become Deployed on both WECs.
kubectl --context cluster1 wait --for=jsonpath='{.status.phase}'=Deployed kubevirt.kubevirt.io/kubevirt -n kubevirt
kubectl --context cluster2 wait --for=jsonpath='{.status.phase}'=Deployed kubevirt.kubevirt.io/kubevirt -n kubevirt
Creating and managing VMs on multiple WECs
Once KubeVirt is deployed to the WECs, we can use KubeStellar in a similar way to deploy (and manage) VMs into multiple WECs.
Create the KubeVirt VM CRD and deploy it to the WDS
In order to deploy VMs through KubeStellar, we need to create the VM CRD in the WDS:
git clone https://github.com/kubevirt/kubevirt -b release-1.2
cd kubevirt
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.13.0
controller-gen crd:allowDangerousTypes=true paths=./staging/src/kubevirt.io/api/core/v1/...
kubectl --context wds1 create -f config/crd/kubevirt.io_virtualmachines.yaml
Deploy the VM to the WDS, and label it
We use the VM from the KubeVirt example as is.
kubectl --context wds1 apply -f https://kubevirt.io/labs/manifests/vm.yaml
kubectl --context wds1 label virtualmachine testvm demo=kubevirt
Display the VM on the WDS
kubectl --context wds1 get virtualmachines
NAME     AGE
testvm   10m
Create the BindingPolicy for the VM
kubectl --context wds1 apply -f - <<EOF
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: kubevirt-bindingpolicy
spec:
  clusterSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - objectSelectors:
    - matchLabels: {"demo":"kubevirt"}
EOF
Once the BindingPolicy is created, the VM should be automatically deployed into the WECs.
Check the VM on the WECs
kubectl --context cluster1 get vms
NAME     AGE   STATUS    READY
testvm   30s   Stopped   False
By default, the VM is deployed in the "Stopped" state. We can start the VM on all WECs through KubeStellar.
Start the VM on both WECs through KubeStellar
kubectl --context wds1 patch virtualmachine testvm --type merge -p '{"spec":{"running":true}}'
Check the VM on both WECs
kubectl --context cluster1 get vms
kubectl --context cluster2 get vms
Optional: Access the VM
We will use the virtctl utility to access the VM, but any regular method should work (e.g., ssh, VNC, etc.).
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/')   # on Windows, use windows-amd64.exe
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/virtctl-v1.2.0-${ARCH}
chmod +x virtctl
./virtctl --context cluster1 console testvm
Summary
In this post we showed how KubeStellar can be used to deploy workloads across multiple Kubernetes clusters in an easy and transparent way.
Specifically, users can now deploy VMs into a large number of Kubernetes clusters without the need to deploy KubeVirt individually into each cluster. Users can then deploy the VMs themselves and start/stop them from a centralized place, avoiding the need to interact with each cluster in order to manage the VMs.
As shown above, the process is very simple and consists of only two steps:
- Deploy your workload into KubeStellar Workload Description Space (WDS)
- Create a BindingPolicy to match workload objects and target clusters
Since KubeStellar supports distributing any Kubernetes workload, this opens the opportunity for users to distribute complex workloads that combine containerized and VM-based components.
References:
- https://kubestellar.io/
- https://github.com/kubestellar/kubestellar
- https://github.com/kubestellar/kubeflex
- https://kubevirt.io/
Credits: Joint work with Andrey Odarenko
