Hosted control planes: Creating KubeVirt hosted clusters

Anshu Garg
6 min read · Sep 15, 2023

In previous stories we learnt about hosted control planes, their benefits, and key terms. Now it is time for the interesting part: hands-on!

Pre-requisites

  • OCP 4.12.0 on bare metal nodes, to serve as the base/hub OCP cluster
  • OpenShift Data Foundation (ODF) using local storage devices or other storage with equivalent capabilities
  • OpenShift Virtualization
  • MetalLB
  • A couple of IPs reserved for MetalLB, to serve as API endpoints for the clusters to be created
  • Multicluster Engine
  • Cluster Manager
  • HyperShift
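Before moving on, the prerequisites can be spot-checked on the hub. This is an illustrative sketch; the grep patterns and operator names vary by environment and versions:

```shell
# Illustrative check: list installed operator CSVs on the hub and look for
# the prerequisite operators (CSV names and namespaces vary by environment)
oc get csv -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase \
  | grep -Ei 'odf-operator|kubevirt-hyperconverged|metallb|multicluster-engine'
```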

Hub setup

Hosted control plane technology is still Tech Preview and is slated to GA with OpenShift Container Platform 4.14, where the multicluster engine and HyperShift operator will be part of the Advanced Cluster Management operator.

While it is still in Tech Preview, to set up the multicluster engine and the HyperShift operator, follow the steps from this excellent blog: https://cloud.redhat.com/blog/effortlessly-and-efficiently-provision-openshift-clusters-with-openshift-virtualization. Once the hub cluster has been set up and verified, move on to the next section.

HyperShift CLI setup

Follow these steps to set up the HyperShift CLI. KubeVirt cluster creation is still a CLI-only feature and is not completely integrated into the ACM GUI.

# Pull the HyperShift CLI image and copy the bundled binary out of it
docker pull registry.redhat.io/multicluster-engine/hypershift-cli-rhel8:v2.3.0-89
docker create --name dummy registry.redhat.io/multicluster-engine/hypershift-cli-rhel8:v2.3.0-89
docker cp dummy:/opt/app-root/src/linux/amd64/hypershift.tar.gz ./hypershift.tar.gz
tar -xzf hypershift.tar.gz
./hypershift -v
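Optionally, place the extracted binary on your PATH so the remaining commands can invoke hypershift directly; the destination path below is just a common choice:

```shell
# Install the extracted binary into a directory on PATH (path is an example)
sudo install -m 0755 ./hypershift /usr/local/bin/hypershift
hypershift -v
```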

Note: Each version of the HyperShift operator has a predefined latest OCP version; that is, with a given CLI build you can only create clusters of that highest version or lower. For example, this version of HyperShift can provision clusters up to 4.13.

KubeVirt cluster creation

Once your hub and the HyperShift CLI are set up, connect to the hub OCP API from the host where the HyperShift CLI is installed.
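A minimal login sketch, with a placeholder API URL and credentials for your hub:

```shell
# Log in to the hub cluster API (URL and credentials are placeholders)
oc login https://api.hub.example.com:6443 -u kubeadmin -p '<kubeadmin-password>'
oc whoami --show-server
```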

To create hosted control plane clusters, OpenShift release images will be pulled, which requires a pull secret from Red Hat. Obtain one and save it in a file, for example ps.json, then export the path to that file as PULL_SECRET:

export PULL_SECRET="/root/hypershift-poc/ps.json"

Identify the release image for the OCP version you wish to create a cluster for. For instance, for version 4.13.12, you can find it at https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.13.12/release.txt

Pull From: quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55
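Instead of copying the digest by hand, the "Pull From" line can be extracted from release.txt. A small sketch using curl and awk:

```shell
# Fetch release.txt for the chosen version and pull out the digest-pinned
# release image from its "Pull From:" line
OCP_VERSION=4.13.12
RELEASE_TXT_URL="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/release.txt"
RELEASE_IMAGE=$(curl -fsS "$RELEASE_TXT_URL" 2>/dev/null | awk '/Pull From:/ {print $3}')
echo "$RELEASE_IMAGE"
```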

Create cluster

hypershift create cluster kubevirt --name hcp-41312 --node-pool-replicas=3 --pull-secret $PULL_SECRET --root-volume-size '32' --memory '16Gi' --cores '2' --release-image=quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55
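While the cluster comes up, the hosted control plane itself runs as pods on the hub, in a namespace derived from the clusters namespace and the cluster name (here, clusters-hcp-41312):

```shell
# The control plane pods for the hosted cluster run on the hub in the
# <namespace>-<cluster-name> namespace ("clusters" is the default namespace)
oc get pods -n clusters-hcp-41312
```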

Understanding command options

There is a wide range of options that can be specified at cluster creation time, as listed below.

Flags:
--api-server-address string The API server address that should be used for components outside the control plane
--containerdisk string A reference to docker image with the embedded disk to be used to create the machines
--cores uint32 The number of cores inside the vmi, Must be a value greater or equal 1 (default 2)
-h, --help help for kubevirt
--memory string The amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi) (default "4Gi")
--root-volume-access-modes string The access modes of the root volume to use for machines in the NodePool (comma-delimited list)
--root-volume-size uint32 The size of the root volume for machines in the NodePool in Gi (default 16)
--root-volume-storage-class string The storage class to use for machines in the NodePool
--service-publishing-strategy string Define how to expose the cluster services. Supported options: Ingress (Use LoadBalancer and Route to expose services), NodePort (Select a random node to expose service access through) (default "Ingress")

Global Flags:
--additional-trust-bundle string Path to a file with user CA bundle
--annotations stringArray Annotations to apply to the hostedcluster (key=value). Can be specified multiple times.
--auto-repair Enables machine autorepair with machine health checks
--base-domain string The ingress base domain for the cluster
--base-domain-prefix string The ingress base domain prefix for the cluster, defaults to cluster name. Use 'none' for an empty prefix
--cluster-cidr string The CIDR of the cluster network (default "10.132.0.0/14")
--control-plane-availability-policy string Availability policy for hosted cluster components. Supported options: SingleReplica, HighlyAvailable (default "SingleReplica")
--control-plane-operator-image string Override the default image used to deploy the control plane operator
--etcd-storage-class string The persistent volume storage class for etcd data volumes
--external-dns-domain string Sets hostname to opinionated values in the specified domain for services with publishing type LoadBalancer or Route.
--fips Enables FIPS mode for nodes in the cluster
--generate-ssh If true, generate SSH keys
--image-content-sources string Path to a file with image content sources
--infra-availability-policy string Availability policy for infrastructure services in guest cluster. Supported options: SingleReplica, HighlyAvailable
--infra-id string Infrastructure ID to use for hosted cluster resources.
--infra-json string Path to file containing infrastructure information for the cluster. If not specified, infrastructure will be created
--name string A name for the cluster (default "example")
--namespace string A namespace to contain the generated resources (default "clusters")
--network-type string Enum specifying the cluster SDN provider. Supports either Calico, OVNKubernetes, OpenShiftSDN or Other.
--node-drain-timeout duration The NodeDrainTimeout on any created NodePools
--node-pool-replicas int32 If >-1, create a default NodePool with this many replicas
--node-selector stringToString A comma separated list of key=value to use as node selector for the Hosted Control Plane pods to stick to. E.g. role=cp,disk=fast (default [])
--node-upgrade-type UpgradeType The NodePool upgrade strategy for how nodes should behave when upgraded. Supported options: Replace, InPlace (default Replace)
--pull-secret string Path to a pull secret (required)
--release-image string The OCP release image for the cluster
--render Render output as YAML to stdout instead of applying
--service-cidr string The CIDR of the service network (default "172.31.0.0/16")
--ssh-key string Path to an SSH key file
--timeout duration If the --wait flag is set, set the optional timeout to limit the waiting duration. The format is duration; e.g. 30s or 1h30m45s; 0 means no timeout; default = 0
--wait If the create command should block until the cluster is up. Requires at least one node.

Let’s look closely at the options we used in our cluster creation above.

  • name

Name of the cluster to be created. Must be unique among clusters on a hub.

  • node-pool-replicas

Number of compute nodes (VMs) to be added to the cluster being created.

  • pull-secret

Red Hat pull secret used to pull OCP release images. This also gets set as the default pull secret in your created cluster.

TIP: Append auths for any registries you plan to use on your cluster to this pull-secret file. This will eliminate the need to change the configured pull secret later.

  • root-volume-size

The size of the disk that will be used on each VM for the CoreOS installation.

  • memory

Memory to be allocated to each VM of the cluster being created.

  • cores

CPU cores to be allocated to each VM of the cluster.

  • release-image

OCP release image corresponding to the OCP version you want to create.
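Following the pull-secret tip above, one way to append extra registry auths is to merge the JSON auth maps with jq. The files below are placeholders; in practice you would use your real ps.json from Red Hat:

```shell
# Placeholder pull secret for illustration; use your real ps.json from Red Hat
cat > ps.json <<'EOF'
{"auths": {"registry.redhat.io": {"auth": "PLACEHOLDER"}}}
EOF
# Placeholder auths for additional registries you plan to use on the cluster
cat > extra-registries.json <<'EOF'
{"auths": {"registry.example.com": {"auth": "PLACEHOLDER"}}}
EOF
# Merge the two auth maps into a single pull-secret file (requires jq)
jq -s '{auths: (.[0].auths + .[1].auths)}' ps.json extra-registries.json > merged-ps.json
export PULL_SECRET="$PWD/merged-ps.json"
cat merged-ps.json
```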

Monitor cluster progress

oc wait --for=condition=Ready --namespace hcp-41312 vm --all --timeout=600s
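Alongside waiting for the worker VMs, the HostedCluster resource on the hub can be watched until it reports Available (assuming the default "clusters" namespace):

```shell
# Wait for the HostedCluster itself to report Available on the hub
oc wait --for=condition=Available hostedcluster/hcp-41312 -n clusters --timeout=20m
oc get hostedcluster -n clusters
```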

Retrieve kubeconfig

hypershift create kubeconfig --name=hcp-41312 > hcp-41312-kubeconfig
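The generated kubeconfig can then be used to inspect the new cluster, for example:

```shell
# Point oc at the hosted cluster and confirm the workers have joined
export KUBECONFIG="$PWD/hcp-41312-kubeconfig"
oc get nodes
oc get clusterversion
```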

Cluster access details

You can retrieve the cluster API/Ingress endpoints and the kubeadmin password from the hub cluster console.
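The kubeadmin password can also be read from the hub via the CLI; the secret name and namespace below follow the usual hosted control plane layout but may differ across versions:

```shell
# Read the generated kubeadmin password from the hosted control plane
# namespace on the hub (secret name may vary by version)
oc get secret kubeadmin-password -n clusters-hcp-41312 \
  -o jsonpath='{.data.password}' | base64 -d
```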

The HCP cluster's console is exposed through the hub cluster's Ingress.

Peek into the hosted cluster

Notice that there are no control plane nodes, just compute nodes for workloads.

Conclusion

This concludes the walkthrough of KubeVirt cluster creation. In the next article I’ll take you through creating a bare metal hosted control plane cluster.
