Cluster API for Kubernetes — A detailed look at the VMware Cluster API provider

Abhishek Mitra
18 min read · Jan 8, 2020


CAPV (Cluster API Provider vSphere)

Kubernetes, since its inception, has become the cool new kid on the block, and everyone seems to be adopting it to keep up with the changing IT landscape. Creating and managing K8s clusters is a challenging task in itself.

Setting aside the enterprise tools available, there are plenty of open-source tools one can use to create and manage K8s clusters, such as Terraform and Ansible.

The Kubernetes SIG also developed kubeadm to make installation cloud agnostic; it provisions the Kubernetes cluster and handles cluster upgrades, but it does not get into the domain of infrastructure management.

I myself built a handy tool specific to VMware: “Complete end-end Kubernetes Cluster Deployment”.

What is Cluster API?

Cluster API is part of the Cluster Lifecycle SIG in Kubernetes and brings declarative, Kubernetes-style APIs to cluster creation, configuration and management. It provides optional, additive functionality on top of the core Kubernetes API using custom resource definitions (CRDs). It gives us:

  1. A declarative, Kubernetes-style API, so we can manage and maintain Kubernetes infrastructure in the fashion we are used to, for example with kubectl.
  2. A consistent way to provision and manage Kubernetes clusters across multiple cloud providers.
  3. Management of infrastructure and clusters via a single interface.
Cluster API

So Cluster API uses CRDs under the hood to manage Kubernetes infrastructure. It gives us the capability to create, provision and delete actual physical resources in the cloud using a declarative approach.
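As a rough illustration, below is a minimal sketch of what such a declarative manifest can look like, written against the v1alpha2 API types that show up later in this article. The cluster name and CIDR values are placeholders; the real manifests are generated for us in the hands-on section, so treat this as illustrative only.

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster                   # placeholder name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]  # pod CIDR
    services:
      cidrBlocks: ["100.64.0.0/13"]  # service CIDR
    serviceDomain: cluster.local
  infrastructureRef:                 # points at the provider-specific cluster object (CAPV here)
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereCluster
    name: my-cluster

Once the provider components are installed, applying a manifest like this with kubectl is what asks Cluster API to reconcile the corresponding infrastructure.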

To put it bluntly, developers/DevOps folks normally use:

  1. CloudFormation to tie together infrastructure pieces on AWS
  2. PowerShell for Azure tasks
  3. Python for VMware/OpenStack/KVM
  4. Then there are tools like Terraform and Ansible for doing similar things across all clouds. These are cloud agnostic, but they are meant for creating individual constructs in a specific cloud, and much more
  5. Cluster API lets you deploy and manage K8s clusters on any cloud and takes care of setting up the infrastructure pieces too! It uses cloud-init to perform guest customization in a cloud-agnostic way

The following are the basic building blocks of Cluster API:

ClusterAPI and K8s Components

Management cluster
The cluster where one or more infrastructure providers run, and where resources (e.g. Machines) are stored. It is typically referred to when you are provisioning multiple clusters; we can consider it the main control cluster.

Workload/Target Cluster
A cluster whose lifecycle is managed by the Management cluster.

A Machine resource represents an individual Kubernetes control-plane or worker node. The Machine definition is specified in the Machine resource spec and consists of information such as the kubelet version. A Machine is analogous to a Pod in Kubernetes lingo.

A MachineSet resource represents a group of Machines. The idea of a MachineSet is taken from the ReplicaSet, but instead of representing a group of Pods, a MachineSet represents a group of Machines.

A MachineDeployment resource represents the deployment object for Machines. It is analogous to the Deployment resource in Kubernetes.
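To make the analogy concrete, here is a hedged sketch of a MachineDeployment in the v1alpha2 API used in this article. As with a Deployment, the replicas field controls how many Machines (and therefore VMs) exist; the names and referenced templates below are illustrative, since the real manifests are generated for us later.

apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: my-cluster-md-0              # placeholder name
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
spec:
  replicas: 1                        # scaling this adds/removes worker VMs
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: my-cluster
    spec:
      version: v1.16.3               # Kubernetes version for these Machines
      bootstrap:
        configRef:                   # kubeadm bootstrap config used to bring up each node
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: my-cluster-md-0
      infrastructureRef:             # provider-specific VM template (CAPV)
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: VSphereMachineTemplate
        name: my-cluster-md-0

We will see exactly this pattern later when we scale the worker pool of a target cluster with kubectl scale.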

How does it work?

  1. There are two Kubernetes clusters involved in this process.
  2. Create a bootstrap cluster (on demand, if you do not have an existing Kubernetes cluster) and apply the Cluster API resources to it.
  3. Create the control-plane machine(s) of the target cluster.
  4. Transfer (pivot) all Cluster API resources from the bootstrap cluster to the target cluster's control plane.
  5. Create the compute nodes from the control plane of the target cluster.
Simple Flow Diagram

The bootstrap cluster is automatically destroyed (if it was created by clusterctl) after the target Kubernetes cluster takes over the Cluster API management work.

Hands on with VMware!!

Prerequisites:

a. A vSphere environment (duh!)

b. An Ubuntu 16.04 VM that can reach the vSphere environment. This can also be your Mac/Windows machine (yes, Windows!)

c. Docker https://docs.docker.com/install/linux/docker-ce/ubuntu/

d. govc https://github.com/vmware/govmomi/tree/master/govc

e. kubectl https://kubernetes.io/docs/tasks/tools/install-kubectl/

f. kind https://github.com/kubernetes-sigs/kind

g. clusterctl — the command-line tool for Cluster API

Installing govc

govc is a tool written in Go that helps us interact with vSphere from the command line. It's platform agnostic!

root@demo-exec:~# export URL_TO_BINARY=https://github.com/vmware/govmomi/releases/download/prerelease-v0.21.0-58-g8d28646/govc_linux_amd64.gz
root@demo-exec:~# curl -L $URL_TO_BINARY | gunzip > /usr/local/bin/govc
root@demo-exec:~# chmod +x /usr/local/bin/govc
root@demo-exec:~# govc version
govc 0.21.0
root@demo-exec:~#

Installing Go

Installing Go will be needed when we build binaries down the line. Extract the toolchain to /usr/local (copying only the go binary out of the tree would leave it without its GOROOT) and put it on the PATH:

root@demo-exec:~# wget https://dl.google.com/go/go1.13.5.linux-amd64.tar.gz
root@demo-exec:~# tar -C /usr/local -xzf go1.13.5.linux-amd64.tar.gz
root@demo-exec:~# export PATH=$PATH:/usr/local/go/bin

Installing Kubectl

The command-line tool for K8s

root@demo-exec:~# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@demo-exec:~# mv kubectl /usr/local/bin/
root@demo-exec:~# chmod +x /usr/local/bin/kubectl

Installing Kind

kind (Kubernetes IN Docker) is a tool for running local Kubernetes clusters using Docker container “nodes”.

root@demo-exec:~# curl -Lo ./kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.6.1/kind-$(uname)-amd64"
root@demo-exec:~# chmod +x ./kind
root@demo-exec:~# mv ./kind /some-dir-in-your-PATH/kind

Installing Clusterctl

This is similar to kubectl and is required only for creating the management cluster for Cluster API.

root@demo-exec:~# wget https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.8/clusterctl-linux-amd64
root@demo-exec:~# mv clusterctl-linux-amd64 clusterctl
root@demo-exec:~# chmod +x clusterctl
root@demo-exec:~# mv clusterctl /usr/local/bin/

  1. We will be using the CAPV OVA image provided by the community. For this article we will be using Ubuntu 18.04 and K8s v1.16.3.
root@demo-exec:~/ClusterApiWorkingDirectory/ova# tree /root/
/root/
├── ClusterApiWorkingDirectory
│ └── ova
root@demo-exec:~/ClusterApiWorkingDirectory/ova# wget http://storage.googleapis.com/capv-images/release/v1.16.3/ubuntu-1804-kube-v1.16.3.ova

2. In the remaining sections we will use govc to perform all vSphere-related actions (all of these can be done from the UI; this is just a different perspective). To use govc we need to set certain environment variables:

export GOVC_INSECURE=1
export GOVC_URL='vmware-scale-vcenter.cloudlabs.com'
export GOVC_USERNAME='vm-admin@vsphere.local'
export GOVC_PASSWORD='passWord'
export GOVC_DATASTORE="datastore01"
export GOVC_NETWORK="vlan1212"
export GOVC_RESOURCE_POOL='vmware-scale/ClusterApi'
export GOVC_DATACENTER="SJC19"
root@demo-exec:~/ClusterApiWorkingDirectory# govc about
Name: VMware vCenter Server
Vendor: VMware, Inc.
Version: 6.5.0
Build: 8307201
OS type: linux-x64
API type: VirtualCenter
API version: 6.5
Product ID: vpx
UUID: 816912af-a101-4058-9fbc-0ae53821663d
root@demo-exec:~/ClusterApiWorkingDirectory#

3. Next we create some folders in our vSphere environment for our templates and cluster VMs:

govc folder.create /$GOVC_DATACENTER/vm/ClusterApiTemplates
govc folder.create /$GOVC_DATACENTER/vm/ClusterApiDemo
govc folder.create /$GOVC_DATACENTER/vm/ClusterApiDemo/K8s

4. Next we import the OVA downloaded in step 1 into vSphere and customize it. Normally this is a very simple step from the vSphere client, but here we will do it with govc (crazy stuff!!):

root@demo-exec:~/ClusterApiWorkingDirectory# govc import.spec ova/ubuntu-1804-kube-v1.16.3.ova | python -m json.tool > ubuntu.json
root@demo-exec:~/ClusterApiWorkingDirectory# cat ubuntu.json
{
"Annotation": "Cluster API vSphere image - Ubuntu 18.04 and Kubernetes v1.16.3 - https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/tree/master/build/images",
"DiskProvisioning": "flat",
"IPAllocationPolicy": "dhcpPolicy",
"IPProtocol": "IPv4",
"InjectOvfEnv": false,
"MarkAsTemplate": false,
"Name": null,
"NetworkMapping": [
{
"Name": "nic0",
"Network": ""
}
],
"PowerOn": false,
"WaitForIP": false
}
root@demo-exec:~/ClusterApiWorkingDirectory#

5. Below is the ubuntu.json file modified for my environment.

root@demo-exec:~/ClusterApiWorkingDirectory# cat ubuntu.json
{
"Annotation": "Cluster API vSphere image - Ubuntu 18.04 and Kubernetes v1.16.3 - https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/tree/master/build/images",
"DiskProvisioning": "thin", ### For testing we can use thin
"IPAllocationPolicy": "dhcpPolicy",
"IPProtocol": "IPv4",
"InjectOvfEnv": false,
"MarkAsTemplate": true, ### Will convert to template after import
"Name": "ubuntu-1804-kube-v1.16.3.ova", ## Name as per the OVA downloaded
"NetworkMapping": [
{
"Name": "nic0",
"Network": "vlan1212" ### Network info
}
],
"PowerOn": false,
"WaitForIP": false
}

6. Import the OVA using the json file

govc import.ova -folder /$GOVC_DATACENTER/vm/ClusterApiTemplates -options ubuntu.json ova/ubuntu-1804-kube-v1.16.3.ova

Using clusterctl, the command-line tool for Cluster API

Note: Cluster API is still in alpha at the time of writing this article.

  1. As explained at the beginning, we need a management cluster, which allows us to spin up workload/target clusters.
  2. We need to generate the following YAMLs to deploy the management cluster. These are also the building blocks for future clusters:
cluster.yaml - The cluster resource for the target cluster
controlplane.yaml - The machine resource for target cluster's control plane nodes
machinedeployment.yaml - The machine resource for target cluster's worker nodes
provider-components.yaml - The CAPI and CAPV resources for the target cluster
addons.yaml - Additional add-ons to apply to the management cluster (ex. CNI)

3. To generate the above YAMLs we need to create a config file that holds the details of the environment where our K8s cluster will be deployed.

$ cat <<EOF >envvars.txt
# vCenter config/credentials
export VSPHERE_SERVER='10.0.0.1' # (required) The vCenter server IP or FQDN
export VSPHERE_USERNAME='viadmin@vmware.local' # (required) The username used to access the remote vSphere endpoint
export VSPHERE_PASSWORD='some-secure-password' # (required) The password used to access the remote vSphere endpoint

# vSphere deployment configs
export VSPHERE_DATACENTER='SDDC-Datacenter' # (required) The vSphere datacenter to deploy the management cluster on
export VSPHERE_DATASTORE='DefaultDatastore' # (required) The vSphere datastore to deploy the management cluster on
export VSPHERE_NETWORK='vm-network-1' # (required) The VM network to deploy the management cluster on
export VSPHERE_RESOURCE_POOL='*/Resources' # (required) The vSphere resource pool for your VMs
export VSPHERE_FOLDER='vm' # (optional) The VM folder for your VMs, defaults to the root vSphere folder if not set.
export VSPHERE_TEMPLATE='ubuntu-1804-kube-v1.15.4' # (required) The VM template to use for your management cluster.
export VSPHERE_DISK_GIB='50' # (optional) The VM Disk size in GB, defaults to 20 if not set
export VSPHERE_NUM_CPUS='2' # (optional) The # of CPUs for control plane nodes in your management cluster, defaults to 2 if not set
export VSPHERE_MEM_MIB='2048' # (optional) The memory (in MiB) for control plane nodes in your management cluster, defaults to 2048 if not set
export SSH_AUTHORIZED_KEY='ssh-rsa AAAAB3N...' # (optional) The public ssh authorized key on all machines in this cluster

# Kubernetes configs
export KUBERNETES_VERSION='1.16.2' # (optional) The Kubernetes version to use, defaults to 1.16.2
export SERVICE_CIDR='100.64.0.0/13' # (optional) The service CIDR of the management cluster, defaults to "100.64.0.0/13"
export CLUSTER_CIDR='100.96.0.0/11' # (optional) The cluster CIDR of the management cluster, defaults to "100.96.0.0/11"
export SERVICE_DOMAIN='cluster.local' # (optional) The k8s service domain of the management cluster, defaults to "cluster.local"
EOF

Note: Modify the above values as per your environment

4. Run the following command to generate the YAMLs:

docker run --rm \
-v "$(pwd)":/out \
-v "$(pwd)/envvars.txt":/envvars.txt:ro \
gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
-c management-cluster

Generated ./out/management-cluster/cluster.yaml
Generated ./out/management-cluster/controlplane.yaml
Generated ./out/management-cluster/machinedeployment.yaml
Generated /build/examples/provider-components/provider-components-cluster-api.yaml
Generated /build/examples/provider-components/provider-components-kubeadm.yaml
Generated /build/examples/provider-components/provider-components-vsphere.yaml
Generated ./out/management-cluster/provider-components.yaml
WARNING: ./out/management-cluster/provider-components.yaml includes vSphere credentials
root@demo-exec:~/ClusterApiWorkingDirectory# tree out/
out/
└── management-cluster
├── addons.yaml
├── cluster.yaml
├── controlplane.yaml
├── machinedeployment.yaml
└── provider-components.yaml
1 directory, 5 files

5. The YAML files contain the machine and cluster configurations of the management cluster. Let's go ahead and deploy the management cluster using clusterctl:

clusterctl create cluster \
--bootstrap-type kind \
--bootstrap-flags name=management-cluster \
--cluster ./out/management-cluster/cluster.yaml \
--machines ./out/management-cluster/controlplane.yaml \
--provider-components ./out/management-cluster/provider-components.yaml \
--addon-components ./out/management-cluster/addons.yaml \
--kubeconfig-out ./out/management-cluster/kubeconfig

6. If all the input is correct, we should see a Docker container spin up in the VM. This is basically a Kubernetes cluster running IN a Docker container.

root@demo-exec:~/ClusterApiWorkingDirectory# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aa2ab23b52e9 kindest/node:v1.16.3 "/usr/local/bin/entr…" 19 seconds ago Up 18 seconds 127.0.0.1:35292->6443/tcp management-cluster-control-plane
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cabpk-system cabpk-controller-manager-84cf8f75b-vw7lf 2/2 Running 0 106s
capi-system capi-controller-manager-7c67ddc7f9-kkcth 1/1 Running 0 106s
capv-system capv-controller-manager-5b49494db8-htx9h 1/1 Running 0 106s
kube-system coredns-5644d7b6d9-6v5lx 1/1 Running 0 106s
kube-system coredns-5644d7b6d9-b8n8j 1/1 Running 0 106s
kube-system etcd-management-cluster-control-plane 1/1 Running 0 60s
kube-system kindnet-xh57j 1/1 Running 0 106s
kube-system kube-apiserver-management-cluster-control-plane 1/1 Running 0 38s
kube-system kube-controller-manager-management-cluster-control-plane 1/1 Running 0 63s
kube-system kube-proxy-slqzs 1/1 Running 0 106s
kube-system kube-scheduler-management-cluster-control-plane 1/1 Running 0 52s

Its purpose is to bootstrap the management cluster that gets deployed on the vSphere environment whose details we provided in step 3.

7. Once the clusterctl command (step 5) completes, we are presented with the following information:

I0108 00:13:38.309071   18057 createbootstrapcluster.go:27] Preparing bootstrap cluster
I0108 00:14:24.680352 18057 clusterdeployer.go:82] Applying Cluster API stack to bootstrap cluster
I0108 00:14:24.680392 18057 applyclusterapicomponents.go:26] Applying Cluster API Provider Components
I0108 00:14:26.799013 18057 clusterdeployer.go:87] Provisioning target cluster via bootstrap cluster
I0108 00:14:26.803304 18057 applycluster.go:42] Creating Cluster referenced object "infrastructure.cluster.x-k8s.io/v1alpha2, Kind=VSphereCluster" with name "capv-mgmt-example" in namespace "default"
I0108 00:14:26.852068 18057 applycluster.go:48] Creating cluster object capv-mgmt-example in namespace "default"
I0108 00:14:26.856443 18057 clusterdeployer.go:96] Creating control plane machine "capv-mgmt-example-controlplane-0" in namespace "default"
I0108 00:14:26.859149 18057 applymachines.go:40] Creating Machine referenced object "infrastructure.cluster.x-k8s.io/v1alpha2, Kind=VSphereMachine" with name "capv-mgmt-example-controlplane-0" in namespace "default"
I0108 00:14:26.917195 18057 applymachines.go:40] Creating Machine referenced object "bootstrap.cluster.x-k8s.io/v1alpha2, Kind=KubeadmConfig" with name "capv-mgmt-example-controlplane-0" in namespace "default"
I0108 00:14:26.976280 18057 applymachines.go:46] Creating machines in namespace "default"
I0108 00:17:37.030632 18057 clusterdeployer.go:105] Creating target cluster
I0108 00:17:37.056907 18057 applyaddons.go:25] Applying Addons
I0108 00:17:37.743091 18057 clusterdeployer.go:123] Pivoting Cluster API stack to target cluster
I0108 00:17:37.743322 18057 pivot.go:76] Applying Cluster API Provider Components to Target Cluster
I0108 00:17:38.701871 18057 pivot.go:81] Pivoting Cluster API objects from bootstrap to target cluster.
I0108 00:18:19.454268 18057 clusterdeployer.go:128] Saving provider components to the target cluster
I0108 00:18:19.489241 18057 clusterdeployer.go:150] Creating node machines in target cluster.
I0108 00:18:19.492122 18057 applymachines.go:46] Creating machines in namespace "default"
I0108 00:18:19.492146 18057 clusterdeployer.go:164] Done provisioning cluster. You can now access your cluster with kubectl --kubeconfig ./out/management-cluster/kubeconfig
I0108 00:18:19.492274 18057 createbootstrapcluster.go:36] Cleaning up bootstrap cluster.
## On vSphere we see this
root@demo-exec:~/ClusterApiWorkingDirectory# govc vm.info capv-mgmt-example-controlplane-0
Name: capv-mgmt-example-controlplane-0
Path: /SJC19/vm/ClusterApiDemo/K8s/capv-mgmt-example-controlplane-0
UUID: 422aa353-d034-698f-8709-83b4b30209a1
Guest name: Other 3.x or later Linux (64-bit)
Memory: 8192MB
CPU: 4 vCPU(s)
Power state: poweredOn
Boot time: 2020-01-08 08:16:03.387386 +0000 UTC
IP address: 10.8.14.97
Host: vmware-scale-5.######.com

8. Now that we have the management cluster, let's access it and go through the configuration.

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
capv-mgmt-example-controlplane-0 Ready master 16m v1.16.3
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cabpk-system cabpk-controller-manager-84cf8f75b-ltbqs 2/2 Running 0 16m
capi-system capi-controller-manager-7c67ddc7f9-wl8mn 1/1 Running 0 16m
capv-system capv-controller-manager-5b49494db8-dngtv 1/1 Running 0 16m
kube-system calico-kube-controllers-ff95847f5-8bf7d 1/1 Running 0 16m
kube-system calico-node-mfpv8 1/1 Running 0 16m
kube-system coredns-5644d7b6d9-24vqm 1/1 Running 0 16m
kube-system coredns-5644d7b6d9-qvzpg 1/1 Running 0 16m
kube-system etcd-capv-mgmt-example-controlplane-0 1/1 Running 0 15m
kube-system kube-apiserver-capv-mgmt-example-controlplane-0 1/1 Running 0 15m
kube-system kube-controller-manager-capv-mgmt-example-controlplane-0 1/1 Running 0 15m
kube-system kube-proxy-sjzjs 1/1 Running 0 16m
kube-system kube-scheduler-capv-mgmt-example-controlplane-0 1/1 Running 0 14m

9. Let's look at the clusters and machines the management cluster is managing.

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get cluster
NAME PHASE
capv-mgmt-example provisioned
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machine
NAME PROVIDERID PHASE
capv-mgmt-example-controlplane-0 vsphere://422aa353-d034-698f-8709-83b4b30209a1 running
root@demo-exec:~/ClusterApiWorkingDirectory#
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machineset
No resources found in default namespace.
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machinedeployment
No resources found in default namespace.

10. Next we proceed to create a target cluster using this management cluster. Create a copy of the envvars.txt file, modify it with values pertaining to the new cluster (step 3), and execute the command shown in step 4 to generate the YAMLs for the workload cluster.

root@demo-exec:~/ClusterApiWorkingDirectory# docker run --rm \
> -v "$(pwd)":/out \
> -v "$(pwd)/envvars-worker1.txt":/envvars.txt:ro \
> gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
> -c workload-cluster-1
Checking vmware-scale-vcenter.cpsg.ciscolabs.com for vSphere version
Detected vSphere version 6.5
Generated ./out/workload-cluster-1/addons.yaml
Generated ./out/workload-cluster-1/cluster.yaml
Generated ./out/workload-cluster-1/controlplane.yaml
Generated ./out/workload-cluster-1/machinedeployment.yaml
Generated /build/examples/pre-67u3/provider-components/provider-components-cluster-api.yaml
Generated /build/examples/pre-67u3/provider-components/provider-components-kubeadm.yaml
Generated /build/examples/pre-67u3/provider-components/provider-components-vsphere.yaml
Generated ./out/workload-cluster-1/provider-components.yaml
WARNING: ./out/workload-cluster-1/provider-components.yaml includes vSphere credentials

root@demo-exec:~/ClusterApiWorkingDirectory# tree out/
out/
├── management-cluster
│ ├── addons.yaml
│ ├── cluster.yaml
│ ├── controlplane.yaml
│ ├── kubeconfig
│ ├── machinedeployment.yaml
│ └── provider-components.yaml
└── workload-cluster-1
├── addons.yaml
├── cluster.yaml
├── controlplane.yaml
├── machinedeployment.yaml
└── provider-components.yaml
2 directories, 11 files

11. Using the management cluster as the reference, we apply the following YAMLs to create the new worker cluster: cluster.yaml, controlplane.yaml and machinedeployment.yaml.

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig apply -f out/workload-cluster-1/cluster.yaml
cluster.cluster.x-k8s.io/capv-worker01 created
vspherecluster.infrastructure.cluster.x-k8s.io/capv-worker01 created
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig apply -f out/workload-cluster-1/controlplane.yaml
kubeadmconfig.bootstrap.cluster.x-k8s.io/capv-worker01-controlplane-0 created
machine.cluster.x-k8s.io/capv-worker01-controlplane-0 created
vspheremachine.infrastructure.cluster.x-k8s.io/capv-worker01-controlplane-0 created
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig apply -f out/workload-cluster-1/machinedeployment.yaml
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capv-worker01-md-0 created
machinedeployment.cluster.x-k8s.io/capv-worker01-md-0 created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/capv-worker01-md-0 created

12. This triggers the deployment of the first target cluster on VMware. Note that we never had to create any virtual machines ourselves; all the cloud provisioning is taken care of by Cluster API, with guest configuration handled via cloud-init. The initial target cluster consists of only 1 master and 1 worker. Let's look at the configuration again from the point of view of the management cluster:

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get clusters
NAME PHASE
capv-mgmt-example provisioned
capv-worker01 provisioned
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machine
NAME PROVIDERID PHASE
capv-mgmt-example-controlplane-0 vsphere://422aa353-d034-698f-8709-83b4b30209a1 running
capv-worker01-controlplane-0 vsphere://422a568e-96ec-387e-0ef3-427db3d15b9f running
capv-worker01-md-0-6846468464-sst7s vsphere://422abccc-26d5-feb4-f303-5586f34564a3 running
root@demo-exec:~/ClusterApiWorkingDirectory#
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machinesets
NAME AGE
capv-worker01-md-0-6846468464 5h24m
root@demo-exec:~/ClusterApiWorkingDirectory#
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machinedeployment
NAME AGE
capv-worker01-md-0 5h24m
root@demo-exec:~/ClusterApiWorkingDirectory#

13. We can see the above VMs created in VMware vSphere as well:

root@demo-exec:~/ClusterApiWorkingDirectory# govc ls /*/vm/ClusterApiDemo/K8s/*
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-md-0-6846468464-sst7s
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-controlplane-0
/SJC19/vm/ClusterApiDemo/K8s/capv-mgmt-example-controlplane-0

14. Let's scale the target cluster to have multiple worker nodes. Remember, we mentioned that the CRDs are analogous to existing Kubernetes constructs; in this case the MachineDeployment is similar to a K8s Deployment and can be scaled:

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig scale machinedeployment capv-worker01-md-0 --replicas=4
machinedeployment.cluster.x-k8s.io/capv-worker01-md-0 scaled
## This triggers creation of new worker VM's !!!
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machine -w
NAME PROVIDERID PHASE
capv-mgmt-example-controlplane-0 vsphere://422aa353-d034-698f-8709-83b4b30209a1 running
capv-worker01-controlplane-0 vsphere://422a568e-96ec-387e-0ef3-427db3d15b9f running
capv-worker01-md-0-6846468464-g2khb provisioning
capv-worker01-md-0-6846468464-hswcs provisioning
capv-worker01-md-0-6846468464-sst7s vsphere://422abccc-26d5-feb4-f303-5586f34564a3 running
capv-worker01-md-0-6846468464-zcb9s provisioning
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machine
NAME PROVIDERID PHASE
capv-mgmt-example-controlplane-0 vsphere://422aa353-d034-698f-8709-83b4b30209a1 running
capv-worker01-controlplane-0 vsphere://422a568e-96ec-387e-0ef3-427db3d15b9f running
capv-worker01-md-0-6846468464-g2khb vsphere://422a40bf-ba02-ad5b-2935-623682ac5dfb running
capv-worker01-md-0-6846468464-hswcs vsphere://422aeeea-d7dc-63bd-306c-60c1723c9410 running
capv-worker01-md-0-6846468464-sst7s vsphere://422abccc-26d5-feb4-f303-5586f34564a3 running
capv-worker01-md-0-6846468464-zcb9s vsphere://422aed13-6dd5-132c-5ba1-f4379f3e8676 running

15. Let's do a fact check on vSphere as well.

root@demo-exec:~/ClusterApiWorkingDirectory# govc ls /*/vm/ClusterApiDemo/K8s/*
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-md-0-6846468464-zcb9s
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-md-0-6846468464-hswcs
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-md-0-6846468464-sst7s
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-md-0-6846468464-g2khb
/SJC19/vm/ClusterApiDemo/K8s/capv-worker01-controlplane-0
/SJC19/vm/ClusterApiDemo/K8s/capv-mgmt-example-controlplane-0

16. Similarly, we can create a second cluster with different parameters under a new folder; a sketch of the remaining steps follows the folder creation below.

govc folder.create /$GOVC_DATACENTER/vm/ClusterApiDemo/K8sTarget02
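The remaining steps simply mirror steps 10 and 11: prepare a second envvars file (with VSPHERE_FOLDER pointing at the new folder and any other per-cluster values changed), generate the YAMLs, and apply them through the management cluster. A minimal sketch, assuming a hypothetical file named envvars-worker2.txt and a cluster named workload-cluster-2:

docker run --rm \
-v "$(pwd)":/out \
-v "$(pwd)/envvars-worker2.txt":/envvars.txt:ro \
gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
-c workload-cluster-2

kubectl --kubeconfig ./out/management-cluster/kubeconfig apply -f out/workload-cluster-2/cluster.yaml
kubectl --kubeconfig ./out/management-cluster/kubeconfig apply -f out/workload-cluster-2/controlplane.yaml
kubectl --kubeconfig ./out/management-cluster/kubeconfig apply -f out/workload-cluster-2/machinedeployment.yaml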

17. At this stage we have successfully created 2 target clusters:

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get clusters
NAME PHASE
capv-mgmt-example provisioned
capv-worker01 provisioned
capv-worker02 provisioned
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machine
NAME PROVIDERID PHASE
capv-mgmt-example-controlplane-0 vsphere://422aa353-d034-698f-8709-83b4b30209a1 running
capv-worker01-controlplane-0 vsphere://422a568e-96ec-387e-0ef3-427db3d15b9f running
capv-worker01-md-0-6846468464-g2khb vsphere://422a40bf-ba02-ad5b-2935-623682ac5dfb running
capv-worker01-md-0-6846468464-hswcs vsphere://422aeeea-d7dc-63bd-306c-60c1723c9410 running
capv-worker01-md-0-6846468464-sst7s vsphere://422abccc-26d5-feb4-f303-5586f34564a3 running
capv-worker01-md-0-6846468464-zcb9s vsphere://422aed13-6dd5-132c-5ba1-f4379f3e8676 running
capv-worker02-controlplane-0 vsphere://422a6dc1-3ad1-2c2b-a360-726a3f0f3f16 running
capv-worker02-md-0-79d58fdd5c-2b5nl vsphere://422acb7e-89de-25ee-05e4-85fbe5358064 running
capv-worker02-md-0-79d58fdd5c-569qc vsphere://422a4052-3f5d-6fbb-2493-c9142dcab1c2 running
capv-worker02-md-0-79d58fdd5c-5wjwk vsphere://422a71e7-62b5-1f48-c644-a7b8f2219b93 running
capv-worker02-md-0-79d58fdd5c-6flpf vsphere://422a84a8-83d1-9573-2a98-da2be59a6376 running

Managing the Target Clusters

  1. Now that we have created the clusters successfully, we can access the individual target clusters.
  2. The kubeconfigs of the target clusters are stored as secrets in the management cluster:
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get secrets
NAME TYPE DATA AGE
capv-mgmt-example-ca Opaque 2 7h
capv-mgmt-example-etcd Opaque 2 7h
capv-mgmt-example-kubeconfig Opaque 1 7h
capv-mgmt-example-proxy Opaque 2 7h
capv-mgmt-example-sa Opaque 2 7h
capv-worker01-ca Opaque 2 6h8m
capv-worker01-etcd Opaque 2 6h8m
capv-worker01-kubeconfig Opaque 1 6h7m
capv-worker01-proxy Opaque 2 6h8m
capv-worker01-sa Opaque 2 6h8m
capv-worker02-ca Opaque 2 19m
capv-worker02-etcd Opaque 2 19m
capv-worker02-kubeconfig Opaque 1 18m
capv-worker02-proxy Opaque 2 19m
capv-worker02-sa Opaque 2 19m
default-token-nvwcd kubernetes.io/service-account-token 3 7h1m

3. Extract the kubeconfig in the following fashion

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get secrets capv-worker01-kubeconfig -o=jsonpath='{.data.value}'| { base64 -d 2>/dev/null || base64 -D; } > ./out/workload-cluster-1/kubeconfig

4. Use the extracted kubeconfig to access target cluster 1.

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/workload-cluster-1/kubeconfig cluster-info
Kubernetes master is running at https://10.8.12.249:6443
KubeDNS is running at https://10.8.12.249:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/workload-cluster-1/kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
capv-worker01-controlplane-0 NotReady master 6h13m v1.16.3
capv-worker01-md-0-6846468464-g2khb NotReady <none> 37m v1.16.3
capv-worker01-md-0-6846468464-hswcs NotReady <none> 37m v1.16.3
capv-worker01-md-0-6846468464-sst7s NotReady <none> 6h12m v1.16.3
capv-worker01-md-0-6846468464-zcb9s NotReady <none> 37m v1.16.3
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/workload-cluster-1/kubeconfig get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
capv-worker01-controlplane-0 NotReady master 6h13m v1.16.3 10.8.12.249 10.8.12.249 Ubuntu 18.04.3 LTS 4.15.0-72-generic containerd://1.3.0
capv-worker01-md-0-6846468464-g2khb NotReady <none> 37m v1.16.3 10.8.11.67 10.8.11.67 Ubuntu 18.04.3 LTS 4.15.0-72-generic containerd://1.3.0
capv-worker01-md-0-6846468464-hswcs NotReady <none> 37m v1.16.3 10.8.15.172 10.8.15.172 Ubuntu 18.04.3 LTS 4.15.0-72-generic containerd://1.3.0
capv-worker01-md-0-6846468464-sst7s NotReady <none> 6h12m v1.16.3 10.8.8.245 10.8.8.245 Ubuntu 18.04.3 LTS 4.15.0-72-generic containerd://1.3.0
capv-worker01-md-0-6846468464-zcb9s NotReady <none> 37m v1.16.3 10.8.12.199 10.8.12.199 Ubuntu 18.04.3 LTS 4.15.0-72-generic containerd://1.3.0
root@demo-exec:~/ClusterApiWorkingDirectory#

5. By default the clusters are brought up with no CNI installed, hence you will see the nodes in the “NotReady” state. This is because we haven't yet applied the addons.yaml that was generated in the earlier steps (step 10):

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/workload-cluster-1/kubeconfig apply -f out/workload-cluster-1/addons.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/workload-cluster-1/kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
capv-worker01-controlplane-0 Ready master 6h17m v1.16.3
capv-worker01-md-0-6846468464-g2khb Ready <none> 41m v1.16.3
capv-worker01-md-0-6846468464-hswcs Ready <none> 41m v1.16.3
capv-worker01-md-0-6846468464-sst7s Ready <none> 6h16m v1.16.3
capv-worker01-md-0-6846468464-zcb9s Ready <none> 41m v1.16.3

Deleting a cluster

Cluster API also gives us the ability to delete target clusters when required. It cleans up all the associated resources in the respective cloud.

root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get clusters
NAME PHASE
capv-mgmt-example provisioned
capv-worker01 provisioned
capv-worker02 provisioned
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig delete clusters capv-worker02
cluster.cluster.x-k8s.io "capv-worker02" deleted
root@demo-exec:~/ClusterApiWorkingDirectory#
root@demo-exec:~/ClusterApiWorkingDirectory#
root@demo-exec:~/ClusterApiWorkingDirectory#
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get clusters
NAME PHASE
capv-mgmt-example provisioned
capv-worker01 provisioned
root@demo-exec:~/ClusterApiWorkingDirectory# kubectl --kubeconfig ./out/management-cluster/kubeconfig get machine
NAME PROVIDERID PHASE
capv-mgmt-example-controlplane-0 vsphere://422aa353-d034-698f-8709-83b4b30209a1 running
capv-worker01-controlplane-0 vsphere://422a568e-96ec-387e-0ef3-427db3d15b9f running
capv-worker01-md-0-6846468464-g2khb vsphere://422a40bf-ba02-ad5b-2935-623682ac5dfb running
capv-worker01-md-0-6846468464-hswcs vsphere://422aeeea-d7dc-63bd-306c-60c1723c9410 running
capv-worker01-md-0-6846468464-sst7s vsphere://422abccc-26d5-feb4-f303-5586f34564a3 running
capv-worker01-md-0-6846468464-zcb9s vsphere://422aed13-6dd5-132c-5ba1-f4379f3e8676 running
root@demo-exec:~/ClusterApiWorkingDirectory#

Conclusion

That's it for this article. I hope this was helpful in demonstrating that the new Cluster API gives us a lot of flexibility in deploying and managing K8s clusters. It's still a work in progress (alpha), so you will see more enhancements coming through as well.

References:

  1. Cluster API Handbook
