Create AKS clusters with a Helm chart? It’s possible with Cluster API!

Alessandro Vozza · Published in Cooking with Azure · 4 min read · Dec 7, 2020

Cluster API provides Kubernetes-native APIs to manage the lifecycle of clusters and nodes as CRDs.

Code available at https://github.com/ams0/azure-managed-cluster-capz-helm

I’ve been following the Cluster API project for some time now, but lately my interest was renewed when I found out that the cluster-api-provider-azure (abbreviated as CAPZ) now supports AKS managed clusters in preview. So I set out to come up with some simple instructions for creating AKS clusters programmatically. However, since each cluster is described by (at least) four templates (as of now), I decided to write a simple Helm chart to create clusters easily and repeatably. Helm allows for easy packaging and distribution of complex applications, and makes it simple to group all the available options either on the command line or in a values file.
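For context, those four templates render roughly one Cluster, one AzureManagedControlPlane, one AzureManagedCluster, and a MachinePool/AzureManagedMachinePool pair per agent pool. Here is a minimal sketch of the top-level object; the API versions and names are my assumptions based on the v1alpha3 preview, not the chart's exact output:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: aks1
spec:
  controlPlaneRef:
    apiVersion: exp.infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureManagedControlPlane    # the AKS control plane (version, SSH key, ...)
    name: aks1
  infrastructureRef:
    apiVersion: exp.infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AzureManagedCluster         # the AKS cluster infrastructure
    name: aks1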

Preparation

First, you’ll need the clusterctl command line tool (I recommend v0.3.11, the latest at the time of writing). Then you’ll need a Kubernetes cluster to store the CRDs for your AKS clusters. You can use any cluster (another AKS cluster, for example) or, to keep things simple, you can spin up a local KIND cluster, which is what we’ll do here. My preferred way to obtain kind and the other tools is the gofish package manager.
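If you use gofish, installing the tooling might look like this (the food names are assumptions, and gofish installs the latest versions, so pin clusterctl manually if you need v0.3.11):

gofish install clusterctl
gofish install kind
gofish install helm

Then create the management cluster: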

kind create cluster --name capi

Now clone the repo and edit the values in clusterctl.env:

git clone https://github.com/ams0/azure-managed-cluster-capz-helm.git
cd azure-managed-cluster-capz-helm

The clusterctl.env file contains some environment variables that influence the installation of the cluster-api-provider-azure (like EXP_AKS=true, which enables the experimental support for AKS clusters in CAPZ). Fill in the commented values for AZURE_SUBSCRIPTION_ID, AZURE_TENANT_ID and AZURE_LOCATION, then source clusterctl.env.
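For reference, clusterctl.env might look something like this; the values are placeholders, and the service-principal variables and EXP_MACHINE_POOL are my assumptions based on what CAPZ needs for MachinePool-backed AKS clusters:

export EXP_AKS=true
export EXP_MACHINE_POOL=true                              # assumption: AKS pools are MachinePools
export AZURE_SUBSCRIPTION_ID="<your-subscription-id>"
export AZURE_TENANT_ID="<your-tenant-id>"
export AZURE_LOCATION="westeurope"
export AZURE_CLIENT_ID="<service-principal-app-id>"       # assumption: SP used to create clusters
export AZURE_CLIENT_SECRET="<service-principal-secret>"   # assumption: see the note further down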

Next, use clusterctl to install the Cluster API components:

clusterctl init --infrastructure azure

Wait until all components are deployed in the various capi-*-system namespaces.
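You can watch the controller pods come up with:

kubectl get pods -A --watch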

Deploy a cluster

Ready to roll! Deploy your first cluster with Helm:

helm install capz1 charts/azure-managed-cluster/  \
--set subscriptionID=<subID> \
--set cluster.resourceGroupName=aksclusters \
--set cluster.nodeResourceGroupName=capz1 \
--set cluster.name=aks1 \
--set controlplane.sshPublicKey="$(cat ~/.ssh/id_rsa.pub)" \
--set agentpools[0].name=capz1np0 \
--set agentpools[0].nodecount=1 \
--set agentpools[0].sku=Standard_B4ms \
--set agentpools[0].osDiskSizeGB=100 \
--set agentpools[1].name=capz1np1 \
--set agentpools[1].nodecount=1 \
--set agentpools[1].sku=Standard_B4ms \
--set agentpools[1].osDiskSizeGB=10

If you like, you can use a values file instead:

helm install capz1 charts/azure-managed-cluster/ --set subscriptionID=<subID> --values aks1.yaml
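For example, aks1.yaml could mirror the flags from the command above (the structure follows mechanically from the --set keys; check the chart’s values.yaml for the authoritative schema):

cluster:
  resourceGroupName: aksclusters
  nodeResourceGroupName: capz1
  name: aks1
controlplane:
  sshPublicKey: "ssh-rsa AAAA..."   # your public key here
agentpools:
  - name: capz1np0
    nodecount: 1
    sku: Standard_B4ms
    osDiskSizeGB: 100
  - name: capz1np1
    nodecount: 1
    sku: Standard_B4ms
    osDiskSizeGB: 10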

The options available are a bit limited at the moment (see, for example, the supported options for the AzureManagedControlPlane CRD), but the Cluster API Azure provider welcomes suggestions in the form of issues or pull requests. Note that the cluster is created with a SystemAssigned managed identity, and that the service principal from clusterctl.env is the one needed to create the cluster itself, not the one used by the cluster to interact with Azure via the cloud-controller-manager.

You can check the status of the deployment in several ways: in addition to the obvious az aks list on the Azure side, you can check the Cluster API objects and follow the logs of the capz-controller-manager pod:

kubectl get cluster-api
kubectl logs -n capz-system -l control-plane=capz-controller-manager -c manager -f
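And on the Azure side, for example:

az aks list -g aksclusters -o table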

Access the cluster

You don’t need to access the Azure API to get the credentials for the cluster: its kubeconfig file is stored in the management cluster (the KIND cluster) as a secret, in the namespace where we created the cluster CRDs.

kubectl get secret {cluster-name}-kubeconfig \
  -o jsonpath={.data.value} | base64 --decode > aks1.kubeconfig
kubectl --kubeconfig=aks1.kubeconfig cluster-info
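From there you can use the new cluster as usual, for example:

kubectl --kubeconfig=aks1.kubeconfig get nodes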

That’s it!

Adding clusters

Let’s go crazy and add a second cluster:

helm install capz2 charts/azure-managed-cluster/ --values aks2.yaml

And so on! Now for some magic: you can delete the KIND cluster, recreate it, and reapply exactly the same Helm charts/Cluster API CRDs, and the controller will simply reconcile the desired state (two clusters) with the current situation in Azure (two clusters) and happily report that no change is needed. This is a great step towards complete automation of AKS infrastructure, from start to finish: add a GitOps configuration repository to the clusters (even via Azure Arc) and you have full end-to-end automation, from clusters to applications. I will expand on the concept in a later article.
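A sketch of that flow, assuming the same values files as before:

kind delete cluster --name capi
kind create cluster --name capi
source clusterctl.env
clusterctl init --infrastructure azure
helm install capz1 charts/azure-managed-cluster/ --set subscriptionID=<subID> --values aks1.yaml
helm install capz2 charts/azure-managed-cluster/ --set subscriptionID=<subID> --values aks2.yaml

The CAPZ controller should find both AKS clusters already present in Azure and leave them untouched.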

I hope you find Cluster API as exciting as I do, and I’m looking forward to contributing to the project. I will follow up with another post about day-2 operations (upgrades, scaling and so on) on AKS clusters deployed with Cluster API.
