Using the Azure Cloud Provider in a VM Based Kubernetes Cluster

How to hook in the Azure Cloud Provider for persistent volume claims and load balancers with your Kubernetes cluster

Gaurav Agarwal
Mar 22, 2020 · 4 min read
Photo by AndriyKo Podilnyk on Unsplash

There are multiple ways to set up a Kubernetes cluster, and some organisations prefer to start from scratch instead of using a managed Kubernetes service. Organisations that use Azure as their cloud provider have no visibility of the master nodes, since Azure does not currently offer a private master, and some therefore have security constraints that rule out a managed service such as AKS. In scenarios like this, they need to set up a Kubernetes cluster on VMs from scratch. To make use of cloud services such as load balancers, and to provision storage through persistent volumes, claims, and mounts, they need to hook their cluster into the Azure Cloud Provider. This story describes in detail how to use the Azure Cloud Provider with a self-managed Kubernetes installation; it is an advanced-level topic.

Prerequisites

You need a running VM-based Kubernetes cluster on Azure to hook in the cloud provider. If you are creating a new cluster, do not bootstrap it yet and read on.

Creating the Azure cloud-config file

We will now create an Azure cloud-config file, which uses an Azure service principal to authenticate with the Azure REST APIs and provision the resources that Kubernetes requests.

You need an Azure service principal to authenticate with Azure. Follow the steps at https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal to create one, and make sure it has Contributor access on the subscription the Kubernetes cluster runs in.
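If you prefer the CLI over the portal, the service principal can also be created with the Azure CLI. A sketch; the principal name is a hypothetical placeholder, and you must substitute your own subscription ID:

```shell
# "k8s-cloud-provider" is a hypothetical name; replace <subscription-id> with your own.
az ad sp create-for-rbac --name "k8s-cloud-provider" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>"
```

The command prints `appId`, `password`, and `tenant`, which map to the `aadClientId`, `aadClientSecret`, and `tenantId` fields of the cloud-config file below.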

Ensure that the cluster runs within a single subnet and that a network security group and a route table are assigned to that subnet. Kubernetes manipulates these resources to create load balancers and other network objects.
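You can confirm these associations with the Azure CLI before proceeding. A sketch, with placeholder resource names:

```shell
# Show the NSG and route table attached to the cluster subnet
# (resource group, vnet, and subnet names are placeholders).
az network vnet subnet show \
  --resource-group <k8s-resource-group> \
  --vnet-name <k8s-vnet> \
  --name <k8s-subnet> \
  --query "{nsg: networkSecurityGroup.id, routeTable: routeTable.id}"
```

Both fields should be non-null; if either is missing, attach a network security group or route table to the subnet first.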

Create the following file, /etc/kubernetes/cloud.conf, on every master and worker node of the cluster.

$vim /etc/kubernetes/cloud.conf
{
"cloud":"AzurePublicCloud",
"tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"aadClientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"aadClientSecret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"resourceGroup": "<Resource Group of the K8s cluster>",
"location": "<Region of the K8s cluster>",
"subnetName": "<Subnet Name where the cluster is running>",
"securityGroupName": "<network security group assigned to the subnet>",
"routeTableName": "<route table assigned to the subnet>",
"vnetName": "<virtual network of the cluster>",
"vnetResourceGroup": "<Resource Group of the K8s cluster>",
"cloudProviderBackoff": true,
"cloudProviderBackoffRetries": 6,
"cloudProviderBackoffExponent": 1.5,
"cloudProviderBackoffDuration": 5,
"cloudProviderBackoffJitter": 1,
"cloudProviderRateLimit": true,
"cloudProviderRateLimitQPS": 3,
"cloudProviderRateLimitBucket": 10,
"useManagedIdentityExtension": false,
"useInstanceMetadata": true
}
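A malformed cloud.conf silently breaks the kubelet and controller-manager at startup, so it is worth sanity-checking the file on each node before bootstrapping. A minimal sketch using only the Python standard library; the required-key list is an assumption based on the fields used above, so adjust it to match your file:

```python
import json

# Keys the cloud provider needs to authenticate and locate resources
# (a subset of the fields shown above; an assumption, not an official list).
REQUIRED_KEYS = {
    "cloud", "tenantId", "subscriptionId",
    "aadClientId", "aadClientSecret",
    "resourceGroup", "location", "subnetName",
    "securityGroupName", "routeTableName", "vnetName",
}

def missing_keys(conf_text):
    """Parse a cloud-config document and return any missing required keys."""
    conf = json.loads(conf_text)  # raises ValueError if the JSON is malformed
    return sorted(REQUIRED_KEYS - conf.keys())

if __name__ == "__main__":
    with open("/etc/kubernetes/cloud.conf") as f:
        missing = missing_keys(f.read())
    if missing:
        raise SystemExit(f"cloud.conf is missing keys: {missing}")
    print("cloud.conf looks OK")
```

`json.loads` also catches trailing commas and stray quotes, which are the most common mistakes when hand-editing the file.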

Creating the Kubeadm config file

If you are creating a new cluster, you then need to create a kubeadm configuration file, which kubeadm uses to bootstrap the cluster. Refer to the cloud.conf file we created in the earlier section within this file, as shown below.

$vim /root/config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "azure"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.5
controlPlaneEndpoint: "masterlb:6443"
apiServer:
  extraArgs:
    cloud-provider: "azure"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "azure"
    cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12

If you are modifying an existing cluster, export your existing kubeadm configuration and modify it to include the cloud-provider, cloud-config, and extraVolumes settings from the above configuration.

kubeadm config view > kubeadm-config.yaml
vim kubeadm-config.yaml
# Include the cloud provider details within the kubeadm-config file

Upgrading an Existing Cluster

If you are making changes to an existing cluster, you need to run kubeadm upgrade to apply them. On one of your masters, run the following:

kubeadm upgrade apply <current_kubernetes_version> \
--config=kubeadm-config.yaml

Bootstrapping a new Cluster

If you are creating a new cluster, once the files are created and ready to go, bootstrap the cluster by initialising the control plane with kubeadm. Run the following on one of your masters:

kubeadm init --upload-certs --config /root/config.yaml

If everything is OK, you will see output like the following:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join masterlb:6443 --token hwe4u6.hy79bfq4uq3myhsn \
    --discovery-token-ca-cert-hash sha256:7b437ae3463c1236e29f30dc9c222f65f818d304f8b410b598451478240f105a \
    --control-plane --certificate-key b38664ca2d82e7e4969a107b45d2be83767606331590d7b487eaad1ddbe8cd26

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join masterlb:6443 --token hwe4u6.hy79bfq4uq3myhsn \
    --discovery-token-ca-cert-hash sha256:7b437ae3463c1236e29f30dc9c222f65f818d304f8b410b598451478240f105a

Copy this output into a text editor, as we will use it later on.

We also need to copy the Kubernetes admin config for the current user so that kubectl can authenticate with the API server:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

and set up a pod network (Weave Net here) so that Kubernetes resources can communicate with each other internally:

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
kubectl get nodes

Then continue joining the rest of the control plane and worker nodes in your cluster.
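Once the cluster is up, you can verify the cloud provider hook end to end. A sketch with hypothetical object names: a StorageClass backed by the in-tree azure-disk provisioner plus a claim against it, and a LoadBalancer Service that should receive an Azure public IP:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-standard      # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc              # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-standard
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: test-lb               # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: test                 # assumes pods labelled app=test exist
  ports:
  - port: 80
```

If the hook is working, `kubectl get pvc test-pvc` eventually shows Bound, and `kubectl get svc test-lb` shows an EXTERNAL-IP provisioned on the Azure load balancer.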

The Startup

Medium's largest active publication, followed by +771K people. Follow to join our community.

Gaurav Agarwal

Written by

Certified Kubernetes Administrator | Cloud Architect | DevOps Enthusiast | Connect @ https://gauravdevops.com | https://freedevtools.net
