Kubernetes storage on Azure (Part 2)

Azure disk programming internals and container access

Krishnakumar R
Microsoft Azure
Sep 22, 2020 · 5 min read


In Part 2 of Kubernetes storage on Azure, we explore the steps involved in exposing a managed disk to a container and look at the Azure-specific code in the disk CSI driver that makes this possible behind the scenes.

Overview

As depicted in the figure below, a managed disk can be attached to a VM. Once attached, it appears as a SCSI device within the VM. If this is the first time the disk is being used, it needs to be partitioned and formatted. Afterwards, the disk can be made available to a container by mounting it to a local path on the VM and exposing it by means of the container runtime. Applications running in the container can then use the device by reading from and writing to this local path. There are also techniques for exposing the device to the container directly in raw form, but in this article we will focus on the format-and-mount scenario.

Users can create and interact with managed disks in various ways: the CLI, SDKs, the portal, and so on. Since the Kubernetes storage drivers for Azure are written in Go, in the upcoming subsections we will look at sample code that performs disk operations such as create and attach (controller plugin functionality) using the Azure SDK for Go. Afterwards, we will see how Linux tools and a container runtime (Docker) can be used to make the disk available within the container (node plugin functionality).

Disk creation and VM Attach

At a high level, disk operations are performed by communicating with ARM (Azure Resource Manager) using REST APIs. To create a managed disk, a PUT with the new object's properties has to be performed; to attach the disk to a VM, a PATCH operation on the VM has to be executed. The Azure SDK for Go provides wrapper functions over these REST calls, making the code easier to write and maintain. The Azure Samples project has examples of how to use the Go SDK. We will go through modified versions of those samples to understand how to programmatically create a disk and attach it to a VM.
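
As a rough sketch, the underlying REST calls look like the following (the placeholders in braces and the api-version value are yours to fill in):

PUT   https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/disks/{diskName}?api-version={version}
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/virtualMachines/{vmName}?api-version={version}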

Our code needs to authenticate and authorize with Azure Active Directory (AAD) before performing any disk operations. In order to do this, we first generate a Service Principal (SP) using the az CLI that has the required permissions to operate on the resource group.
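
A minimal sketch of the SP creation; the SP name is just an example, and the subscription and resource group placeholders are your own:

# Create an SP scoped to the resource group; note the appId, password and
# tenant fields in the output, we will need them to request a token.
az ad sp create-for-rbac \
  --name "k8s-storage-demo-sp" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"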

Then we use the adal library to obtain a token, which we will use in the rest of the code to authenticate with AAD.
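
A minimal sketch using the go-autorest adal package, assuming the tenant ID, client ID and client secret come from the SP created above:

import (
	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/adal"
	"github.com/Azure/go-autorest/autorest/azure"
)

// getAuthorizer builds a bearer authorizer from a service principal token
// obtained via adal; the disk and VM clients use it for every request.
func getAuthorizer(tenantID, clientID, clientSecret string) (autorest.Authorizer, error) {
	// OAuth config for the AAD tenant.
	oauthConfig, err := adal.NewOAuthConfig(azure.PublicCloud.ActiveDirectoryEndpoint, tenantID)
	if err != nil {
		return nil, err
	}
	// Token scoped to the ARM endpoint.
	spToken, err := adal.NewServicePrincipalToken(*oauthConfig, clientID, clientSecret,
		azure.PublicCloud.ResourceManagerEndpoint)
	if err != nil {
		return nil, err
	}
	return autorest.NewBearerAuthorizer(spToken), nil
}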

The equivalent code in the Azure disk CSI driver comes from the Kubernetes staging repo.

Create

In order to perform any disk operations, we provide the token received from the adal library as the Authorizer on a disks client. Using this client, we can call CreateOrUpdate to create a new disk, as depicted in the code below. Creation is an asynchronous operation; we use WaitForCompletionRef to wait for its completion. We can specify the name of the disk, its size, its data source, and more using the various options.
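
A minimal sketch adapted from the Azure samples; the import path pins one API version, the resource group, disk name and location are placeholders, and exact package paths and struct fields can differ between SDK versions:

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-07-01/compute"
	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/to"
)

// createDisk creates an empty 10 GiB managed disk and waits for the
// asynchronous operation to complete.
func createDisk(ctx context.Context, authorizer autorest.Authorizer,
	subscriptionID, resourceGroup, diskName, location string) (compute.Disk, error) {

	disksClient := compute.NewDisksClient(subscriptionID)
	disksClient.Authorizer = authorizer // token obtained via adal

	future, err := disksClient.CreateOrUpdate(ctx, resourceGroup, diskName, compute.Disk{
		Location: to.StringPtr(location),
		DiskProperties: &compute.DiskProperties{
			// Empty creation data: a blank disk, no source image or snapshot.
			CreationData: &compute.CreationData{CreateOption: compute.Empty},
			DiskSizeGB:   to.Int32Ptr(10),
		},
	})
	if err != nil {
		return compute.Disk{}, err
	}
	// Creation is asynchronous; block until the long-running operation finishes.
	if err := future.WaitForCompletionRef(ctx, disksClient.Client); err != nil {
		return compute.Disk{}, err
	}
	return future.Result(disksClient)
}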

The equivalent code is invoked by the controller plugin's CreateVolume implementation in the CSI driver. The calls made from the driver can be found in controllerserver.go and the implementation in azure_managedDiskController.go.

Attach

The disk created above needs to be attached to a VM for use. The code below depicts how the disk can be attached. As previously mentioned, the disks client is created using a token we obtained via the adal library. A VM client is created and used to fetch the VM object, and the VM's data disks slice is then updated using the Update call. One important point to note here is the LUN ('0' in this case) we assign to the data disk while attaching; this LUN will be used to locate the attached disk within the VM. We will go through more details of this in the Container access section.
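
A minimal sketch of the attach, reusing the imports from the create sketch above; the VM name is a placeholder and the LUN is hard-coded to 0 for illustration:

// attachDisk attaches an existing managed disk to a VM at LUN 0 by
// appending to the VM's data disks slice and issuing an Update.
func attachDisk(ctx context.Context, authorizer autorest.Authorizer,
	subscriptionID, resourceGroup, vmName string, disk compute.Disk) error {

	vmClient := compute.NewVirtualMachinesClient(subscriptionID)
	vmClient.Authorizer = authorizer

	// Fetch the current VM object so we can append to its data disks.
	// (Assumes the returned StorageProfile.DataDisks slice is non-nil.)
	vm, err := vmClient.Get(ctx, resourceGroup, vmName, "")
	if err != nil {
		return err
	}

	dataDisks := append(*vm.StorageProfile.DataDisks, compute.DataDisk{
		Lun:          to.Int32Ptr(0), // used later to locate the device inside the VM
		Name:         disk.Name,
		CreateOption: compute.DiskCreateOptionTypesAttach,
		ManagedDisk:  &compute.ManagedDiskParameters{ID: disk.ID},
	})

	future, err := vmClient.Update(ctx, resourceGroup, vmName, compute.VirtualMachineUpdate{
		VirtualMachineProperties: &compute.VirtualMachineProperties{
			StorageProfile: &compute.StorageProfile{DataDisks: &dataDisks},
		},
	})
	if err != nil {
		return err
	}
	return future.WaitForCompletionRef(ctx, vmClient.Client)
}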

Code similar to the above goes into azure_controller_standard.go, which is invoked from the ControllerPublish implementation of the controller plugin.

Container access

Once the disk is attached to the VM, we have to identify it and map it for use. The LUN provided while attaching the disk ('0') is used to make the correlation. The following is how the attached disk looks within the VM, under the /dev/disk/azure location; as can be seen, there is an entry corresponding to the LUN. Let's format that device and mount it to a path within the VM. Afterwards, let's leave a cookie file in that path.
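
A minimal sketch of those steps on the VM; the by-LUN device path (shown here as /dev/disk/azure/scsi1/lun0), the symlink target, and the mount point are assumptions based on a typical Azure Linux VM:

# Locate the data disk by LUN under /dev/disk/azure
ls -l /dev/disk/azure/scsi1/
# lun0 -> ../../../sdc        (example: the disk we attached at LUN 0)

# Format it (first use only) and mount it at a path on the VM
sudo mkfs.ext4 /dev/disk/azure/scsi1/lun0
sudo mkdir -p /mnt/azuredisk
sudo mount /dev/disk/azure/scsi1/lun0 /mnt/azuredisk

# Leave a cookie file so we can verify access from the container later
echo "hello from the VM" | sudo tee /mnt/azuredisk/cookie.txt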

This path can then be handed to a container when it is started by Docker, using Docker's mount path options like the following:
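
For example (the image name, container name and container path are just placeholders):

# Bind-mount the VM path into the container at /mnt/data
docker run -d --name disk-demo -v /mnt/azuredisk:/mnt/data ubuntu sleep infinity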

From within the container, we can see that it has access to the path where we mounted the Azure disk by reading the cookie file we placed there from the VM.
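
For example:

# Read the cookie file from inside the container
docker exec disk-demo cat /mnt/data/cookie.txt
# hello from the VM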

Similar code to format and mount the disk is invoked from the NodeStageVolume implementation in nodeserver.go.

Acknowledgement

A big thanks to Andy, Chitkala, Balaji, Anish and Sundeep for reviews and valuable feedback provided on earlier drafts of this article.

Conclusion

In this part of Kubernetes storage on Azure, we went through the details of how a managed disk can be created, attached to a VM, and used from within a container. We looked at how this can be done programmatically using the Azure SDK for Go, and how code similar to this powers the Azure disk CSI driver implementation.

In Part 3 we will look into using Azure file shares on Kubernetes and the internals of the Azure file CSI driver. Until then, bye 🙂 Take care!

Krishnakumar is a Senior Software Engineer on the Azure Data team. Follow him on Twitter at https://twitter.com/kkwriting.

Originally published at http://kkwriting.com on September 22, 2020.

