Kubernetes CSI in action

Kosta Klevensky
May 6, 2019


In this post we will dig into the Kubernetes Container Storage Interface (CSI). We will install the CSI driver for Amazon EBS and see what really happens during the PVC/PV/pod lifecycle.

We’ve saved all the yamls and scripts in the awscsi-demo repository, so all the commands in this post are executed from its root directory.

git clone https://github.com/kosta709/awscsi-demo.git

Test cluster

First we will spin up a new Kubernetes cluster (at least 1.14) on Amazon Linux instances with IAM roles, using kubeadm:

  • Create IAM roles for the master, csi-controller and worker nodes (see awscsi-demo/iam)
  • Follow awscsi-demo/kubernetes to create the master node and set up Kubernetes, the autoscaling groups for csi-controller and workers, and Calico
  • Note the feature gates for the API server (CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true) and for the kubelet (CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true); a config sketch follows below
  • Don’t forget to tag the instances with KubernetesCluster=<cluster-name>, otherwise the kubelet fails on nodes started with --cloud-provider=aws
kubectl get nodes -owide --show-labels
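
If you drive kubeadm with a config file, the feature gates can be wired in roughly like this. This is only a sketch assuming the kubeadm v1beta1 config API that ships with 1.14; the real setup lives in awscsi-demo/kubernetes.

# Sketch of a kubeadm config enabling the CSI feature gates (adjust to your setup)
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true"
---
# Kubelet feature gates, passed via the kubelet component config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSINodeInfo: true
  CSIDriverRegistry: true
  CSIBlockVolume: true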

Install aws-ebs-csi-driver stack

We’ve copied the deploy directory of aws-ebs-csi-driver into our repo, adding a nodeSelector to the pod template of the csi-controller StatefulSet. We also use IAM roles, so we don’t need to put AWS credentials into aws-secret.

[kosta@localhost awscsi-demo]$ kubectl create -f deploy-awscsi/secret.yaml -f deploy-awscsi/rbac.yaml -f deploy-awscsi/controller.yaml -f deploy-awscsi/node.yaml
secret/aws-secret created
serviceaccount/csi-controller-sa created
clusterrole.rbac.authorization.k8s.io/external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/external-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-binding created
clusterrole.rbac.authorization.k8s.io/cluster-driver-registrar-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-driver-registrar-binding created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-binding created
statefulset.apps/ebs-csi-controller created
daemonset.apps/ebs-csi-node created

This creates the DaemonSet ebs-csi-node (running in the host network) and the StatefulSet ebs-csi-controller, along with the RBAC cluster roles and bindings.

kubectl get pods -nkube-system -l 'app in (ebs-csi-node,ebs-csi-controller)' -owide

CSI API Resources

All the CSI stuff deals with the following API resources:

kubectl api-resources | grep -E "^NAME|csi|storage|PersistentVolume"

There are resources from the built-in API groups (the core group and storage.k8s.io) and resources created by CRDs in the snapshot.storage.k8s.io and csi.storage.k8s.io groups.

CSINode, CSIDriver and VolumeAttachment are part of the built-in API since Kubernetes 1.14. CSIDriver and the snapshot resources are also going to be fully part of core very soon (see issue-259 and the snapshot core doc).

So, upon starting, the CSI pods create the CRDs, register the CSIDriver and CSINode objects, and add a topology label to each node where the csi-node DaemonSet is running:

crd created by csi-controller
kubectl get csidrivers
kubectl get csinodes
kubectl get csinode ip-172-16-101-65.ec2.internal -oyaml
kubectl get nodes -L topology.ebs.csi.aws.com/zone
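
A CSINode object looks roughly like this. This is a trimmed sketch: the node name matches our test node, and the instance ID is a placeholder.

apiVersion: storage.k8s.io/v1beta1
kind: CSINode
metadata:
  name: ip-172-16-101-65.ec2.internal
spec:
  drivers:
  - name: ebs.csi.aws.com
    nodeID: i-0123456789abcdef0        # EC2 instance ID reported by the driver (placeholder)
    topologyKeys:
    - topology.ebs.csi.aws.com/zone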

CSI Containers

Before CSI (and still today), all of this stuff was part of the controller-manager and kubelet code. There is also the possibility to write your own provisioner controller, like kubernetes-incubator/external-storage, or to use FlexVolume for a custom implementation of attach/detach/mount/unmount.

With CSI there are common CSI sidecar containers and the CSI driver itself, which contains the storage-provider-specific code.

kubectl get pods -nkube-system -l 'app in (ebs-csi-node,ebs-csi-controller)' -owide

So we’ve started the csi-controller StatefulSet with 6 containers and the csi-node DaemonSet with 3 containers. There is only one container, ebs-plugin, from the image amazon/aws-ebs-csi-driver, that contains the EBS-specific code.

All the others are common CSI sidecar containers from quay.io/k8scsi. These containers talk to the CSI driver over gRPC through a socket in the shared socket-dir emptyDir volume. Note that each sidecar gets a --csi-address=<path to the socket> parameter, while the socket itself is created by the ebs-plugin at the path given by --endpoint=$(CSI_ENDPOINT). Note also that in csi-controller the ebs-plugin does not require privileged mode, but in the DaemonSet it does.
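
A trimmed sketch of this wiring in the controller pod template follows; socket paths and image tags are approximate, check deploy-awscsi/controller.yaml for the real manifest.

# Fragment of the ebs-csi-controller pod template (sketch)
containers:
  - name: ebs-plugin
    image: amazon/aws-ebs-csi-driver:latest
    args:
      - --endpoint=$(CSI_ENDPOINT)          # the driver creates the gRPC socket here
    env:
      - name: CSI_ENDPOINT
        value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
    volumeMounts:
      - name: socket-dir
        mountPath: /var/lib/csi/sockets/pluginproxy/
  - name: csi-provisioner
    image: quay.io/k8scsi/csi-provisioner:v1.0.1
    args:
      - --csi-address=$(ADDRESS)            # the sidecar dials the same socket
    env:
      - name: ADDRESS
        value: /var/lib/csi/sockets/pluginproxy/csi.sock
    volumeMounts:
      - name: socket-dir
        mountPath: /var/lib/csi/sockets/pluginproxy/
volumes:
  - name: socket-dir
    emptyDir: {}                            # shared only between containers of this pod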

A CSI driver must implement the CSI RPC interface. There are three RPC services:

  • CSI Identity — provides driver information (GetPluginInfo, GetPluginCapabilities, Probe)
  • CSI Node — serving CSI RPCs that MUST be run on the Node (mount/unmount related operations)
  • CSI Controller — a gRPC endpoint serving CSI RPCs that MAY be run anywhere (volume create/delete and attach/detach related operations)

csi-controller sidecar containers: csi-provisioner, csi-attacher, csi-snapshotter, cluster-driver-registrar, liveness-probe

csi-node sidecar containers: node-driver-registrar, liveness-probe
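
On the node side, node-driver-registrar registers the driver’s socket with the kubelet through the plugin registration directory. Here is a rough sketch of that part of the node pod template; host paths are the usual defaults and may differ in deploy-awscsi/node.yaml.

# Fragment of the ebs-csi-node pod template (sketch)
containers:
  - name: node-driver-registrar
    image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
    args:
      - --csi-address=$(ADDRESS)
      - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
    env:
      - name: ADDRESS
        value: /csi/csi.sock
      - name: DRIVER_REG_SOCK_PATH
        value: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
    volumeMounts:
      - name: plugin-dir
        mountPath: /csi
      - name: registration-dir
        mountPath: /registration
volumes:
  - name: plugin-dir
    hostPath:
      path: /var/lib/kubelet/plugins/ebs.csi.aws.com/
      type: DirectoryOrCreate
  - name: registration-dir
    hostPath:
      path: /var/lib/kubelet/plugins_registry/   # kubelet watches this dir for new drivers
      type: Directory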

Provisioning PVC

Let’s create a StorageClass and then a PVC from the examples/dynamic-provisioning/specs folder:

kubectl create -f examples/dynamic-provisioning/specs/storageclass.yaml
kubectl create -f examples/dynamic-provisioning/specs/claim.yaml

The PVC stays in Pending status. This is because the StorageClass is configured with volumeBindingMode=WaitForFirstConsumer, which means that provisioning (CreateVolume) occurs only after a pod is scheduled to a node. In our case this solves the problem of matching the volume to the node’s AWS availability zone (see the StorageClass doc). Note that the CSINode object listed above has the topologyKeys=[topology.ebs.csi.aws.com/zone] parameter and the nodes have a topology.ebs.csi.aws.com/zone=us-east-1d label.
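
For reference, the StorageClass and claim look roughly like this (a sketch of the example specs; names and sizes may differ slightly in the repo):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com            # the CSI driver name registered above
volumeBindingMode: WaitForFirstConsumer # provision only after a pod is scheduled
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi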

So let’s create the pod:

kubectl create -f examples/dynamic-provisioning/specs/pod.yaml
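
The pod simply mounts the claim; a rough sketch of examples/dynamic-provisioning/specs/pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh", "-c"]
      args: ["while true; do echo hello >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data              # the EBS volume ends up mounted here
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim            # the PVC created above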

and the following sequence unfolds:

  • csi-controller-0.csi-provisioner issues a CreateVolumeRequest call to the CSI socket (see the csi-provisioner log at I0505 16:56:07.687188)
  • csi-controller-0.ebs-plugin calls AWS CreateVolume and reports the creation back over CSI (see the ebs-plugin log at I0505 16:56:07.690189)
  • csi-controller-0.csi-provisioner creates the PV and updates the PVC to be bound (see the csi-provisioner log at I0505 16:56:14.150148)
  • a VolumeAttachment object is created by the controller-manager (see the controller-manager log at I0505 16:56:14.763247)
  • csi-controller-0.csi-attacher, which watches for VolumeAttachments, submits a ControllerPublishVolume RPC call to ebs-plugin (see the csi-attacher log at I0505 16:56:14.778179)
  • csi-controller-0.ebs-plugin gets ControllerPublishVolume and calls AWS AttachVolume (see the ebs-plugin log at I0505 16:56:15.320234)
  • csi-controller-0.csi-attacher updates the VolumeAttachment status (see the csi-attacher log at I0505 16:56:16.575386)
  • all this time the kubelet waits for the volume to be attached and then submits NodeStageVolume (format the device and mount it to the staging dir on the node) to csi-node.ebs-plugin (see the kubelet log)
  • csi-node.ebs-plugin gets the NodeStageVolume call, formats /dev/xvdca if needed and mounts it to `/var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount`, then responds to the kubelet (see the node ebs-plugin log from I0505 16:56:18.341035 until I0505 16:56:18.935107)
  • the kubelet calls NodePublishVolume (mount the volume to the pod’s dir)
  • csi-node.ebs-plugin performs NodePublishVolume and mounts the volume to `/var/lib/kubelet/pods/<pod-uuid>/volumes/kubernetes.io~csi/<pvc-name>/mount` (see the node ebs-plugin log at I0505 16:56:18.943074)
  • the kubelet starts the pod’s container with the volume
kubectl get pvc
kubectl describe pvc
kubectl get pv
kubectl get volumeattachments
kubectl get pods -owide
mount points on the node

That’s it! The volume has been created, attached and mounted, and the pod is running! The reverse process occurs on pod and PVC deletion. We’ve saved the relevant logs and outputs in our repo.

The EBS CSI driver also implements Volume Snapshots and Raw Block Volumes. You can try them from the Snapshots create/restore and Block Volume examples.
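
For example, a snapshot is requested declaratively, much like a PVC. A rough sketch, assuming the v1alpha1 snapshot CRDs used by this driver version (see the linked examples for the exact specs):

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
snapshotter: ebs.csi.aws.com            # handled by csi-snapshotter + ebs-plugin
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot
spec:
  snapshotClassName: csi-aws-vsc
  source:
    name: ebs-claim                     # the PVC to snapshot
    kind: PersistentVolumeClaim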

References

Kubernetes Container Storage Interface (CSI) Documentation

CSI Spec

Amazon Elastic Block Store (EBS) CSI driver

Container Storage Interface (CSI) for Kubernetes GA
