Cilium: Evaluating pod identities on an AKS cluster running Azure CNI powered by Cilium

Amit Gupta
8 min read · Feb 14, 2024


Source: learn.microsoft.com

☸️ Introduction

Workloads deployed on an Azure Kubernetes Services (AKS) cluster require Microsoft Entra application credentials or managed identities to access Microsoft Entra protected resources, such as Azure Key Vault.

🎯Goals & Objectives

In this article, you will learn how to deploy an AKS cluster running Azure CNI powered by Cilium and configure it to use a workload identity in preparation for application workloads to authenticate with that credential.

How does Workload Identity work?

Microsoft Entra Workload ID integrates with the capabilities native to Kubernetes to federate with external identity providers. Microsoft Entra Workload ID uses Service Account Token Volume Projection enabling pods to use a Kubernetes identity (that is, a service account). A Kubernetes token is issued and OIDC federation enables Kubernetes applications to access Azure resources securely with Microsoft Entra ID based on annotated service accounts.
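For illustration, the projection added by the mutating admission webhook looks roughly like the snippet below. The field values (token audience, expiry, mount path) are the defaults used by Azure Workload Identity and are shown only as a sketch of what gets injected, not something you configure by hand:

# Rough sketch of the volume the webhook injects into a workload pod
volumes:
  - name: azure-identity-token
    projected:
      sources:
        - serviceAccountToken:
            audience: api://AzureADTokenExchange   # audience Microsoft Entra ID expects for federation
            expirationSeconds: 3600                # short-lived token, rotated by the kubelet
            path: azure-identity-token
# ...and the container mounts it read-only at /var/run/secrets/azure/tokens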

Pre-Requisites

  • You should have an Azure Subscription.
  • Install kubectl.
  • Install Helm.
  • Install the aks-preview Azure CLI extension (version 0.5.123 or later).
az extension add --name aks-preview
The installed extension 'aks-preview' is in preview.
az extension update --name aks-preview
Latest version of 'aks-preview' is already installed.
Use --debug for more information
  • Ensure the identity you’re using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see Access and identity options for Azure Kubernetes Service (AKS).
  • Ensure you have enough quota resources to create an AKS cluster. Go to the Subscription blade, navigate to “Usage + Quotas”, and make sure you have enough quota for the following resources (or check from the CLI as sketched after this list):
    - Regional vCPUs
    - Standard Dv4 Family vCPUs
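If you prefer the CLI over the portal, the compute quotas for the target region can be listed as sketched below; the exact quota names vary by subscription, so treat the grep pattern as illustrative.

az vm list-usage --location eastus -o table | grep -i -E "regional vcpus|dv4"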

Let’s get going

Create a Key Vault

  • Azure Key Vault is a cloud service that provides a secure store for keys, secrets, and certificates.
  • Create a Resource Group
az group create --name aksidentity --location eastus
  • Use the Azure CLI az keyvault create command to create a Key Vault in the resource group from the previous step.
az keyvault create --name "<your-unique-keyvault-name>" --resource-group aksidentity --location eastus
  • Provide the necessary permissions to the Key Vault.
    - Log in to the Azure Portal.
    - Search for Key Vault and select the configured key vault.
    - Click Settings > Access configuration.
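The sample workload deployed later in this article reads a secret named mysecret from this vault. As a minimal sketch (assuming your logged-in identity already has permission to manage secrets in the vault), you can create it with:

az keyvault secret set --vault-name "<your-unique-keyvault-name>" --name "mysecret" --value "AKS Secret"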

Create an AKS cluster running Azure CNI powered by Cilium

  • Create an AKS cluster with Azure CNI powered by Cilium in overlay mode.
az aks create -n aksidentity -g aksidentity -l eastus \
--network-plugin azure \
--network-plugin-mode overlay \
--pod-cidr 192.168.0.0/16 \
--network-dataplane cilium

az aks get-credentials --resource-group aksidentity --name aksidentity

Note: Ideally, the AKS cluster would be created with the --enable-workload-identity flag, but that flag is not available during AKS cluster creation here, so we have to update the AKS cluster afterwards.
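Once the cluster is up, a quick sanity check (property and label names as currently exposed by AKS and Cilium; treat this as a sketch) confirms that the Cilium dataplane is in use:

# Should print "cilium"
az aks show -g aksidentity -n aksidentity --query "networkProfile.networkDataplane" -o tsv
# Cilium agent pods should be running on every node
kubectl get pods -n kube-system -l k8s-app=cilium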

Export environment variables

  • To simplify the identity configuration steps, define the following environment variables, which are referenced throughout the rest of this article.
RG_NAME=aksidentity
LOCATION=eastus
CLUSTER_NAME=aksidentity
KEYVAULT_SECRET_NAME=mysecret
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
KEYVAULT_NAME=$(az keyvault list -g ${RG_NAME} --query "[0].name" -o tsv)

Enable workload identity on the AKS cluster

  • Enable workload identity on the newly created AKS cluster.
az aks update --resource-group $RG_NAME --name $CLUSTER_NAME --enable-oidc-issuer --enable-workload-identity
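To confirm the update took effect, you can query the relevant profile flags on the cluster (a quick sketch; both should report true):

az aks show --resource-group $RG_NAME --name $CLUSTER_NAME \
  --query "{oidcIssuerEnabled: oidcIssuerProfile.enabled, workloadIdentityEnabled: securityProfile.workloadIdentity.enabled}" -o table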

Retrieve the OIDC issuer URL

To get the OIDC issuer URL and save it to an environment variable, run the following command (it uses the cluster name and resource group variables defined earlier):

export AKS_OIDC_ISSUER="$(az aks show -n $CLUSTER_NAME -g $RG_NAME --query "oidcIssuerProfile.issuerUrl" -otsv)"
echo $AKS_OIDC_ISSUER
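The issuer URL serves a standard OIDC discovery document, which is what Microsoft Entra ID relies on when validating the federated tokens. A quick way to inspect it is sketched below (the issuer URL already ends with a trailing slash, so the path can be appended directly):

curl -s "${AKS_OIDC_ISSUER}.well-known/openid-configuration"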

Mutating webhook

Verify that you now see a mutating webhook pod on your cluster. The mutating admission webhook projects a signed service account token to a workload’s volume and injects environment variables to pods.

kubectl get pods -n kube-system | grep webhook
azure-wi-webhook-controller-manager-69b4897c88-6slm2 1/1 Running 0 95m
azure-wi-webhook-controller-manager-69b4897c88-z2lc7 1/1 Running 0 95m
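If you want to see the webhook registration itself, a quick check is sketched below (the exact object name can differ between chart versions):

kubectl get mutatingwebhookconfigurations | grep -i azure-wi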

Create a managed identity and grant access to Key Vault

If needed, use the Azure CLI az account set command to select the subscription to work in. Then use the az identity create command to create a managed identity.

export USER_ASSIGNED_IDENTITY_NAME="workload-identity"
az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}"

Note: Currently, there seems to be an issue with assigning Key Vault access policies from the CLI, so use the Azure Portal instead. The policy can be assigned to either a user or a service principal (such as the managed identity created above).
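The service account created in the next step is annotated with the client ID of this managed identity, so capture it into the USER_ASSIGNED_CLIENT_ID variable referenced below:

export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -o tsv)"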

Create a Service Account

Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step.

export SERVICE_ACCOUNT_NAME="workload-identity-sa"
export SERVICE_ACCOUNT_NAMESPACE="default"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: "${USER_ASSIGNED_CLIENT_ID}"
  labels:
    azure.workload.identity/use: "true"
  name: "${SERVICE_ACCOUNT_NAME}"
  namespace: "${SERVICE_ACCOUNT_NAMESPACE}"
EOF
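To confirm the annotation and label were applied, inspect the service account:

kubectl get serviceaccount "${SERVICE_ACCOUNT_NAME}" -n "${SERVICE_ACCOUNT_NAMESPACE}" -o yaml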

Establish a Federated Identity

The namespace and service account name are used to create the subject identifier in the federation. Once this is set up, the managed identity will trust tokens coming from our Kubernetes cluster. The subject claim identifies the principal that is the subject of the token.

az identity federated-credential create --name myfederatedIdentity --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}"
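You can verify that the federated credential was recorded on the managed identity with:

az identity federated-credential list --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" -o table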

Deploy Sample workload & Test

The following YAML deploys a sample .NET application that writes the content of the secret stored in the Key Vault to its log. The .NET application expects two environment variables referencing the Key Vault URL and the Key Vault secret name.

Note the following required fields in the Kubernetes YAML configuration:

  • azure.workload.identity/use: “true”
  • serviceAccountName: ${SERVICE_ACCOUNT_NAME}
  • Apply the YAML file:
export KEYVAULT_URL="$(az keyvault show -g ${RG_NAME} -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: quick-start
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
  labels:
    azure.workload.identity/client-id: CLIENT_ID
    azure.workload.identity/tenant-id: TENANT_ID
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: ${SERVICE_ACCOUNT_NAME}
  containers:
    - image: ghcr.io/azure/azure-workload-identity/msal-net
      name: oidc
      env:
        - name: KEYVAULT_URL
          value: ${KEYVAULT_URL}
        - name: SECRET_NAME
          value: ${KEYVAULT_SECRET_NAME}
EOF
  • Observe that the pod is up and running
kubectl get pods -A -o wide

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default quick-start 1/1 Running 0 61m 192.168.0.232 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system azure-cns-275pp 1/1 Running 0 114m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system azure-cns-644gq 1/1 Running 0 114m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system azure-cns-v7kkj 1/1 Running 0 114m 10.224.0.6 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system azure-ip-masq-agent-7nhpx 1/1 Running 0 114m 10.224.0.6 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system azure-ip-masq-agent-gtvhb 1/1 Running 0 114m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system azure-ip-masq-agent-tjmpg 1/1 Running 0 114m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system azure-wi-webhook-controller-manager-69b4897c88-6slm2 1/1 Running 0 106m 192.168.0.162 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system azure-wi-webhook-controller-manager-69b4897c88-z2lc7 1/1 Running 0 106m 192.168.2.208 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system cilium-gdwbf 1/1 Running 0 113m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system cilium-khn4m 1/1 Running 0 113m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system cilium-operator-fb4c58f8d-5xvxt 1/1 Running 0 113m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system cilium-operator-fb4c58f8d-bsc4s 1/1 Running 0 113m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system cilium-vghnd 1/1 Running 0 113m 10.224.0.6 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system cloud-node-manager-bfnxr 1/1 Running 0 114m 10.224.0.6 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system cloud-node-manager-f2988 1/1 Running 0 114m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system cloud-node-manager-qc4cw 1/1 Running 0 114m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system coredns-789789675-c2w7c 1/1 Running 0 114m 192.168.0.104 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system coredns-789789675-r79pf 1/1 Running 0 111m 192.168.1.191 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system coredns-autoscaler-649b947bbd-5ljt7 1/1 Running 0 114m 192.168.0.200 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system csi-azuredisk-node-6kjbh 3/3 Running 0 114m 10.224.0.6 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system csi-azuredisk-node-pq9bc 3/3 Running 0 114m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system csi-azuredisk-node-q4sgw 3/3 Running 0 114m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system csi-azurefile-node-bjzv7 3/3 Running 0 114m 10.224.0.6 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system csi-azurefile-node-bzvtv 3/3 Running 0 114m 10.224.0.4 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system csi-azurefile-node-hqzc9 3/3 Running 0 114m 10.224.0.5 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system konnectivity-agent-64c4b5f57b-4fd7d 1/1 Running 0 107m 192.168.0.166 aks-nodepool1-50194707-vmss000002 <none> <none>
kube-system konnectivity-agent-64c4b5f57b-zhjf8 1/1 Running 0 107m 192.168.2.86 aks-nodepool1-50194707-vmss000001 <none> <none>
kube-system metrics-server-5467676b76-8mkcc 2/2 Running 0 111m 192.168.1.10 aks-nodepool1-50194707-vmss000000 <none> <none>
kube-system metrics-server-5467676b76-xthmb 2/2 Running 0 111m 192.168.2.24 aks-nodepool1-50194707-vmss000001 <none> <none>
  • Once the pod is running, check that it logs the Key Vault secret. If the pod was able to reach the Key Vault successfully, you will see the following message:
kubectl logs quick-start

START 01/11/2024 11:35:35 (quick-start)
Your secret is AKS Secret!
  • You can also inspect the environment variables and volume mounts on the pod, including those injected by the webhook (for example, with kubectl describe pod quick-start):
Environment:
  KEYVAULT_URL:                https://aksidentity.vault.azure.net/
  SECRET_NAME:                 ########
  AZURE_CLIENT_ID:             ####################################
  AZURE_TENANT_ID:             ####################################
  AZURE_FEDERATED_TOKEN_FILE:  /var/run/secrets/azure/tokens/azure-identity-token
  AZURE_AUTHORITY_HOST:        https://login.microsoftonline.com/
Mounts:
  /var/run/secrets/azure/tokens from azure-identity-token (ro)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dx6dr (ro)

Accessing Azure Resources from the test pod

  • Install Azure CLI on the test pod
  • Fetch the federation token from the pod
kubectl exec quick-start -- cat /var/run/secrets/azure/tokens/azure-identity-token
  • Using the federated token, log in to Azure CLI from the test pod. The AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_FEDERATED_TOKEN_FILE environment variables injected by the webhook (shown above) supply everything the login needs.
az login --service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID --federated-token "$(cat $AZURE_FEDERATED_TOKEN_FILE)" --debug
  • You should not be prompted for a password as the token will be used for authentication.
  • Once the login goes through, you can run az commands with that identity, for example:
az aks list -o table
Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
---------------- ---------------- ----------------- ------------------- -------------------------- ------------------- -------------------------------------------------------------------------------------------------------

devsecops-aks eastus rg-aks-gha 1.27 1.27.7 Succeeded ###############################################################
prvaks westeurope prvaksvnet 1.27 1.27.7 Succeeded ########################################################################################

az group list -o table
Name Location Status
---------------------------------------------------- ---------------- ---------
NetworkWatcherRG eastus Succeeded
quickstart-rancher-quickstart westus3 Succeeded
AzureArc eastus Succeeded
rg-aks-gha eastus Succeeded
MC_rg-aks-gha_devsecops-aks_eastus eastus Succeeded
kubeadm southindia Succeeded
prvaks westeurope Succeeded
prvaksvnet westeurope Succeeded
MC_prvaksvnet_prvaks_westeurope westeurope Succeeded

Try out Cilium

  • Try out Cilium to get first-hand experience of how it solves real networking, security, and observability problems in your cloud-native or on-prem environments.

🌟 Conclusion 🌟

Hopefully, this post gave you a good idea of how to deploy an AKS cluster running Azure CNI powered by Cilium and configure it to use a workload identity in preparation for application workloads to authenticate with that credential. Thank you for reading! 🙌🏻😁📃 See you in the next blog.

🚀 Feel free to connect with or follow me on:

LinkedIn: linkedin.com/in/agamitgupta
