Cilium: Pod Sandboxing in AKS and Azure CNI powered by Cilium

Amit Gupta
14 min read · Dec 27, 2023


Source: containerd/cri

☸ ️Introduction

A sandbox is a tightly controlled environment in which an application runs. Sandboxed environments impose strict restrictions on resources and are often used to isolate and execute untested or untrusted programs without risking harm to the host machine or operating system.

Containers and virtual machines differ in several ways, but one of the key differences is how the operating system kernel is used. Each VM on a host gets its own copy of the kernel, whereas all containers on a host share the same kernel. This is part of what makes containers so fast, small and flexible, but it brings issues, especially around security. If attackers manage to compromise one container running on a host and gain access to the kernel, they can likely gain access to every container running on that host. This is a concern for anyone running potentially hostile, multi-tenant workloads in containers and Kubernetes.

Pod Sandboxing is a solution to this problem, bringing a way to run a container with its own copy of the kernel rather than sharing it with the rest of the host. An attack on one container on the host will no longer compromise all containers on the host.

🎯Goals & Objectives

In this article you will learn how to implement Pod Sandboxing on an AKS cluster running Azure CNI powered by Cilium. This feature is currently in preview.

How does Pod Sandboxing work?

Pod Sandboxing in AKS is based on a technology called Kata Containers. Kata Containers look and behave like regular containers, but each one is wrapped in a small, lightweight virtual machine. This virtual machine has its own kernel, separate from the host kernel, which means an attack on one container can no longer reach the host kernel.

Source: katacontainers.io

Hardware isolation allocates dedicated resources to each pod and doesn’t share them with other Kata Containers or namespace containers running on the same host.

Deploying Pod Sandboxing using Kata Containers is similar to the standard containerd workflow to deploy containers. The deployment includes kata-runtime options that you can define in the pod template. To use this feature with a pod, the only difference is to add runtimeClassName kata-mshv-vm-isolation to the pod spec.

When a pod uses the kata-mshv-vm-isolation runtimeClass, it creates a VM to serve as the pod sandbox to host the containers.
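For illustration, here is a minimal sketch of where that field sits in a pod manifest (the pod and container names are hypothetical, chosen only for this example; the complete manifests used in this walkthrough appear in the trusted/untrusted application sections further below):

kind: Pod
apiVersion: v1
metadata:
  name: sandboxed-example            # hypothetical name, for illustration only
spec:
  runtimeClassName: kata-mshv-vm-isolation   # this single line opts the pod into the Kata VM sandbox
  containers:
  - name: app                        # hypothetical container name
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
    command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]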

Why would I want to use Pod Sandboxing?

It’s all about security and access to the host kernel. If you run multiple workloads on a cluster, host multi-tenant applications, run applications that might come under attack, or simply want your workloads to be as isolated as possible, then this feature is for you.

Pre-Requisites

  • You should have an Azure Subscription.
  • Install kubectl.
  • Install Helm.
  • Install the aks-preview Azure CLI extension
    - The aks-preview Azure CLI extension version 0.5.123 or later.
az extension add --name aks-preview
The installed extension 'aks-preview' is in preview.
az extension update --name aks-preview
Latest version of 'aks-preview' is already installed.
Use --debug for more information
  • Ensure you have enough quota resources to create an AKS cluster. Go to the Subscription blade, navigate to “Usage + Quotas”, and make sure you have enough quota for the following resources (alternatively, check from the CLI as shown just after this list):
    - Regional vCPUs
    - vCPUs for the VM family you plan to use (this walkthrough uses Standard_D4s_v3, which draws from the Standard DSv3 Family vCPUs quota)
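If you prefer checking quota from the CLI, az vm list-usage shows the same information. A quick sketch for the westus2 region used later in this walkthrough (the grep filter assumes a Linux or macOS shell):

az vm list-usage --location westus2 -o table | grep -iE "Regional|DSv3"

The Limit column should comfortably exceed the vCPUs you plan to allocate (a single Standard_D4s_v3 node uses 4 vCPUs).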

Limitations

The following constraints apply to this preview of Pod Sandboxing:

  • Kata containers may not reach the IOPS performance limits that traditional containers can reach on Azure Files and high performance local SSD.
  • Microsoft Defender for Containers doesn’t support assessing Kata runtime pods.
  • Kata host-network isn’t supported.

Let’s get going

Registering the KataVMIsolationPreview feature

  • Register the KataVMIsolationPreview feature flag in your Azure subscription by using the az feature register command, as shown in the following example:
az feature register --namespace "Microsoft.ContainerService" --name "KataVMIsolationPreview"
Once the feature 'KataVMIsolationPreview' is registered, invoking 'az provider register -n Microsoft.ContainerService' is required to get the change propagated
{
  "id": "/subscriptions/##############################/providers/Microsoft.Features/providers/Microsoft.ContainerService/features/KataVMIsolationPreview",
  "name": "Microsoft.ContainerService/KataVMIsolationPreview",
  "properties": {
    "state": "Registering"
  },
  "type": "Microsoft.Features/providers/features"
}
  • It takes a few minutes for the status to show Registered. Verify the registration status by using the az feature show command:
az feature show --namespace "Microsoft.ContainerService" --name "KataVMIsolationPreview" -o table
Name RegistrationState
------------------------------------------------- -------------------
Microsoft.ContainerService/KataVMIsolationPreview Registered
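As the registration output above notes, once the feature shows Registered you also need to refresh the Microsoft.ContainerService resource provider registration so the change propagates:

az provider register --namespace Microsoft.ContainerService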

Deploy a new AKS cluster with Azure CNI powered by Cilium in Overlay Mode

Note- you can deploy this either in Overlay mode or VNET mode.

  • Set the Subscription- If you have multiple Azure subscriptions, choose the subscription you want to use.
    - Replace SubscriptionName with your subscription name.
    - You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName
  • Create a Resource Group in a particular region
az group create --name azpcoverlayal --location westus2
{
  "id": "/subscriptions/##################/resourceGroups/azpcoverlayal",
  "location": "westus2",
  "managedBy": null,
  "name": "azpcoverlayal",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
  • Create an AKS cluster
    - --workload-runtime: specify KataMshvVmIsolation to enable the Pod Sandboxing feature on the node pool. When this parameter is set, the other parameters below must satisfy the following requirements; otherwise, the command fails and reports an issue with the corresponding parameter(s).
    - --os-sku: AzureLinux. Only the Azure Linux os-sku supports this feature in this preview release.
    - --node-vm-size: any Azure VM size that is a generation 2 VM and supports nested virtualization works, for example Dsv3-series VMs.
az aks create -n azpcoverlayal -g azpcoverlayal -l westus2 \
--network-plugin azure \
--network-plugin-mode overlay \
--pod-cidr 192.168.0.0/16 \
--network-dataplane cilium \
--workload-runtime KataMshvVmIsolation \
--node-vm-size Standard_D4s_v3 \
--os-sku AzureLinux
  • Set the Kubernetes context- Log in to the Azure portal, browse to Kubernetes services, select the AKS cluster that was just created, and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.
az aks get-credentials --resource-group azpcoverlayal --name azpcoverlayal
  • List all the pods in all the namespaces
kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system azure-cns-gqt9k 1/1 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system azure-ip-masq-agent-4ms5c 1/1 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system cilium-operator-d78f778f7-7d7h2 1/1 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system cilium-operator-d78f778f7-m96fs 0/1 Pending 0 6d <none> <none> <none> <none>
kube-system cilium-tdgdw 1/1 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system cloud-node-manager-wb7mw 1/1 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system coredns-789789675-d6gzr 1/1 Running 0 6d5h 192.168.0.197 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system coredns-789789675-x4kz9 1/1 Running 0 6d5h 192.168.0.53 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system coredns-autoscaler-649b947bbd-znrpb 1/1 Running 0 6d5h 192.168.0.204 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system csi-azuredisk-node-9tj89 3/3 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system csi-azurefile-node-9fftz 3/3 Running 0 6d5h 10.224.0.4 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system extension-agent-59ff6f87bc-x8w74 2/2 Running 0 6d5h 192.168.0.189 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system extension-operator-59fcdc5cdc-cbcbw 2/2 Running 0 6d5h 192.168.0.65 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system hubble-relay-76ff659b59-dvjmc 1/1 Running 0 6d5h 192.168.0.27 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system konnectivity-agent-7b6475bfbd-6xx9h 1/1 Running 0 6d5h 192.168.0.230 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system konnectivity-agent-7b6475bfbd-cvdkn 1/1 Running 0 6d5h 192.168.0.201 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system metrics-server-5bd48455f4-7wrxt 2/2 Running 0 6d5h 192.168.0.92 aks-nodepool1-14951783-vmss000000 <none> <none>
kube-system metrics-server-5bd48455f4-mlskb 2/2 Running 0 6d5h 192.168.0.212 aks-nodepool1-14951783-vmss000000 <none> <none>
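Before scheduling sandboxed pods, you can also confirm that the Kata runtime class exists on the cluster. A quick check (the exact set of runtime classes listed may vary by AKS release, so treat the expected output as indicative):

kubectl get runtimeclass

You should see an entry named kata-mshv-vm-isolation, which is the runtimeClassName used later for the untrusted pod.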

Update an existing AKS cluster with Azure CNI powered by Cilium in Overlay Mode

Note- your existing AKS cluster could have been deployed either in Overlay mode or VNET mode.

  • Use the following command to enable Pod Sandboxing (preview) by creating a node pool to host it.
    - --workload-runtime: specify KataMshvVmIsolation to enable the Pod Sandboxing feature on the node pool. When this parameter is set, the other parameters below must satisfy the following requirements; otherwise, the command fails and reports an issue with the corresponding parameter(s).
    - --os-sku: AzureLinux. Only the Azure Linux os-sku supports this feature in this preview release.
    - --node-vm-size: any Azure VM size that is a generation 2 VM and supports nested virtualization works, for example Dsv3-series VMs.
az aks nodepool add --cluster-name azpcoverlay --resource-group azpcoverlay --name nodepool2 --os-sku AzureLinux --workload-runtime KataMshvVmIsolation --node-vm-size Standard_D4s_v3 --node-count 1
  • Enable pod sandboxing (preview) on the cluster.
az aks update --name azpcoverlay --resource-group azpcoverlay
  • List all the pods in all the namespaces
kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default trusted 1/1 Running 0 6d2h 192.168.1.121 aks-nodepool2-37579464-vmss000000 <none> <none>
default untrusted 1/1 Running 0 6d2h 192.168.1.115 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system azure-cns-7rvd5 1/1 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system azure-cns-9h7xf 1/1 Running 0 6d2h 10.224.0.5 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system azure-ip-masq-agent-cwzg9 1/1 Running 0 6d2h 10.224.0.5 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system azure-ip-masq-agent-qgwr4 1/1 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system cilium-dh6hx 1/1 Running 0 6d2h 10.224.0.5 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system cilium-gbb4d 1/1 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system cilium-operator-fb4c58f8d-lmkg7 1/1 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system cloud-node-manager-n5b26 1/1 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system cloud-node-manager-vwcnr 1/1 Running 0 6d2h 10.224.0.5 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system coredns-789789675-pwbjz 1/1 Running 0 6d2h 192.168.0.92 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system coredns-789789675-vv2zv 1/1 Running 0 6d2h 192.168.0.213 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system coredns-autoscaler-649b947bbd-br2zr 1/1 Running 0 6d2h 192.168.0.165 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system csi-azuredisk-node-59f9p 3/3 Running 0 6d2h 10.224.0.5 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system csi-azuredisk-node-wpkx5 3/3 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system csi-azurefile-node-dhfcw 3/3 Running 0 6d2h 10.224.0.5 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system csi-azurefile-node-pww7z 3/3 Running 0 6d2h 10.224.0.4 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system konnectivity-agent-7b65687ff7-fvmzn 1/1 Running 0 6d1h 192.168.1.207 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system konnectivity-agent-7b65687ff7-r9rx5 1/1 Running 0 6d1h 192.168.0.239 aks-nodepool1-29355248-vmss000000 <none> <none>
kube-system metrics-server-5955767688-l4dx2 2/2 Running 0 6d2h 192.168.1.222 aks-nodepool2-37579464-vmss000000 <none> <none>
kube-system metrics-server-5955767688-pqrj6 2/2 Running 0 6d2h 192.168.1.68 aks-nodepool2-37579464-vmss000000 <none> <none>
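To confirm that the new node pool was created with the Kata workload runtime, you can query it directly (a sketch, assuming the aks-preview extension exposes the workloadRuntime property for the node pool):

az aks nodepool show --cluster-name azpcoverlay --resource-group azpcoverlay --name nodepool2 --query workloadRuntime -o tsv

This should print KataMshvVmIsolation.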

Deploy a trusted application

To demonstrate deployment of a trusted application on the shared kernel in the AKS cluster, perform the following steps.

  • Create a file named trusted-application.yaml to describe a trusted pod, and then paste the following manifest.
kind: Pod
apiVersion: v1
metadata:
  name: trusted
spec:
  containers:
  - name: trusted
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
    command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  • Deploy the Kubernetes pod by specifying your trusted-application.yaml file:
kubectl apply -f trusted-application.yaml

Deploy an untrusted application

To demonstrate deployment of an untrusted application on the shared kernel in the AKS cluster, perform the following steps.

  • Create a file named untrusted-application.yaml to describe an untrusted pod, and then paste the following manifest.
kind: Pod
apiVersion: v1
metadata:
  name: untrusted
spec:
  runtimeClassName: kata-mshv-vm-isolation
  containers:
  - name: untrusted
    image: mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
    command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  • Deploy the Kubernetes pod by specifying your untrusted-application.yaml file:
kubectl apply -f untrusted-application.yaml
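With both pods running, a quick side-by-side view makes the difference obvious. A sketch using kubectl custom columns (the RUNTIME column shows <none> for the trusted pod because it uses the default runtime):

kubectl get pods trusted untrusted -o custom-columns=NAME:.metadata.name,RUNTIME:.spec.runtimeClassName,NODE:.spec.nodeName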

Verify Kernel Isolation configuration

  • In this example, you’re accessing the container inside the untrusted pod.
    - To see the kernel version, run uname -r:
kubectl exec -it untrusted -- /bin/bash
root@untrusted:/# uname -r
6.1.0.mshv14
  • You’ll notice that it reports a different kernel version compared to the trusted container outside the sandbox (the trusted container reports the node’s own kernel, as you can confirm against the node listing in the next section).
    - To see the kernel version, run uname -r:
kubectl exec -it trusted -- /bin/bash
root@trusted:/# uname -r
5.15.126.mshv9-2.cm2

Cluster and Cilium Health Check

  • Let’s check the health of the nodes
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-nodepool1-29355248-vmss000000 Ready agent 6d2h v1.27.7 10.224.0.4 <none> Ubuntu 22.04.3 LTS 5.15.0-1052-azure containerd://1.7.5-1
aks-nodepool2-37579464-vmss000000 Ready agent 6d2h v1.27.7 10.224.0.5 <none> CBL-Mariner/Linux 5.15.126.mshv9-2.cm2 containerd://1.7.2
  • Let’s check the health status for Cilium
kubectl exec -ti ds/cilium -n kube-system -- cilium status
Defaulted container "cilium-agent" out of: cilium-agent, install-cni-binaries (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), systemd-networkd-overrides (init), block-wireserver (init)
KVStore: Ok Disabled
Kubernetes: Ok 1.27 (v1.27.7) [linux/amd64]
Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Strict [eth0 10.224.0.4 (Direct Routing)]
Host firewall: Disabled
CNI Chaining: none
Cilium: Ok 1.12.10 (v1.12.10-628b5209ef)
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: delegated to plugin,
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: Disabled
Controller Status: 29/29 healthy
Proxy Status: No managed proxy redirect
Global Identity Range: min 256, max 65535
Hubble: Disabled
Encryption: Disabled
Cluster health: Probe disabled
  • Let’s also check the node-to-node health with cilium-health status
kubectl exec -ti ds/cilium -n kube-system -- cilium-health status
Defaulted container "cilium-agent" out of: cilium-agent, install-cni-binaries (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), systemd-networkd-overrides (init), block-wireserver (init)
Probe time: 2023-12-27T12:15:12Z
Nodes:
aks-nodepool1-29355248-vmss000000 (localhost):
Host connectivity to 10.224.0.4:
ICMP to stack: OK, RTT=76.501µs
HTTP to agent: OK, RTT=246.505µs

Validate the installation

Let’s run a cilium connectivity test (an automated test that checks that Cilium has been deployed correctly and tests intra-node connectivity, inter-node connectivity and network policies) to verify that everything is working as expected.
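Note that the connectivity test is driven by the Cilium CLI, which isn’t covered in the prerequisites above. If you don’t already have it installed, the upstream Cilium documentation provides an install snippet along these lines (shown for Linux amd64; adjust CLI_ARCH for your platform):

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

With the CLI in place, run the test: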

cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [azpcoverlayal] Creating namespace cilium-test for connectivity check...
✨ [azpcoverlayal] Deploying echo-same-node service...
✨ [azpcoverlayal] Deploying DNS test server configmap...
✨ [azpcoverlayal] Deploying same-node deployment...
✨ [azpcoverlayal] Deploying client deployment...
✨ [azpcoverlayal] Deploying client2 deployment...
⌛ [azpcoverlayal] Waiting for deployment cilium-test/client to become ready...
⌛ [azpcoverlayal] Waiting for deployment cilium-test/client2 to become ready...
⌛ [azpcoverlayal] Waiting for deployment cilium-test/echo-same-node to become ready...
⌛ [azpcoverlayal] Waiting for CiliumEndpoint for pod cilium-test/client-75bff5f5b9-s4lvb to appear...
⌛ [azpcoverlayal] Waiting for CiliumEndpoint for pod cilium-test/client2-88575dbb7-pbx92 to appear...
⌛ [azpcoverlayal] Waiting for pod cilium-test/client-75bff5f5b9-s4lvb to reach DNS server on cilium-test/echo-same-node-55fdb9f64c-76wgb pod...
⌛ [azpcoverlayal] Waiting for pod cilium-test/client2-88575dbb7-pbx92 to reach DNS server on cilium-test/echo-same-node-55fdb9f64c-76wgb pod...
⌛ [azpcoverlayal] Waiting for pod cilium-test/client-75bff5f5b9-s4lvb to reach default/kubernetes service...
⌛ [azpcoverlayal] Waiting for pod cilium-test/client2-88575dbb7-pbx92 to reach default/kubernetes service...
⌛ [azpcoverlayal] Waiting for CiliumEndpoint for pod cilium-test/echo-same-node-55fdb9f64c-76wgb to appear...
⌛ [azpcoverlayal] Waiting for Service cilium-test/echo-same-node to become ready...
⌛ [azpcoverlayal] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-tdgdw
⌛ [azpcoverlayal] Waiting for NodePort 10.224.0.4:30026 (cilium-test/echo-same-node) to become ready...
ℹ️ Skipping IPCache check
🔭 Enabling Hubble telescope...
ℹ️ Expose Relay locally with:
cilium hubble enable
cilium hubble port-forward&
ℹ️ Cilium version: 1.12.17
🏃 Running 61 tests ...
[=] Test [no-unexpected-packet-drops] [1/61]

[=] Test [no-policies] [2/61]
.....................
[=] Test [no-policies-extra] [3/61]
..
[=] Test [allow-all-except-world] [4/61]
........
[=] Test [client-ingress] [5/61]
..
[=] Test [client-ingress-knp] [6/61]
..
[=] Test [allow-all-with-metrics-check] [7/61]
..
[=] Test [all-ingress-deny] [8/61]
......
[=] Test [all-ingress-deny-knp] [9/61]
......
[=] Test [all-egress-deny] [10/61]
........
[=] Test [all-egress-deny-knp] [11/61]
........
[=] Test [all-entities-deny] [12/61]
......
[=] Test [cluster-entity] [13/61]
..
[=] Test [host-entity] [14/61]
..
[=] Test [echo-ingress] [15/61]
..
[=] Test [echo-ingress-knp] [16/61]
..
[=] Test [client-ingress-icmp] [17/61]
..
[=] Test [client-egress] [18/61]
..
[=] Test [client-egress-knp] [19/61]
..
[=] Test [client-egress-expression] [20/61]
..
[=] Test [client-egress-expression-knp] [21/61]
..
[=] Test [client-with-service-account-egress-to-echo] [22/61]
..
[=] Test [client-egress-to-echo-service-account] [23/61]
..
[=] Test [to-entities-world] [24/61]
......
[=] Test [to-cidr-external] [25/61]
....
[=] Test [to-cidr-external-knp] [26/61]
....
[=] Test [echo-ingress-from-other-client-deny] [27/61]
....
[=] Test [client-ingress-from-other-client-icmp-deny] [28/61]
....
[=] Test [client-egress-to-echo-deny] [29/61]
....
[=] Test [client-ingress-to-echo-named-port-deny] [30/61]
..
[=] Test [client-egress-to-echo-expression-deny] [31/61]
..
[=] Test [client-with-service-account-egress-to-echo-deny] [32/61]
..
[=] Test [client-egress-to-echo-service-account-deny] [33/61]
.
[=] Test [client-egress-to-cidr-deny] [34/61]
....
[=] Test [client-egress-to-cidr-deny-default] [35/61]
....
[=] Skipping Test [health] [36/61] (Feature health-checking is disabled)
[=] Skipping Test [north-south-loadbalancing] [37/61] (Feature node-without-cilium is disabled)
[=] Skipping Test [pod-to-node-cidrpolicy] [38/61] (Feature cidr-match-nodes is disabled)
[=] Skipping Test [north-south-loadbalancing-with-l7-policy] [39/61] (requires Cilium version >1.13.2 but running 1.12.17)
[=] Test [echo-ingress-l7] [40/61]
......
[=] Test [echo-ingress-l7-named-port] [41/61]
......
[=] Test [client-egress-l7-method] [42/61]
......
[=] Test [client-egress-l7] [43/61]
........
[=] Test [client-egress-l7-named-port] [44/61]
........
[=] Skipping Test [client-egress-l7-tls-deny-without-headers] [45/61] (Feature secret-backend-k8s is disabled)
[=] Skipping Test [client-egress-l7-tls-headers] [46/61] (Feature secret-backend-k8s is disabled)
[=] Skipping Test [client-egress-l7-set-header] [47/61] (Feature secret-backend-k8s is disabled)
[=] Skipping Test [echo-ingress-auth-always-fail] [48/61] (Feature mutual-auth-spiffe is disabled)
[=] Skipping Test [echo-ingress-mutual-auth-spiffe] [49/61] (Feature mutual-auth-spiffe is disabled)
[=] Skipping Test [pod-to-ingress-service] [50/61] (Feature ingress-controller is disabled)
[=] Skipping Test [pod-to-ingress-service-deny-all] [51/61] (Feature ingress-controller is disabled)
[=] Skipping Test [pod-to-ingress-service-deny-ingress-identity] [52/61] (Feature ingress-controller is disabled)
[=] Skipping Test [pod-to-ingress-service-deny-backend-service] [53/61] (Feature ingress-controller is disabled)
[=] Skipping Test [pod-to-ingress-service-allow-ingress-identity] [54/61] (Feature ingress-controller is disabled)
[=] Skipping Test [outside-to-ingress-service] [55/61] (Feature ingress-controller is disabled)
[=] Skipping Test [outside-to-ingress-service-deny-world-identity] [56/61] (Feature ingress-controller is disabled)
[=] Skipping Test [outside-to-ingress-service-deny-cidr] [57/61] (Feature ingress-controller is disabled)
[=] Skipping Test [outside-to-ingress-service-deny-all-ingress] [58/61] (Feature ingress-controller is disabled)
[=] Test [dns-only] [59/61]
........
[=] Test [to-fqdns] [60/61]
........
[=] Skipping Test [external-cilium-dns-proxy] [61/61] (Feature cilium-dnsproxy-deployed is disabled)

✅ All 42 tests (184 actions) successful, 19 tests skipped, 0 scenarios skipped.
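Once the run completes, the test workloads stay behind in the cilium-test namespace that the tool created; you can clean them up with:

kubectl delete namespace cilium-test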

Try out Cilium

  • Try out Cilium and get first-hand experience of how it solves real problems and use cases in your cloud-native or on-prem environments related to Networking, Security or Observability.

🌟Conclusion 🌟

Hopefully, this post gave you a good overview of how to enable Pod Sandboxing on an AKS cluster running Azure CNI powered by Cilium. Thank you for reading!! 🙌🏻😁📃 See you in the next blog.

🚀 Feel free to connect with or follow me on:

LinkedIn: linkedin.com/in/agamitgupta
