Touring the Kubernetes Istio Ambient Mesh — Part 1: Setup, ZTunnel

Sabuj Jana
15 min read · Jun 16, 2024


You can learn more about Istio in my previous blogs. This blog assumes preliminary knowledge of working with Istio.

Istio is moving towards an ambient mode with a sidecar-less model. Finally, we will be able to ditch the sidecar, which was consuming a lot of our $$ in terms of CPU and memory!

Since it’s a long weekend ahead, I decided to get my hands dirty with Istio ambient mesh.

I am using this blog to document the journey so that like-minded Istio adopters can follow along :P

In this Part 1, I will focus on getting our local cluster set up properly for ambient mesh experiments. Then, I will explore the ztunnel component of the mesh.

In Part 2, we will talk about L4 Auth Policies and then get started with waypoint proxy.

What is Ambient?

Istio Ambient Mesh is essentially a new dataplane mode of Istio — without sidecars.

In the normal mode of Istio, we recall that every single application pod gets injected with an Envoy proxy. In this new mode, application pods are left untouched :) and run only their own application containers.

One big advantage that comes to mind instantly is the much reduced infra cost: imagine the dollars we save in terms of compute cores and memory!

Ambient Architecture

Time to envision the old and the new ambient architectures and see how their dataplane traffic paths differ.

Sidecar mode

sidecar mode

This is the traditional sidecar model, where each service pod contains the application container coupled with an Envoy sidecar. All traffic to and from the application is intercepted by the sidecar.

Ambient mode, with ztunnel

Ambient mode — with ztunnel

In this new ambient mode, the application pods are standalone pods with no sidecars. Instead, each Kubernetes node in the cluster runs a daemonset pod: the mighty ztunnel (zero-trust tunnel). All traffic to and from pods on the node is intercepted by the ztunnel. Ztunnel is an L4 per-node proxy.

Ambient mode, with ztunnel + waypoint

Ambient mode — ztunnel + waypoint

Ztunnel is sufficient for networking between workloads that only need an L4 proxy. For L7 requirements, like HTTP header-based routing or L7 authorization, we deploy a workload called the waypoint proxy: an Envoy pod deployed per application. Waypoints may run on the same node as the workload or on a different one.

In this case, traffic originating from the source ztunnel hits the waypoint proxy, and the waypoint then forwards it to the destination ztunnel.

Hence, if an app has no requirement for L7 processing, we can ditch the waypoint and work with ztunnel alone. In the sidecar mode, we were mandated to use Envoy sidecars even if we only had L4 requirements.

Cluster Setup In Depth

It took me a lot of tinkering to set up a Kind cluster with CNI settings that properly support ambient workloads. Currently, I don't have access to any public GKE/AKS/EKS k8s clusters for this experimentation.

Hence, I will set up a local Kind cluster at home.

System Specs

  • Mac M1 (Apple-Silicon arch) with the default configurations.

You can proceed similarly, keeping in mind your system architecture (Linux/Windows/Apple Intel).

  • Rancher Desktop

We need a Docker runtime for our k8s cluster. Rancher Desktop by SUSE is what I will be using — https://rancherdesktop.io/.

An alternative is Docker Desktop — https://www.docker.com/products/docker-desktop/

  • Kind

Since I will be needing a multi-node k8s cluster, what better tool to bootstrap it with than Kind — https://kind.sigs.k8s.io/docs/user/quick-start.

This will also allow us to ssh into the k8s nodes and play around with the node configuration.

Later in the blog, after installing the Istio ambient profile, we also encounter a known issue with the Kind nodes: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files. We need to modify sysctl.conf on the affected nodes, after which the istio-system pods spawn up fine.
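Roughly, the fix I applied looks like this (the limit values are the ones suggested on that Kind known-issues page; repeat for each affected node, e.g. ambient-worker, ambient-worker2, ambient-control-plane):

docker exec -it ambient-worker bash
# inside the node container, bump the inotify limits
sysctl -w fs.inotify.max_user_watches=524288
sysctl -w fs.inotify.max_user_instances=512
# optionally persist them across node restarts
echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf
echo "fs.inotify.max_user_instances=512" >> /etc/sysctl.conf
exit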

  • CNI (Container Networking Interface)

Istio ambient mode is supported with CNIs like Calico, Cilium, etc. The default CNI for Kind clusters is "kindnetd", which I tested myself and found that the ztunnel pods were not able to spawn up. Hence, I needed a different CNI and went with Calico.
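For reference, once the cluster is up with the default CNI disabled (see the Kind config in the next section), installing Calico is a single manifest apply. The exact release in the URL below is an assumption on my part, so pick whatever version the Calico docs currently recommend:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml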

Istio Installation

Cluster overview

Before installing Istio, let us go through the cluster specs. (Throughout this blog, I use kubectl aliases such as kg for kubectl get, kgpo for kubectl get pods, kgs for kubectl get svc, and kpf for kubectl port-forward.)

Here is the Kind cluster setup script.
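A minimal sketch of the equivalent config: three nodes, the default kindnet CNI disabled so Calico can take over, and a pod subnet set to 192.168.0.0/16 to line up with Calico's default pool (the pod IPs later in this blog are indeed 192.168.x.x). The file name is my own choice.

# kind-ambient.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ambient
networking:
  disableDefaultCNI: true        # we install Calico instead of kindnet
  podSubnet: "192.168.0.0/16"    # lines up with Calico's default IPv4 pool
nodes:
- role: control-plane
- role: worker
- role: worker

kind create cluster --config kind-ambient.yaml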

We have 3 nodes: one control-plane node and two worker nodes.

(⎈|kind-ambient:istio-system)➜  ~ kg nodes
NAME STATUS ROLES AGE VERSION
ambient-control-plane Ready control-plane 117m v1.30.0
ambient-worker Ready <none> 116m v1.30.0
ambient-worker2 Ready <none> 116m v1.30.0

We also ensure that the Calico CNI daemonset is up and running, and verify that the cluster CoreDNS pods are ready.

(⎈|kind-ambient:kube-system)➜  ~ kgpo
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-564985c589-xmsrp 1/1 Running 1 (96m ago) 117m
calico-node-796kq 1/1 Running 0 116m
calico-node-l6fsg 1/1 Running 0 116m
calico-node-zkckp 1/1 Running 0 116m
coredns-7db6d8ff4d-h88q6 1/1 Running 0 118m
coredns-7db6d8ff4d-rtncd 1/1 Running 0 118m

Install Istio Ambient Profile

The installation is straightforward and can be found here: https://istio.io/v1.20/docs/ops/ambient/getting-started/

sabuj $ istioctl install --set profile=ambient --set "components.ingressGateways[0].enabled=true" --set "components.ingressGateways[0].name=istio-ingressgateway" --skip-confirmation
✔ Istio core installed
✔ Ztunnel installed
✔ Istiod installed
✔ CNI installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.

  • Here is the version of Istio I am using:
(⎈|kind-ambient:kube-system)➜  ~ istioctl version
client version: 1.15.0
control plane version: 1.22.1
data plane version: 1.22.1 (4 proxies)

  • Ztunnel is a daemonset that runs on each node — 3 nodes in this case. Let us verify that ztunnel is up and running in the istio-system namespace.
(⎈|kind-ambient:istio-system)➜  ~ kgpo -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
istio-cni-node-7q6z7 1/1 Running 0 117m 172.18.0.2 ambient-worker2 <none> <none>
istio-cni-node-fg6n4 1/1 Running 0 117m 172.18.0.4 ambient-worker <none> <none>
istio-cni-node-gwvl8 1/1 Running 0 117m 172.18.0.3 ambient-control-plane <none> <none>
istio-ingressgateway-6f48dfb7db-862sm 1/1 Running 0 117m 192.168.184.70 ambient-worker <none> <none>
istiod-6875bc5c58-n4j7d 1/1 Running 0 118m 192.168.246.2 ambient-worker2 <none> <none>
ztunnel-62hp8 1/1 Running 0 117m 192.168.246.3 ambient-worker2 <none> <none>
ztunnel-fv5f8 1/1 Running 0 117m 192.168.184.71 ambient-worker <none> <none>
ztunnel-gs52c 1/1 Running 0 117m 192.168.208.1 ambient-control-plane <none> <none>

(⎈|kind-ambient:istio-system)➜ ~ kg ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
istio-cni-node 3 3 3 3 3 kubernetes.io/os=linux 119m
ztunnel 3 3 3 3 3 kubernetes.io/os=linux 118m

Indeed, it has 3 pods!

  • Similarly, istio-cni is another daemonset that gets installed.
  • The Istio control plane, a.k.a. istiod, is also up and running.
  • We have also installed a default Istio ingress gateway for public traffic.

Workload Setup

All the manifest files for this project can be found here: https://github.com/JanaSabuj/istio-ambient-mesh-exploration

Agenda

  1. We will set up our app without ambient mode, using the normal Istio CRDs. Then we will call the app via i) the ingress ii) a debug client pod
  2. After that, we will switch the Istio dataplane mode to ambient and observe the difference in the traffic path, i.e. traffic flowing through the ztunnel pods.

Namespace

Let us create a namespace called ambient-demo. We will host our app in this namespace.

(⎈|kind-ambient:istio-system)➜  ~ k create ns ambient-demo

(⎈|kind-ambient:istio-system)➜ ~ kg ns
NAME STATUS AGE
default Active 132m
istio-system Active 125m
kube-node-lease Active 132m
kube-public Active 132m
kube-system Active 132m
local-path-storage Active 132m

Application

Then we deploy our application using a Deployment, a Service, and a ServiceAccount YAML.
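The exact manifests are in the repo linked above; a minimal sketch of what they boil down to is below. The app label and port name are my own choices, the image is the same kennethreitz/httpbin referenced on the app's landing page, and the service account name httpbin-sa matches the SPIFFE identity we will see later in the ztunnel logs.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin-sa
  namespace: ambient-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: ambient-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      serviceAccountName: httpbin-sa
      containers:
      - name: httpbin
        image: kennethreitz/httpbin   # listens on port 80 (gunicorn)
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: ambient-demo
spec:
  selector:
    app: httpbin
  ports:
  - name: http
    port: 80
    targetPort: 80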

Here is what it looks like in the cluster.

(⎈|kind-ambient:ambient-demo)➜  ~ kg all
NAME READY STATUS RESTARTS AGE
pod/httpbin-6f4dc97cb-5dpz9 1/1 Running 0 3m21s
pod/httpbin-6f4dc97cb-swdlb 1/1 Running 0 3m31s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/httpbin ClusterIP 10.96.157.221 <none> 80/TCP 143m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/httpbin 2/2 2 2 143m

NAME DESIRED CURRENT READY AGE
replicaset.apps/httpbin-6f4dc97cb 2 2 2 3m31s

Istio Configurations

To expose it to the outside world, we create an Istio Gateway and a VirtualService so that the app can be accessed via the Istio ingress.

  • We add a listener on the Istio ingress pod via a Gateway on port 8081
  • A VirtualService then routes requests to the httpbin service running in our ambient-demo namespace (a sketch of both follows below)
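A sketch of these two resources, assuming the stock istio: ingressgateway selector, a catch-all host, and resource names of my own choosing:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: ambient-demo
spec:
  selector:
    istio: ingressgateway          # targets the default istio-ingressgateway pods
  servers:
  - port:
      number: 8081                 # the listener we port-forward to below
      name: http-8081
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin
  namespace: ambient-demo
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin.ambient-demo.svc.cluster.local
        port:
          number: 80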

Verification via Istio Ingress

Now we can verify that our application is accessible.

(⎈|kind-ambient:istio-system)➜  ~ kgs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.96.77.172 <pending> 15021:30807/TCP,80:30587/TCP,443:31004/TCP 147m

Since Kind has not given us an external IP for the ingress service, we will simulate external access by port-forwarding the ingress pod on our desired listener port 8081.

(⎈|kind-ambient:istio-system)➜  ~ kpf istio-ingressgateway-6f48dfb7db-862sm 8081
Forwarding from 127.0.0.1:8081 -> 8081
Forwarding from [::1]:8081 -> 8081

On hitting 127.0.0.1:8081 in our browser, we see that our application is live!

http://127.0.0.1:8081/

httpbin.org
0.9.2
[ Base URL: 127.0.0.1:8081/ ]
A simple HTTP Request & Response Service.

Run locally: $ docker run -p 80:80 kennethreitz/httpbin

the developer - Website
Send email to the developer

  • Another way to verify is via curl from the local machine. We see from the server response header that the request is served by istio-envoy.
(⎈|kind-ambient:istio-system)➜  ~ curl 127.0.0.1:8081 -v
* Trying 127.0.0.1:8081...
* Connected to 127.0.0.1 (127.0.0.1) port 8081
> GET / HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Sun, 16 Jun 2024 11:52:13 GMT
< content-type: text/html; charset=utf-8
< content-length: 9593
< access-control-allow-origin: *
< access-control-allow-credentials: true
< x-envoy-upstream-service-time: 271
<
<!DOCTYPE html>
<html lang="en">
...

Verification via Debug Client Pods

We also want to test mesh-internal calls. Hence, we need client pods, preferably one on each node. To achieve this, we can set up a debugger daemonset client workload — https://github.com/digitalocean/doks-debug

The modified manifest I have used: https://gist.github.com/JanaSabuj/a4dd2504752b8c2b30d2d2d05320f7ef

I have deployed this daemonset in our ambient-demo namespace itself.
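The gist above has the exact manifest; in spirit it is just a DaemonSet of long-sleeping debug pods. In the sketch below, the image and sleep command follow the doks-debug project, while the blanket toleration (so a pod can also land on the control-plane node) and the omission of hostNetwork (so the pods get regular pod IPs) are my assumptions about the modifications.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: doks-debug
  namespace: ambient-demo
spec:
  selector:
    matchLabels:
      name: doks-debug
  template:
    metadata:
      labels:
        name: doks-debug
    spec:
      tolerations:
      - operator: Exists           # allow scheduling on the tainted control-plane node
      containers:
      - name: debug
        image: digitalocean/doks-debug:latest
        command: ["sleep", "infinity"]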

(⎈|kind-ambient:ambient-demo)➜  ~ kg ds
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
doks-debug 3 3 3 3 3 <none> 89m

(⎈|kind-ambient:ambient-demo)➜ ~ kgpo -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
doks-debug-j9mm5 1/1 Running 0 90m 192.168.246.5 ambient-worker2 <none> <none>
doks-debug-rdhgq 1/1 Running 0 90m 192.168.184.73 ambient-worker <none> <none>
doks-debug-v7cld 1/1 Running 0 90m 192.168.208.3 ambient-control-plane <none>

To test it out, we exec into a debug pod and curl the k8s FQDN httpbin.ambient-demo.svc.cluster.local.

It returns 200 OK.

(⎈|kind-ambient:ambient-demo)➜  ~ k exec -it doks-debug-j9mm5 -- /bin/bash

root@doks-debug-j9mm5:~# curl httpbin.ambient-demo.svc.cluster.local -v
* Trying 10.96.157.221:80...
* Connected to httpbin.ambient-demo.svc.cluster.local (10.96.157.221) port 80 (#0)
> GET / HTTP/1.1
> Host: httpbin.ambient-demo.svc.cluster.local
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn/19.9.0
< Date: Sun, 16 Jun 2024 12:06:11 GMT
< Connection: keep-alive
< Content-Type: text/html; charset=utf-8
< Content-Length: 9593
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Credentials: true
<

Ambient Injection

Till now, ambient mode is not enabled. Requests land on the Istio ingress pods, which have listeners added by the Istio Gateway, and are then routed to the app pods via the VirtualService.

For mesh-internal calls, it is direct pod-to-pod communication between the client and server pods.

While injecting ambient, we will also be tailing the logs of

  • istio-cni
  • ztunnel

to verify that Istio is internally configuring routes, iptables rules, etc. to enable the ambient dataplane mode for the namespace ambient-demo.

(⎈|kind-ambient:ambient-demo)➜  ~ kubectl label namespace ambient-demo istio.io/dataplane-mode=ambient
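To double-check that the label landed:

kubectl get namespace ambient-demo --show-labels
# the LABELS column should now include istio.io/dataplane-mode=ambient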

Logs observed

  • istio-cni
(⎈|kind-ambient:ambient-demo)➜ ~ stern istio-cni -n istio-system

istio-cni-node-7q6z7 install-cni 2024-06-16T10:13:59.243451Z info ambient Namespace ambient-demo is enabled in ambient mesh
istio-cni-node-7q6z7 install-cni 2024-06-16T10:13:59.264500Z info ambient in pod mode - adding pod ambient-demo/httpbin-5bd875fbdd-84vs8 to ztunnel
istio-cni-node-gwvl8 install-cni 2024-06-16T10:13:59.291505Z info ambient Namespace ambient-demo is enabled in ambient mesh
istio-cni-node-fg6n4 install-cni 2024-06-16T10:13:59.281587Z info ambient Namespace ambient-demo is enabled in ambient mesh
istio-cni-node-fg6n4 install-cni 2024-06-16T10:13:59.313668Z info ambient in pod mode - adding pod ambient-demo/httpbin-5bd875fbdd-dp4ct to ztunnel

istio-cni-node-7q6z7 install-cni 2024-06-16T10:13:59.264500Z info ambient in pod mode - adding pod ambient-demo/httpbin-5bd875fbdd-84vs8 to ztunnel
istio-cni-node-7q6z7 install-cni 2024-06-16T10:13:59.325907Z info iptables Running iptables-restore with the following input:
istio-cni-node-7q6z7 install-cni * nat
istio-cni-node-7q6z7 install-cni -N ISTIO_OUTPUT
istio-cni-node-7q6z7 install-cni -A OUTPUT -j ISTIO_OUTPUT
istio-cni-node-7q6z7 install-cni -A ISTIO_OUTPUT -d 169.254.7.127 -p tcp -m tcp -j ACCEPT
istio-cni-node-7q6z7 install-cni -A ISTIO_OUTPUT -p tcp -m mark --mark 0x111/0xfff -j ACCEPT
istio-cni-node-7q6z7 install-cni -A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ACCEPT
istio-cni-node-7q6z7 install-cni -A ISTIO_OUTPUT ! -d 127.0.0.1/32 -p tcp -m mark ! --mark 0x539/0xfff -j REDIRECT --to-ports 15001
istio-cni-node-7q6z7 install-cni COMMIT
istio-cni-node-7q6z7 install-cni * mangle
istio-cni-node-7q6z7 install-cni -N ISTIO_PRERT
istio-cni-node-7q6z7 install-cni -N ISTIO_OUTPUT
istio-cni-node-7q6z7 install-cni -A PREROUTING -j ISTIO_PRERT
istio-cni-node-7q6z7 install-cni -A OUTPUT -j ISTIO_OUTPUT
istio-cni-node-7q6z7 install-cni -A ISTIO_PRERT -m mark --mark 0x539/0xfff -j CONNMARK --set-xmark 0x111/0xfff
istio-cni-node-7q6z7 install-cni -A ISTIO_PRERT -s 169.254.7.127 -p tcp -m tcp -j ACCEPT
istio-cni-node-7q6z7 install-cni -A ISTIO_PRERT ! -d 127.0.0.1/32 -p tcp -i lo -j ACCEPT
istio-cni-node-7q6z7 install-cni -A ISTIO_PRERT -p tcp -m tcp --dport 15008 -m mark ! --mark 0x539/0xfff -j TPROXY --on-port 15008 --tproxy-mark 0x111/0xfff
istio-cni-node-7q6z7 install-cni -A ISTIO_PRERT -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
istio-cni-node-7q6z7 install-cni -A ISTIO_PRERT ! -d 127.0.0.1/32 -p tcp -m mark ! --mark 0x539/0xfff -j TPROXY --on-port 15006 --tproxy-mark 0x111/0xfff
istio-cni-node-7q6z7 install-cni -A ISTIO_OUTPUT -m connmark --mark 0x111/0xfff -j CONNMARK --restore-mark --nfmask 0xffffffff --ctmask 0xffffffff
istio-cni-node-7q6z7 install-cni COMMIT
istio-cni-node-7q6z7 install-cni 2024-06-16T10:13:59.335115Z info Running command (with wait lock): iptables-restore --noflush -v --wait=30
istio-cni-node-7q6z7 install-cni 2024-06-16T10:13:59.550687Z info ambient About to send added pod: 3ab72f78-8e2b-4e49-bc47-45fa4f90dbf7 to ztunnel: add:{uid:"3ab72f78-8e2b-4e49-bc47-45fa4f90dbf7" workload_info:{name:"httpbin-5bd875fbdd-84vs8" namespace:"ambient-demo" service_account:"default" trust_domain:"cluster.local"}}

It generated a lot of logs. We see that it adds each of the httpbin pods in the ambient-demo namespace to the ambient path by creating iptables rules (and related ipsets) for the pods.

  • ztunnel
(⎈|kind-ambient:ambient-demo)➜ ~ stern ztunnel -n istio-system

ztunnel-62hp8 istio-proxy 2024-06-16T10:13:59.559505Z info inpod::statemanager pod WorkloadUid("3ab72f78-8e2b-4e49-bc47-45fa4f90dbf7") received netns, starting proxy
ztunnel-fv5f8 istio-proxy 2024-06-16T10:13:59.557545Z info inpod::statemanager pod WorkloadUid("7f8be6a6-64f2-40e9-8926-6c3a618eb7d9") received netns, starting proxy
ztunnel-fv5f8 istio-proxy 2024-06-16T10:13:59.560458Z info proxy::inbound listener established address=[::]:15008 component="inbound" transparent=true
ztunnel-fv5f8 istio-proxy 2024-06-16T10:13:59.561604Z info proxy::inbound_passthrough listener established address=[::]:15006 component="inbound plaintext" transparent=true
ztunnel-fv5f8 istio-proxy 2024-06-16T10:13:59.561647Z info proxy::outbound listener established address=[::]:15001 component="outbound" transparent=true
ztunnel-62hp8 istio-proxy 2024-06-16T10:13:59.573883Z info proxy::inbound listener established address=[::]:15008 component="inbound" transparent=true
ztunnel-62hp8 istio-proxy 2024-06-16T10:13:59.586596Z info proxy::inbound_passthrough listener established address=[::]:15006 component="inbound plaintext" transparent=true
ztunnel-62hp8 istio-proxy 2024-06-16T10:13:59.586819Z info proxy::outbound listener established address=[::]:15001 component="outbound" transparent=true
ztunnel-fv5f8 istio-proxy 2024-06-16T10:13:59.811460Z info xds::client:xds{id=14} received response type_url="type.googleapis.com/istio.workload.Address" size=2 removes=0
ztunnel-62hp8 istio-proxy 2024-06-16T10:13:59.816352Z info xds::client:xds{id=14} received response type_url="type.googleapis.com/istio.workload.Address" size=2 removes=0
ztunnel-gs52c istio-proxy 2024-06-16T10:13:59.821310Z info xds::client:xds{id=14} received response type_url="type.googleapis.com/istio.workload.Address" size=2 removes=0

We can see that ztunnel sets up its inbound (15008 for HBONE, 15006 for plaintext) and outbound (15001) listeners.

Traffic Flow — via Istio Ingress

As before, I port-forward the istio-ingress pod and access it via localhost.

We make two calls to the ingress such that they land on each of the two httpbin pods, and we capture the corresponding ztunnel logs.

(⎈|kind-ambient:istio-system)➜  ~ kpf istio-ingressgateway-6f48dfb7db-862sm 8081
Forwarding from 127.0.0.1:8081 -> 8081

$ curl localhost:8081/
$ curl localhost:8081/
(⎈|kind-ambient:istio-system)➜  ~ stern ztunnel -n istio-system

ztunnel-fv5f8 istio-proxy 2024-06-16T12:26:24.154500Z
info access connection complete src.addr=192.168.184.70:52792
src.workload=istio-ingressgateway-6f48dfb7db-862sm src.namespace=istio-system
src.identity="spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
dst.addr=192.168.184.74:80 dst.hbone_addr=192.168.184.74:80
dst.service=httpbin.ambient-demo.svc.cluster.local
dst.workload=httpbin-6f4dc97cb-swdlb dst.namespace=ambient-demo
dst.identity="spiffe://cluster.local/ns/ambient-demo/sa/httpbin-sa"
direction="inbound" bytes_sent=51083 bytes_recv=4180 duration="2166ms"

ztunnel-62hp8 istio-proxy 2024-06-16T12:28:40.267690Z
info access connection complete src.addr=192.168.184.70:55036
src.workload=istio-ingressgateway-6f48dfb7db-862sm src.namespace=istio-system
src.identity="spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
dst.addr=192.168.246.6:80 dst.hbone_addr=192.168.246.6:80
dst.service=httpbin.ambient-demo.svc.cluster.local
dst.workload=httpbin-6f4dc97cb-5dpz9 dst.namespace=ambient-demo
dst.identity="spiffe://cluster.local/ns/ambient-demo/sa/httpbin-sa"
direction="inbound" bytes_sent=41251 bytes_recv=2007 duration="2331ms"

Since the Istio ingress is not part of the ambient dataplane path, calls from the ingress pod land directly on the ztunnel of the destination pod's node as inbound traffic, and are then forwarded to the corresponding httpbin pod on that node.

From the log capture:

direction="inbound"

  • This confirms that traffic to pods in the ambient-labelled namespace always flows via ztunnel.

dst.hbone_addr=192.168.184.74:80
dst.hbone_addr=192.168.246.6:80

httpbin-6f4dc97cb-5dpz9 1/1 Running 0 56m 192.168.246.6 ambient-worker2 <none> <none>
httpbin-6f4dc97cb-swdlb 1/1 Running 0 56m 192.168.184.74 ambient-worker <none> <none>

  • These capture the destination pod IPs, which correspond to the actual httpbin pod IPs.

Traffic Flow — via Mesh Internal

Let us exec into a client debug pod and make mesh-internal calls to our httpbin service, capturing ztunnel logs in parallel to verify the behaviour.

(⎈|kind-ambient:ambient-demo)➜  ~ kgpo -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
doks-debug-j9mm5 1/1 Running 0 122m 192.168.246.5 ambient-worker2 <none> <none>
doks-debug-rdhgq 1/1 Running 0 122m 192.168.184.73 ambient-worker <none> <none>
doks-debug-v7cld 1/1 Running 0 122m 192.168.208.3 ambient-control-plane <none> <none>
httpbin-6f4dc97cb-5dpz9 1/1 Running 0 56m 192.168.246.6 ambient-worker2 <none> <none>
httpbin-6f4dc97cb-swdlb 1/1 Running 0 56m 192.168.184.74 ambient-worker <none> <none>

Just to correlate the client, ztunnel, and app pods on the same node:

(⎈|kind-ambient:ambient-demo)➜  ~ kgpo -A -owide | grep "ambient-worker "
ambient-demo doks-debug-rdhgq 1/1 Running 0 133m 192.168.184.73 ambient-worker <none> <none>
istio-system ztunnel-fv5f8 1/1 Running 0 3h24m 192.168.184.71 ambient-worker <none> <none>
ambient-demo httpbin-6f4dc97cb-swdlb 1/1 Running 0 68m 192.168.184.74 ambient-worker <none> <none>


(⎈|kind-ambient:ambient-demo)➜ ~ kgpo -A -owide | grep "ambient-worker2"
kube-system doks-debug-9nlh7 1/1 Running 0 3h6m 192.168.246.4 ambient-worker2 <none> <none>
istio-system ztunnel-62hp8 1/1 Running 0 3h23m 192.168.246.3 ambient-worker2 <none> <none>
ambient-demo httpbin-6f4dc97cb-5dpz9 1/1 Running 0 67m 192.168.246.6 ambient-worker2 <none> <none>

Let us exec into the debug pod doks-debug-rdhgq scheduled on the ambient-worker node.
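Roughly, the two captures below were driven like this: tail the ztunnel logs in one terminal, and from the debug pod hit the service FQDN a few times so the calls get load-balanced across both httpbin pods, giving us one same-node hop and one cross-node hop (the curl flags are just my way of keeping the output terse):

# terminal 1: tail all ztunnel pods
stern ztunnel -n istio-system

# terminal 2: call the service repeatedly from the debug pod on ambient-worker
k exec -it doks-debug-rdhgq -- /bin/bash
curl -s -o /dev/null -w "%{http_code}\n" httpbin.ambient-demo.svc.cluster.local
curl -s -o /dev/null -w "%{http_code}\n" httpbin.ambient-demo.svc.cluster.local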

mesh internal experiment
Capture 1: client and server in same node
---------
ztunnel-fv5f8 istio-proxy 2024-06-16T12:40:48.707707Z info access connection complete src.addr=192.168.184.73:56463 src.workload=doks-debug-rdhgq src.namespace=ambient-demo src.identity="spiffe://cluster.local/ns/ambient-demo/sa/default" dst.addr=192.168.184.74:80 dst.hbone_addr=192.168.184.74:80 dst.service=httpbin.ambient-demo.svc.cluster.local dst.workload=httpbin-6f4dc97cb-swdlb dst.namespace=ambient-demo dst.identity="spiffe://cluster.local/ns/ambient-demo/sa/httpbin-sa" direction="inbound" bytes_sent=9832 bytes_recv=84 duration="178ms"
ztunnel-fv5f8 istio-proxy 2024-06-16T12:40:48.708070Z info access connection complete src.addr=192.168.184.73:34190 src.workload=doks-debug-rdhgq src.namespace=ambient-demo src.identity="spiffe://cluster.local/ns/ambient-demo/sa/default" dst.addr=192.168.184.74:15008 dst.hbone_addr=192.168.184.74:80 dst.service=httpbin.ambient-demo.svc.cluster.local dst.workload=httpbin-6f4dc97cb-swdlb dst.namespace=ambient-demo dst.identity="spiffe://cluster.local/ns/ambient-demo/sa/httpbin-sa" direction="outbound" bytes_sent=84 bytes_recv=9832 duration="203ms"

For a same-node client and server, we can see that the outbound packet reaches ztunnel from the source pod and is then delivered as an inbound packet to the destination pod via the same ztunnel.

So the same ztunnel receives both an outbound and an inbound connection; the "bound" direction refers to traffic leaving or entering the application pods on that node.

Capture 2: client and server in different node
---------
ztunnel-62hp8 istio-proxy 2024-06-16T12:48:51.792527Z info access connection complete src.addr=192.168.184.73:53265 src.workload=doks-debug-rdhgq src.namespace=ambient-demo src.identity="spiffe://cluster.local/ns/ambient-demo/sa/default" dst.addr=192.168.246.6:80 dst.hbone_addr=192.168.246.6:80 dst.service=httpbin.ambient-demo.svc.cluster.local dst.workload=httpbin-6f4dc97cb-5dpz9 dst.namespace=ambient-demo dst.identity="spiffe://cluster.local/ns/ambient-demo/sa/httpbin-sa" direction="inbound" bytes_sent=9832 bytes_recv=84 duration="60ms"
ztunnel-fv5f8 istio-proxy 2024-06-16T12:48:51.793112Z info access connection complete src.addr=192.168.184.73:60162 src.workload=doks-debug-rdhgq src.namespace=ambient-demo src.identity="spiffe://cluster.local/ns/ambient-demo/sa/default" dst.addr=192.168.246.6:15008 dst.hbone_addr=192.168.246.6:80 dst.service=httpbin.ambient-demo.svc.cluster.local dst.workload=httpbin-6f4dc97cb-5dpz9 dst.namespace=ambient-demo dst.identity="spiffe://cluster.local/ns/ambient-demo/sa/httpbin-sa" direction="outbound" bytes_sent=84 bytes_recv=9832 duration="61ms"

If the client and server are on different nodes, source traffic first hits the source node's ztunnel as an outbound connection and is then sent (over HBONE on port 15008) to the destination pod's node's ztunnel as an inbound connection.

Conclusion

This experiment enabled us to visualize the packet flows via ztunnel in Istio Ambient Mesh.

In the next part, we will explore the L4 authorization policies that can be enforced via ztunnel.

We will also start exploring the waypoint proxy: the L7 proxy in ambient mode.

Read my other tech blogs here: https://janasabuj.github.io/posts/


Sabuj Jana

Building software @Flipkart. ex-Amazon, Wells Fargo | Follow me for linux, k8s, go, elb, istio, cilium and other intriguing tech | https://janasabuj.github.io