A Deep Dive into Azure Kubernetes Service and Istio Ambient for sidecar-less microservices integration (part 2)

Lior Yantovski
AT&T Israel Tech Blog
14 min read · May 18, 2023

For those who want to read the first part of this series of articles:

Now we can start exposing our microservices (first with TLS passthrough to the pod, and then with TLS termination on the Istio Ingress Gateway):

I will deploy the Kubernetes UI dashboard based on the instructions provided in the portal.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

We will use Istio CRDs to expose our Kubernetes dashboard.

The Gateway resource:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-k8s-dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "web-k8s-dashboard.dev.company.com"

Note that the TLS mode is PASSTHROUGH, since the Kubernetes dashboard performs TLS termination on the pod itself. This creates a listener on port 443 for the hostname specified in the resource.

The second resource is a VirtualService, which routes the TLS traffic internally to the corresponding service based on the received SNI hostname.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-k8s-dashboard-vs
  namespace: kubernetes-dashboard
spec:
  hosts:
  - "web-k8s-dashboard.dev.company.com"
  gateways:
  - web-k8s-dashboard-gateway
  tls:
  - route:
    - destination:
        host: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
        port:
          number: 443
    match:
    - port: 443
      sniHosts:
      - web-k8s-dashboard.dev.company.com

We will label the namespace so it becomes part of the Ambient mesh:

$ kubectl label namespace istio-system istio.io/dataplane-mode=ambient

We can verify that the listener is open on our Istio Ingress Gateway:

$ istioctl proxy-config listener istio-ingressgateway-748bdc4999-gdddr -n istio-system
ADDRESS PORT MATCH DESTINATION
0 ALL Cluster: connect_originate
0.0.0.0 8443 SNI: web-k8s-dashboard.dev.company.com Cluster: outbound|443||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*

We can now browse to https://web-k8s-dashboard.dev.company.com and get our well-known Kubernetes dashboard.
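If DNS for the hostname is not yet in place, a quick way to test the PASSTHROUGH routing is with curl and a forced resolution to the ingress gateway's external IP (a sketch; the placeholder IP is whatever "kubectl get svc istio-ingressgateway -n istio-system" reports as the LoadBalancer address):

# <INGRESS_EXTERNAL_IP> is the external IP of the istio-ingressgateway service
$ curl -vk --resolve web-k8s-dashboard.dev.company.com:443:<INGRESS_EXTERNAL_IP> \
    https://web-k8s-dashboard.dev.company.com/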

In a similar way we can expose a web service (nginx, for example). This time TLS termination will be performed on the Istio Ingress Gateway itself.

Create new workload namespace:

$ kubectl create ns istio-test

Create new deployment for nginx pods:

$ kubectl create deployment nginx-deployment --image=nginx -n istio-test --replicas=2

Create the manifest for the service resource and apply it:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: istio-test
  labels:
    app: nginx-deployment
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment

Create a TLS secret that will be used for the HTTPS connection:

$ kubectl create secret tls nginx-ingress-tls --key nginx-svc.key --cert nginx-svc.cert -n istio-system
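The nginx-svc.key and nginx-svc.cert files are assumed to already exist. If you just need something to test with, a self-signed pair for the hostname can be generated, for example:

# Self-signed certificate for testing only; file names match the secret command above
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=my-nginx.dev.company.com" \
    -keyout nginx-svc.key -out nginx-svc.cert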

We will label this namespace so it is also part of the Ambient mesh:

$ kubectl label namespace istio-test istio.io/dataplane-mode=ambient

Create the manifest for Gateway resource and apply it:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-ingress-gateway
  namespace: istio-test
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: nginx-ingress-tls
    hosts:
    - "my-nginx.dev.company.com"

Notice that this time the TLS mode is defined as SIMPLE, and we provide the name of the TLS secret that holds the key and certificate for our host domain.

You can read more about TLS modes and Gateway settings in the Istio Gateway documentation.

Create the manifest for the VirtualService resource and apply it:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-ingress-vs
  namespace: istio-test
spec:
  hosts:
  - "my-nginx.dev.company.com"
  gateways:
  - nginx-ingress-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: nginx-service.istio-test.svc.cluster.local
        port:
          number: 80

We can see that the listener is open on our Istio Ingress Gateway component for this new service:

$ istioctl proxy-config listener istio-ingressgateway-748bdc4999-gdddr -n istio-system
ADDRESS PORT MATCH DESTINATION
0 ALL Cluster: connect_originate
0.0.0.0 8080 ALL Route: http.8080
0.0.0.0 8443 SNI: my-nginx.dev.company.com Route: https.443.https.nginx-ingress-gateway.istio-test
0.0.0.0 8443 SNI: web-k8s-dashboard.dev.company.com Cluster: outbound|443||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*

We can now browse to https://my-nginx.dev.company.com and get the nginx welcome page.

We saw how easily we can expose our web services to external clients using different TLS modes.

As we didn't create an L7 "waypoint" proxy, all traffic runs through the "ztunnel" components at layer 4 with HBONE encapsulation.

Observability in Istio Ambient

There are several tools that provide more detailed information, in a more visual way, than just scrolling through the logs of the ztunnel and istio-ingressgateway pods.

However, I suggest that anyone who wants to understand how the traffic flows scroll the logs immediately after performing a test web request.
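For example, assuming the default labels applied by the Istio installation, the relevant logs can be tailed with:

$ kubectl logs -n istio-system -l app=ztunnel --tail=50
$ kubectl logs -n istio-system -l app=istio-ingressgateway --tail=50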

$ wget https://raw.githubusercontent.com/istio/istio/master/samples/addons/kiali.yaml
$ kubectl apply -f kiali.yaml
$ wget https://raw.githubusercontent.com/istio/istio/master/samples/addons/prometheus.yaml
$ kubectl apply -f prometheus.yaml
$ wget https://raw.githubusercontent.com/istio/istio/master/samples/addons/grafana.yaml
$ kubectl apply -f grafana.yaml
$ wget https://raw.githubusercontent.com/istio/istio/master/samples/addons/jaeger.yaml
$ kubectl apply -f jaeger.yaml

Now we can expose Kiali via the Istio Ingress controller:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: nginx-ingress-tls
    hosts:
    - "my-kiali.dev.company.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-vs
  namespace: istio-system
spec:
  hosts:
  - "my-kiali.dev.company.com"
  gateways:
  - kiali-gateway
  http:
  - route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
    match:
    - uri:
        prefix: /kiali

Please create both manifests and apply them with "kubectl apply -f".

Verify that the listener is up and then browse to the Kiali portal.

$ istioctl proxy-config listener istio-ingressgateway-748bdc4999-gdddr -n istio-system
ADDRESS PORT MATCH DESTINATION
0 ALL Cluster: connect_originate
0.0.0.0 8080 ALL Route: http.8080
0.0.0.0 8443 SNI: my-nginx.dev.company.com Route: https.443.https.nginx-ingress-gateway.istio-test
0.0.0.0 8443 SNI: my-kiali.dev.company.com Route: https.443.https.kiali-gateway.istio-system
0.0.0.0 8443 SNI: web-k8s-dashboard.dev.company.com Cluster: outbound|443||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*

Run the following commands against your Istio Ingress Gateway pod and review the information they provide.

$ istioctl proxy-config routes istio-ingressgateway-748bdc4999-gdddr -n istio-system

$ istioctl proxy-config cluster istio-ingressgateway-748bdc4999-gdddr -n istio-system

We can now browse to the Kiali web page, and I suggest you create similar exposure settings for the Grafana web service (see the sketch below).
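A possible sketch for Grafana, assuming the hostname my-grafana.dev.company.com and reusing the same TLS secret (the certificate must also cover this hostname, otherwise create a dedicated secret); the Grafana addon service listens on port 3000:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: nginx-ingress-tls
    hosts:
    - "my-grafana.dev.company.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: istio-system
spec:
  hosts:
  - "my-grafana.dev.company.com"
  gateways:
  - grafana-gateway
  http:
  - route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000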

Now we can observe our nginx service metrics via the Kiali dashboard:

Let's now try creating new applications and setting up L7 "waypoint" proxies.

We will use the files located in my GitHub repo under "purchase-history".

Let's apply all the files located in purchase-history/install/. We should get the following resources up and running.

$ kubectl get pods,svc -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/notsleep-7dc8b55755-xx5rb 1/1 Running 0 3m15s 10.224.0.37 aks-systempool-14515445-vmss000000 <none> <none>
pod/purchase-history-v1-6cdb997954-8psww 1/1 Running 0 3m15s 10.224.0.118 aks-systempool-14515445-vmss000003 <none> <none>
pod/purchase-history-v2-db864f6f-5sgjq 1/1 Running 0 3m15s 10.224.0.112 aks-systempool-14515445-vmss000003 <none> <none>
pod/purchase-history-v3-bb4c6b7fd-qlwjq 1/1 Running 0 3m15s 10.224.0.134 aks-systempool-14515445-vmss000003 <none> <none>
pod/recommendation-79f7844ff-qxq4t 1/1 Running 0 3m15s 10.224.0.18 aks-systempool-14515445-vmss000000 <none> <none>
pod/sleep-5f5d4b5bfb-kphhr 1/1 Running 0 3m15s 10.224.0.23 aks-systempool-14515445-vmss000000 <none> <none>
pod/web-api-6c95c8d759-q9srd 1/1 Running 0 3m15s 10.224.0.31 aks-systempool-14515445-vmss000000 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/notsleep ClusterIP 10.0.216.100 <none> 80/TCP 3m15s app=notsleep
service/purchase-history ClusterIP 10.0.144.38 <none> 8080/TCP 3m15s app=purchase-history
service/recommendation ClusterIP 10.0.135.91 <none> 8080/TCP 3m15s app=recommendation
service/sleep ClusterIP 10.0.123.17 <none> 80/TCP 3m15s app=sleep
service/web-api ClusterIP 10.0.93.30 <none> 8080/TCP 3m15s app=web-api

We have now deployed the "web-api", "recommendation", and "purchase-history" services, plus two curl clients named "sleep" and "notsleep".

The “web-api” service calls the “recommendation” service via HTTP, and the “recommendation” service calls the “purchase-history” service, also via HTTP.

Test the sample application using the following command:

$ kubectl -n test exec deploy/sleep -- curl http://web-api:8080/

You should get something like this, and see that the response contains information about each microservice the request passes through:

{
  "name": "web-api",
  "uri": "/",
  "type": "HTTP",
  "ip_addresses": [
    "10.224.0.31"
  ],
  "start_time": "2023-05-14T11:00:35.312861",
  "end_time": "2023-05-14T11:00:35.328292",
  "duration": "15.430256ms",
  "body": "Hello From Web API",
  "upstream_calls": [
    {
      "name": "recommendation",
      "uri": "http://recommendation:8080",
      "type": "HTTP",
      "ip_addresses": [
        "10.224.0.18"
      ],
      "start_time": "2023-05-14T11:00:35.315366",
      "end_time": "2023-05-14T11:00:35.327673",
      "duration": "12.307003ms",
      "body": "Hello From Recommendations!",
      "upstream_calls": [
        {
          "name": "purchase-history-v1",
          "uri": "http://purchase-history:8080",
          "type": "HTTP",
          "ip_addresses": [
            "10.224.0.118"
          ],
          "start_time": "2023-05-14T11:00:35.323553",
          "end_time": "2023-05-14T11:00:35.323666",
          "duration": "113.108µs",
          "body": "Hello From Purchase History (v1)!",
          "code": 200
        }
      ],
      "code": 200
    }
  ],
  "code": 200
}

With Istio Gateway and VirtualService we can route inbound traffic based on hostname and reach the relevant K8S service.

$ cat ./purchase-history/web-api-gw.yaml 
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-api-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my-webapp.dev.att.com"

$ cat ./purchase-history/web-api-gw-vs.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api-gw-vs
spec:
  hosts:
  - "my-webapp.dev.att.com"
  gateways:
  - web-api-gateway
  http:
  - route:
    - destination:
        host: web-api.test.svc.cluster.local
        port:
          number: 8080
# Run kubectl apply -f for both files above:
$ kubectl -n test apply -f ./purchase-history/web-api-gw-vs.yaml
virtualservice.networking.istio.io/web-api-gw-vs created
$ kubectl -n test apply -f ./purchase-history/web-api-gw.yaml
gateway.networking.istio.io/web-api-gateway created

Now we can add our new namespace to the Istio Ambient service mesh by adding the label istio.io/dataplane-mode=ambient to our "test" namespace, so all its pods are managed by Ambient.

$ kubectl label namespace test istio.io/dataplane-mode=ambient
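We can quickly confirm the label is in place:

# The DATAPLANE-MODE column should show "ambient"
$ kubectl get namespace test -L istio.io/dataplane-mode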

Next, check the Istio Ingress Gateway routes:

$ istioctl proxy-config routes istio-ingressgateway-748bdc4999-gdddr -n istio-system
NAME DOMAINS MATCH VIRTUAL SERVICE
http.8080 my-webapp.dev.att.com /* web-api-gw-vs.test
* /stats/prometheus*
* /healthz/ready*

We can test our "web-api" without a problem, as there is no authorization policy limiting us, and we can verify that our traffic is now redirected through the L4 "istio-cni" and "ztunnel" components by checking their logs.

$ kubectl -n test exec deploy/sleep -- curl http://web-api:8080/

From the ztunnel-g58zp pod logs:

2023-05-16T11:20:00.821029Z  INFO outbound{id=c4b3ce7271b97e9c52dfe5f45f7a6cc4}: ztunnel::proxy::outbound: proxying to 10.224.0.31:8081 using node local fast path
2023-05-16T11:20:00.821238Z INFO outbound{id=c4b3ce7271b97e9c52dfe5f45f7a6cc4}: ztunnel::proxy::outbound: complete dur=285.805µs

Now we can add a deny-all authorization policy that blocks all traffic in the namespace, including traffic to "web-api":

$ cat auth-policy-deny-all.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: test
spec: {}

$ kubectl apply -f auth-policy-deny-all.yaml
authorizationpolicy.security.istio.io/allow-nothing created

Now sending the same curl request to "web-api" will fail:

$ kubectl -n test exec deploy/sleep -- curl http://web-api:8080/
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56

From the ztunnel-g58zp pod logs:

2023-05-16T19:50:09.120681Z  INFO outbound{id=c18bba7b6e9f81f15be660959bbb79d3}: ztunnel::proxy::outbound: proxying to 10.224.0.31:8081 using node local fast path
2023-05-16T19:50:09.120719Z INFO outbound{id=c18bba7b6e9f81f15be660959bbb79d3}: ztunnel::proxy::outbound: RBAC rejected conn=10.224.0.23(spiffe://cluster.local/ns/test/sa/sleep)->10.224.0.31:8081
2023-05-16T19:50:09.120774Z WARN outbound{id=c18bba7b6e9f81f15be660959bbb79d3}: ztunnel::proxy::outbound: failed dur=191.503µs err=http status: 401 Unauthorized

Here 10.224.0.23 is the "sleep" pod and 10.224.0.31 is the "web-api" pod, and the request failed with HTTP status 401 Unauthorized, as expected.

Below is a good example of the "least privilege" principle: we add new RBAC authorization policies in a granular way, permitting one service to reach another ("web-api" to reach "recommendation", and "recommendation" to reach "purchase-history").

$ cat auth-policy-purchase-history.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "purchase-history-rbac"
  namespace: test
spec:
  selector:
    matchLabels:
      istio.io/gateway-name: purchase-history
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/test/sa/recommendation"]

$ kubectl apply -f auth-policy-purchase-history.yaml
authorizationpolicy.security.istio.io/purchase-history-rbac created

$ cat auth-policy-web-api-l4.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "web-api-rbac"
  namespace: test
spec:
  selector:
    matchLabels:
      app: web-api
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/test/sa/sleep","cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]

$ kubectl apply -f auth-policy-web-api-l4.yaml
authorizationpolicy.security.istio.io/web-api-rbac created

We can run our "curl" test command again and see that it still fails, since we haven't yet added an authorization policy for the "recommendation" service.

Let's add it:

$ cat auth-policy-recommendation.yaml 
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "recommendation-rbac"
  namespace: test
spec:
  selector:
    matchLabels:
      app: recommendation
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/test/sa/web-api"]

$ kubectl apply -f auth-policy-recommendation.yaml
authorizationpolicy.security.istio.io/recommendation-rbac created
# Delete deny-all policy we created before
$ kubectl delete AuthorizationPolicy allow-nothing -n test
authorizationpolicy.security.istio.io "allow-nothing" deleted

And now let's try the same request:

$ kubectl -n test exec deploy/sleep -- curl -sI http://web-api:8080/ 
HTTP/1.1 200 OK
Vary: Origin
Date: Tue, 16 May 2023 20:05:06 GMT
Content-Length: 1108
Content-Type: text/plain; charset=utf-8

In Kiali, this is reflected as you can see in the picture below:

Traffic Routing in a Simple Canary Way

Now we can also add an L7 waypoint proxy pod to perform canary traffic routing among the different versions of "purchase-history" (v1, v2, v3).

We can apply the following file:

$ cat purchase-history-gw-waypoint.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: purchase-history
  namespace: test
  annotations:
    istio.io/for-service-account: purchase-history
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: mesh
    port: 15008
    protocol: ALL

Or it can be done by running the following command:

$ istioctl x waypoint apply --service-account purchase-history -n test
waypoint test/purchase-history applied
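Either way, we can verify that the waypoint Gateway resource exists and that its proxy pod was created:

$ kubectl get gateway purchase-history -n test
$ kubectl get pods -n test | grep waypoint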

Now we will add a VirtualService resource that configures the traffic routing targets and the weight for each target.

$ cat purchase-history-vs.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
  namespace: test
spec:
  hosts:
  - purchase-history.test.svc.cluster.local
  http:
  - route:
    - destination:
        host: purchase-history.test.svc.cluster.local
        subset: v1
        port:
          number: 8080
      weight: 80
    - destination:
        host: purchase-history.test.svc.cluster.local
        subset: v2
        port:
          number: 8080
      weight: 19
    - destination:
        host: purchase-history.test.svc.cluster.local
        subset: v3
        port:
          number: 8080
      weight: 1

$ kubectl apply -f purchase-history-vs.yaml
virtualservice.networking.istio.io/purchase-history-vs created

We should also define the subsets that map deployment labels to the VirtualService subsets. This is done by creating a DestinationRule resource.

$ cat purchase-history-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: purchase-history-dr
  namespace: test
spec:
  host: purchase-history.test.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3

$ kubectl apply -f purchase-history-dr.yaml
destinationrule.networking.istio.io/purchase-history-dr created
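The subsets only take effect if the purchase-history pods actually carry the version labels referenced by the DestinationRule; a quick way to check is:

# Each purchase-history pod should show v1, v2 or v3 in the VERSION column
$ kubectl get pods -n test -L version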

We can check that our traffic was split as expected:

$ kubectl -n test exec deploy/sleep -- sh -c 'for i in $(seq 1 100); do curl -s http://web-api:8080/; done | grep -c purchase-history-v2'
18
$ kubectl -n test exec deploy/sleep -- sh -c 'for i in $(seq 1 100); do curl -s http://web-api:8080/; done | grep -c purchase-history-v3'
1
$ kubectl -n test exec deploy/sleep -- sh -c 'for i in $(seq 1 100); do curl -s http://web-api:8080/; done | grep -c purchase-history-v1'
81

As we can see, the results closely match the weights we applied in the VirtualService and DestinationRule resources.
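As a variation on the checks above, all three versions can also be counted in a single pass (a sketch using the same sleep client):

$ kubectl -n test exec deploy/sleep -- sh -c \
    'for i in $(seq 1 100); do curl -s http://web-api:8080/; done' \
  | grep -o "purchase-history-v[123]" | sort | uniq -c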

The topology of the resources is arranged in the following way:

Let's now also add an L7 "waypoint" proxy to the "web-api" service and change its authorization policy a little.

$ cat web-api-gw-waypoint.yaml 
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: web-api
  namespace: test
  annotations:
    istio.io/for-service-account: web-api
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: mesh
    port: 15008
    protocol: ALL

$ kubectl apply -f web-api-gw-waypoint.yaml
gateway.gateway.networking.k8s.io/web-api created

$ cat auth-policy-web-api-l7.yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "web-api-rbac"
  namespace: test
spec:
  selector:
    matchLabels:
      istio.io/gateway-name: web-api
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/test/sa/sleep","cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
    to:
    - operation:
        methods: ["GET"]

$ kubectl apply -f auth-policy-web-api-l7.yaml
authorizationpolicy.security.istio.io/web-api-rbac configured

We can check that a new pod, "web-api-istio-waypoint", was added.

$ kubectl get pods -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
notsleep-7dc8b55755-xx5rb 1/1 Running 0 6d1h 10.224.0.37 aks-systempool-14515445-vmss000000 <none> <none>
purchase-history-istio-waypoint-77d985979-fdmb4 1/1 Running 0 40m 10.224.0.20 aks-systempool-14515445-vmss000000 <none> <none>
purchase-history-v1-6cdb997954-8psww 1/1 Running 0 6d1h 10.224.0.118 aks-systempool-14515445-vmss000003 <none> <none>
purchase-history-v2-db864f6f-5sgjq 1/1 Running 0 6d1h 10.224.0.112 aks-systempool-14515445-vmss000003 <none> <none>
purchase-history-v3-bb4c6b7fd-qlwjq 1/1 Running 0 6d1h 10.224.0.134 aks-systempool-14515445-vmss000003 <none> <none>
recommendation-79f7844ff-qxq4t 1/1 Running 0 6d1h 10.224.0.18 aks-systempool-14515445-vmss000000 <none> <none>
sleep-5f5d4b5bfb-kphhr 1/1 Running 0 6d1h 10.224.0.23 aks-systempool-14515445-vmss000000 <none> <none>
web-api-6c95c8d759-q9srd 1/1 Running 0 6d1h 10.224.0.31 aks-systempool-14515445-vmss000000 <none> <none>
web-api-istio-waypoint-7d5b7d44fc-fkvwk 1/1 Running 0 105s 10.224.0.47 aks-systempool-14515445-vmss000000 <none> <none>

$ kubectl -n test exec deploy/sleep -- curl -sI http://web-api:8080/
HTTP/1.1 200 OK
vary: Origin
date: Tue, 16 May 2023 21:15:57 GMT
content-length: 1219
content-type: text/plain; charset=utf-8
x-envoy-upstream-service-time: 158
server: istio-envoy
x-envoy-decorator-operation: :8080/*

We can also see that new response headers were added, indicating that the request now passes through the "waypoint" proxy pod, which is built on Envoy.

We can now test our authorization policy, which is applied at the L7 proxy level and blocks all methods except "GET".

$ kubectl -n test exec deploy/sleep -- curl http://web-api:8080/
{
  "name": "web-api",
  "uri": "/",
  "type": "HTTP",
  "ip_addresses": [
    "10.224.0.31"
  ],
  "start_time": "2023-05-16T21:24:58.443299",
  "end_time": "2023-05-16T21:24:58.451066",
  "duration": "7.766954ms",
  "body": "Hello From Web API",
  "upstream_calls": [
    {
      "name": "recommendation",
      "uri": "http://recommendation:8080",
      "type": "HTTP",
      "ip_addresses": [
        "10.224.0.18"
      ],
      "start_time": "2023-05-16T21:24:58.444839",
      "end_time": "2023-05-16T21:24:58.450640",
      "duration": "5.801215ms",
      "body": "Hello From Recommendations!",
      "upstream_calls": [
        {
          "name": "purchase-history-v1",
          "uri": "http://purchase-history:8080",
          "type": "HTTP",
          "ip_addresses": [
            "10.224.0.118"
          ],
          "start_time": "2023-05-16T21:24:58.448714",
          "end_time": "2023-05-16T21:24:58.448787",
          "duration": "73.205µs",
          "body": "Hello From Purchase History (v1)!",
          "code": 200
        }
      ],
      "code": 200
    }
  ],
  "code": 200
}

$ kubectl -n test exec deploy/notsleep -- curl http://web-api:8080/
RBAC: access denied
$ kubectl -n test exec deploy/notsleep -- curl -XPUT http://web-api:8080/
RBAC: access denied
$ kubectl -n test exec deploy/sleep -- curl -XPUT http://web-api:8080/
RBAC: access denied

The first request returns a full answer with response code 200, as expected; all the other requests get a 403 response with an "RBAC: access denied" message.

In the Kiali graph we can see all our resources, the traffic splitting we configured for "purchase-history", the "web-api" requests, and more.

We can also reach our "web-api" service via the Istio Ingress Gateway by browsing to:

http://my-webapp.dev.att.com
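Without public DNS, the same check can be done with curl by sending the Host header directly to the ingress gateway's external IP (placeholder IP as before):

$ curl -s -H "Host: my-webapp.dev.att.com" http://<INGRESS_EXTERNAL_IP>/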

Last few points:

1. The deployment of Istio Ambient on an AKS cluster was quite an easy task, with a few tweaks needed to get the Istio Ingress Gateway working well.

2. For those who are familiar with Nginx Ingress and used to working with a single CR (Ingress), switching to Istio's CRDs, understanding the meaning of each one (Gateway, VirtualService, DestinationRule and more) and which settings should be assigned requires some effort to learn the Istio components and their internal logic.

3. As we can see, Ambient mode completely removes the need to inject sidecars into our application pods. In addition, it provides a fully secured solution using mTLS (over HBONE) and SPIFFE identities to ensure encrypted and authenticated traffic flow.

4. The separation between L4 and L7 modes provides more flexibility in defining policies: we can migrate our workloads to Ambient mode first and then enable L7 capabilities when needed.

5. There is no problem running sidecar-less workloads alongside sidecar-injected workloads in the same cluster.

6. For those who are already familiar with the Istio service mesh in its sidecar pattern, Ambient mode will feel like an extension of well-known resources, which reduces the initial adoption effort and learning curve.

7. The adoption of the Kubernetes Gateway API (used by our "waypoint" L7 proxy manifests) looks like the right way to get a common, standard "language" for traffic routing that other vendors can also adopt.

8. It would be interesting to compare Istio Ambient with eBPF-based Cilium, as both are sidecar-less service mesh networking solutions. Maybe in my next article 😊

Useful links and references for the Istio Ambient pictures:

https://www.solo.io/blog/understanding-istio-ambient-ztunnel-and-secure-overlay/

https://preliminary.istio.io/latest/docs/ops/ambient/getting-started/

https://istio.io/latest/blog/2022/ambient-security/

“Istio Ambient Explained” book by Lin Sun & Christian Posta

Istio Ambient: free online course + lab by Solo.io
