Understanding Istio — Peer Authentication

Secure service-to-service communication with Istio

M Allandhir
12 min read · Jul 24, 2020

Prerequisites

→ Knowledge of Kubernetes concepts
→ Understanding of Istio Architecture

Introduction

With the majority of applications adopting a microservice architecture over a monolith to respond better to scaling needs (among other concerns), how well does that architecture secure the interactions between the tens or hundreds of microservices it runs? A service mesh like Istio promises a solution by letting you engineer the security of the cluster at a much more granular level.

Apart from security, Istio offers traffic management and monitoring. This post focuses on security, and more specifically on how to secure traffic between pods running in a Kubernetes cluster with the Istio service mesh (mutual TLS).

Authentication

Istio provides two types of authentication:

  • Peer Authentication
    For service-to-service authentication
  • Request Authentication
    For end-user authentication, using JSON Web Tokens (JWT)

This post deals with only Peer Authentication.

Peer Authentication

Peer authentication policies secure service-to-service communication in a Kubernetes cluster with the Istio service mesh; Istio automates the generation, distribution and rotation of the certificates and keys involved.

Who handles the automated generation, distribution and rotation of certificates and keys?

Istiod (the unified, single binary for Istio's control plane) does:

  • Istiod automates key and certificate rotation at scale
  • Istiod enables strong service-to-service and end-user authentication with built-in identity and credential management.
  • Istiod maintains a CA and generates certificates to allow secure mTLS communication in the data plane.
source https://istio.io/latest/docs/concepts/security/
  1. Envoy, the sidecar container that runs alongside your application container and proxies all traffic in and out of the pod, sends a certificate and key request to the Istio Agent via the Envoy Secret Discovery Service (SDS, a flexible API for delivering secrets/certificates).
  2. On receiving the request, the Istio Agent generates a private key and a Certificate Signing Request (CSR), and sends the CSR along with the necessary credentials to Istiod.
  3. The Certificate Authority (CA) maintained by Istiod validates the credentials carried in the CSR and signs the CSR to generate a certificate, which only works with the private key it was generated for.
  4. The Istio Agent sends the certificate received from Istiod and the private key to Envoy via the Envoy SDS API.
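Once the cluster is up and running (see below), you can peek at the outcome of this flow yourself: recent istioctl versions can dump the secrets Envoy currently holds, i.e. the workload certificate and the root CA delivered over SDS (the exact output columns vary by istioctl version, and <pod-name>/<namespace> are placeholders for any sidecar-injected pod):

# lists the certificates Envoy holds (default workload cert and ROOTCA), delivered via SDS
istioctl proxy-config secret <pod-name> -n <namespace>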

Before you begin

  • A Kubernetes environment up and running (minikube in my case)
  • At the time of this post, the following versions were used
Minikube 1.12.0 
Kubernetes 1.18.3
Istio 1.6.5
# installation steps
$ minikube addons enable istio
$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.6.5
$ export PATH=$PWD/bin:$PATH
$ istioctl manifest apply --set profile=demo --set values.global.proxy.privileged=true
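Once the install finishes, it's worth confirming the control plane is up before moving on; with the demo profile you should see istiod and the ingress/egress gateway pods in the Running state:

$ kubectl get pods -n istio-system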

What will you do in this post?

  • Write a minimal Node.js server that does only what's required
  • Write a Dockerfile to build the Docker image
  • Create a kubernetes deployment, service and a service account
  • Deploy application into three different namespaces namely foo, bar and legacy
    * foo → pods with manually injected Istio sidecars
    * bar → pods with manually injected Istio sidecars
    * legacy → pods with no Istio sidecar
  • Write peer authentication policies to enable istio mutual TLS (mTLS):
    1. Mesh wide
    2. Namespace level
    3. Workload specific
    4. Port level mTLS
    Along with Destination Rules.
  • Check if mTLS is enabled and traffic between services is encrypted using:
    1. Curl commands and
    2. Capturing traffic in istio-proxy sidecar

Get your hands dirty

  • A Node app with minimal configuration, only what's required for this exercise.
    FILE index.js
var express = require("express");
var app = express();

app.get("/test", (req, res) => {
  res.send("HELLO TEST");
});

app.get("/headers", function (req, res) {
  res.json(req.headers); // sends request headers in json format
});

app.listen(8001, function () {
  console.log("Server running on port 8001");
});

Why do we want the request headers (res.json(req.headers))?
The Istio docs mention that when mTLS is enabled, the proxy injects the “X-Forwarded-Client-Cert” header into the upstream request to the backend. That header's presence is evidence that mTLS is in use.

Your package.json should include:

...
"dependencies": {
  "express": "^4.17.1"
}
...
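For completeness, a minimal package.json along these lines should work (the name and version are just placeholders):

{
  "name": "auth-test",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}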
  • Dockerfile to build image
    FILE Dockerfile
FROM node:lts-slim
WORKDIR '/usr/app/test'
COPY package.json .
RUN npm install
RUN apt-get update && apt-get install curl jq -y
COPY . .
EXPOSE 8001
CMD ["node","index.js"]

lines 1–2 → use node:lts-slim as the base image to run a Node application and set a working directory of your choice
lines 3–5 → copy package.json to the working directory and install the dependencies. jq (JSON query) is required to parse the JSON responses received from curl
lines 6–8 → copy the remaining files into the working directory, expose port 8001 since the Node app listens on 8001, and run node index.js to start the application

Build the Docker image: docker build -t auth:v1 .

  • Kubernetes deployment file
    FILE auth-deployment.yaml
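A minimal auth-deployment.yaml consistent with the names used throughout this post (auth-test-sa, auth-test-deployment, auth-test-service, label app: auth-test, image auth:v1) could look like the following sketch; adjust it to your needs:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: auth-test-sa
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-test-deployment
  labels:
    app: auth-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-test
  template:
    metadata:
      labels:
        app: auth-test
    spec:
      serviceAccountName: auth-test-sa   # identity used by Istio for mTLS
      containers:
      - name: auth-test
        image: auth:v1                   # image built above; assumes it is available to the cluster
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: auth-test-service
spec:
  selector:
    app: auth-test
  ports:
  - name: http
    port: 80          # service port
    targetPort: 8001  # container port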

The ServiceAccount → why create one?
Istio uses Kubernetes service accounts as the service identity, which offers stronger security than the service name (for more details, see Istio identity).
Creating a service account automatically creates a token.

The Deployment → note the serviceAccountName in the pod spec.

The Service → exposes the deployment. The service port is 80, which maps to container port 8001.

  • Create namespace foo
    kubectl create ns foo

Run the following command to create the SA, deployment and service, and to inject the Istio sidecar using istioctl:
kubectl apply -f <(istioctl kube-inject -f auth-deployment.yaml) -n foo

To verify pod is up and running:
kubectl get pods -n foo -o wide

  • Create namespace bar
    kubectl create ns bar

Create the SA, deployment and service, and use istioctl to inject the Istio sidecar:
kubectl apply -f <(istioctl kube-inject -f auth-deployment.yaml) -n bar

To verify pod is up and running:
kubectl get pods -n bar -o wide

  • Create namespace legacy
    kubectl create ns legacy

Create the SA, deployment and service without a sidecar:
kubectl create -f auth-deployment.yaml -n legacy

To verify pod is up and running:
kubectl get pods -n legacy -o wide

CHECK

Run the following command in a terminal to get the HTTP responses.
This command is used frequently throughout the rest of this post.

for from in "foo" "bar" "legacy"; do
  for to in "foo" "bar" "legacy"; do
    kubectl exec "$(kubectl get pod -l app=auth-test -n ${from} -o jsonpath={.items..metadata.name})" \
      -c auth-test -n ${from} -- \
      curl http://auth-test-service.${to}:80/headers -s -o /dev/null -w "${from} ---> ${to}: %{http_code}\n" -k
  done
done

What does this do?
It execs into the auth-test container (specified in auth-deployment.yaml) in each of the namespaces foo, bar and legacy, and runs curl in a loop, printing the HTTP response code for the request to each destination namespace.

You should see output like the following.
(Note: foo → bar means a request from auth-test-service in namespace foo to auth-test-service in namespace bar, followed by the HTTP response code.)

foo --> foo: 200
foo --> bar: 200
foo --> legacy: 200
bar --> foo: 200
bar --> bar: 200
bar --> legacy: 200
legacy --> foo: 200
legacy --> bar: 200
legacy --> legacy: 200

All responses are 200, since there are no peer authentication policies applied. Pods in foo and bar accept plain-text traffic from legacy.

You can do this manually instead of running the command above. Run the following to open a shell in the container:
kubectl exec -ti <pod> -c <container-name> -n <namespace> -- /bin/bash

and run
curl http://auth-test-service.bar/test -s -o /dev/null -w "%{http_code}" -k

The service port is 80, so we don't need to mention it explicitly.
The “-k” flag is used because, as mentioned earlier, Istio uses Kubernetes service accounts as the service identity rather than service names, so the certificates used by Istio do not contain the service name. The “-k” option stops curl from verifying the server certificate against the server name, which in our case is “auth-test-service.bar.svc.cluster.local”.

  • Since all the traffic in and out of the pod passes through the proxy sidecar,
    we can capture traffic on port 8001 and see if the traffic is encrypted or not.

Check tcpdump in istio-proxy sidecar

Exec into istio-proxy sidecar of the pod in namespace foo
kubectl exec -ti <pod-name> -c istio-proxy -n foo -- /bin/bash

You need to replace <pod-name> with whatever pod name you see when you run kubectl get pods -n foo
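For convenience, you can grab the pod name with the same jsonpath trick used in the loop above:

POD=$(kubectl get pod -l app=auth-test -n foo -o jsonpath={.items..metadata.name})
kubectl exec -ti $POD -c istio-proxy -n foo -- /bin/bash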

Run ifconfig and note the IP address, then run:

sudo tcpdump -vvvv -A -i eth0 '(dst port <port-to-capture-traffic>) and (net <ip-address>)'

Replace <port-to-capture-traffic> with 8001, which is the container port,
and <ip-address> with the IP address noted from ifconfig.

Now send a request from foo → legacy or from legacy → foo.
You should see plain text captured, something like:

12:38:41.228747 IP (tos 0x0, ttl 64, id 10748, offset 0, flags [DF], proto TCP (6), length 60)auth-test-deployment-55f6c8fc4b-k2blq.48996 > 192-168-219-114.auth-test-service.legacy.svc.cluster.local.8001: Flags [S], cksum 0x385a
.....
.....
...&..'.GET /test HTTP/1.1
host: auth-test-service.legacy
user-agent: curl/7.52.1
accept: */*
x-forwarded-proto: http
x-request-id: bfaa03b5-bbc6-9563-9e3b-9d0e2ec628cb
x-envoy-decorator-operation: auth-test-service.legacy.svc.cluster.local:80/*
x-envoy-peer-metadata: ChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwohCgxJTlNUQU5DRV9JUFMSERoPMTkyLjE2OC4yMTkuMTAzCn0 ...
x-envoy-peer-metadata ...

Why is plain text captured? legacy has no sidecar, so its traffic is plain text.
Also, the request legacy → foo succeeds because there are no PeerAuthentication policies currently active.

But when you curl from foo → bar or from bar → foo,
you should see something like:

12:40:12.598235 IP (tos 0x0, ttl 64, id 21877, offset 0, flags [DF], proto TCP (6), length 1207)auth-test-deployment-55f6c8fc4b-k2blq.54996 > 192-168-219-74.auth-test-service.bar.svc.cluster.local.8001: Flags [P.], cksum 0x3cad (incorrect -> 0x5b80), seq 418160345:418161500, ack 2207553641, win 502, options [nop,nop,TS val 1489019456 ecr 2031669559], length 1155
E...Uu@.@......g...J...A.......i....<......
X..@y..7....~........`=q&...}.a.X(.....7/...
{g.W..s.:|8...........`..RQ5\&.TD......c..1.f?_....7...G...,x. .@.....S.&..._..n....v...-t.v../9.+..@v._`G.i....*O.v.....Y..X.f>gI+..z.....)K.....}..=..1.........".w(.....wy.... e.^.#...zA..C4.g.y0..q....Q...5..\.a.].s.k...........:..8B..R.qa..^....

All traffic is encrypted since both foo and bar have sidecars.
Why is the traffic encrypted even though no peer authentication policies have been created?
By default, Istio enables mutual TLS in PERMISSIVE mode. We'll see what PERMISSIVE mode means later in this post.

Check for x-forwarded-client-cert in header

What does the presence of “x-forwarded-client-cert” in the request header imply?

The Istio docs mention that when mTLS is enabled, the proxy injects the X-Forwarded-Client-Cert header into the upstream request to the backend. That header's presence is evidence that mTLS is in use.

In Kubernetes, the URI field of an X.509 certificate has the format spiffe://<domain>/ns/<namespace>/sa/<serviceaccount>. For example, the pod in namespace foo running under the service account auth-test-sa gets the identity spiffe://cluster.local/ns/foo/sa/auth-test-sa.
This enables Istio services to establish and accept connections with other SPIFFE-compliant systems. (SPIFFE: Secure Production Identity Framework for Everyone)

Exec into the auth-test container of the pod in namespace foo and run the following command:

curl -s http://auth-test-service.bar/headers | jq .\"x-forwarded-client-cert\"

OUTPUT (“x-forwarded-client-cert”)

"By=spiffe://cluster.local/ns/bar/sa/auth-test-sa;Hash=7c60a6491f951ed8b35b75239a6b7c9f2b7671571a6e8d346adcfd5adce46db7;Subject=\"\";URI=spiffe://cluster.local/ns/foo/sa/auth-test-sa"

But running the command below returns null. Why? The pod in the legacy namespace has no Envoy sidecar to encrypt the traffic and inject the certificate.

$ curl -s http://auth-test-service.legacy/headers | jq .\"x-forwarded-client-cert\"
null

Before we move on to the next section:

The following mTLS modes are supported in PeerAuthentication:
Source: Istio docs

PERMISSIVE (Default): Workloads accept both mutual TLS and plain text traffic. This mode is most useful during migrations when workloads without sidecar cannot use mutual TLS. Once workloads are migrated with sidecar injection, you should switch the mode to STRICT.

STRICT: Workloads only accept mutual TLS traffic.

DISABLE: Mutual TLS is disabled. From a security perspective, you shouldn’t use this mode unless you provide your own security solution.

You can repeat the checks from the previous sections (looking for x-forwarded-client-cert in the request headers, or running tcpdump in the istio-proxy sidecar) as we apply the different peer authentication policies below.

Mesh wide peer authentication

  • Enable Istio mTLS in STRICT mode
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

Check the HTTP responses; you should see traffic from legacy to foo/bar failing.

foo --> foo: 200
foo --> bar: 200
foo --> legacy: 200
bar --> foo: 200
bar --> bar: 200
bar --> legacy: 200
legacy --> foo: 000
command terminated with exit code 56
legacy --> bar: 000
command terminated with exit code 56
legacy --> legacy: 200

Exit code 56 means "failure to receive network data".

WHY?
Since mTLS STRICT mode is enabled mesh-wide, requests must be encrypted to succeed. legacy has no sidecar, so it sends plain text, which foo/bar reject.
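Besides curl, you can also ask istioctl to describe a pod in foo or bar; recent versions report the authentication policy in effect for the workload (the exact output varies by Istio version):

istioctl experimental describe pod $(kubectl get pod -l app=auth-test -n foo -o jsonpath={.items..metadata.name}) -n foo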

cleanup:
kubectl delete peerauthentication -n istio-system default

NOTE:

  • Workload-specific PeerAuthentication overrides namespace-level, and namespace-level overrides the global mesh-level policy.
  • If a destination rule has explicit TLS configuration that differs from the mesh-level/namespace/workload-specific PeerAuthentication mode, the client sidecar's TLS configuration is overridden by what is specified in the destination rule's tls block.

Namespace level peer authentication

  • Run the following in terminal:
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "namespace-level"
  namespace: "foo"
spec:
  mtls:
    mode: STRICT
EOF
  • Check for http response
foo --> foo: 200
foo --> bar: 200
foo --> legacy: 200
bar --> foo: 200
bar --> bar: 200
bar --> legacy: 200
legacy --> foo: 000
command terminated with exit code 56
legacy --> bar: 200
legacy --> legacy: 200

Since the policy is specific to namespace foo, legacy → foo fails with exit code 56 (http_code 000), but legacy → bar succeeds.

cleanup before you proceed to next section:
kubectl delete peerauthentication -n foo namespace-level

Workload specific

  • Run the following in terminal:
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "bar-peerauthentication"
  namespace: "bar"
spec:
  selector:
    matchLabels:
      app: auth-test
  mtls:
    mode: STRICT
EOF

With the PeerAuthentication in place, the destination rule should explicitly have a TLS configuration with the same mode as the corresponding PeerAuthentication (ISTIO_MUTUAL in this case). In most cases we use destination rules anyway, since they define other crucial routing configuration such as load balancing.

kubectl apply -n bar -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "auth-test-dr"
spec:
  host: "auth-test-service.bar.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

You can have multiple pods running in the namespace bar, but the selector field applies the policy only to those with the label app: auth-test.

  • Check for http responses
foo --> foo: 200
foo --> bar: 200
foo --> legacy: 200
bar --> foo: 200
bar --> bar: 200
bar --> legacy: 200
legacy --> foo: 200
legacy --> bar: 000
command terminated with exit code 56
legacy --> legacy: 200

As expected, legacy → bar fails with exit code 56.

cleanup:
kubectl delete peerauthentication -n bar bar-peerauthentication
kubectl delete destinationrule -n bar auth-test-dr

Port level mTLS

You can have different mTLS modes enabled on different ports. For example, you might want STRICT mode on port 8001 and PERMISSIVE on some other port (there must be a service exposing that port).

In PeerAuthentication we use the container port number, not the service port.

To enable port-level mTLS, the port must be exposed by a service (as our service exposes port 8001); otherwise the port-level setting is ignored.

kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "portlevel-peerauthentication"
  namespace: "foo"
spec:
  selector:
    matchLabels:
      app: auth-test
  mtls:
    mode: STRICT
  portLevelMtls:
    8001:
      mode: STRICT
    # 8002:               # you must have a service exposing 8002 for this to take effect
    #   mode: PERMISSIVE
EOF

The corresponding destination rule should define the port with the respective mTLS mode.

  • The port in the destination rule is the service port (80), which maps to the respective target container port (8001)
kubectl apply -n foo -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "auth-test-dr"
spec:
  host: auth-test-service.foo.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 80
      tls:
        mode: ISTIO_MUTUAL
EOF

Check for http responses

Requests from legacy → foo fail with exit code 56 again.

For all the above cases, you can exec into the istio-proxy sidecar of the respective pods in foo or bar and capture traffic to check whether it is encrypted or plain text, or check for the x-forwarded-client-cert header in the request.

cleanup:
kubectl delete peerauthentication -n foo portlevel-peerauthentication
kubectl delete destinationrule -n foo auth-test-dr

Destination Rule Overrides

A destination rule defines policies that apply to traffic intended for a service after routing has occurred. It also configures load balancing, connection pool size from the sidecar, and outlier detection, but here we focus on defining the tls block with the configuration needed for the mTLS modes.

kubectl apply -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF

host is generally specified as <service-name>.<namespace>.svc.cluster.local,
so host: "*.local" selects all services across all namespaces and applies mTLS in ISTIO_MUTUAL mode.

Run the command to get http responses

foo --> foo: 200
foo --> bar: 200
foo --> legacy: 503
bar --> foo: 200
bar --> bar: 200
bar --> legacy: 503
legacy --> foo: 000
command terminated with exit code 56
legacy --> bar: 000
command terminated with exit code 56
legacy --> legacy: 200

As expected, legacy → foo and legacy → bar fail with exit code 56.

But why did foo → legacy and bar → legacy fail with http_code 503?
host: "*.local" selects all services, including auth-test-service.legacy, so the client sidecars in foo and bar originate mTLS (ISTIO_MUTUAL) as we explicitly told them to in the destination rule that applies to all services. The pod in the legacy namespace has no sidecar to handle mTLS, so the request fails with a 503.

Now write a destination rule selecting the service in legacy and set the tls block to DISABLE mode.

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "auth-test-dr-legacy"
  namespace: "legacy"
spec:
  host: "auth-test-service.legacy.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
EOF

Run the command to get the HTTP responses.
Now you should see foo → legacy and bar → legacy succeeding.
(DISABLE mode is not recommended unless you have a specific use case.)
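Cleanup for this section, so the mesh-wide and legacy destination rules don't interfere with further experiments:

kubectl delete destinationrule -n istio-system default
kubectl delete destinationrule -n legacy auth-test-dr-legacy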
