Load Balancing, Session Affinity and Observability with Istio on WSL2 Ubuntu and Docker Desktop

Lightphos · Published in actual-tech · Jul 2, 2021

See also the updated video version of this article.

In this article we look at Istio’s load-balancing features, plus the Kiali and Grafana add-ons.

We will be using WSL2 Ubuntu 20.04 on Windows 10, Docker Desktop with Kubernetes, and K9s.

This assumes you already have WSL2 and Ubuntu 20.04 installed.

Getting Started

Start Docker Desktop (see link below for how to install).

The Docker engine here is v20.10.6. Enable Kubernetes in the Docker Desktop settings; it takes a while to start.
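
Once it is up, a quick sanity check that kubectl is pointed at the Docker Desktop cluster (context and node names may differ on your setup):

kubectl config current-context
kubectl get nodes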

Check versions in WSL2 Ubuntu:

Ubuntu

cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"

Docker

docker version
Client: Docker Engine - Community
Cloud integration: 1.0.14
Version: 20.10.6
API version: 1.41
Go version: go1.13.15
Git commit: 370c289
Built: Fri Apr 9 22:46:45 2021
OS/Arch: linux/amd64
Context: default
Experimental: true

Kubectl

kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Install k9s

A useful tool to manage the cluster (there is also Lens, see link below).

sudo curl -sSL https://gist.githubusercontent.com/bplasmeijer/a4845a4858f1c0b0a22848984475322d/raw/0768fa37a96a319f7e784e77ba24a085fe527369/k9s-setup.sh | sudo sh

Echo Server Image

Our test image to deploy.

docker pull inanimate/echo-server
kubectl create deployment echo-server --image=inanimate/echo-server
deployment.apps/echo-server created
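
A quick check that the deployment came up (the name matches the deployment we just created):

kubectl get deployment echo-server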

The pod shows up in k9s under the default namespace.

Press l to view its logs.

Port Forward

With k9s:

Select the pod view with :po, select the pod, then select the container.

Press Shift-F and define the ports.

Navigate to it:

http://localhost:8080

Welcome to echo-server!  Here's what I know.
> Head to /ws for interactive websocket echo!

-> My hostname is: echo-server-f5666cc48-k5hlr

Use :pf to view the port forwards and Ctrl-D to delete one.
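
If you prefer plain kubectl over k9s, the equivalent port forward would be something like:

kubectl port-forward deploy/echo-server 8080:8080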

Create the echo service layer in Kubernetes

echo "
apiVersion: v1
kind: Service
metadata:
name: echo-server
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: echo-server
" | kubectl apply -f -

Echo-server is now available as a service.
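
To verify (the service name matches what we applied above):

kubectl get svc echo-server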

Now let’s add Istio…

Istioctl

Install it:

curl -L https://istio.io/downloadIstio | sh -
export PATH="$PATH:/mnt/c/Users/<>/istio-1.10.2/bin"
istioctl x precheck
istioctl version
1.10.2

istioctl install

This will install the Istio 1.10.2 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete

Add the addons

kubectl apply -f /mnt/c/Users/<>/istio-1.10.2/samples/addons

The Kiali install may spit out “no matches for kind…” errors; these can be ignored (re-running the apply also clears them).

Enable Istio sidecar injection for the default namespace (in a real deployment you would want a separate namespace for your services):

kubectl label namespace default istio-injection=enabled
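
Verify the label took (the -L flag shows the label as a column):

kubectl get namespace -L istio-injection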

Delete the existing echo-server pod so that it restarts with the Istio/Envoy sidecar containers.
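
One way to do this from the command line, assuming the app=echo-server label that kubectl create deployment applies by default:

kubectl delete pod -l app=echo-server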

It will now restart with the sidecar proxy:

Check out the Istio services in k9s with :ns istio-system, then :svc.

Configure the gateway

echo "
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: echo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: http
hosts:
- host.docker.internal
" | kubectl apply -f -

We are using the built-in host.docker.internal hostname, which Docker Desktop adds to the Windows hosts file, pointing at the local machine’s IP.
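
On Docker Desktop the ingress gateway service is typically exposed on localhost, which you can confirm with:

kubectl -n istio-system get svc istio-ingressgateway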

Then add a VirtualService, tying the gateway host on echo-gateway to the Kubernetes service echo-server.

echo " 
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: echo-service-vs
namespace: default
spec:
hosts:
- host.docker.internal
gateways:
- echo-gateway
http:
- route:
- destination:
host: echo-server.default.svc.cluster.local
port:
number: 80
" | kubectl apply -f -

Call our service via the gateway:

curl -s host.docker.internal | grep host
-> My hostname is: echo-server-f5666cc48-pp8tw
Host: host.docker.internal

or

http://host.docker.internal/

Scaling and Load Balancing

Scale the deployment in k9s: :ns default, :dp, press s, and change the replicas to 2 or more.
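
The same scaling operation with plain kubectl (3 replicas as an example):

kubectl scale deployment echo-server --replicas=3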

Extra pods should start up, each with its own Istio sidecar.

Call the service again via the gateway:

curl -s host.docker.internal | grep host> My hostname is: echo-server-f5666cc48-tss5ccurl -s host.docker.internal | grep host> My hostname is: echo-server-f5666cc48-vkt6v

Istio load balances in a round-robin fashion, hitting a different pod on each call.
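
A quick loop makes the rotation easy to see:

for i in 1 2 3 4; do curl -s host.docker.internal | grep "My hostname"; done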

Sticky Sessions (Session Affinity)

Sometimes you need to ensure that all requests from a given source go to the same server. Here we hash on an HTTP header name (httpHeaderName); other options include cookies (httpCookie), the source IP (useSourceIp), and query parameters (httpQueryParameterName).

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: echo-service-dr
spec:
host: echo-server.default.svc.cluster.local
trafficPolicy:
loadBalancer:
consistentHash:
httpHeaderName: x-user
EOF

Try calling:

curl -s -H "x-user: anon" host.docker.internal | grep host
-> My hostname is: echo-server-f5666cc48-6kxfb
curl -s -H "x-user: anon" host.docker.internal | grep host
-> My hostname is: echo-server-f5666cc48-6kxfb

Now every request for that user hits the same pod: a sticky session.

curl -s -H "x-user: auser" host.docker.internal | grep host
-> My hostname is: echo-server-f5666cc48-tss5c

Try with httpQueryParameterName:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: echo-service-dr
spec:
host: echo-server.default.svc.cluster.local
trafficPolicy:
loadBalancer:
consistentHash:
httpQueryParameterName: echoq
EOF

Call with different parameters:

curl -s host.docker.internal?echoq=test1 | grep host
-> My hostname is: echo-server-f5666cc48-b2z8c
curl -s host.docker.internal?echoq=test2 | grep host
-> My hostname is: echo-server-f5666cc48-gg44l

The following other LB algorithms are available:

ROUND_ROBIN, LEAST_CONN, RANDOM, or PASSTHROUGH.

Let’s try RANDOM. (Note this rule is named echo-service, not echo-service-dr like the earlier ones; you may want to delete the old rule first with kubectl delete destinationrule echo-service-dr so two DestinationRules don’t target the same host.)

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: echo-service
spec:
host: echo-server.default.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: RANDOM
EOF

Now we get a random pod for each request:

curl -s host.docker.internal | grep host
-> My hostname is: echo-server-f5666cc48-vkt6v
curl -s host.docker.internal | grep host
-> My hostname is: echo-server-f5666cc48-tss5c

Observability/Stats

Note: To really see tracing and metrics, you need a service that actually sends trace data to these back ends. See this previous blog for the hooks needed in Java Spring: https://levelup.gitconnected.com/observability-of-springboot-services-in-k8s-with-prometheus-and-grafana-61c4e7a9d814

Add some load to get some data, using Apache Bench:

ab -c 10 -n 100 host.docker.internal/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking host.docker.internal (be patient).....done

Server Software: istio-envoy
Server Hostname: host.docker.internal
Server Port: 80

Grafana

istioctl dashboard grafana

The peaks are the Apache Bench runs.

Kiali

Update Kiali’s ConfigMap, adding this to it:

external_services:
  tracing:
    url: http://localhost:16686
    in_cluster_url: http://tracing.istio-system/jaeger
  grafana:
    url: http://localhost:3000
    in_cluster_url: http://grafana.istio-system
  custom_dashboards:
    enabled: true

Restart the Kiali pod so it picks up the new config.
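
From the command line this could look like (assuming the addon’s default names in istio-system):

kubectl -n istio-system edit configmap kiali
kubectl -n istio-system rollout restart deployment kiali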

Start the dashboard

istioctl dashboard kiali

Jaeger

istioctl dashboard jaeger
