Istio Service Mesh by Practical Example

  • Installation of Fundamental Components
  • Observability/Traceability
  • Service Discovery/Routing
  • Ingress/Load Balancing
  • AuthN/AuthZ for Zero Trust Model
  • Network Resilience: Circuit Breaker and Others
  • Egress
  • Cross-Cluster Mesh
  • Performance/Load Testing
  • When Istio meets Knative
$ git clone ...
$ cd release
$ cat migrate.sh
# Collect all gcr.io/google-samples image references from the manifests
IMGLI=$(grep "gcr.io/google-samples" kubernetes-manifests.yaml | sed -e 's/^[ \t]*//' | cut -d ' ' -f 2)
DSTR='harbor.run.haas-481.pez.vmware.com/microservices'
for img in $IMGLI; do
  docker pull "$img"
  # The image name (with tag) is the fourth '/'-separated field
  IMGN=$(echo "$img" | cut -d '/' -f 4)
  echo "$IMGN"
  # Re-tag and push into the private Harbor registry
  docker tag "$img" "$DSTR/$IMGN"
  docker push "$DSTR/$IMGN"
done
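The `IMGN` extraction in migrate.sh assumes source references of the form `gcr.io/google-samples/<project>/<image>:<tag>`; a quick local check of that step (the image reference here is a made-up example):

```shell
# Hypothetical image reference shaped like the ones in kubernetes-manifests.yaml
img="gcr.io/google-samples/microservices-demo/frontend:v0.3.4"
# Field 4 of the '/'-split reference is the image name plus tag
IMGN=$(echo "$img" | cut -d '/' -f 4)
echo "$IMGN"   # frontend:v0.3.4
```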

Installation

  1. Install cert-manager
$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.0/cert-manager.yaml
$ openssl genrsa -out ca.key 2048
$ export COMMON_NAME="haas-495.pez.vmware.com"
$ openssl req -x509 -new -nodes -key ca.key -subj "/CN=${COMMON_NAME}" -days 3650 -reqexts v3_req -extensions v3_ca -out ca.crt
$ kubectl create secret tls tls-ca-secret -n cert-manager --cert ./ca.crt --key ./ca.key
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: tls-selfsigned-issuer
spec:
  ca:
    secretName: tls-ca-secret
EOF
$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.12.0 ## the directory name depends on the downloaded version
$ sudo cp bin/* /usr/local/bin
$ istioctl install --set profile=default -y
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm pull bitnami/kube-prometheus
$ tar xvfz kube-prometheus-*.tgz ## helm pull names the archive with its chart version, e.g. kube-prometheus-6.4.1.tgz
$ cd kube-prometheus
$ vim values.yaml
prometheus:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: istio
      cert-manager.io/cluster-issuer: tls-selfsigned-issuer
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kube-prometheus
    helm.sh/chart: kube-prometheus-6.4.1
  name: prometheus.haas-481.pez.vmware.com-tls
  namespace: istio-system
spec:
  dnsNames:
  - prometheus.haas-481.pez.vmware.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: tls-selfsigned-issuer
  secretName: prometheus.haas-481.pez.vmware.com-tls
  usages:
  - digital signature
  - key encipherment
EOF
$ cd istio-1.12.0 
$ kubectl apply -f samples/addons/extras/prometheus-operator.yaml
$ helm repo add kiali https://kiali.org/helm-charts
$ helm repo update
$ helm install \
    --namespace kiali-operator \
    --create-namespace \
    kiali-operator \
    kiali/kiali-operator
$ cat << EOF | kubectl apply -f -
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  annotations:
    meta.helm.sh/release-name: kiali-operator
    meta.helm.sh/release-namespace: kiali-operator
  labels:
    app: kiali-operator
    app.kubernetes.io/instance: kiali-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kiali-operator
    app.kubernetes.io/part-of: kiali-operator
    app.kubernetes.io/version: v1.43.0
    helm.sh/chart: kiali-operator-1.43.0
    version: v1.43.0
  name: kiali
  namespace: istio-system
spec:
  deployment:
    accessible_namespaces:
    - '**'
    service_type: LoadBalancer
  external_services:
    prometheus:
      url: http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090/
  server:
    port: 8080
    web_fqdn: kiali.haas-495.pez.vmware.com
EOF
$ kubectl get secret $(kubectl get secret -n istio-system | grep kiali-service-account-token | cut -d ' ' -f 1) -n istio-system -o=jsonpath="{.data.token}" | base64 -d
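The command chains two lookups: the secret name is grepped out of the namespace, then the token field is extracted and base64-decoded. The decode step behaves like this on a synthetic value:

```shell
# Kubernetes stores secret data base64-encoded; decoding recovers the raw token
encoded=$(printf '%s' "my-kiali-token" | base64)
printf '%s\n' "$encoded" | base64 -d   # my-kiali-token
```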

Observability/Traceability

Service Discovery/Routing

$ kubectl create ns ms-workload
$ kubectl label namespace ms-workload istio-injection=enabled
$ cd release
$ vim kubernetes-manifests.yaml ##uncomment DISABLE_PROFILER for service recommendationservice
$ kubectl apply -f kubernetes-manifests.yaml -n ms-workload
##Create a TLS certificate for the frontend service
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mall.haas-495.pez.vmware.com-cert
  namespace: istio-system
spec:
  dnsNames:
  - mall.haas-495.pez.vmware.com
  issuerRef:
    kind: ClusterIssuer
    name: app-selfsigned-issuer
  secretName: mall.haas-495.pez.vmware.com-tls
  usages:
  - digital signature
  - key encipherment
EOF
##Modify the gateway to add HTTPS access
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80 ##### for test only
      name: http
      protocol: HTTP
    hosts:
    - mall.haas-495.pez.vmware.com
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - mall.haas-495.pez.vmware.com
    tls:
      credentialName: mall.haas-495.pez.vmware.com-tls
      mode: SIMPLE ## TLS terminates here
##Virtual services
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-ingress
spec:
  hosts:
  - mall.haas-495.pez.vmware.com
  gateways:
  - frontend-gateway
  http:
  - route:
    - destination:
        host: frontend
        port:
          number: 80
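Beyond the catch-all route above, finer-grained routing pairs a DestinationRule (defining subsets) with weighted routes in a VirtualService. A sketch, assuming the frontend Deployment carried hypothetical `version: v1` / `version: v2` labels:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
spec:
  host: frontend
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-canary
spec:
  hosts:
  - frontend
  http:
  - route:
    - destination:
        host: frontend
        subset: v1
      weight: 90
    - destination:
        host: frontend
        subset: v2
      weight: 10
```

With this applied, roughly 10% of in-mesh traffic to frontend shifts to the v2 pods, which is the usual starting point for a canary rollout.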

Ingress/Load Balancing

AuthN/AuthZ for Zero Trust Model

  1. workload-specific
  2. namespace-wide
  3. mesh-wide
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "default"
spec:
  mtls:
    mode: STRICT
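The policy above is namespace-wide (it sits in a namespace with no selector). The workload-specific scope from the list narrows it with a selector; a sketch reusing the demo's `app: frontend` label (the policy name here is made up):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: frontend-strict
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  mtls:
    mode: STRICT
```

A mesh-wide policy looks the same as the namespace-wide one but is named `default` in the istio-system root namespace.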
  • The location of the token in the request
  • The issuer of the request
  • The public JSON Web Key Set (JWKS)
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
  • The selector field specifies the target of the policy
  • The action field specifies whether to allow or deny the request
  • The rules specify when to trigger the action
  • The from field in the rules specifies the sources of the request
  • The to field in the rules specifies the operations of the request
  • The when field specifies the conditions needed to apply the rule
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "frontend"
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  rules:
  - when:
    - key: request.headers[hello]
      values: ["world"]
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'

403
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl --header "hello:world" http://frontend:80 -o /dev/null -s -w '%{http_code}\n'

200

Circuit Breaker Pattern

  • Retries and Timeouts.
  • Circuit breakers
  • Health checks
  • Outlier detection
  • Fault injection.
Server.go
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    // Simulate a slow upstream: every request to /index takes 5 seconds
    http.HandleFunc("/index", func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(5 * time.Second)
        fmt.Fprintf(w, "Hello")
    })
    log.Fatal(http.ListenAndServe(":8081", nil))
}
------
server-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-server
spec:
  selector:
    matchLabels:
      app: go-web-server
  replicas: 1
  template:
    metadata:
      labels:
        app: go-web-server
    spec:
      containers:
      - name: go-web-server
        image: harbor.haas-495.pez.vmware.com/app/go-web-server:v1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  selector:
    app: go-web-server
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 8081
  type: ClusterIP
Client.go
package main

import (
    "fmt"
    "net/http"
    "time"
)

func workerRoute(url string) {
    _start := time.Now()
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    _end := time.Now()

    fmt.Printf("STATUS: %s START: %02d:%02d:%02d, END: %02d:%02d:%02d, TIME: %v\n",
        resp.Status,
        _start.Hour(),
        _start.Minute(),
        _start.Second(),
        _end.Hour(),
        _end.Minute(),
        _end.Second(),
        _end.Sub(_start)) // %v prints the elapsed time.Duration readably
}

func main() {
    time.Sleep(30 * time.Second)
    for {
        switch time.Now().Second() {
        case 0, 20, 40:
            // Fire a burst of 10 concurrent requests three times a minute
            for i := 0; i < 10; i++ {
                go workerRoute("http://go-web-service/index")
            }
            time.Sleep(time.Second)
        default:
            time.Sleep(time.Second)
        }
    }
}
---
Client-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-client
spec:
  selector:
    matchLabels:
      app: go-web-client
  replicas: 4
  template:
    metadata:
      labels:
        app: go-web-client
    spec:
      containers:
      - name: go-web-client
        image: harbor.haas-495.pez.vmware.com/app/go-web-client:v1.0
        imagePullPolicy: Always
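What turns this setup into a circuit-breaker demo is a DestinationRule on go-web-service with connection-pool limits and outlier detection. The thresholds below are illustrative, not values from the original test:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: go-web-service
spec:
  host: go-web-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 5          # illustrative cap on concurrent connections
      http:
        http1MaxPendingRequests: 5 # requests queued beyond this are rejected immediately
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 3      # eject a host after 3 consecutive 5xx responses
      interval: 10s
      baseEjectionTime: 30s
```

With the client bursting 10 concurrent requests against a 5-second server, some requests should come back 503 from the sidecar once the pool limits are hit, which is exactly what the START/END timestamps in the client log make visible.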

Egress

  • Allocate dedicated cluster nodes for the egress gateway. All outbound traffic must pass through these nodes, and only through them. The security team can enforce hardened measures on this externally facing node pool, plus tightened firewall configuration, to ensure a high security level.
  • Use taints and tolerations on the egress nodes to prevent regular workloads from being scheduled there. That way, internal workloads are isolated from the external world.
  1. Allow the Envoy proxy to pass requests through to services that are not configured inside the mesh.
  2. Configure service entries to provide controlled access to external services.
  3. Completely bypass the Envoy proxy for a specific range of IPs.
  1. Configure dedicated nodes:
$ kubectl label node node1 egress-node=true
$ kubectl taint node node1 egress-key=egress-value:NoSchedule
## fragment of mesh.yaml (IstioOperator spec.components)
egressGateways:
- name: istio-egressgateway
  enabled: true
  k8s:
    nodeSelector:
      egress-node: "true"
    tolerations:
    - key: "egress-key"
      operator: "Equal"
      value: "egress-value"   # must match the taint applied above
      effect: "NoSchedule"
$ istioctl manifest apply -f mesh.yaml
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mall
spec:
  hosts:
  - mall.haas-495.pez.vmware.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
EOF
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mall
spec:
  hosts:
  - mall.haas-495.pez.vmware.com
  http:
  - timeout: 3s
    route:
    - destination:
        host: mall.haas-495.pez.vmware.com
      weight: 100
EOF
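Note that the ServiceEntry/VirtualService pair alone sends traffic straight from the sidecars to the external host. To force it through the dedicated egress-gateway nodes configured earlier, a Gateway bound to the egress gateway is also needed; a minimal HTTP-only sketch (the resource name is made up):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mall-egress-gateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - mall.haas-495.pez.vmware.com
```

The VirtualService then gets two hops: a mesh-scoped route sending sidecar traffic to istio-egressgateway, and a gateway-scoped route sending it on to the external host.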

Cross-Cluster Mesh

Primary and remote clusters on separate networks from istio.io
$ kubectl --context control create ns istio-system
$ kubectl --context remote create ns istio-system
  1. Configure trust
CA Hierarchy from istio.io
$ cd $ISTIO_HOME_DIRECTORY
$ mkdir -p certs
$ pushd certs
$ make -f ../tools/certs/Makefile.selfsigned.mk root-ca
$ make -f ../tools/certs/Makefile.selfsigned.mk control-cacerts
$ make -f ../tools/certs/Makefile.selfsigned.mk remote-cacerts
$ kubectl --context control create secret generic cacerts -n istio-system \
--from-file=control-cacerts/ca-cert.pem \
--from-file=control-cacerts/ca-key.pem \
--from-file=control-cacerts/root-cert.pem \
--from-file=control-cacerts/cert-chain.pem
$ kubectl --context remote create secret generic cacerts -n istio-system \
--from-file=remote-cacerts/ca-cert.pem \
--from-file=remote-cacerts/ca-key.pem \
--from-file=remote-cacerts/root-cert.pem \
--from-file=remote-cacerts/cert-chain.pem
$ popd
$ kubectl --context=control get namespace istio-system && \
  kubectl --context=control label namespace istio-system topology.istio.io/network=net-control
$ cat meshinfo/istio-control.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control
  namespace: istio-system
spec:
  values:
    global:
      meshID: mesh1
      network: net-control
      multiCluster:
        clusterName: control
  components:
    ingressGateways:
    - name: istio-eastwestgateway
      label:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: net-control
      enabled: true
      k8s:
        env:
        # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
        - name: ISTIO_META_ROUTER_MODE
          value: "sni-dnat"
        # traffic through this gateway should be routed inside the network
        - name: ISTIO_META_REQUESTED_NETWORK_VIEW
          value: net-control
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: tls
            port: 15443
            targetPort: 15443
          - name: tls-istiod
            port: 15012
            targetPort: 15012
          - name: tls-webhook
            port: 15017
            targetPort: 15017
$ istioctl --context control manifest apply -f meshinfo/istio-control.yaml
$ kubectl --context control get pods -n istio-system
$ kubectl --context=control get svc istio-eastwestgateway -n istio-system
$ kubectl apply --context=control -n istio-system -f samples/multicluster/expose-istiod.yaml
$ kubectl --context=control apply -n istio-system -f \
  samples/multicluster/expose-services.yaml
$ kubectl --context=remote get namespace istio-system && \
  kubectl --context=remote label namespace istio-system topology.istio.io/network=net-remote
  • Enables the control plane to authenticate connection requests from workloads running in the remote cluster. Without API Server access, the control plane will reject the requests.
  • Enables discovery of service endpoints running in the remote cluster.
$ istioctl x create-remote-secret \
--context=remote \
--name=remote | \
kubectl apply -f - --context=control
  • Creates a service account named istio-reader-service-account in the remote cluster
  • Creates two cluster roles and binds them to istio-reader-service-account in the remote cluster
  • Creates a secret named istio-remote-secret-{remote cluster name} in the control cluster
$ cat meshinfo/istio-remote.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: remote
      network: net-remote
      remotePilotAddress: 10.212.133.59
      caAddress: istiod.istio-system.svc:15012
  components:
    ingressGateways:
    - name: istio-eastwestgateway
      label:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: net-remote
      enabled: true
      k8s:
        env:
        # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
        - name: ISTIO_META_ROUTER_MODE
          value: "sni-dnat"
        # traffic through this gateway should be routed inside the network
        - name: ISTIO_META_REQUESTED_NETWORK_VIEW
          value: net-remote
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: tls
            port: 15443
            targetPort: 15443
          - name: tls-istiod
            port: 15012
            targetPort: 15012
          - name: tls-webhook
            port: 15017
            targetPort: 15017
$ kubectl --context=control -n istio-system \
get svc istio-eastwestgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ istioctl --context remote manifest apply -f meshinfo/istio-remote.yaml
$ kubectl --context remote get pods -n istio-system
$ kubectl --context=remote get svc istio-eastwestgateway -n istio-system
$ kubectl --context=remote apply -n istio-system -f \
  samples/multicluster/expose-services.yaml
Diagram from Google Cloud
$ istioctl proxy status
$ istioctl --context control pc endpoint frontend-59b7d8554c-g64dp.ms-workload | grep ms-workload
$ istioctl --context remote pc endpoint cartservice-7b45d78f99-zbcdh.ms-workload | grep ms-workload
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control
  namespace: istio-system
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  values:
    global:
      meshID: mesh1
      network: net-control
      multiCluster:
        clusterName: control
  components:
    ingressGateways:
    - name: istio-eastwestgateway
      label:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: net-control
      enabled: true
      k8s:
        env:
        # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
        - name: ISTIO_META_ROUTER_MODE
          value: "sni-dnat"
        # traffic through this gateway should be routed inside the network
        - name: ISTIO_META_REQUESTED_NETWORK_VIEW
          value: net-control
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: tls
            port: 15443
            targetPort: 15443
          - name: tls-istiod
            port: 15012
            targetPort: 15012
          - name: tls-webhook
            port: 15017
            targetPort: 15017
  1. Adjust the log level for a pod during troubleshooting
$ istioctl pc log pod-name --level debug
Solution 1: add imagePullSecrets to the istiod (or other relevant) service account
Solution 2: add global configuration
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  values:
    global:
      imagePullSecrets:
      - myregistrykey

Performance/Load Testing

git clone https://github.com/istio/tools.git
cd tools/perf/benchmark
export NAMESPACE=app-workload
export INTERCEPTION_MODE=REDIRECT
export ISTIO_INJECT=false
export LOAD_GEN_TYPE=fortio
export DNS_DOMAIN=local
./setup_test.sh
CLIENTPOD=$(kubectl get pod -n app-workload | grep client | cut -d ' ' -f 1)
for (( k = 2; k < 9000; k=k*2 )); do
  kubectl -n app-workload exec $CLIENTPOD -c captured -- fortio load \
    -jitter=False \
    -c $k \
    -qps 0 \
    -t 60s \
    -a http://fortioserver:8080/echo\?size\=1024
done
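The loop doubles the connection count each iteration; printed out, the ladder it sweeps runs from 2 to 8192 in 13 steps:

```shell
# Print the concurrency levels the benchmark loop iterates through
for (( k = 2; k < 9000; k = k * 2 )); do
  printf '%d ' "$k"
done
echo   # 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192
```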
export NAMESPACE=app-workload
export INTERCEPTION_MODE=REDIRECT
export ISTIO_INJECT=true
export LOAD_GEN_TYPE=fortio
export DNS_DOMAIN=local
./setup_test.sh
$ kubectl edit deployment fortioserver -n app-workload
## add under spec.template.metadata:
annotations:
  proxy.istio.io/config: |-
    concurrency: 8
    proxyStatsMatcher:
      inclusionRegexps:
      - "listener.*.downstream_cx_total"
      - "listener.*.downstream_cx_active"
{__name__=~"envoy_listener_worker_.+_downstream_cx_active"}

When Istio meets Knative

  1. Istio must be used as the Knative ingress for the cluster.
  2. Istio sidecar injection must be enabled:
kubectl label namespace knative-serving istio-injection=enabled
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-serving-tests
  namespace: serving-tests
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["serving-tests", "knative-serving"]
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allowlist-by-paths
  namespace: serving-tests
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        paths:
        - /metrics # path used by system pods to collect metrics
        - /healthz # path probed by system pods
From istio.io
  1. https://thenewstack.io/sailing-faster-with-istio-part-1/
  2. https://kiali.io/docs/installation/installation-guide/install-with-helm/
  3. https://cloud.google.com/architecture/building-gke-multi-cluster-service-mesh-with-istio-shared-control-plane-disparate-networks
  4. https://cloud.google.com/architecture/service-meshes-in-microservices-architecture
  5. “Service Mesh for Mere Mortals” by Bruce Basil Mathews
  6. https://istio.io/latest/docs/ops/deployment/deployment-models/#network-model
  7. https://istio.io/latest/docs/ops/configuration/traffic-management/dns-proxy/
  8. https://tech.olx.com/demystifying-istio-circuit-breaking-27a69cac2ce4
  9. https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/

Jeffrey Wang
