Istio in AWS
Recently I tried to install Istio on AWS. It was only for testing purposes for a webinar, but maybe some parts will be helpful for somebody…
The original web pages can be found here:
Create EKS cluster
Start with provisioning the Amazon EKS in AWS.
Use the MY_DOMAIN variable containing the domain and the LETSENCRYPT_ENVIRONMENT variable. The LETSENCRYPT_ENVIRONMENT variable should be one of:
- staging - Let's Encrypt will create a testing certificate (not valid)
- production - Let's Encrypt will create a valid certificate (use with care)
export MY_DOMAIN=${MY_DOMAIN:-mylabs.dev}
export LETSENCRYPT_ENVIRONMENT=${LETSENCRYPT_ENVIRONMENT:-staging}
echo "${MY_DOMAIN} | ${LETSENCRYPT_ENVIRONMENT}"
Prepare the local working environment
You can skip these steps if you have all the required software already installed.
Install necessary software:
test -x /usr/bin/apt && \
apt update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -y -qq awscli curl gettext-base git openssh-client siege sudo > /dev/null
Install kubectl binary:
if [ ! -x /usr/local/bin/kubectl ]; then
sudo curl -s -Lo /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
sudo chmod a+x /usr/local/bin/kubectl
fi
Install eksctl:
if [ ! -x /usr/local/bin/eksctl ]; then
curl -s -L "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_Linux_amd64.tar.gz" | sudo tar xz -C /usr/local/bin/
fi
Install AWS IAM Authenticator for Kubernetes:
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
sudo curl -s -Lo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator
sudo chmod a+x /usr/local/bin/aws-iam-authenticator
fi
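Optionally, run a quick sanity check that all the client tools are on your PATH (assuming the installation steps above succeeded):
kubectl version --client --short
eksctl version
aws-iam-authenticator version
aws --version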
Configure AWS
Authenticate to AWS using the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
aws configure
...
Create DNS zone:
aws route53 create-hosted-zone --name ${MY_DOMAIN} --caller-reference ${MY_DOMAIN}
Use your domain registrar to change the nameservers for your zone (for example mylabs.dev) to use the Amazon Route 53 nameservers. This is how you can find out the Route 53 nameservers:
aws route53 get-hosted-zone \
--id $(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${MY_DOMAIN}.\`].Id" --output text) \
--query "DelegationSet.NameServers"
Create a policy allowing cert-manager to change Route 53 settings. This will allow cert-manager to generate wildcard SSL certificates issued by the Let's Encrypt certificate authority.
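The content of files/route_53_change_policy.json is not shown in this post. If you do not have the repository containing it, a minimal policy based on the permissions cert-manager documents for its Route 53 DNS-01 solver can be created like this (a sketch, not necessarily the author's original file):
test -d files || mkdir files
cat > files/route_53_change_policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ListHostedZonesByName",
      "Resource": "*"
    }
  ]
}
EOF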
aws iam create-policy \
--policy-name ${USER}-AmazonRoute53Domains-cert-manager \
--description "Policy required by cert-manager to be able to modify Route 53 when generating wildcard certificates using Lets Encrypt" \
--policy-document file://files/route_53_change_policy.json
Create a user which will use the policy above, allowing cert-manager to change Route 53 settings:
aws iam create-user --user-name ${USER}-eks-cert-manager-route53
POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName==\`${USER}-AmazonRoute53Domains-cert-manager\`].{ARN:Arn}" --output text)
aws iam attach-user-policy --user-name "${USER}-eks-cert-manager-route53" --policy-arn $POLICY_ARN
aws iam create-access-key --user-name ${USER}-eks-cert-manager-route53 > $HOME/.aws/${USER}-eks-cert-manager-route53-${MY_DOMAIN}
export EKS_CERT_MANAGER_ROUTE53_AWS_ACCESS_KEY_ID=$(awk -F\" "/AccessKeyId/ { print \$4 }" $HOME/.aws/${USER}-eks-cert-manager-route53-${MY_DOMAIN})
export EKS_CERT_MANAGER_ROUTE53_AWS_SECRET_ACCESS_KEY=$(awk -F\" "/SecretAccessKey/ { print \$4 }" $HOME/.aws/${USER}-eks-cert-manager-route53-${MY_DOMAIN})
The AccessKeyId and SecretAccessKey are needed for creating the ClusterIssuer definition for cert-manager.
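Before continuing, it is worth verifying that both exported variables are non-empty:
echo "${EKS_CERT_MANAGER_ROUTE53_AWS_ACCESS_KEY_ID}"
test -n "${EKS_CERT_MANAGER_ROUTE53_AWS_SECRET_ACCESS_KEY}" && echo "SecretAccessKey is set"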
Create Amazon EKS
Generate SSH keys if they do not already exist:
test -f $HOME/.ssh/id_rsa || ( install -m 0700 -d $HOME/.ssh && \
ssh-keygen -b 2048 -t rsa -f $HOME/.ssh/id_rsa -q -N "" )
Create the Amazon EKS cluster in AWS using eksctl. It's a tool from Weaveworks, based on the official AWS CloudFormation templates, which will be used to launch and configure our EKS cluster and nodes.
eksctl create cluster \
--name=${USER}-k8s-istio-webinar \
--tags "Application=Istio,Owner=${USER},Environment=Test" \
--region=eu-central-1 \
--node-type=t3.medium \
--ssh-access \
--node-ami=auto \
--node-labels "Application=Istio,Owner=${USER},Environment=Test" \
--kubeconfig=kubeconfig.conf
Output:
[ℹ] using region eu-central-1
[ℹ] setting availability zones to [eu-central-1a eu-central-1b eu-central-1c]
[ℹ] subnets for eu-central-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for eu-central-1c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-5be027b5" will use "ami-07c77043ca4cb9123" [AmazonLinux2/1.11]
[ℹ] importing SSH public key "/root/.ssh/id_rsa.pub" as "eksctl-pruzicka-k8s-istio-webinar-nodegroup-ng-5be027b5-f8:37:5c:d1:62:35:1e:21:66:a1:8e:3d:19:d0:8f:86"
[ℹ] creating EKS cluster "pruzicka-k8s-istio-webinar" in "eu-central-1" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --name=pruzicka-k8s-istio-webinar'
[ℹ] building cluster stack "eksctl-pruzicka-k8s-istio-webinar-cluster"
[ℹ] creating nodegroup stack "eksctl-pruzicka-k8s-istio-webinar-nodegroup-ng-5be027b5"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-5be027b5
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-5be027b5
[✔] all EKS cluster resource for "pruzicka-k8s-istio-webinar" had been created
[✔] saved kubeconfig as "kubeconfig.conf"
[ℹ] adding role "arn:aws:iam::822044714040:role/eksctl-pruzicka-k8s-istio-webinar-NodeInstanceRole-DVZ6BH8KDQ1K" to auth ConfigMap
[ℹ] nodegroup "ng-5be027b5" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-5be027b5"
[ℹ] nodegroup "ng-5be027b5" has 2 node(s)
[ℹ] node "ip-192-168-26-217.eu-central-1.compute.internal" is ready
[ℹ] node "ip-192-168-69-19.eu-central-1.compute.internal" is ready
[ℹ] kubectl command should work with "kubeconfig.conf", try 'kubectl --kubeconfig=kubeconfig.conf get nodes'
[✔] EKS cluster "pruzicka-k8s-istio-webinar" in "eu-central-1" region is ready
Check if the new EKS cluster is available:
export KUBECONFIG=$PWD/kubeconfig.conf
kubectl get nodes -o wide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-26-217.eu-central-1.compute.internal Ready <none> 4m v1.11.9 192.168.26.217 18.194.16.192 Amazon Linux 2 4.14.104-95.84.amzn2.x86_64 docker://18.6.1
ip-192-168-69-19.eu-central-1.compute.internal Ready <none> 4m v1.11.9 192.168.69.19 18.184.88.98 Amazon Linux 2 4.14.104-95.84.amzn2.x86_64 docker://18.6.1
Both worker nodes should be accessible via SSH:
for EXTERNAL_IP in $(kubectl get nodes --output=jsonpath="{.items[*].status.addresses[?(@.type==\"ExternalIP\")].address}"); do
echo "*** ${EXTERNAL_IP}"
ssh -q -o StrictHostKeyChecking=no -l ec2-user ${EXTERNAL_IP} uptime
done
Output:
*** 18.194.16.192
09:39:19 up 5 min, 0 users, load average: 0.06, 0.17, 0.08
*** 18.184.88.98
09:39:20 up 5 min, 0 users, load average: 0.18, 0.12, 0.05
At the end of the output you should see 2 IP addresses, both accessible via SSH using the key pair ~/.ssh/id_rsa generated earlier.
Install Helm
Helm Architecture:
Install Helm binary:
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
Output:
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.13.1-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
Install Tiller (the Helm server-side component) into the Kubernetes cluster:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --wait --service-account tiller
helm repo update
Output:
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Check that Tiller was installed properly:
kubectl get pods -l app=helm -n kube-system
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system tiller-deploy-54fc6d9ccc-48n4w 1/1 Running 0 19s
Istio — Installation
Istio architecture:
Install Istio
Either download Istio directly from https://github.com/istio/istio/releases or get the latest version by using curl:
export ISTIO_VERSION="1.1.0"
test -d tmp || mkdir tmp
cd tmp
curl -sL https://git.io/getLatestIstio | sh -
Output:
Downloading istio-1.1.0 from https://github.com/istio/istio/releases/download/1.1.0/istio-1.1.0-linux.tar.gz ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 614 0 614 0 0 884 0 --:--:-- --:--:-- --:--:-- 883
100 15.0M 100 15.0M 0 0 5252k 0 0:00:02 0:00:02 --:--:-- 12.4M
Downloaded into istio-1.1.0:
LICENSE README.md bin install istio.VERSION samples tools
Add /mnt/k8s-istio-webinar/k8s-istio-webinar/tmp/istio-1.1.0/bin to your path; e.g copy paste in your shell and/or ~/.profile:
export PATH="$PATH:/mnt/k8s-istio-webinar/k8s-istio-webinar/tmp/istio-1.1.0/bin"
Change the directory to the Istio installation files location:
cd istio*
Install istioctl:
test -x /usr/local/bin/istioctl || sudo mv bin/istioctl /usr/local/bin/
Install the istio-init chart to bootstrap all of Istio's CRDs:
helm install install/kubernetes/helm/istio-init --wait \
--name istio-init --namespace istio-system --set certmanager.enabled=true
sleep 30
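Instead of relying on a fixed sleep, you can check that the CRDs have been registered; the Istio 1.1 documentation expects around 58 CRDs when cert-manager is enabled, but the exact count depends on the release:
kubectl get crds | grep -E 'istio.io|certmanager.k8s.io' | wc -l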
Install Istio with add-ons (Kiali, Jaeger, Grafana, Prometheus, cert-manager):
helm install install/kubernetes/helm/istio --wait --name istio --namespace istio-system \
--set certmanager.enabled=true \
--set certmanager.email=petr.ruzicka@gmail.com \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set global.k8sIngress.enabled=true \
--set global.k8sIngress.enableHttps=true \
--set global.k8sIngress.gatewayName=ingressgateway \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set kiali.createDemoSecret=true \
--set kiali.contextPath=/ \
--set kiali.dashboard.grafanaURL=http://grafana.${MY_DOMAIN}/ \
--set kiali.dashboard.jaegerURL=http://jaeger.${MY_DOMAIN}/ \
--set servicegraph.enabled=true \
--set tracing.enabled=true
Create DNS records
Create DNS records for mylabs.dev pointing to the load balancer created by Istio:
export LOADBALANCER_HOSTNAME=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
export CANONICAL_HOSTED_ZONE_NAME_ID=$(aws elb describe-load-balancers --query "LoadBalancerDescriptions[?DNSName==\`$LOADBALANCER_HOSTNAME\`].CanonicalHostedZoneNameID" --output text)
export HOSTED_ZONE_ID=$(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${MY_DOMAIN}.\`].Id" --output text)
cat << EOF | aws route53 change-resource-record-sets --hosted-zone-id ${HOSTED_ZONE_ID} --change-batch=file:///dev/stdin
{
"Comment": "A new record set for the zone.",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "*.${MY_DOMAIN}.",
"Type": "A",
"AliasTarget":{
"HostedZoneId": "${CANONICAL_HOSTED_ZONE_NAME_ID}",
"DNSName": "dualstack.${LOADBALANCER_HOSTNAME}",
"EvaluateTargetHealth": false
}
}
},
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "${MY_DOMAIN}.",
"Type": "A",
"AliasTarget":{
"HostedZoneId": "${CANONICAL_HOSTED_ZONE_NAME_ID}",
"DNSName": "dualstack.${LOADBALANCER_HOSTNAME}",
"EvaluateTargetHealth": false
}
}
}
]
}
EOF
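You can verify the new records in Route 53 directly, or with dig once the nameserver delegation has propagated (propagation can take a while; the test hostname below is arbitrary and dig may need to be installed separately):
aws route53 list-resource-record-sets --hosted-zone-id ${HOSTED_ZONE_ID} --query "ResourceRecordSets[?Type=='A'].Name"
dig +short test.${MY_DOMAIN}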
Create SSL certificate using Let’s Encrypt
Create the ClusterIssuer and Certificate for Route 53 used by cert-manager. This will allow Let's Encrypt to generate the certificate. The Route 53 (DNS-01) method of requesting a certificate from Let's Encrypt must be used to create the wildcard certificate *.mylabs.dev (details here).
export EKS_CERT_MANAGER_ROUTE53_AWS_SECRET_ACCESS_KEY_BASE64=$(echo -n "$EKS_CERT_MANAGER_ROUTE53_AWS_SECRET_ACCESS_KEY" | base64)
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: aws-route53-secret-access-key-secret
namespace: istio-system
data:
secret-access-key: $EKS_CERT_MANAGER_ROUTE53_AWS_SECRET_ACCESS_KEY_BASE64
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging-dns
namespace: istio-system
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: petr.ruzicka@gmail.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging-dns
dns01:
# Here we define a list of DNS-01 providers that can solve DNS challenges
providers:
- name: aws-route53
route53:
accessKeyID: ${EKS_CERT_MANAGER_ROUTE53_AWS_ACCESS_KEY_ID}
region: eu-central-1
secretAccessKeySecretRef:
name: aws-route53-secret-access-key-secret
key: secret-access-key
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-production-dns
namespace: istio-system
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: petr.ruzicka@gmail.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-production-dns
dns01:
# Here we define a list of DNS-01 providers that can solve DNS challenges
# https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html
providers:
- name: aws-route53
route53:
accessKeyID: ${EKS_CERT_MANAGER_ROUTE53_AWS_ACCESS_KEY_ID}
region: eu-central-1
secretAccessKeySecretRef:
name: aws-route53-secret-access-key-secret
key: secret-access-key
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: ingress-cert-${LETSENCRYPT_ENVIRONMENT}
namespace: istio-system
spec:
secretName: ingress-cert-${LETSENCRYPT_ENVIRONMENT}
issuerRef:
kind: ClusterIssuer
name: letsencrypt-${LETSENCRYPT_ENVIRONMENT}-dns
commonName: "*.${MY_DOMAIN}"
dnsNames:
- "*.${MY_DOMAIN}"
- ${MY_DOMAIN}
acme:
config:
- dns01:
provider: aws-route53
domains:
- "*.${MY_DOMAIN}"
- ${MY_DOMAIN}
EOF
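The DNS-01 challenge usually takes a few minutes to complete. You can watch the Certificate resource until its Ready condition turns True:
kubectl -n istio-system get certificates
kubectl -n istio-system get certificate ingress-cert-${LETSENCRYPT_ENVIRONMENT} -o jsonpath="{.status.conditions[?(@.type=='Ready')].status}"; echo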
Configure the istio-ingressgateway to deliver TLS certificates via SDS:
kubectl -n istio-system patch gateway istio-autogenerated-k8s-ingress \
--type=json \
-p="[{\"op\": \"replace\", \"path\": \"/spec/servers/1/tls\", \"value\": {\"credentialName\": \"ingress-cert-${LETSENCRYPT_ENVIRONMENT}\", \"mode\": \"SIMPLE\", \"privateKey\": \"sds\", \"serverCertificate\": \"sds\"}}]"
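A quick check that the patch was applied: the TLS block of the second server entry should now reference the SDS-delivered certificate.
kubectl -n istio-system get gateway istio-autogenerated-k8s-ingress -o jsonpath="{.spec.servers[1].tls}"; echo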
Check and configure Istio
Allow the default namespace to use Istio injection:
kubectl label namespace default istio-injection=enabled
Check namespaces:
kubectl get namespace -L istio-injection
Output:
NAME STATUS AGE ISTIO-INJECTION
default Active 19m enabled
istio-system Active 7m
kube-public Active 19m
kube-system Active 19m
See the Istio components:
kubectl get --namespace=istio-system svc,deployment,pods,job,horizontalpodautoscaler,destinationrule
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 10.100.84.93 <none> 3000/TCP 7m
service/istio-citadel ClusterIP 10.100.203.5 <none> 8060/TCP,15014/TCP 7m
service/istio-galley ClusterIP 10.100.224.231 <none> 443/TCP,15014/TCP,9901/TCP 7m
service/istio-ingressgateway LoadBalancer 10.100.241.162 abd0be556520611e9ac0602dc9c152bf-2144127322.eu-central-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31705/TCP,15030:30101/TCP,15031:30032/TCP,15032:32493/TCP,15443:31895/TCP,15020:31909/TCP 7m
service/istio-pilot ClusterIP 10.100.68.4 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 7m
service/istio-policy ClusterIP 10.100.24.13 <none> 9091/TCP,15004/TCP,15014/TCP 7m
service/istio-sidecar-injector ClusterIP 10.100.252.24 <none> 443/TCP 7m
service/istio-telemetry ClusterIP 10.100.103.164 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 7m
service/jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 7m
service/jaeger-collector ClusterIP 10.100.32.192 <none> 14267/TCP,14268/TCP 7m
service/jaeger-query ClusterIP 10.100.196.113 <none> 16686/TCP 7m
service/kiali ClusterIP 10.100.66.131 <none> 20001/TCP 7m
service/prometheus ClusterIP 10.100.246.253 <none> 9090/TCP 7m
service/servicegraph ClusterIP 10.100.163.157 <none> 8088/TCP 7m
service/tracing ClusterIP 10.100.90.197 <none> 80/TCP 7m
service/zipkin ClusterIP 10.100.8.55 <none> 9411/TCP 7m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/certmanager 1 1 1 1 7m
deployment.extensions/grafana 1 1 1 1 7m
deployment.extensions/istio-citadel 1 1 1 1 7m
deployment.extensions/istio-galley 1 1 1 1 7m
deployment.extensions/istio-ingressgateway 1 1 1 1 7m
deployment.extensions/istio-pilot 1 1 1 1 7m
deployment.extensions/istio-policy 1 1 1 1 7m
deployment.extensions/istio-sidecar-injector 1 1 1 1 7m
deployment.extensions/istio-telemetry 1 1 1 1 7m
deployment.extensions/istio-tracing 1 1 1 1 7m
deployment.extensions/kiali 1 1 1 1 7m
deployment.extensions/prometheus 1 1 1 1 7m
deployment.extensions/servicegraph 1 1 1 1 7m

NAME READY STATUS RESTARTS AGE
pod/certmanager-7478689867-6n8r7 1/1 Running 0 7m
pod/grafana-7b46bf6b7c-w7ms2 1/1 Running 0 7m
pod/istio-citadel-75fdb679db-v8bqh 1/1 Running 0 7m
pod/istio-galley-c864b5c86-8xfpm 1/1 Running 0 7m
pod/istio-ingressgateway-6cb65d86cb-5ptgp 2/2 Running 0 7m
pod/istio-init-crd-10-stcw2 0/1 Completed 0 7m
pod/istio-init-crd-11-fgdh9 0/1 Completed 0 7m
pod/istio-init-crd-certmanager-10-rhmv9 0/1 Completed 0 7m
pod/istio-init-crd-certmanager-11-dv24d 0/1 Completed 0 7m
pod/istio-pilot-f4c98cfbf-pwp45 2/2 Running 0 7m
pod/istio-policy-6cbbd844dd-4dzbx 2/2 Running 2 7m
pod/istio-sidecar-injector-7b47cb4689-5x7ph 1/1 Running 0 7m
pod/istio-telemetry-ccc4df498-w77hk 2/2 Running 2 7m
pod/istio-tracing-75dd89b8b4-frg8w 1/1 Running 0 7m
pod/kiali-7787748c7d-lb454 1/1 Running 0 7m
pod/prometheus-89bc5668c-54pdj 1/1 Running 0 7m
pod/servicegraph-5d4b49848-cscbp 1/1 Running 1 7m

NAME DESIRED SUCCESSFUL AGE
job.batch/istio-init-crd-10 1 1 7m
job.batch/istio-init-crd-11 1 1 7m
job.batch/istio-init-crd-certmanager-10 1 1 7m
job.batch/istio-init-crd-certmanager-11 1 1 7m

NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/istio-ingressgateway Deployment/istio-ingressgateway <unknown>/80% 1 5 1 7m
horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot <unknown>/80% 1 5 1 7m
horizontalpodautoscaler.autoscaling/istio-policy Deployment/istio-policy <unknown>/80% 1 5 1 7m
horizontalpodautoscaler.autoscaling/istio-telemetry Deployment/istio-telemetry <unknown>/80% 1 5 1 7m

NAME HOST AGE
destinationrule.networking.istio.io/istio-policy istio-policy.istio-system.svc.cluster.local 7m
destinationrule.networking.istio.io/istio-telemetry istio-telemetry.istio-system.svc.cluster.local 7m
Configure the Istio services (Jaeger, Prometheus, Grafana, Kiali, Servicegraph) to be visible externally:
cat << EOF | kubectl apply -f -
---
##################
# Grafana
##################
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: grafana-destination-rule
namespace: istio-system
spec:
host: grafana.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana-virtual-service
namespace: istio-system
spec:
hosts:
- "grafana.${MY_DOMAIN}"
gateways:
- istio-autogenerated-k8s-ingress
http:
- route:
- destination:
host: grafana.istio-system.svc.cluster.local
port:
number: 3000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: grafana-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15031
name: http2-grafana
protocol: HTTP2
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana-vs
namespace: istio-system
spec:
hosts:
- "*"
gateways:
- grafana-gateway
http:
- match:
- port: 15031
route:
- destination:
host: grafana.istio-system.svc.cluster.local
port:
number: 3000
---
##################
# Jaeger
##################
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: jaeger-destination-rule
namespace: istio-system
spec:
host: tracing.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: jaeger-virtual-service
namespace: istio-system
spec:
hosts:
- "jaeger.${MY_DOMAIN}"
gateways:
- istio-autogenerated-k8s-ingress
http:
- route:
- destination:
host: tracing.istio-system.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: tracing-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15032
name: http2-tracing
protocol: HTTP2
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: tracing-vs
namespace: istio-system
spec:
hosts:
- "*"
gateways:
- tracing-gateway
http:
- match:
- port: 15032
route:
- destination:
host: tracing.istio-system.svc.cluster.local
port:
number: 80
---
##################
# Kiali
##################
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: kiali-destination-rule
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali-virtual-service
namespace: istio-system
spec:
hosts:
- "kiali.${MY_DOMAIN}"
gateways:
- istio-autogenerated-k8s-ingress
http:
- route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: kiali-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15029
name: http2-kiali
protocol: HTTP2
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali-vs
namespace: istio-system
spec:
hosts:
- "*"
gateways:
- kiali-gateway
http:
- match:
- port: 15029
route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
---
##################
# Prometheus
##################
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: prometheus-destination-rule
namespace: istio-system
spec:
host: prometheus.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prometheus-virtual-service
namespace: istio-system
spec:
hosts:
- "prometheus.${MY_DOMAIN}"
gateways:
- istio-autogenerated-k8s-ingress
http:
- route:
- destination:
host: prometheus.istio-system.svc.cluster.local
port:
number: 9090
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: prometheus-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15030
name: http2-prometheus
protocol: HTTP2
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: prometheus-vs
namespace: istio-system
spec:
hosts:
- "*"
gateways:
- prometheus-gateway
http:
- match:
- port: 15030
route:
- destination:
host: prometheus.istio-system.svc.cluster.local
port:
number: 9090
---
##################
# Servicegraph
##################
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: servicegraph-destination-rule
namespace: istio-system
spec:
host: servicegraph.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: servicegraph-virtual-service
namespace: istio-system
spec:
hosts:
- "servicegraph.${MY_DOMAIN}"
gateways:
- istio-autogenerated-k8s-ingress
http:
- route:
- destination:
host: servicegraph.istio-system.svc.cluster.local
port:
number: 8088
EOF
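A short check that the Gateways, VirtualServices and DestinationRules for the add-ons were created:
kubectl -n istio-system get gateways,virtualservices,destinationrules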
Istio — Bookinfo Application
Deploy the Bookinfo demo application:
# kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
tail -40 samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Output:
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
service: productpage
spec:
ports:
- port: 9080
name: http
selector:
app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: productpage-v1
labels:
app: productpage
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: productpage
version: v1
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1:1.10.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
Example with istioctl:
istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | tail -172
Output:
...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: productpage
version: v1
name: productpage-v1
spec:
replicas: 1
strategy: {}
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"1d03c7b8369fddca69b40289a75eabb02e48b68ad5516e6975265f215d382f74","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: productpage
version: v1
spec:
containers:
- image: istio/examples-bookinfo-productpage-v1:1.10.1
imagePullPolicy: IfNotPresent
name: productpage
ports:
- containerPort: 9080
resources: {}
...
image: docker.io/istio/proxyv2:1.1.0
imagePullPolicy: IfNotPresent
name: istio-proxy
ports:
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
The Bookinfo application is broken into four separate microservices:
- productpage - the productpage microservice calls the details and reviews microservices to populate the page.
- details - the details microservice contains book information.
- reviews - the reviews microservice contains book reviews. It also calls the ratings microservice.
- ratings - the ratings microservice contains book ranking information that accompanies a book review.
There are 3 versions of the reviews microservice:
- Version v1 - doesn't call the ratings service.
- Version v2 - calls the ratings service, and displays each rating as 1 to 5 black stars.
- Version v3 - calls the ratings service, and displays each rating as 1 to 5 red stars.
Bookinfo application architecture:
Confirm all services and pods are correctly defined and running:
kubectl get svc,deployment,pods -o wide
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/details ClusterIP 10.100.84.225 <none> 9080/TCP 2m app=details
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 21m <none>
service/productpage ClusterIP 10.100.111.89 <none> 9080/TCP 2m app=productpage
service/ratings ClusterIP 10.100.217.110 <none> 9080/TCP 2m app=ratings
service/reviews ClusterIP 10.100.83.162 <none> 9080/TCP 2m app=reviews

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/details-v1 1 1 1 1 2m details istio/examples-bookinfo-details-v1:1.10.1 app=details,version=v1
deployment.extensions/productpage-v1 1 1 1 1 2m productpage istio/examples-bookinfo-productpage-v1:1.10.1 app=productpage,version=v1
deployment.extensions/ratings-v1 1 1 1 1 2m ratings istio/examples-bookinfo-ratings-v1:1.10.1 app=ratings,version=v1
deployment.extensions/reviews-v1 1 1 1 1 2m reviews istio/examples-bookinfo-reviews-v1:1.10.1 app=reviews,version=v1
deployment.extensions/reviews-v2 1 1 1 1 2m reviews istio/examples-bookinfo-reviews-v2:1.10.1 app=reviews,version=v2
deployment.extensions/reviews-v3 1 1 1 1 2m reviews istio/examples-bookinfo-reviews-v3:1.10.1 app=reviews,version=v3

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/details-v1-68868454f5-sphh7 2/2 Running 0 2m 192.168.13.128 ip-192-168-26-217.eu-central-1.compute.internal <none>
pod/productpage-v1-5cb458d74f-wwcqc 2/2 Running 0 2m 192.168.76.160 ip-192-168-69-19.eu-central-1.compute.internal <none>
pod/ratings-v1-76f4c9765f-lzgpb 2/2 Running 0 2m 192.168.91.69 ip-192-168-69-19.eu-central-1.compute.internal <none>
pod/reviews-v1-56f6855586-rnkjj 2/2 Running 0 2m 192.168.77.69 ip-192-168-69-19.eu-central-1.compute.internal <none>
pod/reviews-v2-65c9df47f8-sq2vh 2/2 Running 0 2m 192.168.8.68 ip-192-168-26-217.eu-central-1.compute.internal <none>
pod/reviews-v3-6cf47594fd-nw8hv 2/2 Running 0 2m 192.168.6.236 ip-192-168-26-217.eu-central-1.compute.internal <none>
Check the container details — you should also see the istio-proxy container next to the productpage container.
kubectl describe pod -l app=productpage
Output:
...
Containers:
productpage:
Container ID: docker://62984fbf7913e8cd91e5188571c7efad781880966a0d9b36279f368ad9cbf2a0
Image: istio/examples-bookinfo-productpage-v1:1.10.1
...
istio-proxy:
Container ID: docker://17a2c6c87b1e8f315417b284973452332ea34162543af46776075ad1f43db327
Image: docker.io/istio/proxyv2:1.1.0
...
The kubectl logs command will show you the output of the Envoy proxy (istio-proxy):
kubectl logs $(kubectl get pod -l app=productpage -o jsonpath="{.items[0].metadata.name}") istio-proxy | head -70
Output:
...
2019-03-29T09:49:07.660863Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15010
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: productpage.default
statNameLength: 189
tracing:
zipkin:
address: zipkin.istio-system:9411
2019-03-29T09:49:07.660886Z info Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2019-03-29T09:49:07.660896Z info PilotSAN []string(nil)
2019-03-29T09:49:07.660996Z info Opening status port 15020
2019-03-29T09:49:07.661159Z info Starting proxy agent
2019-03-29T09:49:07.661340Z info Received new config, resetting budget
2019-03-29T09:49:07.661349Z info Reconciling retry (budget 10)
2019-03-29T09:49:07.661359Z info Epoch 0 starting
2019-03-29T09:49:07.662335Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster productpage.default --service-node sidecar~192.168.76.160~productpage-v1-5cb458d74f-wwcqc.default~default.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warning --concurrency 2]
...
Define the Istio gateway for the application:
cat samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
sleep 5
Output:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
Create and display default destination rules (subsets) for the Bookinfo services:
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
kubectl get destinationrules -o yaml
Output:
...
- apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
...
name: reviews
namespace: default
...
spec:
host: reviews
subsets:
- labels:
version: v1
name: v1
- labels:
version: v2
name: v2
- labels:
version: v3
name: v3
...
Confirm that the gateway, virtual service, and destination rules have been created:
kubectl get gateway,virtualservice,destinationrule
Output:
NAME AGE
gateway.networking.istio.io/bookinfo-gateway 13s

NAME GATEWAYS HOSTS AGE
virtualservice.networking.istio.io/bookinfo [bookinfo-gateway] [*] 13s

NAME HOST AGE
destinationrule.networking.istio.io/details details 8s
destinationrule.networking.istio.io/productpage productpage 8s
destinationrule.networking.istio.io/ratings ratings 8s
destinationrule.networking.istio.io/reviews reviews 8s
Check the SSL certificate:
echo | openssl s_client -showcerts -connect ${MY_DOMAIN}:443 2>/dev/null | openssl x509 -inform pem -noout -text
Output:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
03:ba:eb:a2:34:43:0c:ae:7b:63:64:4d:4a:ee:c1:25:b4:35
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
Validity
Not Before: Mar 29 08:46:52 2019 GMT
Not After : Jun 27 08:46:52 2019 GMT
Subject: CN = *.mylabs.dev
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
...
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
AB:60:E9:ED:3F:40:72:83:7D:62:08:F9:EB:8F:EA:1C:42:CC:76:4E
X509v3 Authority Key Identifier:
keyid:A8:4A:6A:63:04:7D:DD:BA:E6:D1:39:B7:A6:45:65:EF:F3:A8:EC:A1
Authority Information Access:
OCSP - URI:http://ocsp.int-x3.letsencrypt.org
CA Issuers - URI:http://cert.int-x3.letsencrypt.org/
X509v3 Subject Alternative Name:
DNS:*.mylabs.dev, DNS:mylabs.dev
X509v3 Certificate Policies:
Policy: 2.23.140.1.2.1
Policy: 1.3.6.1.4.1.44947.1.1.1
CPS: http://cps.letsencrypt.org
...
You can see it in the certificate transparency log: https://crt.sh/?q=mylabs.dev
You can also query cert-manager directly to see the status of the certificate:
kubectl describe certificates ingress-cert-${LETSENCRYPT_ENVIRONMENT} -n istio-system
Output:
Name: ingress-cert-production
Namespace: istio-system
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"ingress-cert-production","namespace"...
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-29T09:43:02Z
Generation: 1
Resource Version: 2854
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/certificates/ingress-cert-production
UID: 0b677790-5207-11e9-ac06-02dc9c152bfa
Spec:
Acme:
Config:
Dns 01:
Provider: aws-route53
Domains:
*.mylabs.dev
mylabs.dev
Common Name: *.mylabs.dev
Dns Names:
*.mylabs.dev
mylabs.dev
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-production-dns
Secret Name: ingress-cert-production
Status:
Conditions:
Last Transition Time: 2019-03-29T09:46:53Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
Not After: 2019-06-27T08:46:52Z
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning IssuerNotReady 9m9s (x2 over 9m9s) cert-manager Issuer letsencrypt-production-dns not ready
Normal Generated 9m8s cert-manager Generated new private key
Normal OrderCreated 9m8s cert-manager Created Order resource "ingress-cert-production-3383842614"
Normal OrderComplete 5m18s cert-manager Order "ingress-cert-production-3383842614" completed successfully
Normal CertIssued 5m18s cert-manager Certificate issued successfully
Confirm the app is running:
curl -o /dev/null -s -w "%{http_code}" http://${MY_DOMAIN}/productpage; echo
Output:
200
Generate some traffic for the next 10 minutes to gather some data for monitoring:
siege --log=/tmp/siege --concurrent=1 -q --internet --time=10M http://${MY_DOMAIN}/productpage &> /dev/null &
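If siege is not installed, a plain curl loop running in the background generates comparable traffic (a simple substitute, not part of the original setup):
while true; do curl -s -o /dev/null http://${MY_DOMAIN}/productpage; sleep 1; done &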
In case of DNS issues you can reach the services directly via the load balancer ports (see the example after the list below):
kubectl -n istio-system get service istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"; echo
Output:
abd0be556520611e9ac0602dc9c152bf-2144127322.eu-central-1.elb.amazonaws.com
- Kiali: http://<IP ADDRESS OF CLUSTER INGRESS>:15029
- Prometheus: http://<IP ADDRESS OF CLUSTER INGRESS>:15030
- Grafana: http://<IP ADDRESS OF CLUSTER INGRESS>:15031
- Tracing: http://<IP ADDRESS OF CLUSTER INGRESS>:15032
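For example, with a helper variable INGRESS_HOST (introduced here only for illustration) you can check that Prometheus answers on port 15030:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
curl -s -o /dev/null -w "%{http_code}\n" http://${INGRESS_HOST}:15030/graph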
Open the Bookinfo site in your browser http://mylabs.dev/productpage and refresh the page several times — you should see different versions of reviews shown in productpage, presented in a round robin style (red stars, black stars, no stars), since we haven’t yet used Istio to control the version routing.
Check the flows in Kiali:
Open the browser with these pages:
- Servicegraph: https://servicegraph.mylabs.dev/force/forcegraph.html
- Servicegraph: https://servicegraph.mylabs.dev/dotviz
- Kiali: https://kiali.mylabs.dev (admin/admin)
- Prometheus - https://prometheus.mylabs.dev/graph?g0.range_input=1h&g0.expr=istio_requests_total&g0.tab=0
- Prometheus - Total count of all requests to the productpage service: https://prometheus.mylabs.dev/graph?g0.range_input=1h&g0.expr=istio_requests_total%7Bdestination_service%3D%22productpage.default.svc.cluster.local%22%7D&g0.tab=0
- Prometheus - Total count of all requests to v1 of the reviews service: https://prometheus.mylabs.dev/graph?g0.range_input=1h&g0.expr=istio_requests_total%7Bdestination_service%3D%22reviews.default.svc.cluster.local%22%2C%20destination_version%3D%22v1%22%7D&g0.tab=0
- Prometheus - Rate of requests over the past 5 minutes to all instances of the productpage service: https://prometheus.mylabs.dev/graph?g0.range_input=1h&g0.expr=rate(istio_requests_total%7Bdestination_service%3D~%22productpage.*%22%2C%20response_code%3D%22200%22%7D%5B5m%5D)&g0.tab=0
- Grafana: https://grafana.mylabs.dev Grafana -> Home -> Istio -> Istio Performance Dashboard
- Grafana: https://grafana.mylabs.dev Grafana -> Home -> Istio -> Istio Service Dashboard
- Grafana: https://grafana.mylabs.dev Grafana -> Home -> Istio -> Istio Workload Dashboard
- Grafana: https://grafana.mylabs.dev Grafana -> Home -> Istio -> Istio Galley Dashboard
- Grafana: https://grafana.mylabs.dev Grafana -> Home -> Istio -> Istio Mixer Dashboard
- Grafana: https://grafana.mylabs.dev Grafana -> Home -> Istio -> Istio Pilot Dashboard
Istio — Configuring Request Routing
https://istio.io/docs/tasks/traffic-management/request-routing/
This part shows you how to route requests dynamically to multiple versions of a microservice.
Apply a virtual service
Apply and display the virtual services which will route all traffic to v1 of each microservice:
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
kubectl get virtualservices -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
...
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
...
spec:
gateways:
- bookinfo-gateway
- mesh
hosts:
- productpage
http:
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
...
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
...
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
Open the Bookinfo site in your browser http://mylabs.dev/productpage and notice that the reviews part of the page displays with no rating stars, no matter how many times you refresh.
Route based on user identity
https://istio.io/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity
All traffic from a user named jason will be routed to the service reviews:v2 by forwarding HTTP requests with a custom end-user header to the appropriate reviews service.
Enable user-based routing:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
kubectl get virtualservice reviews -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
...
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
On the /productpage of the Bookinfo app, log in as user jason and refresh the browser. The black star ratings appear next to each review.
Log in as another user (pick any name you wish) and refresh the browser. Now the stars are gone. This is because traffic is routed to reviews:v1 for all users except jason.
You can do the same with the user-agent header or the URI, for example:
...
http:
- match:
- headers:
user-agent:
regex: '.*Firefox.*'
...
http:
- match:
- uri:
prefix: /api/v1
...
Istio — Injecting an HTTP delay fault
https://istio.io/docs/tasks/traffic-management/fault-injection/#injecting-an-http-delay-fault
Inject a 7 second delay between the reviews:v2 and ratings microservices for user jason:
kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml
kubectl get virtualservice ratings -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
...
spec:
hosts:
- ratings
http:
- fault:
delay:
fixedDelay: 7s
percent: 100
match:
- headers:
end-user:
exact: jason
route:
- destination:
host: ratings
subset: v1
- route:
- destination:
host: ratings
subset: v1
On the /productpage, log in as user jason and you should see:
Error fetching product reviews!
Sorry, product reviews are currently unavailable for this book.
Open the Developer Tools menu (F12) -> Network tab — the web page actually loads in about 6 seconds.
The following example introduces a 5 second delay in 10% of the requests to the ratings:v1 microservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- fault:
delay:
percent: 10
fixedDelay: 5s
route:
- destination:
host: ratings
subset: v1
Istio — Injecting an HTTP abort fault
https://istio.io/docs/tasks/traffic-management/fault-injection/#injecting-an-http-abort-fault
Let's introduce an HTTP abort to the ratings microservice for the test user jason.
Create a fault injection rule to send an HTTP abort for user jason:
kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml
kubectl get virtualservice ratings -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
...
spec:
hosts:
- ratings
http:
- fault:
abort:
httpStatus: 500
percent: 100
match:
- headers:
end-user:
exact: jason
route:
- destination:
host: ratings
subset: v1
- route:
- destination:
host: ratings
subset: v1
On the /productpage, log in as user jason - the page loads immediately and the product ratings not available message appears: Ratings service is currently unavailable
Check the flows in the Kiali graph, where you should see the red (failing) communication between reviews:v2 and ratings.
The following example returns an HTTP 400 error code for 10% of the requests to the ratings:v1 service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- fault:
abort:
percent: 10
httpStatus: 400
route:
- destination:
host: ratings
subset: v1
Istio — Weight-based routing
https://istio.io/docs/tasks/traffic-management/traffic-shifting/#apply-weight-based-routing
In Canary Deployments, newer versions of services are incrementally rolled out to users to minimize the risk and impact of any bugs introduced by the newer version.
Route a percentage of traffic to one service or another — send 50% of traffic to reviews:v1 and 50% to reviews:v3, and finally complete the migration by sending 100% of traffic to reviews:v3.
Route all traffic to the v1 version of each microservice:
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Transfer 50% of the traffic from reviews:v1 to reviews:v3:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
kubectl get virtualservice reviews -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
...
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
Refresh the /productpage in your browser and you will now see red colored star ratings approximately 50% of the time.
Check the flows in the Kiali graph, where only reviews:{v1,v3} are used:
Assuming you decide that the reviews:v3 microservice is stable, you can route 100% of the traffic to reviews:v3 by applying this virtual service:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml
kubectl get virtualservice reviews -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
...
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v3
When you refresh the /productpage you will always see book reviews with red colored star ratings for each review.
Kiali graph:
Istio — Cleanup
Remove the Bookinfo application and clean it up (delete the routing rules and terminate the application pods):
# Clean everything - remove Bookinfo application and all Istio VirtualServices, Gateways, DestinationRules
sed -i "/read -r NAMESPACE/d" samples/bookinfo/platform/kube/cleanup.sh
samples/bookinfo/platform/kube/cleanup.sh
Output:
namespace ? [default] using NAMESPACE=default
destinationrule.networking.istio.io "details" deleted
destinationrule.networking.istio.io "productpage" deleted
destinationrule.networking.istio.io "ratings" deleted
destinationrule.networking.istio.io "reviews" deleted
virtualservice.networking.istio.io "bookinfo" deleted
virtualservice.networking.istio.io "details" deleted
virtualservice.networking.istio.io "productpage" deleted
virtualservice.networking.istio.io "ratings" deleted
virtualservice.networking.istio.io "reviews" deleted
gateway.networking.istio.io "bookinfo-gateway" deleted
Application cleanup may take up to one minute
service "details" deleted
deployment.extensions "details-v1" deleted
service "ratings" deleted
deployment.extensions "ratings-v1" deleted
service "reviews" deleted
deployment.extensions "reviews-v1" deleted
deployment.extensions "reviews-v2" deleted
deployment.extensions "reviews-v3" deleted
service "productpage" deleted
deployment.extensions "productpage-v1" deleted
Application cleanup successful
To remove Istio:
helm delete --purge istio
helm delete --purge istio-init
kubectl delete -f install/kubernetes/helm/istio-init/files
kubectl label namespace default istio-injection-
kubectl delete namespace istio-system
Output:
release "istio" deleted
release "istio-init" deleted
customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" deleted
...
customresourcedefinition.apiextensions.k8s.io "challenges.certmanager.k8s.io" deleted
namespace/default labeled
namespace "istio-system" deleted
Clean AWS:
# aws route53 delete-hosted-zone --id $(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${MY_DOMAIN}.\`].Id" --output text)
aws iam detach-user-policy --user-name "${USER}-eks-cert-manager-route53" --policy-arn $(aws iam list-policies --query "Policies[?PolicyName==\`${USER}-AmazonRoute53Domains-cert-manager\`].{ARN:Arn}" --output text)
aws iam delete-policy --policy-arn $(aws iam list-policies --query "Policies[?PolicyName==\`${USER}-AmazonRoute53Domains-cert-manager\`].{ARN:Arn}" --output text)
aws iam delete-access-key --user-name ${USER}-eks-cert-manager-route53 --access-key-id $(aws iam list-access-keys --user-name ${USER}-eks-cert-manager-route53 --query "AccessKeyMetadata[].AccessKeyId" --output text)
aws iam delete-user --user-name ${USER}-eks-cert-manager-route53
Remove EKS cluster:
eksctl delete cluster --name=${USER}-k8s-istio-webinar --wait
Output:
[ℹ] using region eu-central-1
[ℹ] deleting EKS cluster "pruzicka-k8s-istio-webinar"
[ℹ] will delete stack "eksctl-pruzicka-k8s-istio-webinar-nodegroup-ng-5be027b5"
[ℹ] waiting for stack "eksctl-pruzicka-k8s-istio-webinar-nodegroup-ng-5be027b5" to get deleted
[ℹ] will delete stack "eksctl-pruzicka-k8s-istio-webinar-cluster"
[ℹ] waiting for stack "eksctl-pruzicka-k8s-istio-webinar-cluster" to get deleted
[✔] kubeconfig has been updated
[✔] the following EKS cluster resource(s) for "pruzicka-k8s-istio-webinar" will be deleted: cluster. If in doubt, check CloudFormation console
Enjoy… :-)