Getting started with Istio 0.8.0 on IBM Cloud Private

xu zhao · Published in IBM Cloud · Jul 31, 2018

This article demonstrates how to install Istio 0.8.0 on IBM Cloud Private and is a follow-up to the original article, which covered Istio 0.7.1. In this article I will show how to play with Istio 0.8.0 using the ibm-istio Helm chart and how to manage and monitor microservices with Istio addons such as Jaeger, Prometheus, and Grafana.

Prerequisites

  • Ensure that you have access to a Kubernetes cluster, such as IBM Cloud Private.
  • Kubernetes version 1.7.3 or later with RBAC (Role-Based Access Control) enabled is required.
  • If you want to enable automatic sidecar injection, Kubernetes 1.9 or later with the admissionregistration API is required. The kube-apiserver process must also have the admission-control flag set with the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers added and listed in the correct order. However, if you plan to install an IBM Cloud Private 2.1.0.3 cluster, you don’t need to worry about any of these configurations; with IBM Cloud Private 2.1.0.3, you are ready to play with Istio on the fly. (A quick check for the admissionregistration API is shown after this list.)
  • If you have ever installed ibm-istio 0.7.1 before, execute the following commands to clean up the old custom resources for Pilot; otherwise you can’t install ibm-istio 0.8.0 because these resources already exist:
$ kubectl delete crd enduserauthenticationpolicyspecbindings.config.istio.io
$ kubectl delete crd enduserauthenticationpolicyspecs.config.istio.io
$ kubectl delete crd gateways.networking.istio.io
$ kubectl delete crd httpapispecbindings.config.istio.io
$ kubectl delete crd httpapispecs.config.istio.io
$ kubectl delete crd policies.authentication.istio.io
$ kubectl delete crd quotaspecbindings.config.istio.io
$ kubectl delete crd quotaspecs.config.istio.io
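
To run the quick check mentioned in the prerequisites above, verify that the admissionregistration API is served by your cluster; the exact API versions listed depend on your Kubernetes release:

$ kubectl api-versions | grep admissionregistration
admissionregistration.k8s.io/v1beta1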

Install Istio on IBM Cloud Private

If you already have an IBM Cloud Private 2.1.0.3 cluster installed, you can install Istio by deploying the ibm-istio chart from the Catalog.

Author note: At the time this article was written, there was an issue with default values when deploying the Istio Helm chart using the catalog UI in IBM Cloud Private 2.1.0.3. To resolve this issue, please apply the following fix.

1. Create a namespace named istio-system in which the ibm-istio chart will be deployed, for example:
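$ kubectl create namespace istio-system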

2. Log in to the IBM Cloud Private management console and search for istio in the Catalog; the ibm-istio chart will be displayed.

ibm-istio chart

3. Click the chart to open its details page, which provides a readme with information about installing, uninstalling, and configuring the ibm-istio chart.

ibm-istio chart details page

4. Click the Configure button to go to the configuration page. Name your release, select the istio-system namespace, and customize the fields to your preference. Click Install to deploy the ibm-istio chart and create a release.

ibm-istio chart installation page

Verifying the Installation

After installation completes, verify that the Istio control plane is created and running.

1. Ensure that the following mandatory Kubernetes services are deployed: istio-pilot, istio-ingressgateway, istio-citadel, istio-policy, istio-statsd-prom-bridge and istio-telemetry.

Note: grafana, prometheus, servicegraph, tracing and zipkin are optional. istio-ingress is replaced by istio-ingressgateway in Istio 0.8.0, but Istio 0.8.0 is still compatible with istio-ingress.

$ kubectl -n istio-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.0.0.61 <none> 3000/TCP 29m
istio-citadel ClusterIP 10.0.0.93 <none> 8060/TCP,9093/TCP 29m
istio-egressgateway ClusterIP 10.0.0.49 <none> 80/TCP,443/TCP 29m
istio-ingress LoadBalancer 10.0.0.99 <pending> 80:32000/TCP,443:31540/TCP 29m
istio-ingressgateway LoadBalancer 10.0.0.137 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP 29m
istio-pilot ClusterIP 10.0.0.199 <none> 15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP 29m
istio-policy ClusterIP 10.0.0.165 <none> 9091/TCP,15004/TCP,9093/TCP 29m
istio-sidecar-injector ClusterIP 10.0.0.248 <none> 443/TCP 29m
istio-statsd-prom-bridge ClusterIP 10.0.0.92 <none> 9102/TCP,9125/UDP 29m
istio-telemetry ClusterIP 10.0.0.194 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 29m
prometheus ClusterIP 10.0.0.149 <none> 9090/TCP 29m
servicegraph ClusterIP 10.0.0.40 <none> 8088/TCP 29m
tracing ClusterIP 10.0.0.14 <none> 80/TCP 29m
zipkin ClusterIP 10.0.0.219 <none> 9411/TCP

2. Ensure that the corresponding Kubernetes pods are deployed and all containers are up and running: istio-pilot-*, istio-ingressgateway-*, istio-egressgateway-*, istio-policy-*, istio-telemetry-*, istio-citadel-*, prometheus-* and, optionally, istio-sidecar-injector-*, grafana-*, servicegraph-*.

$ kubectl -n istio-system get pods
NAME READY STATUS RESTARTS AGE
grafana-6c6845c885-5gkc6 1/1 Running 0 45m
istio-citadel-596bbf998d-4sqrg 1/1 Running 0 45m
istio-egressgateway-598bdc99fc-csgb4 1/1 Running 0 45m
istio-ingress-774847465-482rv 1/1 Running 0 45m
istio-ingressgateway-67b868865f-79k87 1/1 Running 0 45m
istio-pilot-7547959877-l44k7 2/2 Running 0 45m
istio-policy-96cd8cfd9-4qg6f 2/2 Running 0 45m
istio-sidecar-injector-76fc7df859-lkndd 1/1 Running 0 45m
istio-statsd-prom-bridge-6b99f48b49-bkwwv 1/1 Running 0 45m
istio-telemetry-6959fd8b88-7wcbq 2/2 Running 0 45m
istio-tracing-57987c8c6-sd7mb 1/1 Running 0 45m
prometheus-c865b55d9-9scfj 1/1 Running 0 45m
servicegraph-7b79cd6f9f-8c2ks 1/1 Running 0 45m

Deploy the Bookinfo application

Once the control plane has finished deploying, we can deploy applications that are managed by Istio. Here is an example with the Bookinfo application.

Create Secret

If you are using a private registry for the sidecar image, you need to create a Secret of type docker-registry in the cluster that holds the authorization token, and patch your application’s ServiceAccount.

$ kubectl create secret docker-registry private-registry-key \
--docker-server=<your-registry-server> \
--docker-username=<your-name> \
--docker-password=<your-pword> \
--docker-email=<your-email>
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "private-registry-key"}]}'

Prepare the Bookinfo manifest

Create a new YAML file named bookinfo.yaml to save the Bookinfo application manifest.

apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: morvencao/istio-examples-bookinfo-details-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: morvencao/istio-examples-bookinfo-ratings-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: morvencao/istio-examples-bookinfo-reviews-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: morvencao/istio-examples-bookinfo-reviews-v2:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: morvencao/istio-examples-bookinfo-reviews-v3:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: morvencao/istio-examples-bookinfo-productpage-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---

Automatic Sidecar Injection

If you have enabled automatic sidecar injection, the istio-sidecar-injector automatically injects Envoy containers into your application pods that are running in the namespace labelled with istio-injection=enabled. For example, let's deploy the Bookinfo application to the default namespace.

$ kubectl label namespace default istio-injection=enabled
$ kubectl create -n default -f bookinfo.yaml
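
To confirm that the sidecars were injected, check the Bookinfo pods (a quick verification step added here for convenience):

$ kubectl get pods -n default

Each Bookinfo pod should report 2/2 in the READY column, because every application container now runs alongside an injected Envoy proxy container.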

Manual Sidecar Injection

Alternatively, you can manually inject Envoy containers into your application with the istioctl injection tool. If you are using manual sidecar injection, use the following command:

$ kubectl apply -f <(istioctl kube-inject -f bookinfo.yaml)
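
If you want to keep the injected manifest around for inspection or version control, an equivalent two-step variant is shown below; the output file name is only an example:

$ istioctl kube-inject -f bookinfo.yaml > bookinfo-injected.yaml
$ kubectl apply -f bookinfo-injected.yaml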

Access the Bookinfo application

After all pods for the Bookinfo application are in the Running state, you can access the Bookinfo product page. Since IBM Cloud Private doesn’t support an external load balancer, you can use the host IP of the ingress gateway, along with its NodePort:

$ export BOOKINFO_URL=$(kubectl get po -l istio=ingressgateway -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')

To confirm that the Bookinfo application is ready, run the following curl command; the output should be 200:

$ curl -o /dev/null -s -w "%{http_code}\n" http://${BOOKINFO_URL}/productpage

You can also access the Bookinfo product page from the browser by specifying the address http://${BOOKINFO_URL}/productpage. Refresh the page several times and you will see different versions of reviews shown randomly in the product page (red stars, black stars, no stars), because I haven’t created any route rules for the Bookinfo application yet. (An example route rule is sketched after the screenshot below.)

bookinfo
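
For reference, route rules in Istio 0.8.0 are expressed with the networking.istio.io/v1alpha3 API. The following sketch, which is not part of this walkthrough, would pin all reviews traffic to v1; the subset definitions are assumptions based on the version labels in bookinfo.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

Applying this with kubectl apply -f (or istioctl create -f) and then refreshing the product page would always show the no-star version of reviews.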

Collect Trace Spans using Jaeger

In this section, I will take the Bookinfo application created above as an example to show how Istio-enabled applications can be configured to collect traces using Jaeger.

1. By default, Istio enables tracing with a service type of ClusterIP. You can change the service type to NodePort so that you can access Jaeger from an external environment. Edit the tracing service as follows:

$ kubectl edit -n istio-system svc/tracing

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-26T07:46:39Z
  labels:
    app: jaeger
    chart: tracing
    heritage: Tiller
    release: istio
  name: tracing
  namespace: istio-system
  resourceVersion: "158201"
  selfLink: /api/v1/namespaces/istio-system/services/tracing
  uid: 0806866c-90a8-11e8-8dd3-fa163e079af0
spec:
  clusterIP: 10.0.0.121
  externalTrafficPolicy: Cluster
  ports:
  - name: query-http
    nodePort: 30352
    port: 80
    protocol: TCP
    targetPort: 16686
  selector:
    app: jaeger
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Replace ClusterIP with NodePort in the type field, then save the file.

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-26T07:46:39Z
  labels:
    app: jaeger
    chart: tracing
    heritage: Tiller
    release: istio
  name: tracing
  namespace: istio-system
  resourceVersion: "158201"
  selfLink: /api/v1/namespaces/istio-system/services/tracing
  uid: 0806866c-90a8-11e8-8dd3-fa163e079af0
spec:
  clusterIP: 10.0.0.121
  externalTrafficPolicy: Cluster
  ports:
  - name: query-http
    nodePort: 30352
    port: 80
    protocol: TCP
    targetPort: 16686
  selector:
    app: jaeger
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
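
If you prefer not to edit the service interactively, the same change can be made with a one-line patch, which is equivalent to the kubectl edit step above:

$ kubectl -n istio-system patch svc tracing -p '{"spec":{"type":"NodePort"}}'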

You can get the Jaeger URL by running the following command:

$ export JAEGER_URL=$(kubectl get po -l istio=ingressgateway -n istio-system \
  -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc tracing \
  -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
$ echo http://${JAEGER_URL}/

2. Send more traffic to the Bookinfo application by refreshing http://${BOOKINFO_URL}/productpage in your browser or by running the following command several times:

$ curl http://${BOOKINFO_URL}/productpage
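
To generate a burst of traffic in one shot, a simple shell loop works as well (an illustrative helper, not from the original walkthrough):

$ for i in $(seq 1 100); do curl -s -o /dev/null http://${BOOKINFO_URL}/productpage; done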

3. Verify that the trace spans are collected by Jaeger. You can access the trace spans from your browser at http://${JAEGER_URL}/.

Jaeger dashboard

If everything is set up correctly, choose istio-ingressgateway in the Service drop-down and click Find Traces; you will see a Jaeger dashboard similar to the following screenshot:

Jaeger Traces

From the Jaeger dashboard, you can see a list of traces that reflect the service call stack for each request to the Bookinfo application. You can inspect a trace by clicking on it to see details similar to the following screenshot:

Trace detail

Collect Metrics with Prometheus

In this section, you can see how to configure Istio to automatically gather telemetry and create new customized telemetry for services. I will use the Bookinfo application as an example.

1. By default, Istio enables Prometheus with a service type of ClusterIP. You can expose another service of type NodePort and then access Prometheus by running the following commands:

$ kubectl expose service prometheus --type=NodePort \
--name=prometheus-svc --namespace istio-system
$ export PROMETHEUS_URL=$(kubectl get po -l app=prometheus \
-n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc \
prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
$ echo http://${PROMETHEUS_URL}/

2. Verify that the built-in metric values can be collected into Prometheus by accessing http://${PROMETHEUS_URL}/ from your browser.

Prometheus
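
Besides the web UI, you can also query a built-in metric directly through the Prometheus HTTP API. For example, istio_request_count is one of the metrics collected by default in this Istio release (the metric name assumes the default Mixer and Prometheus configuration shipped with the chart):

$ curl -s "http://${PROMETHEUS_URL}/api/v1/query?query=istio_request_count"

The response is a JSON document listing every time series that matches the query.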

3. Create a new metric, istio_double_request_count, that Istio will generate and collect automatically by applying the following configuration:

$ cat << EOF | istioctl create -f -
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: doublerequestcount
  namespace: istio-system
spec:
  value: "2" # count each request twice
  dimensions:
    source: source.service | "unknown"
    destination: destination.service | "unknown"
    message: '"twice the fun!"'
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  metrics:
  - name: double_request_count # Prometheus metric name
    instance_name: doublerequestcount.metric.istio-system # Mixer instance name (fully-qualified)
    kind: COUNTER
    label_names:
    - source
    - destination
    - message
---
# Rule to send metric instances to a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler.prometheus
    instances:
    - doublerequestcount.metric
---
EOF

4. Verify that the new metric values are generated and collected by accessing http://${PROMETHEUS_URL}/ from your browser.

Prometheus new metric

You can see that the new metric istio_double_request_count is generated and collected into Prometheus. Each request is counted twice, with the message "twice the fun!".

Visualizing Metrics with Grafana

Now I will set up and use the Istio Dashboard to monitor the service mesh traffic, again using the Bookinfo application as an example.

1. Similar to Jaeger and Prometheus, Istio enables Grafana with a service type of ClusterIP. You need to expose another service of type NodePort to access Grafana from an external environment by running the following commands:

$ kubectl expose service grafana --type=NodePort \
  --name=istio-grafana-svc --namespace istio-system
$ export GRAFANA_URL=$(kubectl get po -l app=grafana -n \
istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc \
istio-grafana-svc -n istio-system -o \
'jsonpath={.spec.ports[0].nodePort}')
$ echo http://${GRAFANA_URL}/

2. Access the Grafana web page from your browser at http://${GRAFANA_URL}/.

By default, Istio Grafana has three built-in dashboards: Istio Dashboard, Mixer Dashboard and Pilot Dashboard. The Istio Dashboard gives an overall view of all service traffic, including high-level HTTP request flows and metrics for each individual service call, while the Mixer Dashboard and Pilot Dashboard mainly show resource usage.
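
You can also jump straight to the Istio Dashboard by appending its path to the Grafana URL; the path below assumes the default dashboard name shipped with the chart and may differ if it has been customized:

$ echo http://${GRAFANA_URL}/dashboard/db/istio-dashboard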

The Istio Dashboard resembles the following screenshot:

Grafana Istio Dashboard

Summary

This blog demonstrates how to install Istio 0.8.0 on IBM Cloud Private 2.1.0.3 and how to manage and monitor microservices with Istio addons such as Jaeger, Prometheus, and Grafana.

For more information about Istio, see https://istio.io/docs/.
