Managing microservices with Istio on IBM Cloud Private

Morven Cao
IBM Cloud
May 29, 2018

In recent years, with the development of container technology, more enterprise customers are turning to microservices. Microservices are a combination of lightweight and fine-grained services that work cohesively to deliver larger, application-wide functionality. This approach improves modularity and makes applications easier to develop and test than traditional, monolithic applications. With the adoption of microservices, however, new challenges emerge from the myriad of services that exist in larger systems: developers must now account for service discovery, load balancing, fault tolerance, dynamic routing, and communication security. Thanks to Istio, we can turn disparate microservices into an integrated service mesh by systematically injecting an Envoy proxy into the network layer, decoupling the work of connecting, managing, and securing microservices from application feature development.

This blog takes you step-by-step through the installation of Istio and the deployment of microservices-based applications in IBM Cloud Private.

Prerequisites

  • Ensure that you have access to a Kubernetes cluster, such as IBM Cloud Private.
  • Kubernetes version 1.7.3 or later with RBAC (Role-Based Access Control) enabled is required.
  • If you want to enable automatic sidecar injection, Kubernetes 1.9 or later with the admissionregistration API is required. Also, the kube-apiserver process must have the admission-control flag set with the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers added and listed in the correct order (see the example flag after this list). However, if you plan to install an IBM Cloud Private 2.1.0.3 cluster, you don't need to worry about any of these configurations; with IBM Cloud Private 2.1.0.3, you are ready to play with Istio right away.
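
For reference, a kube-apiserver configured for automatic injection might carry a flag similar to the following; the exact list of other admission controllers depends on your cluster, so treat this as an illustrative sketch, not a prescribed value:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota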

Install IBM Cloud Private 2.1.0.3 with Istio enabled

Istio can be logically divided into two parts: control plane and data plane.

  • The control plane includes the Istio core components Pilot, Mixer, and Istio-Auth. The control plane is mainly responsible for managing and configuring the proxies to route traffic.
  • The data plane is a set of Envoy sidecars deployed with applications that proxy and control all network communication between microservices.

To install Istio, you must install the control plane first and then inject sidecars into the microservices-based applications.

IBM Cloud Private 2.1.0.3 supports two methods to enable Istio. You can choose either to enable Istio during cluster installation, or to install the Istio chart from the Catalog after cluster installation.

Enabling Istio during cluster installation

1. To install an IBM Cloud Private cluster that has the Istio feature enabled, you must make sure that Istio is not listed in the disabled_management_services list. Your updated config.yaml file might resemble the following:

## Disabled Management Services Settings
disabled_management_services: ["vulnerability-advisor"]

2. Enable automatic sidecar injection. Istio supports automatic sidecar injection, but it is disabled by default in IBM Cloud Private. If you want to enable automatic sidecar injection, add the following content to the config.yaml file:

# Add the following content if you want to enable auto-injection
istio:
  sidecar-injector:
    enabled: true

Note: It is not recommended that you use sidecar auto-injection if your IBM Cloud Private cluster does not have internet access. Sidecar auto-injection needs to add extra imagePullSecrets, and if your cluster does not have internet access, you might not be able to pull images from the private Docker registry. This issue is fixed by this pull request and will be available in Istio 0.8.

3. Install IBM Cloud Private and Istio. For complete steps to install an IBM Cloud Private cluster, see Installing IBM Cloud Private. Istio will be installed by the IBM Cloud Private Installer.

Enabling Istio for an existing cluster

You can also deploy Istio if you already have an IBM Cloud Private 2.1.0.3 cluster installed, by installing the ibm-istio chart from the Catalog.

1. Create a namespace named istio-system in which to deploy the ibm-istio chart, as shown below.
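
A quick sketch of this step from the command line:

$ kubectl create namespace istio-system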

2. Log in to the IBM Cloud Private management console and search for istio in the Catalog; the ibm-istio chart is displayed.

ibm-istio chart

3. Click the chart. A readme file is displayed that includes information about installing, uninstalling, and configuring the ibm-istio chart.

ibm-istio chart details page

4. Click the Configure button to go to the configuration page. Name your release, select the istio-system namespace, and customize the fields to your preference. Click Install to deploy the ibm-istio chart and create a release.

ibm-istio chart installation page

Verifying the Installation

After installation completes, verify that the Istio control plane is created and running.

1. Ensure that the following mandatory Kubernetes services are deployed: istio-security, istio-pilot, istio-mixer, istio-ingress.

Note: istio-grafana, istio-prometheus, istio-servicegraph and istio-zipkin are optional, but they are enabled by default in IBM Cloud Private.

$ kubectl -n istio-system get svc
NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                             AGE
istio-grafana            ClusterIP      10.0.0.40    <none>        3000/TCP                                                            30m
istio-ingress            LoadBalancer   10.0.0.228   <pending>     80:31126/TCP,443:31881/TCP                                          30m
istio-mixer              ClusterIP      10.0.0.15    <none>        9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP    30m
istio-pilot              ClusterIP      10.0.0.64    <none>        15005/TCP,15007/TCP,15003/TCP,15010/TCP,8080/TCP,9093/TCP,443/TCP   30m
istio-prometheus         ClusterIP      10.0.0.195   <none>        9090/TCP                                                            30m
istio-security           ClusterIP      10.0.0.194   <none>        8060/TCP                                                            30m
istio-servicegraph       ClusterIP      10.0.0.172   <none>        8088/TCP                                                            30m
istio-sidecar-injector   ClusterIP      10.0.0.7     <none>        443/TCP                                                             30m
istio-zipkin             ClusterIP      10.0.0.97    <none>        9411/TCP                                                            30m

2. Ensure that the corresponding Kubernetes pods are deployed and all containers are up and running: istio-ca-*, istio-ingress-*, istio-mixer-*, istio-pilot-*, and, optionally, istio-sidecar-injector-*, istio-grafana-*, istio-prometheus-*, istio-servicegraph-*, istio-zipkin-*.

$ kubectl -n istio-system get pods
NAME                                      READY     STATUS    RESTARTS   AGE
istio-ca-54bff46bd6-nb4sj                 1/1       Running   0          31m
istio-grafana-fc96466f7-255zh             1/1       Running   0          31m
istio-ingress-7b4c9f4694-52zsv            1/1       Running   0          31m
istio-ingress-7b4c9f4694-nl9v5            1/1       Running   0          31m
istio-mixer-748745dc4f-sbskq              3/3       Running   0          31m
istio-pilot-7f984b5df-vjg7z               2/2       Running   0          31m
istio-prometheus-ddd5cf79c-t5tnv          1/1       Running   0          31m
istio-servicegraph-5bfb8b6f5f-r76jp       1/1       Running   0          31m
istio-sidecar-injector-7694899c68-s4xsh   1/1       Running   0          31m
istio-zipkin-5df6c984cd-kfljm             1/1       Running   0          31m

Deploy the Bookinfo application

If the control plane is deployed successfully, you can then start to deploy applications that are managed by Istio. I will use the Bookinfo application as an example to illustrate the steps.

Create a Secret

If you are using a private registry for the sidecar image, you need to create a Secret of type docker-registry in the cluster that holds your authorization token, and patch it to your application’s ServiceAccount.

$ kubectl create secret docker-registry private-registry-key \
--docker-server=<your-registry-server> \
--docker-username=<your-name> \
--docker-password=<your-pword> \
--docker-email=<your-email>
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "private-registry-key"}]}'
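
If you want to double-check that the secret is attached to the ServiceAccount, a quick sketch (the secret name matches the one created above):

$ kubectl get serviceaccount default -o yaml | grep -A1 imagePullSecrets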

Prepare the Bookinfo manifest

Create a new YAML file named bookinfo.yaml to save the Bookinfo application manifest.

apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: morvencao/istio-examples-bookinfo-details-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: morvencao/istio-examples-bookinfo-ratings-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: morvencao/istio-examples-bookinfo-reviews-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: morvencao/istio-examples-bookinfo-reviews-v2:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: morvencao/istio-examples-bookinfo-reviews-v3:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: morvencao/istio-examples-bookinfo-productpage-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /login
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /logout
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /api/v1/products.*
        backend:
          serviceName: productpage
          servicePort: 9080
---

Automatic Sidecar Injection

If you have enabled automatic sidecar injection, the istio-sidecar-injector automatically injects Envoy containers into application pods that are running in namespaces labeled with istio-injection=enabled. For example, let's deploy the Bookinfo application to the default namespace.

$ kubectl label namespace default istio-injection=enabled
$ kubectl create -n default -f bookinfo.yaml
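
To verify that injection happened, each Bookinfo pod should now report two containers: the application itself plus the istio-proxy sidecar.

$ kubectl -n default get pods
# each pod should show READY 2/2 once running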

Manual Sidecar Injection

Alternatively, you can manually inject Envoy containers into your application with the istioctl injection tool.

If you are using a macOS or Linux system, download the tool by completing the following steps:

1. Download and extract the Istio 0.7.1 release package that contains istioctl:

export ISTIO_VERSION=0.7.1
curl -L https://git.io/getLatestIstio | sh -

2. Enter the extracted Istio package and add the istioctl client to your PATH:

cd istio-0.7.1
export PATH=$PWD/bin:$PATH
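
You can confirm that the client is available on your PATH with the version subcommand:

$ istioctl version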

3. Create a ConfigMap to include the sidecar injection configuration. This step is necessary for now if you're using manual sidecar injection. To create the ConfigMap, run the following command:

$ cat <<EOF | kubectl -n istio-system create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-sidecar-injector-configuration
  namespace: istio-system
  labels:
    app: sidecar-injector
    istio: sidecar-injector
data:
  config: |-
    policy: enabled
    template: |-
      initContainers:
      - name: istio-init
        image: ibmcom/istio-proxy_init:0.7.1
        imagePullPolicy: IfNotPresent
        args:
        - "-p"
        - {{ .MeshConfig.ProxyListenPort }}
        - "-u"
        - 1337
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: true
        restartPolicy: Always
      containers:
      - name: istio-proxy
        image: ibmcom/istio-proxy:0.7.1
        imagePullPolicy: IfNotPresent
        args:
        - proxy
        - sidecar
        - --configPath
        - {{ .ProxyConfig.ConfigPath }}
        - --binaryPath
        - {{ .ProxyConfig.BinaryPath }}
        - --serviceCluster
        {{ if ne "" (index .ObjectMeta.Labels "app") -}}
        - {{ index .ObjectMeta.Labels "app" }}
        {{ else -}}
        - "istio-proxy"
        {{ end -}}
        - --drainDuration
        - {{ formatDuration .ProxyConfig.DrainDuration }}
        - --parentShutdownDuration
        - {{ formatDuration .ProxyConfig.ParentShutdownDuration }}
        - --discoveryAddress
        - {{ .ProxyConfig.DiscoveryAddress }}
        - --discoveryRefreshDelay
        - {{ formatDuration .ProxyConfig.DiscoveryRefreshDelay }}
        - --zipkinAddress
        - {{ .ProxyConfig.ZipkinAddress }}
        - --connectTimeout
        - {{ formatDuration .ProxyConfig.ConnectTimeout }}
        - --statsdUdpAddress
        - {{ .ProxyConfig.StatsdUdpAddress }}
        - --proxyAdminPort
        - {{ .ProxyConfig.ProxyAdminPort }}
        - --controlPlaneAuthPolicy
        - {{ .ProxyConfig.ControlPlaneAuthPolicy }}
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        securityContext:
          privileged: true
          readOnlyRootFilesystem: false
          runAsUser: 1337
        restartPolicy: Always
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          {{ if eq .Spec.ServiceAccountName "" -}}
          secretName: istio.default
          {{ else -}}
          secretName: {{ printf "istio.%s" .Spec.ServiceAccountName }}
          {{ end -}}
EOF

4. Deploy the Bookinfo application by specifying the injectConfigMapName parameter with the ConfigMap created above:

$ kubectl apply -f <(istioctl kube-inject \
--injectConfigMapName istio-sidecar-injector-configuration \
-f bookinfo.yaml)

Access the Bookinfo application

After all pods for the Bookinfo application are in a running state, you can access the Bookinfo product page. Since IBM Cloud Private doesn’t support external load balancers, you can use the host IP of an ingress pod, along with the NodePort of the istio-ingress service:

$ export BOOKINFO_URL=$(kubectl get po -l istio=ingress -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')

To confirm that the Bookinfo application is ready, run the following curl command; the output should be 200:

$ curl -o /dev/null -s -w "%{http_code}\n" http://${BOOKINFO_URL}/productpage

You can also access the Bookinfo product page from a browser by going to http://${BOOKINFO_URL}/productpage. Try refreshing the page several times: you will see different versions of reviews shown at random on the product page (red stars, black stars, no stars), because I haven’t created any route rules for the Bookinfo application yet. A sample rule follows the screenshot below.

Bookinfo
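
If you want deterministic routing instead, you could define a route rule. The following is a minimal sketch using the v1alpha2 RouteRule API that shipped with this Istio release; it pins all reviews traffic to v1 (no stars):

$ cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
  namespace: default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
EOF

After the rule is applied, refreshing the product page should consistently show the no-star reviews. Delete the rule with istioctl delete routerule reviews-default -n default to restore random routing.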

Collect Trace Spans using Zipkin

In this section, I will use the Bookinfo application created above as an example to show how Istio-enabled applications can be configured to collect trace spans using Zipkin.

1. By default, Istio enables Zipkin with a service type of ClusterIP. You can change the default service type to NodePort during installation so that you can access Zipkin from an external environment. Alternatively, you can expose another service of type NodePort and then access Zipkin by running the following commands:

$ kubectl expose service istio-zipkin --type=NodePort \
--name=istio-zipkin-svc --namespace istio-system
$ export ZIPKIN_URL=$(kubectl get po -l app=zipkin -n istio-system \
-o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc \
istio-zipkin-svc -n istio-system -o \
'jsonpath={.spec.ports[0].nodePort}')
$ echo http://${ZIPKIN_URL}/

2. Send more traffic to the Bookinfo application by refreshing http://${BOOKINFO_URL}/productpage in your browser or run the following command several times:

curl http://${BOOKINFO_URL}/productpage

3. Verify that the trace spans are collected by Zipkin. You can access the trace spans from your browser at http://${ZIPKIN_URL}/.

If everything is set up correctly, you will see a Zipkin dashboard similar to the following screenshot:

Zipkin-Traces

From the Zipkin dashboard, you can see a list of traces that reflect the service call stacks of the Bookinfo application. You can inspect a trace by clicking it; details for the trace are displayed, similar to the following screenshot:

Zipkin-Spans

The trace comprises four spans, where each span corresponds to a single invoked service. You can also see timings for the entire call stack and for each individual call.

Collect Metrics with Prometheus

In this section, you will configure Istio to automatically gather telemetry and to create new customized telemetry for services, again using the Bookinfo application as the example.

1. Similar to Zipkin, Istio enables Prometheus with a service type of ClusterIP by default. You can expose another service of type NodePort and then access Prometheus by running the following commands:

$ kubectl expose service istio-prometheus --type=NodePort \
--name=istio-prometheus-svc --namespace istio-system
$ export PROMETHEUS_URL=$(kubectl get po -l app=prometheus \
-n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc \
istio-prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
$ echo http://${PROMETHEUS_URL}/

2. Verify that the built-in metric values are collected into Prometheus by accessing http://${PROMETHEUS_URL}/ from your browser.

Prometheus
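
For example, you can query one of the built-in metrics from the Prometheus expression browser. In this Istio release the default request counter is named istio_request_count; the label value below is illustrative and depends on your traffic:

istio_request_count{destination_service="productpage.default.svc.cluster.local"}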

3. Create a new metric, istio_double_request_count, that Istio will generate and collect automatically, by applying the following configuration:

$ cat << EOF | istioctl create -f -
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: doublerequestcount
  namespace: istio-system
spec:
  value: "2" # count each request twice
  dimensions:
    source: source.service | "unknown"
    destination: destination.service | "unknown"
    message: '"twice the fun!"'
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  metrics:
  - name: double_request_count # Prometheus metric name
    instance_name: doublerequestcount.metric.istio-system # Mixer instance name (fully-qualified)
    kind: COUNTER
    label_names:
    - source
    - destination
    - message
---
# Rule to send metric instances to a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler.prometheus
    instances:
    - doublerequestcount.metric
---
EOF

4. Verify that the new metric values are generated and collected by accessing http://${PROMETHEUS_URL}/ from your browser.

Prometheus new metric

You can see that the new metric istio_double_request_count is generated and collected into Prometheus. For each request, the new metric counts twice, with the message "twice the fun!".
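
You can also query the new metric directly in the Prometheus expression browser; for example (the label values depend on your traffic):

istio_double_request_count{message="twice the fun!"}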

Visualizing Metrics with Grafana

Now I will set up and use the Istio Dashboard to monitor the service mesh traffic, again using the Bookinfo application as an example.

1. Similar to Zipkin and Prometheus, Istio enables Grafana with a service type of ClusterIP. You need to expose another service of type NodePort to access Grafana from an external environment by running the following commands:

$ kubectl expose service istio-grafana --type=NodePort \
--name=istio-grafana-svc --namespace istio-system
$ export GRAFANA_URL=$(kubectl get po -l app=grafana -n \
istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc \
istio-grafana-svc -n istio-system -o \
'jsonpath={.spec.ports[0].nodePort}')
$ echo http://${GRAFANA_URL}/

2. Access the Grafana web page from your browser at http://${GRAFANA_URL}/.

By default, Istio's Grafana provides three built-in dashboards: Istio Dashboard, Mixer Dashboard, and Pilot Dashboard. The Istio Dashboard gives an overall view of all service traffic, including high-level HTTP request flow and metrics about each individual service call, while the Mixer Dashboard and Pilot Dashboard mainly show resource usage.
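
To keep the dashboards populated while you explore them, you can generate a steady stream of traffic; a simple sketch:

$ while true; do curl -s -o /dev/null http://${BOOKINFO_URL}/productpage; sleep 1; done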

The Istio Dashboard resembles the following screenshot:

Grafana Istio Dashboard

Summary

In this blog, I walked through how to enable Istio on IBM Cloud Private 2.1.0.3 and how to deploy microservices-based applications that are managed and secured by Istio. The blog also covered how to monitor microservices with Istio add-ons such as Zipkin, Prometheus, and Grafana.

Istio addresses the challenge of a tangled microservices mesh by injecting a transparent Envoy proxy as a sidecar container into application pods. Istio can collect fine-grained metrics and dynamically modify routing without interfering with the original application, providing a uniform way to connect, secure, manage, and monitor microservices.

For more information about Istio, see https://istio.io/docs/.
