Managing service mesh on Kubernetes with Istio

Photo by Jeremy Bishop on Unsplash

by Nikita Mazur
Containerum Platform

As the complexity of microservice applications grows, it becomes extremely difficult to track and manage interactions between services. To address this problem and make service-to-service communication simpler and more efficient, several service mesh applications exist, including Istio and Linkerd. In this article we will have a look at Istio.

But first, what is a service mesh?

Basically, it is a dedicated infrastructure layer that handles communication between services. Over the last year service mesh has become one of the key trends in cloud management. It is implemented as an array of lightweight proxies that are deployed on top of applications. Service mesh software handles routing and load balancing, and provides logging, telemetry, etc.

Istio was first announced in 2017, and on July 31, 2018 version 1.0 was released. It is based on the Envoy proxy by Lyft, an L7 proxy that works at the network (TCP/IP) and HTTP levels, supports gRPC, collects statistics, and supports many service discovery and load balancing methods.

Istio is a production-ready solution that aims at solving common issues of applications with microservice architecture:

  • Service discovery
  • Load balancing
  • High Availability
  • Endpoint monitoring
  • Dynamic routing
  • Security
  • … and more
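As a taste of what dynamic routing looks like in practice, here is a simplified sketch of an Istio VirtualService that splits traffic between two versions of a reviews service (the host and subset names are illustrative, and the referenced subsets would have to be defined in a DestinationRule):

```yaml
# Illustrative sketch: weighted routing between two service versions
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1      # subsets are defined in a DestinationRule (not shown)
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```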

One of the key benefits of Istio is that it can be launched ‘on top’ of an existing application: it deploys an Envoy proxy server for each service as a sidecar container inside the same Pod. This means you don’t have to make any changes to the code of your applications.
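To illustrate, a pod that has gone through sidecar injection ends up with roughly the following spec. This is a simplified sketch, not the exact manifest Istio generates; the real injected output also contains an init container and much longer proxy arguments:

```yaml
# Simplified sketch of a pod spec after sidecar injection
apiVersion: v1
kind: Pod
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  containers:
  - name: productpage            # your application container, unchanged
    image: istio/examples-bookinfo-productpage-v1:1.8.0
    ports:
    - containerPort: 9080
  - name: istio-proxy            # Envoy sidecar added by injection
    image: docker.io/istio/proxyv2:1.0.2
    args: ["proxy", "sidecar"]   # abbreviated; the real args list is much longer
```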

In this tutorial we will install Istio, deploy a demo application and monitor its metrics in Grafana. Let’s install Istio first.

Install Istio

Installation is pretty easy. Download the installation file for your OS:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.2 sh -
cd istio-1.0.2
export PATH=$PWD/bin:$PATH

Once in the Istio directory, run:

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

This will create the istio-system namespace and grant RBAC permissions. It will also deploy plugins for metrics and logs, configure mutual TLS authentication between Envoy sidecars, and install the core Istio components:

  • Istio-Pilot for service discovery and for configuring the Envoy sidecar proxies
  • The Mixer components Istio-Policy and Istio-Telemetry for usage policies and gathering telemetry data
  • Istio-Ingressgateway, which serves as an ingress point for external traffic
  • Istio-Citadel, which automates key and certificate management for Istio.

Now let’s check if all components are running:

kubectl get service -n istio-system

And the same for pods:

kubectl get pods -n istio-system

Ok, now let’s deploy a sample application!

Deploy the BookInfo sample application

To see how Istio works we will deploy the BookInfo application, a simple application made up of four services. The source code and all the other files used in this example are located in the samples/bookinfo directory of your local Istio installation.

To enable Istio to manage services, it is necessary to inject a sidecar container into each pod:

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

Confirm that the application has been deployed correctly by running the following commands:

kubectl get services

Check the pods:

kubectl get pods

Finally, define the ingress gateway routing for the application to make it accessible from the outside:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
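For reference, bookinfo-gateway.yaml defines roughly the following (abridged here; check the file in your installation for the exact contents): a Gateway that binds Istio’s ingress gateway to HTTP port 80, and a VirtualService that routes the application’s paths to the productpage service.

```yaml
# Abridged sketch of samples/bookinfo/networking/bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```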

Check it:

kubectl get svc -n istio-system

One important note: we have just created a Service of type LoadBalancer. If your cloud provider doesn’t support load balancers, create a ClusterIP Service instead:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.1
    heritage: Tiller
    istio: ingressgateway
  name: istio-ingressgateway
  namespace: istio-system
spec:
  externalIPs:
  - # your external IP here
  ports:
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tcp-pilot-grpc-tls
    port: 15011
    protocol: TCP
    targetPort: 15011
  - name: tcp-citadel-grpc-tls
    port: 8060
    protocol: TCP
    targetPort: 8060
  - name: tcp-dns-tls
    port: 853
    protocol: TCP
    targetPort: 853
  - name: http2-prometheus
    port: 15030
    protocol: TCP
    targetPort: 15030
  - name: http2-grafana
    port: 15031
    protocol: TCP
    targetPort: 15031
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: ClusterIP

Don’t forget to put your external IP in place of the placeholder comment. Save the manifest as istio-svc.yaml and then run:

kubectl create -f istio-svc.yaml

Let’s generate some load, send it to our sample app, and see how Istio tracks it. For this purpose we’ll be using the wrk utility. Let’s install it.

For CentOS:

sudo yum groupinstall 'Development Tools'
sudo yum install -y openssl-devel git
git clone https://github.com/wg/wrk.git
cd wrk
make
sudo cp wrk /usr/bin

For Ubuntu:

sudo apt-get install build-essential libssl-dev git -y
git clone https://github.com/wg/wrk.git
cd wrk
make
# move the executable to somewhere in your PATH, ex:
sudo cp wrk /usr/local/bin

Once installed, export your External IP address and launch wrk:

export GATEWAY_URL=%YOUR_EXTERNAL_IP:80
wrk -t1 -c1 -d60s http://${GATEWAY_URL}/productpage

It’s time to go to Grafana to see what’s going on.

First, find the Grafana pod:

kubectl get po -n istio-system

Copy the name of the pod and forward it to port 3000:

kubectl port-forward %grafana-pod -n istio-system 3000

Open http://localhost:3000 in your browser and go to the ‘Istio Mesh Dashboard’.

You should see something like this:

This particular dashboard reflects the traffic that was generated as well as the global view of the services and workloads in the mesh. You can click on each particular service and see detailed stats, e.g.:

You can find more information about visualizing metrics in Grafana in the official docs.


We have deployed Istio and a sample application, and seen how to monitor it using Grafana. This article is a very basic introduction to Istio; to learn more, I’d suggest checking out the docs, which are really well written and easy to understand.

Do you use service mesh software in your clusters? Please, share! And don’t forget to follow us on Twitter and join our Telegram chat to stay tuned!

Containerum Platform is an open source project for managing applications in Kubernetes available on GitHub. We are currently looking for community feedback, and invite everyone to test the platform! You can submit an issue, or just support the project by giving it a ⭐. Let’s make cloud management easier together!




