Istio with Kubernetes

We previously looked at Kong as an API gateway for managing access to our services within K8s.

Istio can provide a similar function and comes with other useful features in its toolkit, such as broad traffic management, circuit breaking, intelligent load balancing, and tracing and monitoring with Kiali.

Rather than a single application, Istio includes its own discovery (istiod) and load balancing (Envoy) deployments. Envoy acts as a proxy for any selected service, allowing access to it to be managed.

For more details see the architecture link below.

Installing Istio

Get the istioctl binary.

curl -L https://istio.io/downloadIstio | sh -

Install the demo profile, which includes everything we need.

istioctl install --set profile=demo
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Addons installed
- Pruning removed resources
Pruned object HorizontalPodAutoscaler:istio-system:istiod.
Pruned object HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
✔ Installation complete

Check the version.

istioctl version
client version: 1.6.4
control plane version: 1.6.4
data plane version: 1.6.4 (2 proxies)

Notice the applications that have been deployed.

kubectl get svc -n istio-system

NAME                        TYPE           CLUSTER-IP
grafana                     ClusterIP
istio-egressgateway         ClusterIP
istio-ingressgateway        LoadBalancer
istiod                      ClusterIP
jaeger-agent                ClusterIP      None
jaeger-collector            ClusterIP
jaeger-collector-headless   ClusterIP      None
jaeger-query                ClusterIP
kiali                       ClusterIP
prometheus                  ClusterIP
tracing                     ClusterIP
zipkin                      ClusterIP

Included are Grafana, Jaeger, Kiali, Prometheus and Zipkin; we will briefly look at Grafana and Kiali below.

Sidecar Proxies

Configure sidecar proxies to be created automatically for any pods in the vadal namespace.

Create the namespace.

kubectl create ns vadal
namespace/vadal created

Enable istio injection.

kubectl label namespace vadal istio-injection=enabled
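The same label can equally be declared on the namespace manifest itself, for a declarative setup (a sketch equivalent to the two commands above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vadal
  labels:
    # any pod created in this namespace gets an Envoy sidecar injected
    istio-injection: enabled
```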

Deploy our vadal-echo image (see previous blog), to the vadal namespace.

kubectl create deployment -n vadal vecho --image=vadal-echo:0.0.1-SNAPSHOT

kubectl expose deploy -n vadal vecho --port=80 --target-port=8080
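The two commands above are roughly equivalent to the following manifest (a sketch; the exact labels and selector kubectl generates are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vecho
  namespace: vadal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vecho
  template:
    metadata:
      labels:
        app: vecho
    spec:
      containers:
      - name: vadal-echo
        image: vadal-echo:0.0.1-SNAPSHOT
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: vecho
  namespace: vadal
spec:
  selector:
    app: vecho
  ports:
  - port: 80          # service port
    targetPort: 8080  # container port
```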

Istio Ingress

kubectl get svc istio-ingressgateway -n istio-system

NAME                   TYPE           EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   localhost     15020:31891/TCP,80:32309/TCP,443:31967/TCP,31400:30096/TCP,15443:32721/TCP

First we need a gateway configuration.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vadal-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - vadal.local

kubectl apply -f <above file>

Note: set the host name vadal.local (for example) to point to your host ip in /etc/hosts.
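For example, for a cluster running on the local machine the entry might look like this (the IP is an assumption for a local setup):

```
127.0.0.1    vadal.local
```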

Then we need a virtual service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
  namespace: vadal
spec:
  hosts:
  - vadal.local
  gateways:
  - vadal-gateway.istio-system.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /echo
    rewrite:
      uri: /
    route:
    - destination:
        host: vecho.vadal.svc.cluster.local
        port:
          number: 80

kubectl apply -f <above>

The virtual service references the gateway (vadal-gateway.istio-system.svc.cluster.local), and the service host (vecho.vadal.svc.cluster.local).

As the service only exposes the root endpoint (/), we use a rewrite to translate /echo to / (we also want to use this host for other services).

Try it out:

curl -i vadal.local/echo
HTTP/1.1 200 OK
content-type: application/json
date: Thu, 09 Jul 2020 21:37:31 GMT
x-envoy-upstream-service-time: 7
server: istio-envoy
transfer-encoding: chunked
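The routing above extends naturally to more services behind the same host. A hypothetical second service (vother is an assumption, not part of this setup) would just need another match block under spec.http of the VirtualService:

```yaml
  # additional route alongside the /echo match
  - match:
    - uri:
        prefix: /other
    rewrite:
      uri: /
    route:
    - destination:
        host: vother.vadal.svc.cluster.local
        port:
          number: 80
```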



Grafana

Although we hand-crafted Grafana/Prometheus before, Istio's demo profile installs them for us, with the two already connected to each other.

Expose it from the node (or run istioctl dashboard grafana; that method is temporary).

kubectl -n istio-system edit svc/grafana

Change the type from ClusterIP to NodePort and add nodePort: 30003.

  ports:
  - name: http
    nodePort: 30003
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: NodePort
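Alternatively, the same change could be applied non-interactively with a merge patch (a sketch; the file name grafana-patch.yaml is an assumption), applied with something like kubectl patch svc grafana -n istio-system -p "$(cat grafana-patch.yaml)":

```yaml
# grafana-patch.yaml: switch the grafana service to a NodePort
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: 3000
    nodePort: 30003
```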

After saving it, note the port is available.

kubectl get svc grafana istio-ingressgateway -n istio-system

NAME      TYPE       PORT(S)          AGE
grafana   NodePort   3000:30003/TCP   5h15m

Check out the various Istio dashboards:

http://localhost:30003/d/G8wLrJIZk/istio-mesh-dashboard?orgId=1&refresh=5s
http://localhost:30003/d/UbsSZTDik/istio-workload-dashboard?orgId=1&refresh=5s&var-namespace=vadal&var-workload=vecho&var-srcns=All&var-srcwl=All&var-dstsvc=All&from=now-1h&to=now


Kiali

Kiali is a GUI to manage Istio and your services.

kubectl -n istio-system edit svc/kiali

Change the type to NodePort and add nodePort: 30004.

  ports:
  - name: http-kiali
    nodePort: 30004
    port: 20001
    protocol: TCP
    targetPort: 20001
  selector:
    app: kiali
  sessionAffinity: None
  type: NodePort

Check it out at http://localhost:30004.

Credentials are: admin/admin


We installed Istio and used its gateway and virtual service architecture to serve up our vadal-echo service in its own namespace.

We could also observe our services in Grafana and in Kiali.

Next time we will secure our vadal-echo service.

Further details

Istio Architecture:

Comparison with Kong:

Originally published on July 13, 2020.



