Istio with Kubernetes

Lightphos · Published in actual-tech · Jul 13, 2020

We previously looked at Kong as an API gateway, to manage access to our services within K8s.

Istio can provide a similar function and comes with other useful features in its toolkit, such as broad traffic management, circuit breaking, and intelligent load balancing, as well as tracing and monitoring with Kiali.

Rather than being a single application, Istio includes its own discovery (istiod) and load-balancing proxy (Envoy) deployments. Envoy acts as a proxy for any selected service, allowing access to it to be managed.

For more details see the architecture link below.

Installing Istio

Get the istioctl binary.

curl -L https://istio.io/downloadIstio | sh -
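By default the script downloads the latest release; to pin the version used in this post and put istioctl on your PATH, something like the following should work (the directory name assumes 1.6.4):

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.6.4 sh -
cd istio-1.6.4
export PATH=$PWD/bin:$PATH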

Install the demo profile which will include everything we need.

istioctl install --set profile=demo
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Addons installed
- Pruning removed resources
Pruned object HorizontalPodAutoscaler:istio-system:istiod.
Pruned object HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
✔ Installation complete
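Before going further it is worth confirming that the control plane pods are running (pod names and counts will vary with the profile):

kubectl get pods -n istio-system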

Check the version.

istioctl version
client version: 1.6.4
control plane version: 1.6.4
data plane version: 1.6.4 (2 proxies)

Notice the applications that have been deployed.

kubectl get svc -n istio-system

NAME                        TYPE           CLUSTER-IP
grafana                     ClusterIP      10.111.215.34
istio-egressgateway         ClusterIP      10.96.79.109
istio-ingressgateway        LoadBalancer   10.102.213.69
istiod                      ClusterIP      10.105.65.156
jaeger-agent                ClusterIP      None
jaeger-collector            ClusterIP      10.99.251.56
jaeger-collector-headless   ClusterIP      None
jaeger-query                ClusterIP      10.97.215.154
kiali                       ClusterIP      10.99.47.144
prometheus                  ClusterIP      10.100.43.45
tracing                     ClusterIP      10.109.151.164
zipkin                      ClusterIP      10.104.193.208

Included are Grafana, Jaeger, Kiali, Prometheus and Zipkin. We will briefly look at Grafana and Kiali here.

Sidecar Proxies

Set the sidecar proxies to be automatically created for any pods in the vadal namespace.

Create the namespace.

kubectl create ns vadal
namespace/vadal created

Enable istio injection.

kubectl label namespace vadal istio-injection=enabled
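You can confirm the label took effect with kubectl's -L flag, which adds the label as a column in the output:

kubectl get namespace vadal -L istio-injection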

Deploy our vadal-echo image (see previous blog), to the vadal namespace.

kubectl create deployment -n vadal vecho --image=vadal-echo:0.0.1-SNAPSHOT

kubectl expose deploy -n vadal vecho --port=80 --target-port=8080
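If injection is working, each pod in the namespace should now report two containers ready (the application plus the istio-proxy sidecar), i.e. READY 2/2:

kubectl get pods -n vadal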

Istio Ingress

kubectl get svc istio-ingressgateway -n istio-system

istio-ingressgateway LoadBalancer 10.102.213.69 localhost 15020:31891/TCP,80:32309/TCP,443:31967/TCP,31400:30096/TCP,15443:32721/TCP
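If your cluster has no LoadBalancer (so no external IP is assigned), the gateway can still be reached via its node port; assuming the standard port naming, where port 80 is named http2, a jsonpath query pulls it out:

kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'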

First we need a gateway configuration.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vadal-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: http
    hosts:
    - vadal.local

kubectl apply -f <above file>

Note: set the hostname vadal.local (for example) to point to your host IP in /etc/hosts.
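For a local cluster where the ingress gateway's external IP shows up as localhost (as above), the entry might simply point at the loopback address; adjust the IP for your environment:

127.0.0.1   vadal.local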

Then we need a virtual service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
  namespace: vadal
spec:
  hosts:
  - vadal.local
  gateways:
  - vadal-gateway.istio-system.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /echo
    rewrite:
      uri: /
    route:
    - destination:
        host: vecho.vadal.svc.cluster.local
        port:
          number: 80

kubectl apply -f <above>

The virtual service references the gateway (vadal-gateway.istio-system.svc.cluster.local), and the service host (vecho.vadal.svc.cluster.local).

As there is just the root endpoint (/) in the service, we use a rewrite to translate /echo to / (we also want to use this host for other services).
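Before testing, it can be worth running Istio's configuration analyzer over the namespace; istioctl analyze (available since Istio 1.4) flags common gateway and virtual service misconfigurations such as references to non-existent hosts:

istioctl analyze -n vadal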

Try it out:

curl -i vadal.local/echo
HTTP/1.1 200 OK
content-type: application/json
date: Thu, 09 Jul 2020 21:37:31 GMT
x-envoy-upstream-service-time: 7
server: istio-envoy
transfer-encoding: chunked

{"timestamp":"2020-07-09T21:37:31.63","headers":{"host":"vadal.local","user-agent":"curl/7.64.1","accept":"*/*","content-length":"0","x-forwarded-proto":"http","x-envoy-internal":"true","x-request-id":"1ed0c221-daf6-92e9-8fa3-b129800acdb1","x-envoy-decorator-operation":"vecho.vadal.svc.cluster.local:80/echo/*","x-envoy-peer-metadata":"ChoKCkNMVVNURVJfSUQSDBoKS3ViZXJuZXRlcwoaCgxJTlNUQU5DRV9JUFMSChoIMTAuMS4xLjEKlgIKBkxBQkVMUxKLAiqIAgodCgNhcHASFhoUaXN0aW8taW5ncmVzc2dhdGV3YXkKEwoFY2hhcnQSChoIZ2F0ZXdheXMKFAoIaGVyaXRhZ2USCBoGVGlsbGVyChkKBWlzdGlvEhAaDmluZ3Jlc3NnYXRld2F5CiEKEXBvZC10ZW1wbGF0ZS1oYXNoEgwaCjY3NmZiZjc4OWQKEgoHcmVsZWFzZRIHGgVpc3Rpbwo5Ch9zZXJ2aWNlLmlzdGlvLmlvL2Nhbm9uaWNhbC1uYW1lEhYaFGlzdGlvLWluZ3Jlc3NnYXRld2F5Ci8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAoaCgdNRVNIX0lEEg8aDWNsdXN0ZXIubG9jYWwKLwoETkFNRRInGiVpc3Rpby1pbmdyZXNzZ2F0ZXdheS02NzZmYmY3ODlkLWxmemNxChsKCU5BTUVTUEFDRRIOGgxpc3Rpby1zeXN0ZW0KXQoFT1dORVISVBpSa3ViZXJuZXRlczovL2FwaXMvYXBwcy92MS9uYW1lc3BhY2VzL2lzdGlvLXN5c3RlbS9kZXBsb3ltZW50cy9pc3Rpby1pbmdyZXNzZ2F0ZXdheQo5Cg9TRVJWSUNFX0FDQ09VTlQSJhokaXN0aW8taW5ncmVzc2dhdGV3YXktc2VydmljZS1hY2NvdW50CicKDVdPUktMT0FEX05BTUUSFhoUaXN0aW8taW5ncmVzc2dhdGV3YXk=","x-envoy-peer-metadata-id":"router~10.1.1.1~istio-ingressgateway-676fbf789d-lfzcq.istio-system~istio-system.svc.cluster.local","x-b3
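If you prefer not to edit /etc/hosts, the same route can be exercised by passing the Host header explicitly to the ingress gateway (this assumes the gateway is reachable on localhost port 80, as in the service listing above):

curl -i -H "Host: vadal.local" http://localhost/echo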

Grafana

Although we hand-crafted Grafana/Prometheus before, Istio's demo profile installs them for us, with the two already connected to each other.

Expose it from the node. (Or run istioctl dashboard grafana; that method is temporary and only lasts while the command runs.)

kubectl -n istio-system edit svc/grafana

Change the type from ClusterIP to NodePort and add nodePort: 30003:

- name: http
  nodePort: 30003
  port: 3000
  protocol: TCP
  targetPort: 3000
selector:
  app: grafana
sessionAffinity: None
type: NodePort

After saving it, note the port is available.

kubectl get svc grafana istio-ingressgateway -n istio-system

NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana   NodePort   10.111.215.34   <none>        3000:30003/TCP   5h15m
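As an alternative to editing the service interactively, a one-off kubectl patch should achieve the same change; this sketch assumes the default strategic merge behaviour, which merges the nodePort into the existing port 3000 entry (the same approach works for the kiali service below):

kubectl -n istio-system patch svc grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "nodePort": 30003}]}}'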

Check out the various Istio dashboards:

http://localhost:30003/d/G8wLrJIZk/istio-mesh-dashboard?orgId=1&refresh=5s
http://localhost:30003/d/UbsSZTDik/istio-workload-dashboard?orgId=1&refresh=5s&var-namespace=vadal&var-workload=vecho&var-srcns=All&var-srcwl=All&var-dstsvc=All&from=now-1h&to=now

Kiali

A GUI to manage Istio and your services.

kubectl -n istio-system edit svc/kiali

Change the type to NodePort and add nodePort: 30004:

- name: http-kiali
  nodePort: 30004
  port: 20001
  protocol: TCP
  targetPort: 20001
selector:
  app: kiali
sessionAffinity: None
type: NodePort

Check it out:

http://localhost:30004/kiali/console/overview?duration=60&refresh=15000

Credentials are: admin/admin
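The demo profile seeds these defaults in a kiali secret in istio-system; if your login differs, you can decode what was generated (this assumes the 1.6 addon's username and passphrase keys):

kubectl -n istio-system get secret kiali -o jsonpath='{.data.username}' | base64 -d; echo
kubectl -n istio-system get secret kiali -o jsonpath='{.data.passphrase}' | base64 -d; echo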

Conclusion

We installed Istio and used its gateway and virtual service architecture to serve up our vadal-echo service in its own namespace.

We could also observe our services in Grafana and in Kiali.

Next time we will secure our vadal-echo service.

Further details

Istio Architecture:

https://istio.io/latest/docs/ops/deployment/architecture/

Comparison with Kong:

https://stackshare.io/stackups/istio-vs-kong#:~:text=Istio%20has%20an%20inbuilt%20turn,migrated%20to%20start%20leveraging%20K8s.

Originally published at https://blog.ramjee.uk on July 13, 2020.
