Istio — Path to Production Part 2: Dashboards

Matt Law
4 min read · Aug 22, 2018


In Part 1 we created our first Istio Gateway & Virtual Service to bridge into our container.

As a reminder of what we created:

  • We have one Gateway, responsible for all traffic for domain.com (*.domain.com)
  • One VirtualService responsible for routing to 2 pods in the namespace team1. That namespace was referenced with the domain team1.domain.com.

I’m going to introduce another Gateway & VirtualService into the mix, responsible for accessing pods in another namespace, namely the dashboards that are created as part of the Istio installation. Information for each dashboard can be found in the Istio documentation.

That’s a lot of information to get access to, and the instructions in those docs have us creating a port forward into the cluster. Not really production ready.
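For Grafana, for example, the documented approach looks roughly like this port-forward (a sketch based on the Istio 1.0 docs, not taken from this article; the app=grafana label and port 3000 are the defaults from that install):

kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=grafana \
    -o jsonpath='{.items[0].metadata.name}') 3000:3000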

Note: At the time of writing there is a pull request to automatically allow access to these dashboards via an ingress gateway during setup of Istio (https://github.com/istio/istio/pull/7346)

While that may be rolled out soon, running through this demo will still help in understanding how I have split up Gateways & VirtualServices in our shared cluster. Incidentally, I have chosen to do it slightly differently than where they are headed.

Change our existing Gateway

Our gateway from Part 1 was responsible for *.domain.com. While this is great for proving the components, I’m heading toward making each namespace have its own Ingress, Gateway & VirtualService. For the time being I’m sticking with 1 Ingress, and making multiple gateways, each one responsible for a separate FQDN.

This all means we need to change the FQDNs that our gateways are responsible for. In Part 1, the sole gateway was responsible for *.domain.com. We are going to be more specific now, making it responsible only for the FQDNs we have, with different gateways responsible for specific FQDNs.

Let’s edit our original global-gateway.yaml file, and add some specific domains under the hosts section. Note that this does not change the overall functionality of the existing configuration, but allows us to start creating multiple gateways.
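The updated gateway might look something like this (a sketch only — the selector, port and protocol are assumptions based on a default Istio install; the hosts match the istioctl output further down):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: global-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "team1.domain.com"    # was "*.domain.com" in Part 1
    - "team2.domain.com"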

Apply that:

kubectl apply -f global-gateway.yaml --namespace=istio-system

Check that it’s there:

istioctl get gateway --all-namespaces

GATEWAY NAME     HOSTS                               NAMESPACE      AGE
global-gateway   team1.domain.com,team2.domain.com   istio-system   6d

Dashboard Gateways

We have 4 dashboards we are interested in accessing, all currently accessible via port forwarding. We need to determine how we are going to configure Istio to match a request and route it to the respective dashboard services/pods. There are a number of ways we could do this:

Path Matching:

We could try to match a path, e.g. telemetry.domain.com/grafana where /grafana is the path we wish to match. However, there seems to be an issue with accessing via a path and then having Istio strip that path off (at least at the time of writing): Grafana will not return a proper page due to absolute paths being referenced for css & images etc.
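For reference, a path-matching rule would look roughly like this (an illustrative sketch, not configuration from this article — the gateway name and Grafana port are placeholders, and as described above Grafana’s absolute asset paths break this approach):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-by-path
  namespace: istio-system
spec:
  hosts:
  - "telemetry.domain.com"
  gateways:
  - istio-telemetry-gateway      # whichever gateway serves telemetry.domain.com
  http:
  - match:
    - uri:
        prefix: /grafana
    rewrite:
      uri: /                     # strip the /grafana prefix before forwarding
    route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000           # assumed default Grafana service port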

Port Matching:

We could also match by port for each dashboard, which is where the GitHub pull request I listed above is headed. Thus we could access telemetry.domain.com:3000. For us, any non-standard ports are blocked on our firewall, and we would need to engage the security team to open them.

Domain Matching:

I have chosen to access each service with a specific subdomain off our main domain. So grafana would be set up as grafana.domain.com etc. This does mean creating a few more DNS entries to manage this.

Here is our telemetry-gateway.yaml file. Note: it’s using the same ingressgateway and we have added 4 specific domains, one for each dashboard.
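A sketch of what that file contains (the grafana and servicegraph subdomains appear later in this article; the prometheus and jaeger subdomains are my assumption for the other two dashboards):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-telemetry-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # same ingress gateway as global-gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "grafana.domain.com"
    - "prometheus.domain.com"      # assumed subdomain
    - "jaeger.domain.com"          # assumed subdomain
    - "servicegraph.domain.com"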

Apply that:

kubectl apply -f telemetry-gateway.yaml

and check:

kubectl get gateway --all-namespaces

NAMESPACE      NAME                      AGE
istio-system   global-gateway            6d
istio-system   istio-telemetry-gateway   5d

Grafana Dashboard VirtualService

Here is the example for the Grafana dashboard VirtualService only. The other 3 dashboards I have left out for now so as to not overwhelm with a wall of code.
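A sketch of telemetry-vs-grafana.yaml along these lines (the Grafana service port of 3000 is an assumption based on the default Istio addon install):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
  - "grafana.domain.com"
  gateways:
  - istio-telemetry-gateway
  http:
  - route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000   # assumed default Grafana service port
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: grafana
  namespace: istio-system
spec:
  host: grafana.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE   # the grafana pod has no sidecar, so mTLS must be off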

The VirtualService entry should be fairly self-explanatory. I think it’s just a good habit to reference the destination host with the FQDN, i.e. grafana.istio-system.svc.cluster.local, even if they are living in the same namespace.

You will notice a new section here: DestinationRule

Why is this here? It’s doing one thing: disabling the TLS traffic policy to the grafana pod. All the dashboards have been set up, as part of the GKE installation, without a sidecar proxy, which is required for mTLS to work.

Let’s apply our VirtualService:

kubectl apply -f telemetry-vs-grafana.yaml --namespace=istio-system

and check that it’s there:

istioctl get virtualservices --namespace=istio-system
istioctl get destinationrule --namespace=istio-system

DNS Entry

You’ll need to add a DNS entry for grafana.domain.com to point to your external IP. We are still using the same Ingress, and therefore are using the same IP that we discovered in Part 1.
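If you want to test before the DNS change propagates, you can point an /etc/hosts entry at that same external IP (the address below is a placeholder; substitute the IP you discovered in Part 1):

203.0.113.10   grafana.domain.com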

All you need to do now is to browse to your newly created URL!

Grafana Dashboard

Other Dashboards

We have 3 other dashboards that we wish to access. The approach is a rinse and repeat of the VirtualService and DNS entry. You could create separate yaml files for each, but I have included all of my dashboards in one yaml file.

I’ll leave it up to you to do this, noting that you will need to discover the ports your respective services are on so you can make the changes to the VirtualService.
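A quick way to discover those service names and ports is to list the services in the istio-system namespace (the ports you see will depend on your Istio version and install options):

kubectl get svc --namespace=istio-system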

Note that when accessing your servicegraph URL, you will need to add force/forcegraph.html to the path, so your URL might look like https://servicegraph.domain.com/force/forcegraph.html for example.

Summary

I hope that this article served a couple of purposes: firstly, showing how we can start splitting up Gateways and VirtualServices, and more importantly, how we can now access the dashboards that are part of the installation.

In Part 3 I’ll be talking about Metrics, and how to export them out to an external source. That’s here:
