Kiali, a developer journey — Day 1, some background

Joel Takvorian
Published in Kiali · 4 min read · Oct 16, 2018

Kiali can be described as a graphical user interface focused on observing an Istio-based Service Mesh that runs on Kubernetes [1].

It’s open-source software born at Red Hat, and “open-source” means “contributions are welcome”. Through a small series of blog posts, I will try to give a subjective view of day-to-day development on Kiali, hoping it will be helpful for any potential contributor willing to dive in. Maybe you? But first, some general background on Kiali is necessary.

If you’re not familiar with Istio and the concept of a Service Mesh, let me explain briefly.

Picture from https://istio.io/docs/concepts/what-is-istio/

Istio provides a seamless layer on top of, and between, all the services of an application. In the picture above, your application is just “Service A” and “Service B”.
Pods are started with an Istio sidecar container, based on Lyft’s Envoy proxy, which intercepts the incoming and outgoing requests from and to your containers. You don’t have to change a single line of application code to make it work; the application doesn’t have to know anything about Istio.

What’s the benefit? It brings many features that are common in the world of microservices: circuit breaking, TLS encryption, routing rules (for instance, to manage a canary deployment), etc. I won’t go into full detail, but you can find more information on the why.

One cool extra thing that Istio provides is telemetry around traffic usage. Actually, it’s not only cool: Kiali just wouldn’t exist without it, or it would be quite different, because the whole Kiali graph is built from Istio metrics. For intercepted traffic, Istio reports request counters and various histograms in Prometheus, the increasingly popular time-series database. These metrics all have a Service-based [2] dimensionality, which means they are stored and indexed per inbound and outbound edge, which allows us to build a full traffic graph from them.
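To make that concrete, here is a minimal sketch, not Kiali’s actual code, of how such edges can be pulled out of Prometheus in Go with the official Prometheus client library. The metric name istio_requests_total and the source_workload / destination_workload labels are the ones Istio’s standard telemetry uses; the Prometheus address and the query window are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

func main() {
	// The Prometheus address is an assumption; point it at your own instance.
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	// Request rate over the last minute, keeping only the labels that
	// identify an edge: who called whom.
	query := `sum(rate(istio_requests_total{reporter="destination"}[1m])) by (source_workload, destination_workload)`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}

	// Each returned sample is one edge of the traffic graph.
	if vector, ok := result.(model.Vector); ok {
		for _, sample := range vector {
			fmt.Printf("%s -> %s: %.2f req/s\n",
				sample.Metric["source_workload"],
				sample.Metric["destination_workload"],
				sample.Value)
		}
	}
}
```

Every sample returned by such a query already carries both ends of an edge, which is what makes building the whole graph from telemetry possible.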

This is an important characteristic of the Kiali graph: it’s built from runtime activity, not static topology. That is to say, links between graph nodes denote traffic flows, not hard dependencies or ownership relationships between entities. That doesn’t mean Kiali will never offer such a topology view in the future (and we’re all ears regarding what you, users, have to say about it), but at the moment it doesn’t.

However, it wouldn’t be correct to say it only shows runtime activity. We do have mechanisms to add overlays on top of the traffic-flow graph, or to decorate nodes with additional data: this is what we call graph appenders. The first appender that was implemented (if I remember correctly) was the unused nodes appender, which precisely aims to find Kubernetes entities that don’t generate any traffic for whatever reason (because they’re not working as expected, because they’re simply unused at that moment, or anything else) and add them to the graph, displayed as isolated nodes. This allows us to show elements that a purely runtime, traffic-based graph wouldn’t show otherwise.
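To illustrate the idea, here is a hypothetical sketch, not Kiali’s real appender API: an appender can be pictured as something that receives the traffic map built from telemetry and is free to add or decorate nodes. An unused-nodes appender would then compare that map against what the Kubernetes API reports. All type and function names below are made up for the example.

```go
package graph

// Node and TrafficMap are simplified, hypothetical types standing in for
// Kiali's real graph model.
type Node struct {
	Namespace string
	Workload  string
	Metadata  map[string]interface{}
}

type TrafficMap map[string]*Node

// Appender decorates or extends a traffic graph after it has been built
// from telemetry. The signature is illustrative only.
type Appender interface {
	AppendGraph(trafficMap TrafficMap, namespace string)
}

// unusedNodesAppender adds workloads that exist in Kubernetes but generate
// no traffic, so they show up as isolated nodes in the graph.
type unusedNodesAppender struct {
	// listWorkloads is assumed to be backed by the Kubernetes API.
	listWorkloads func(namespace string) []string
}

func (a unusedNodesAppender) AppendGraph(trafficMap TrafficMap, namespace string) {
	for _, w := range a.listWorkloads(namespace) {
		id := namespace + "/" + w
		if _, seen := trafficMap[id]; !seen {
			trafficMap[id] = &Node{
				Namespace: namespace,
				Workload:  w,
				Metadata:  map[string]interface{}{"isUnused": true},
			}
		}
	}
}
```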

Other appenders include the response time appender (which decorates graph edges with response times), the security policy appender (which flags nodes where mTLS is enabled), etc.

In other parts of Kiali, the opposite path is taken: lists are built by fetching data from the Kubernetes API (Services, Deployments, and so on), and then some aggregation is performed using Istio telemetry.
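As a rough sketch of that direction, assuming a recent version of the official client-go library, an in-cluster configuration, and a made-up namespace name, listing Deployments first and enriching them with telemetry afterwards could look like this:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes we run inside the cluster; out of cluster you would build the
	// config from a kubeconfig file instead.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The list itself comes from the Kubernetes API...
	deployments, err := clientset.AppsV1().Deployments("my-namespace").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// ...and each item could then be enriched with telemetry, for example a
	// request rate fetched from Prometheus as in the earlier sketch.
	for _, d := range deployments.Items {
		fmt.Println(d.Name)
	}
}
```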

Overall, the Kiali architecture is pretty simple. It consists roughly of an HTTP server that serves a static frontend (a single-page app) and exposes a REST API which, in turn, consumes other APIs such as those of Kubernetes and Prometheus.

Summarized architecture
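As a minimal illustration of that shape, using only the Go standard library (the directory, endpoint, and port below are illustrative, not necessarily Kiali’s real ones):

```go
package main

import (
	"encoding/json"
	"net/http"
)

func main() {
	// Serve the built single-page app; the directory name is an assumption.
	http.Handle("/", http.FileServer(http.Dir("./console")))

	// A REST endpoint that, in the real server, would call the Kubernetes
	// and Prometheus APIs before answering. Here it just returns a stub.
	http.HandleFunc("/api/namespaces", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode([]string{"bookinfo", "istio-system"})
	})

	http.ListenAndServe(":20001", nil)
}
```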

This simple, stateless architecture certainly makes it easy to grasp, which is nice for contributors looking for an approachable way to step in. But of course I can’t guarantee that the architecture will stay this way forever: at some point we may add a cache, event-based updates, a persistence layer, etc.

That’s all for our first day in this journey; I hope it gave a good picture of what Kiali is. On Day 2, I will explain how the code base is organized, on both the backend and frontend sides, and how, as a developer, I set up my environment every day.

Footnotes

[1] … or OpenShift, OKD, or probably any other Kube-based product. Since the core development team sits at Red Hat, we can guarantee it’s going to work with OpenShift. Most developers here actually use OpenShift / Minishift to set up their environment.

[2] Service here refers to the term as commonly used when talking about microservices. Note that, in Kubernetes / Istio / Kiali terminology, a Service has a stricter meaning: it’s an inbound network interface, but it doesn’t strictly relate to your application executable. We prefer the term Workload to denote an entity able to generate traffic. Hence, to be more accurate, the metrics mentioned above have a Workload-based dimensionality.
