Istio on Nodeless Kubernetes

Brendan Cox
Published in Elotl blog
Jun 13, 2019

We get asked a lot whether it’s possible to run a service mesh with Kiyot, our nodeless CRI for Kubernetes. Yes, it is! Behind the scenes, Kiyot and Milpa launch pods onto right-sized cloud compute instances that run a slimmed-down Linux distribution. This means these instances provide all the system capabilities necessary to run a service mesh. In this post, we’ll briefly cover how Istio’s service mesh runs in a regular Kubernetes cluster and then show how it runs on a nodeless Kiyot cluster. TL;DR: it’s not much different.

Istio’s service mesh is composed of a control plane and a data plane. The data plane is made up of two components: an Envoy sidecar proxy that is injected into every pod and Mixer, a service providing access control checks and telemetry capture. Istio’s control plane consists of services (Pilot, Galley and Citadel) that are used to configure the data plane. The data plane components query the control plane to learn how traffic should be routed to services in the cluster.
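To make that concrete, here’s roughly what those pieces look like in a standard install (a sketch of the Istio 1.x layout; deployment names vary by version, and the output below is illustrative):

    $ kubectl get deployments -n istio-system
    NAME              READY   UP-TO-DATE   AVAILABLE
    istio-pilot       1/1     1            1     # control plane: traffic configuration
    istio-galley      1/1     1            1     # control plane: config validation
    istio-citadel     1/1     1            1     # control plane: certificate management
    istio-telemetry   1/1     1            1     # Mixer: telemetry capture
    istio-policy      1/1     1            1     # Mixer: access control checks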

In a standard Kubernetes cluster, Istio’s Envoy sidecar proxies are injected into pods and the supporting Istio services (Mixer, Pilot, Galley and Citadel) are run as deployments. Each deployment creates one or more pods that run on a kubelet node. This is all pretty standard. To route traffic through the service mesh, the pod’s iptables rules are modified so that its incoming and outgoing traffic flows through the proxy. When traffic leaves the node, kube-proxy ensures it is routed to the appropriate pods.
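As a rough sketch of that wiring: injection is enabled per namespace, and the injected istio-init container installs NAT rules inside the pod’s network namespace that hand traffic to Envoy (heavily simplified; the real rules use dedicated ISTIO_* chains and exclude the proxy’s own traffic):

    # Enable automatic Envoy sidecar injection for pods in this namespace
    kubectl label namespace default istio-injection=enabled

    # Simplified version of what istio-init sets up inside each pod:
    # redirect the pod's inbound and outbound TCP traffic to Envoy on port 15001
    iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001
    iptables -t nat -A OUTPUT     -p tcp -j REDIRECT --to-port 15001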

[Figure: Service-to-service networking in Kubernetes]

When running Istio in a nodeless system with Kiyot, the components remain the same, except that each pod is launched onto its own cloud instance instead of running on a kubelet node. Kiyot, our CRI implementation, works together with our Milpa controller to dispatch new pods to right-sized cloud instances that are transparently managed by Milpa. Traffic to and from these pods still passes through the Istio sidecar, DNS queries resolve to the correct cluster IPs, and kube-proxy forwards cluster-IP connections to the correct instance. In short, despite the fact that pods no longer run on the kubelet, the system continues to work in a very similar manner.
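From the user’s side, nothing changes: the same manifests and kubectl workflow apply, each replica just lands on its own instance (a sketch; the image and names are placeholders):

    # Deploy and scale exactly as on a regular cluster
    kubectl create deployment helloworld --image=gcr.io/google-samples/hello-app:1.0
    kubectl scale deployment helloworld --replicas=3

    # Each pod is backed by its own right-sized cloud instance,
    # rather than being packed onto shared kubelet nodes
    kubectl get pods -o wide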

[Figure: Service-to-service networking with nodeless Kubernetes]

From what we see above, the essence of what we’ve done is to deconstruct a kubelet node into its component pieces. The Kiyot CRI and Milpa call the cloud API to launch pods onto new instances in the background, and everything continues to work as expected. While it might seem strange, running Kubernetes in this type of setup has some compelling advantages.

  • Simplified cost management — When a deployment is scaled down or a pod stops running, you no longer pay for the compute resources it was using.
  • Autoscaling out of the box — The cluster automatically scales to handle bursty workloads.
  • Simple cost attribution — Instances are tagged with pod labels, making it easy to match cloud costs with what’s running on the cluster (see the example after this list).
  • Improved security — Each application is isolated on its own virtual machine, providing stronger isolation and security guarantees than a traditional multi-tenant system.
  • Simplified operations — There are no additional worker nodes to manage and no cluster autoscaling knobs to tune.
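As an example of that cost attribution, instances tagged from pod labels can be queried with standard cloud tooling (a sketch; the exact tag keys Milpa applies to instances are an assumption here):

    # Hypothetical tag key: assumes the pod label "app" is copied to the instance
    aws ec2 describe-instances \
        --filters "Name=tag:app,Values=helloworld" \
        --query "Reservations[].Instances[].[InstanceId,InstanceType]" \
        --output table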

If you’re interested in exploring a similar nodeless setup for your Kubernetes cluster, use our Terraform scripts to provision a nodeless Kubernetes cluster. If you’re looking to build something serious using Kiyot and Milpa, let us know if we can help.
