The future of Kubernetes workloads: The Koki Platform

Sidhartha Mani · Published in Koki
Dec 6, 2017 · 4 min read

I write this during the week of KubeCon 2017. This week, and the weeks leading up to it, have brought exciting announcements of tools, technologies and platforms for the members of the Kubernetes ecosystem.

It started with the much-anticipated announcement that AWS would support Kubernetes, followed by responses from vendors such as my former employer Rancher Labs and other prominent players like Heptio.

During this time of change, I want to challenge every user in this ecosystem to ask: what more could this ecosystem do for you? (Feel free to comment on this post with your demands; I would love to hear your story.)

This exercise got me thinking: what tools and platforms would empower users to do the right thing? The answer seemed simple to me, and is best explained by this quote:

The key to better decision-making is understanding.

Koki: A Platform that understands

The Koki platform runs on top of your Kubernetes cluster and manages all of the resources in it. The platform comprises the Koki language, the Koki runtime, and tools such as Koki Short that enhance the user experience.

The Koki platform's core philosophy is that it understands your workload just as well as you, its user, manager, and beneficiary, do. You might ask: why do I care?

A platform that understands your workload can relieve you of the manual steps you perform today, each and every one of them. The following are examples of what can be automated:

  • Ensuring that updates to one service that are incompatible with other services never happen
  • Complex failure-recovery scenarios, such as restoring data from multiple distributed stores
  • An application definition that just works, or fails with a clear message, irrespective of the underlying infrastructure
  • Coordinating complex ordering of deployments and resources
  • Escape hatches to lower-level primitives (talking directly to Kubernetes)
  • Secure by default
  • Visibility by default into your workloads: deep visibility into network, storage, and compute resources
  • Resource distribution and management: never worry about how to divide hosts, networks, or volumes between teams
  • A consistent methodology for observing and debugging a running workload

Debugging and observability as first-class citizens

One of the key design philosophies behind the Koki language was to make debugging and observability first-class citizens.

Every workload defined in Koki will acquire the ability to be observed and debugged, and the language also lets users override or enhance how that observation and debugging happens.

The following are scenarios I have encountered and fixed hundreds of times at large organizations. My aim is to keep such scenarios from occurring in the first place, or, failing that, to provide easy ways to debug and fix them (the first one is sketched after this list):

  • Digging through kube-controller-manager or kubelet logs to figure out why a pod doesn't get scheduled
  • Guessing why a new image doesn't get pulled
  • Trying to understand why a volume doesn't get attached to the node on which the pod is running
  • The worst of them all: figuring out why a packet would never reach a service even though the service, the ingress, and the pods were all healthy and the certs seemed correct
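
For context on the first of these scenarios: the scheduler records its verdict on the Pod object itself, so before grepping component logs it is usually faster to read the pod's status conditions and events (for example via kubectl describe pod, or kubectl get pod -o yaml). Below is a minimal, illustrative excerpt for a hypothetical pod that the scheduler cannot place; the pod and the message text are made up for the example.

```yaml
# Illustrative excerpt of `kubectl get pod <name> -o yaml` for a hypothetical
# unschedulable pod. The PodScheduled condition (and the matching
# FailedScheduling event) usually explains why the scheduler could not place it.
status:
  phase: Pending
  conditions:
  - type: PodScheduled
    status: "False"
    reason: Unschedulable
    message: "0/3 nodes are available: 3 Insufficient cpu."
```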

Painting a pretty picture

As Kynan Rilee and I build this platform, we are excited by the many advantages it brings to users:

  • One-click deployment of any workload (imagine an AWS RDS-style experience, but for anything you run)
  • Robust, fault-tolerant, failure-resistant infrastructure
  • Observability and debuggability from the ground up

This is the future of IT: launching complex, large-scale workloads becomes as simple as launching a program on your machine, and large-scale applications can be observed and debugged like a process running locally.

Combined with a distributed runtime that understands your workload and a specification format that helps you do the right thing, outages caused by human error will become a thing of the past.

Conclusion

I hope that this effort brings forth a new world of IT, one where users can scale their workloads even further and accelerate their organizations without spending as much time or money on IT.

As of today, you can request access to the private beta by signing up at www.koki.io, or start using the platform when we launch on May 2nd (yes, that's KubeCon Europe!).

You can try out the first piece of the Koki puzzle today: Koki Short. Short is a tool and a format for specifying, composing, and reusing Kubernetes API resources.

The Short format is:

  • Human-friendly: easier to read than long-winded Kubernetes API resource key names such as preferredDuringSchedulingIgnoredDuringExecution (see the example after this list)
  • Free to use
  • Completely open source
  • Zero learning curve: it's just YAML, so there is no new language to learn
  • Easy to maintain
  • Available now as a Chrome plugin: https://goo.gl/sqiCw1
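
To make that concrete, here is a rough side-by-side sketch: a standard Kubernetes pod manifest followed by a Short-style equivalent. The Short field names are illustrative, based on my reading of the Short docs, so treat them as an approximation and check the documentation linked below for the exact schema.

```yaml
# Standard Kubernetes pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
---
# Roughly equivalent Koki Short form (illustrative; field names may differ
# from the released format)
pod:
  name: web
  labels:
    app: web
  containers:
  - name: nginx
    image: nginx:1.13
    expose:
    - 80
```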

You can find more information about it here — https://goo.gl/73KADw

Go try it out, and let us know what you think!
