Kubernetes in virtual reality: Building the K8s VR experience

Ryan van Niekerk
16 min read · Nov 22, 2016


Kubernetes VR demo — the finished project

TL;DR

I made a Kubernetes pod visualizer / interactive experience for the HTC Vive. You can try out the code from the following repos:

If you want to run it in VR mode, you MUST have the following:

An early prototype of spawning objects with physics into A-Frame

This is the first project I’m aware of that attempts to bring virtual infrastructure into an interactive virtual reality world. Before I dive into the technical details that likely brought you here in the first place (Kubernetes in virtual reality), I want to take a moment to acknowledge what an incredible community we have supporting the Kubernetes project. I’ve been an avid user for just under a year now, and have never been more excited to profess my love for a piece of open-source software.

It’s truly GAME changing (please excuse the pun) for the technology community as a whole. The Slack channel has been an invaluable resource and direct line of communication to people much smarter than me throughout my entire learning process over the last 10 months. I’m happy to say that we at Lonely Planet have adopted Kubernetes in our production environment and it is now a key component of our software delivery pipeline.

Please check out my Kubecon talk to learn about any aspects of the project that go unmentioned in this post.

Kubecon presentation of Kubernetes Virtual Reality

Just an idea…

That’s all this really was 8 months ago. VR was just going mainstream with the release of the consumer-grade Vive and Oculus Rift. I was inspired to create this project by the earlier “infrastructure in a game” prototypes of Dockercraft and Docker DOOM. Whilst the practicality of such hacks is open for debate, it was fun and refreshing to see people thinking about visualization of Docker in new and exciting ways.

I wanted to build upon these ideas and take them to a whole ’nother level. The seed had been planted; I knew this was something I wanted to take on. I submitted my talk proposal to the Kubecon CFP (call for papers) in March. And waited. And waited. And waited. In September I finally got word that I was going to be speaking.

Now all that was left was to figure it all out…

The current VR landscape

I’m not going to cover all the players here, but I do want to highlight a few of the major options a VR developer has to choose from at the moment.

Unity

Chances are if you own a mobile phone, you’ve played a game powered by the Unity game engine. Valve also built much of the (awesome) The Lab VR experience using Unity and has even open-sourced many of the components they added. I spent about a week exploring Unity before deciding that it wouldn’t fit well into the Kubernetes ecosystem. Building an event-based or RESTful framework into the engine seemed like a rather daunting task, and requiring a user to download and install a binary seemed prohibitive to my end goal.

Pros:

  • Free to start
  • Stable
  • Awesome community
  • Great cross-platform editor
  • Drag-and-drop support for VR

Cons:

  • You need to know C# to be effective with it
  • Users of your “product” need to download and install your application in order to use it
  • Free version isn’t open-source

Unreal

I really won’t do Unreal justice here: it’s a fantastic and proven game engine. That being said, it’s incredibly deep, and I was overwhelmed just figuring out where to start. I never considered it a viable option for what I set out to accomplish.

WebVR

Now for the cool new kid on the block, WebVR. You may not even have known it existed, but it’s here, and it’s gaining momentum fast. Mozilla is in large part to thank for this: they have a team dedicated to nothing but VR on the web, and they are doing a fantastic job. I recommend reading over the article on why WebVR matters.

There is one WebVR framework that stands out from the rest (and the one I ultimately decided to build upon), A-Frame.

A-Frame logo animation

A-Frame is a WebVR framework self-described as the following:

An open-source web framework for building virtual reality experiences. We can build VR web pages that we can walk inside with just HTML. Under the hood, it is a three.js framework that brings the entity-component-system pattern to the DOM.

Yes, you can build VR applications using nothing but the HTML and JavaScript knowledge you already have. You can see the appeal here, for both users and developers. Not having to learn a completely new language, or be forced to download and install something just to view what other people have built, is a pretty compelling reason to go with WebVR.

Here is a short example of how easy it is to render a VR scene using A-Frame:

Example A-Frame application by Mozilla
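The embedded example isn’t reproduced here, but Mozilla’s canonical hello-world scene is just a handful of HTML tags, roughly like this (the version in the script URL is illustrative for late 2016):

<html>
  <head>
    <!-- Load A-Frame; everything inside <a-scene> becomes a 3D/VR entity -->
    <script src="https://aframe.io/releases/0.3.2/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>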

Building blocks

Now that I had settled on a framework for building the VR component of the application, I still had another big piece of the puzzle to solve. How do I tie it together with an actual Kubernetes back-end?

A Kubernetes cluster consists of a fairly wide range of different types of resources, including but not limited to:

  • Pods (with 1 or more containers)
  • Deployments
  • Replication Controllers
  • Volumes
  • Secrets
  • Config Maps
  • Services

I knew it wouldn’t be feasible to build a representation of all of these, so I chose to represent only individual Pods for the first iteration. Regardless of whether a Pod was part of a Deployment, a Replication Controller or something else, it would be visualized as a single physical object in VR.

These were my main remaining objectives for the first attempt:

Provide a physical representation of a Pod in VR

A Pod would need to be represented by an actual three-dimensional object in the scene, as opposed to just an item in the list you would get from running:

kubectl get pods

I settled on the built-in primitive <a-box> (a cube) that comes with A-Frame: it’s simple to use, I can render a lot of them, and it doesn’t require loading custom object models. Here is an example:

A-Frame’s “box” primitive
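In markup, a single pod-sized box looks like this (the attribute values here are illustrative):

<a-box position="0 2 -5" width="0.5" height="0.5" depth="0.5" color="#326CE5"></a-box>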

Handle real-time Pod events from a Kubernetes cluster

This meant that anytime a new Pod was created or deleted on the API end, that event should propagate to my VR application in near real-time. This was critical for my use case: the user would be inside VR, and requiring them to refresh a page in order to see changes would be a deal-breaker.

Kubernetes has a pretty awesome “watch” flag built right in. You’ve probably used it if you’ve ever run the following command:

kubectl get pods -w

I decided to use Node.js to build the back-end API for a number of reasons, mainly:

  • The front-end was already going to be JavaScript, so I might as well save time and do the same for the back-end.
  • There are many great Kubernetes API client libraries in the NPM package repository. I went with “kubernetes-client” by GoDaddy, as it has nice built-in support for streaming the watch API and acting on different events.
  • The asynchronous nature of the language would be useful for having a single application deal with non-blocking streaming events as well as one-off requests for additional information.

I give a deeper dive into the Kubernetes watch API in the section “The Kubernetes watch API in-depth”.

Developing for VR is not the most fun process…

Handle propagation of real-time events in the browser

There were a few requirements to consider on the browser side. I knew I was going to be rendering dynamic (and sometimes stateful) components in the DOM. For this reason, choosing React was a no-brainer. This was actually the first time I’d used React, and it turned out to be pretty painless for the most part.

Luckily for me, Kevin Ngo (one of the uber-genius core members of the A-Frame team) had already built a fantastic package to take advantage of all of the power of React directly in A-Frame. This meant I could build stateful (or stateless) React components that represent A-Frame primitives. A-Frame plus aframe-react lets me tie directly into the A-Frame render loop (key to building a performant 3D application).

In order to propagate these events from the back-end API to the browser, I decided to utilize the WebSocket-powered framework Socket.io. It’s very similar to another framework I worked with previously on a Docker hackathon project to visualize real-time Docker events.

Socket.io uses WebSockets to enable real-time, event-based communication between browser and server. This was a key component of my project given the real-time requirements of the VR aspect.

I should mention there was another strong competitor in the open-source WebSocket space called Deepstream. I ended up choosing Socket.io mainly because of the number of reference examples that combined it with React.

Allow the user to interact with a Pod in VR

At minimum, I wanted a user to be able to delete a Pod from inside the VR application itself. Initially I wanted to allow for Pod creation as well, but given the timeline I decided to limit interactions to deletion only. I also wanted each Pod to be able to display specific metadata about itself. My initial goal was just to get the name overlaid on top of the Pod, but I eventually convinced myself to pull in more information and came up with a fairly decent mechanism to display that data.

Maintain a steady 90 FPS

Having played my fair share of virtual reality titles released for the Vive, I knew performance was a must. Anything less than 90 FPS in a virtual reality environment is begging for motion sickness. If you ever got your hands on one of the earlier Oculus Rift dev kits, you know what I’m talking about: the low update rate and screen tearing were extremely apparent and significantly reduced the immersion factor. This turned out to be one of the more challenging aspects of the project. It was a constant trade-off between features and maintaining a steady 90 FPS.

Component overview

Kubernetes VR component overview (powered by Cloudcraft)

The Kubernetes watch API in-depth

A key piece of this entire project was being able to act on real-time information from the Kubernetes API. I didn’t want to rely on a polling mechanism to query the cluster for changes, but rather receive changes directly through a stream as they happen.

The Node.js back-end utilizes a Kubernetes client library written by engineers at GoDaddy. This particular library makes it easy to use the watch flag that you can pass to the Kubernetes API to keep a persistent connection open (via HTTP or WebSocket) and stream in changes to resources in real-time.

Here is an example of how this code works:

Different events that come from the K8s watch API — full code available at https://github.com/thenayr/kubernetes-vr-api/blob/master/index.js
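The full code is linked above; the basic shape of it, following the client’s documented streaming pattern at the time (exact method names may differ between versions of the library), is something like:

// Stream pod change events from the Kubernetes watch API.
// The API server URL and namespace here are assumptions for illustration.
const Api = require('kubernetes-client');
const JSONStream = require('json-stream');

const core = new Api.Core({
  url: 'http://localhost:8080',
  version: 'v1'
});

// Keep a persistent watch connection open and parse each change as JSON
const stream = core.ns('default').po.getStream({ qs: { watch: true } });
const jsonStream = new JSONStream();
stream.pipe(jsonStream);

jsonStream.on('data', function (event) {
  // event.type is ADDED, MODIFIED or DELETED; event.object is the pod itself
  console.log(event.type, event.object.metadata.name);
});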

I did run into a few caveats dealing with the watch API. Unbeknownst to me at the time, the Kubernetes API will actually assign a timeout to your connection regardless of what you set on the client side. I had assumed the watch request would persist until I killed it on the client side. You can read a bit more about how to deal with that here: http://stackoverflow.com/questions/33480560/kubernetes-api-server-drops-watch-connections

Determining when a new pod was created AND ready

Another “gotcha” worth mentioning about the watch responses involves the ADDED and MODIFIED events. When I was debugging an earlier version of the project, I noticed that pods arriving on the ADDED event were missing a lot of the metadata I needed (IP address, labels, etc). It turns out that ADDED is the first event that fires for a new pod, even before that pod actually enters any sort of ready state.

Even more troublesome, creating a single new pod would fire off a series of events like so:

  • ADDED
  • MODIFIED
  • MODIFIED
  • MODIFIED
  • MODIFIED

Each phase the pod went through after first being added would send another event to the stream. Given that my goal was simply to take a newly created pod and its metadata and insert it into my A-Frame scene, this made it rather difficult to distinguish when a new pod was actually created and ready, especially when multiple pod creations and deletions were happening simultaneously. I eventually found the right combination of events to determine when that state occurs:

Events to use to find newly added ready pods. Full code available here — https://github.com/thenayr/kubernetes-vr-api/blob/master/index.js
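The full code is in the linked repo; the gist of the check, with the bookkeeping set and emitted event names being my own inventions for illustration, looks something like this:

// Track pods we have already surfaced so repeated MODIFIED events are ignored
const knownPods = new Set();

jsonStream.on('data', function (event) {
  const pod = event.object;
  const name = pod.metadata.name;

  if (event.type === 'MODIFIED' && !knownPods.has(name)) {
    // Only treat the pod as "created" once it reports Ready and has an IP
    const conditions = (pod.status && pod.status.conditions) || [];
    const isReady = conditions.some(function (c) {
      return c.type === 'Ready' && c.status === 'True';
    });
    if (isReady && pod.status.podIP) {
      knownPods.add(name);
      io.emit('podAdded', {   // io is the Socket.io server instance
        name: name,
        ip: pod.status.podIP,
        labels: pod.metadata.labels
      });
    }
  } else if (event.type === 'DELETED') {
    knownPods.delete(name);
    io.emit('podDeleted', { name: name });
  }
});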

That is to say, I couldn’t use the ADDED event in the stream to generate new objects in my scene; instead I had to rely on a combination of the MODIFIED event and a set of metadata fields that show up in that event. It would be nice to have a READY event added to the API that would, in effect, do the same thing.

Placing Pods into an A-Frame VR scene

An earlier prototype of the scene with some placeholder pods

Now that I had the event stream properly detecting pods upon creation and deletion, I had to come up with a way to get them to render directly into my VR scene in real-time.

This is where React really shines: I was able to build a Pod component which could be dynamically added to or removed from the scene whenever the appropriate event was triggered. The following is a simplified version of what that component looks like (I stripped out some of the dynamic texturing and extraneous functionality).
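The embedded gist isn’t reproduced here; a stripped-down reconstruction using aframe-react, with the prop names being assumptions, looks roughly like this:

// Simplified Pod component: one textured box entity per pod.
// Prop names (id, position, texture) are assumptions for illustration.
import React from 'react';
import { Entity } from 'aframe-react';

export default class Pod extends React.Component {
  render() {
    return (
      <Entity
        className="pod"
        id={this.props.id}
        geometry={{ primitive: 'box', width: 0.5, height: 0.5, depth: 0.5 }}
        material={{ src: this.props.texture }}
        position={this.props.position}
      />
    );
  }
}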

I then built a parent component to handle the actual spawning of the child Pods; it handles positioning, retrieval of metadata via WebSockets, and the overall lifecycle of each child pod.

The full component contains quite a lot of code that handles communication via WebSockets to and from the back-end Node.js API. Important to note is the initial provisioning that happens via the loadPodsFromServer function. This emits a WebSocket event to the Node application, which in turn responds with a JSON blob containing the existing pods.
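A sketch of that handshake, inside the parent component (the event names are assumptions):

// Ask the server for the current pods, then keep them in component state
loadPodsFromServer() {
  this.socket.emit('getPods');
  this.socket.on('pods', (pods) => {
    this.setState({ pods: pods });
  });
}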

I wanted the pods to spawn in random positions in the scene and fall out of the sky. The randomPosition function handles that by setting the X, Y and Z coordinates each time a new pod component is created.
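Something along these lines, with the platform bounds being illustrative:

// Random X/Z within the platform bounds, high Y so the pod drops from the sky
function randomPosition() {
  const x = Math.random() * 8 - 4;
  const z = Math.random() * 8 - 4;
  const y = 10 + Math.random() * 5;
  return x + ' ' + y + ' ' + z; // A-Frame position string: "x y z"
}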

Later on in the development cycle, I decided I wanted to texture the boxes depending on the type of image the pod was actually running. To implement this, I added a simple labeling system to my (example) Kubernetes manifest files:

spec:
  replicas: x
  revisionHistoryLimit: x
  template:
    metadata:
      labels:
        type: nginx

When the Node API emits an event over WebSockets, it includes this type label, which the front-end application uses to map back to a texture image:
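On the front-end, the mapping can be as simple as a lookup table (the names here are illustrative):

// Map a pod's `type` label to a texture asset ID registered in <a-assets>
const TEXTURES = {
  nginx: '#nginx-texture',
  default: '#kubernetes-texture'
};

function textureForType(type) {
  return TEXTURES[type] || TEXTURES.default;
}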

The Node API side of the code checks whether a pod has this label assigned, and otherwise gives it a default pod texture type (blue with the Kubernetes logo):
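On the Node side, that check amounts to a fallback on the pod’s metadata (a sketch; the function name is an assumption):

// Fall back to the default texture type when no `type` label is present
function podType(pod) {
  return (pod.metadata.labels && pod.metadata.labels.type) || 'default';
}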

Inside of a separate React component, I define my A-Frame assets using the asset management system:
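A minimal sketch of that assets component (file paths and asset IDs are assumptions):

// Preload texture images via A-Frame's asset management system so that
// entities can reference them by ID. Paths and IDs are illustrative.
import React from 'react';

export default function Assets() {
  return (
    <a-assets>
      <img id="nginx-texture" src="/textures/nginx.png" />
      <img id="kubernetes-texture" src="/textures/kubernetes.png" />
    </a-assets>
  );
}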

This entire system is very rudimentary and rigid (it has no notion of textures for types that aren’t explicitly mapped), but it got the job done for the limited subset of pods I was dealing with in my demonstration.

Here is what these texture mappings look like (running in the browser with keyboard controls):

Custom material textures based on Kubernetes labels

Deleting Pods from inside of VR

Deleting a pod from inside VR

I wanted the interactions with the cluster to be bi-directional: not only should newly added and deleted pods propagate to the front-end, but pods that I delete from within the scene should also propagate to the Kubernetes back-end. I had come up with a few ideas on how pod creation from inside VR should work, but decided to table that for a later iteration of the project.

In order to delete pods, I came up with the following scenario: a pod always spawns within the bounds of the platform that the scene is based around. If the user carries a pod to the edge of the platform and drops it over (or throws it over from a distance), the pod falls for a seemingly infinite distance and emits a delete message back to the Node API, which in turn runs the equivalent of kubectl delete against that specific pod. This turned out to be pretty fun and easy to implement. I even added the vive-cursor component to one of the controllers to allow you to point and click on a pod and send it flying into the abyss (easier, and arguably more fun, than having to carry or throw it).

For the functional implementation of this, I utilized the A-Frame component system to script a polling check on the pod’s Y value (its current height in 3D space); once it falls far below the scene, the component emits the delete event.
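A reconstruction of that component (the component and event names here are my own):

// Poll the entity's Y position every 5 seconds; once it has fallen 50 meters
// below the scene, emit an event for the parent component to act on.
AFRAME.registerComponent('pod-fall-check', {
  init: function () {
    var el = this.el;
    this.interval = setInterval(function () {
      if (el.getAttribute('position').y < -50) {
        el.emit('pod-fell');
      }
    }, 5000);
  },
  remove: function () {
    clearInterval(this.interval);
  }
});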

A-Frame components expose a gateway into the lifecycle of an entity so you can add functionality to it; you can run functions at several different stages of the entity’s life. In the example above, when a pod is initialized, I start an interval function that runs every 5 seconds and checks whether the pod has fallen 50 meters below the scene, emitting an event if it has. The parent component of my pods binds this event to a function which in turn emits (you guessed it) a WebSocket event back to the Node API with the specific pod ID as the payload.

And the Node handler for this event:
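A sketch of that handler (the event name and the exact delete call are assumptions; the client’s delete API differs between versions):

// Receive the delete event over the socket and remove the pod via the
// Kubernetes client. Event name and call shape are assumptions.
io.on('connection', function (socket) {
  socket.on('deletePod', function (podName) {
    core.ns('default').po.delete({ name: podName }, function (err) {
      if (err) {
        console.error('Failed to delete pod ' + podName, err);
      }
    });
  });
});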

Pod metadata UI

Displaying textual information is notoriously challenging in a VR interface. Luckily, there were already A-Frame components that allowed me to render geometric text in my scene. I utilized one to build an interface that can be toggled on or off by clicking the Vive grip button while hovering the controller laser over a specific pod.

A-Frame text-ui component

Unfortunately, I ran into a number of performance issues by having the text UI components rendered (albeit hidden) when the pods were initialized. Instead, I ended up adding a “clicked” state to my pod component that only renders the text UI when activated. This helped significantly with the initial performance hit. I’m still not entirely happy with the state of the UI, but it was sufficient for demonstration purposes. Note that the pod ID displayed above each individual pod actually uses a separate sprite component that I hacked together (I REALLY dislike how it looks, but it functions, and performance isn’t a problem like it was with the geometric text).

The gross sprite labels that I don’t like at all

Cluster metadata UI

In addition to individual pod metadata, I also wanted a way to display more general information about the cluster as a whole. I didn’t spend a whole lot of time building this particular component, but I was able to display the total count of pods and deployments that exist in the cluster.

There is a placeholder button where I was going to add pod creation functionality, but I haven’t gotten around to it yet.

Kubernetes UI in VR

The cluster metadata UI pops up when the user clicks the menu button on either Vive controller. Here is the code for the cluster UI:
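The full gist lives in the repo; a minimal sketch, assuming a registered geometric text component and with made-up prop names, is below.

// Minimal cluster UI sketch: a panel of text entities toggled via `visible`.
// Assumes a text component is registered; prop names are assumptions.
import React from 'react';
import { Entity } from 'aframe-react';

export default function ClusterUI(props) {
  return (
    <Entity visible={props.visible} position="0 2 -2">
      <Entity text={{ text: 'Pods: ' + props.podCount }} position="0 0.3 0" />
      <Entity text={{ text: 'Deployments: ' + props.deploymentCount }} position="0 0 0" />
    </Entity>
  );
}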

Vive controller functionality in A-Frame

The Vive controller functionality in action

There were several pieces of functionality that were critical to supporting the full room-scale Vive experience I wanted. In no particular order, they are:

  • Controller teleportation
  • Controller physics interaction (pick up / touch objects)
  • Controller cursor (laser pointer) interaction

Luckily for me nearly all of these components were already created by the community (some JUST in time for me to take advantage of them).

Here are the community projects I relied on for the core functionality of the Vive controller:

Teleportation by Fernando Serrano: https://github.com/fernandojsg/aframe-teleport-controls

Physics interaction (grab and sphere-collider by Don McCurdy): https://github.com/donmccurdy/aframe-extras/tree/master/src/misc

Vive cursor by Ben Pyrik: https://github.com/bryik/aframe-vive-cursor-component

Here is the code for my controller entities:
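The embedded code isn’t reproduced here; the entities look roughly like this, wiring up the community components listed above (exact attribute values are assumptions):

<!-- Left hand: teleportation plus physics grabbing -->
<a-entity vive-controls="hand: left"
          teleport-controls
          sphere-collider="objects: .pod"
          grab></a-entity>

<!-- Right hand: laser cursor plus physics grabbing -->
<a-entity vive-controls="hand: right"
          vive-cursor
          sphere-collider="objects: .pod"
          grab></a-entity>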

It’s worth mentioning that the Vive controllers ONLY work in the experimental builds of Chrome with WebVR support at the moment. They will not function in Firefox Nightly builds.

There were also a number of hacky workarounds I had to implement in order for the controller interactions to work with newly appended objects. Due to the way some A-Frame components are implemented, you often have to remove and reinitialize them when new objects are added. Here is an example of how I did this every time a new pod was added to the scene:
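A hedged reconstruction of that workaround, inside the pod React component (selectors and attribute values are assumptions):

// Re-initialize the controllers' sphere-collider so they register the new
// pod, and give the new pod a dynamic physics body.
componentDidMount() {
  var pod = document.getElementById(this.props.id);
  pod.setAttribute('dynamic-body', '');

  var controllers = document.querySelectorAll('[sphere-collider]');
  for (var i = 0; i < controllers.length; i++) {
    controllers[i].removeAttribute('sphere-collider');
    controllers[i].setAttribute('sphere-collider', 'objects: .pod');
  }
}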

Using the componentDidMount() lifecycle hook in React, I can remove and re-add the sphere collider on my controllers (so they register new objects), as well as add the dynamic-body component to the newly created pod (so it actually gets physics applied to it).

Don’t try this at home

Or rather, only try this at home… All of the aforementioned code is available on my GitHub. I DO NOT RECOMMEND you try this out against any Kubernetes cluster in the wild. In its current state, you should ONLY run it against a local or experimental (and very disposable) cluster.

Known issues

As mentioned before, performance was a constant struggle between new features and keeping a smooth frame rate. I haven’t tested a scene with more than 50 pods so far, and I would imagine it would start to stutter with larger numbers. I’m no expert in JavaScript, and I’m sure there are plenty of performance optimizations to be made in the code that I wrote.

The Node.js back-end will time out after a (mostly unknown) amount of time, at which point you will need to restart it. Reconnection logic could easily be added to prevent a restart from being required.

I’m definitely not happy with the state of the code base. If it looks like it was hacked together by someone sleep deprived and completely new to most of the technology in play, there is a very good chance that is the case.

Wishlist

There are a ton more features I would love to add to this project. Some additional ideas I had:

  • Streaming pod logs into the scene
  • Creating new pods from within VR
  • Viewing pod metrics (CPU/memory) from within VR
  • Organizing pods based on Deployments / Replication Controllers

Contact

If you’d like to discuss this project or schedule a demo session, please feel free to contact me via Twitter or email.

I’m open to collaborating on similar projects (and to extending this one further).

Thanks for reading!

This is what happens if you replace all of the dust particles with kittens. Adorable and terrifying at the same time.


Ryan van Niekerk

DevOps Engineer at Lonely Planet, Ketogenic freak. All views and opinions are strictly my own.