Creating Visualizations from Kubernetes Metrics with Fission, and Some Notes on Fission Workflows

Running a persistent service for some tasks isn’t really practical for a variety of reasons, especially if it’s a single function with a specialized purpose. FaaS technology has largely addressed this need in a variety of ways, with particular emphasis on integration with Kubernetes. Fission is one such FaaS framework that runs atop Kubernetes, and with Fission Workflows it can more dynamically manage interrelated jobs using the native Kubernetes manifest system.

As an example of how this service can be integrated into your application, we’re going to create a workflow with two functions: the first will grab data from the Heapster API (in this case, cluster metrics from the kube-system namespace) and insert the resulting JSON object into MongoDB. The second will be a task that formats the stored MongoDB values into a consumable body (in this case, data points for a graph rendered by Canvas.js).

To follow along, you’ll need:
  1. A running Kubernetes Cluster (minikube will suffice as well)
  2. Fission and Fission Workflows installed on the cluster.
  3. MongoDB Service running on the cluster (I’ll go into this in a moment)

The data

You’ll just set up a simple MongoDB deployment (this can easily be expanded into a replica set, but for the purposes of our example, we’ll just have a single pod managed by a replication controller):
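A minimal manifest for this might look like the following sketch (the image tag and label names are assumptions; only the Service name matches the hostname used below):

```yaml
# mongo-k8s.yaml — single-pod MongoDB behind a Service (a sketch)
apiVersion: v1
kind: ReplicationController
metadata:
  name: faas-demo-mongo
spec:
  replicas: 1
  selector:
    app: faas-demo-mongo
  template:
    metadata:
      labels:
        app: faas-demo-mongo
    spec:
      containers:
      - name: mongo
        image: mongo:3.6
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: faas-demo-mongo
spec:
  selector:
    app: faas-demo-mongo
  ports:
  - port: 27017
```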

This will create the Mongo controller and the Service on faas-demo-mongo.faas-demo.svc.cluster.local when you run:

kubectl create -f mongo-k8s.yaml --namespace=faas-demo

You can then use any number of options (including Kubernetes CronJobs) to automate running this as often as you’d like the data updated.

With the database up and running, we can move on to seeding that database.

Writing metrics to MongoDB

So, you’ll set up the first task to read the metrics from Heapster and return a JSON object for the latest metric (checking for duplicates before inserting into what will become your time series), then insert it into MongoDB:
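The write function might look like the following sketch (the Heapster model-API path, the choice of memory usage as the metric, and the database and collection names are all assumptions for illustration):

```python
import json
import urllib.request

# Assumed in-cluster endpoints; adjust for your environment.
HEAPSTER_URL = ("http://heapster.kube-system.svc.cluster.local"
                "/api/v1/model/metrics/memory/usage")
MONGO_HOST = "faas-demo-mongo.faas-demo.svc.cluster.local"


def latest_metric(metrics):
    """Return the newest point from a Heapster-style metrics payload."""
    points = metrics.get("metrics", [])
    if not points:
        return None
    newest = max(points, key=lambda p: p["timestamp"])
    return {"timestamp": newest["timestamp"], "value": newest["value"]}


def main():
    # Fission invokes main() each time the function runs.
    with urllib.request.urlopen(HEAPSTER_URL) as resp:
        doc = latest_metric(json.load(resp))
    if doc is None:
        return "no data"
    # pymongo is imported lazily so the module loads anywhere; the
    # Fission python env is assumed to provide it at runtime.
    from pymongo import MongoClient
    coll = MongoClient(MONGO_HOST).faas_demo.metrics
    # Upsert on timestamp so re-runs don't duplicate points in the series.
    coll.update_one({"timestamp": doc["timestamp"]}, {"$set": doc},
                    upsert=True)
    return "ok"
```

The upsert keyed on the timestamp is what keeps the series free of duplicate points when the function runs more often than Heapster produces new data.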

Pretty straightforward. Your next task is making that data usable by your application: as you’ll see later on, the plot requires x and y values rather than the key names used here, and renaming also avoids confusion if you use the data elsewhere, where an axis name isn’t especially helpful.

Reformatting your data

Next, you’ll create a task script that reads the data from MongoDB and formats it to meet the needs of the application:
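A sketch of that formatter function (again, database and collection names are assumptions carried over from the write function above):

```python
import json


def to_points(docs):
    """Map stored metric documents to plot-friendly {x, y} pairs."""
    return [{"x": d["timestamp"], "y": d["value"]} for d in docs]


def main():
    # Fission invokes main() on each GET to the function's route.
    # pymongo is imported lazily so the module loads anywhere; the
    # Fission python env is assumed to provide it at runtime.
    from pymongo import MongoClient
    coll = MongoClient(
        "faas-demo-mongo.faas-demo.svc.cluster.local").faas_demo.metrics
    # Newest 60 points, then reversed so the series reads oldest-first.
    docs = list(coll.find({}, {"_id": 0}).sort("timestamp", -1).limit(60))
    return json.dumps(to_points(docs[::-1]))
```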

which will return the last 60 data points (one per minute, in my example, if I were to run the first job that often) with x and y axis values.

Creating the workflow

So, with these scripts in hand, you need to do two things: create the Fission functions, and also set up the workflow itself.

Create the Python environment:

fission env create --name python --image

Create the two tasks:

fission function create --name faas-demo-write --env python --code 
fission function create --name faas-demo-return --env python --code --path /memstates --method GET

Deploying these as related services, one with an obvious dependency on the other, is now trivially simple with a workflow like:
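Such a workflow spec might look like the following sketch (based on the Fission Workflows YAML format; the file name, task names, and exact field layout are assumptions and may differ across Workflows versions):

```yaml
# metricsmgr.wf.yaml — hypothetical workflow chaining the two functions
apiVersion: 1
output: FormatMetrics
tasks:
  WriteMetrics:
    run: faas-demo-write
  FormatMetrics:
    run: faas-demo-return
    requires:
    - WriteMetrics
```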

and can be launched with:

fission function create --name metricsmgr --env workflow --code ./

Consuming your task data

Once the functions and workflow are launched, your graph data will be accessible via the Fission router in the fission namespace, and requests to the function’s route will (once the database is sufficiently populated) return a JSON body of x and y values.

With this in mind, you can then, for example, set up a small Ruby app like this one to consume your Fission endpoint statelessly:
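The app’s index.erb might contain something like the following sketch (the router URL, container id, and Canvas.js usage are assumptions for illustration):

```erb
<!-- index.erb: fetch the formatted points and render them with Canvas.js -->
<div id="chartContainer" style="height: 300px; width: 100%;"></div>
<script src="https://canvasjs.com/assets/script/canvasjs.min.js"></script>
<script>
  // GET the formatted {x, y} data from the Fission router endpoint.
  fetch("http://router.fission/memstates")
    .then(function (resp) { return resp.json(); })
    .then(function (points) {
      new CanvasJS.Chart("chartContainer", {
        title: { text: "Cluster memory usage" },
        data: [{ type: "line", dataPoints: points }]
      }).render();
    });
</script>
```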

In the above index.erb, the only thing that happens is a GET request to the memstats endpoint to retrieve the formatted graph data that populates the graph. From there, you just run through your normal container deployment for the application; in my case, just create a container:
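A hypothetical Dockerfile for a small Sinatra-style app like this (the base image, port, and entrypoint are assumptions):

```dockerfile
# Dockerfile for the sample UI (a sketch)
FROM ruby:2.5-alpine
WORKDIR /app
COPY . .
RUN bundle install
EXPOSE 4567
CMD ["bundle", "exec", "ruby", "app.rb"]
```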

build and push:

docker build -t yourregistry/sample-ui .; docker push yourregistry/sample-ui

and deploy:

kubectl create -f sample-app.yaml
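A minimal sample-app.yaml for the UI might look like this sketch (names, labels, ports, and the Service type are all assumptions):

```yaml
# sample-app.yaml — hypothetical Deployment and Service for the sample UI
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-ui
  template:
    metadata:
      labels:
        app: sample-ui
    spec:
      containers:
      - name: sample-ui
        image: yourregistry/sample-ui
        ports:
        - containerPort: 4567
---
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
spec:
  selector:
    app: sample-ui
  ports:
  - port: 80
    targetPort: 4567
  type: LoadBalancer
```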

Conclusions & Some Additional Thoughts

Of course, this example uses data of trivial value on its own (ideally, you’d feed these metrics into proper observability tooling, e.g. Grafana), but the takeaway is how Fission can be leveraged to push and pull data in a task-oriented, functional way, one that requires little operational overhead or invested developer time compared to building a complete service to perform these two functions within a larger application.

A more ideal use case for Workflows would be one where the tasks are strictly sequential. These two tasks are relatively asynchronous: the second depends on the first’s data, but once the database has been seeded, it can run without the first needing to run immediately beforehand. Still, this is a functional example of the sequential behavior Workflows is partly intended to improve.

An example of this behavior comes from the Fission Workflows GitHub repository:
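A sketch of such a chained workflow, modeled on the sleep examples in that repository (task names and field layout are assumptions):

```yaml
# A chain of sleeps, each starting only after the previous completes
apiVersion: 1
output: SleepC
tasks:
  SleepA:
    run: sleep
    inputs: "1s"
  SleepB:
    run: sleep
    inputs: "1s"
    requires:
    - SleepA
  SleepC:
    run: sleep
    inputs: "1s"
    requires:
    - SleepB
```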

where a chain of tasks (a series of sleeps) each depends on the previous one completing before it starts. This pattern might benefit users with a multi-stage process like image processing, or more complex data transformation and cleaning tasks that would be relatively expensive to complete within a single service call. Beyond sequential execution, Workflows also addresses parallelization, which would positively impact use cases like data processing where the same data set may require multiple outputs for various pipelines in your applications.