Microscaling a Kubernetes cluster

Ross Fairbanks
Microscaling Systems
4 min read · Jan 18, 2017

We’ve just released a Kubernetes integration for our Microscaling Engine. It autoscales containers to maintain a target queue length. This adds to the Docker Remote API and Marathon API integrations we’ve previously developed. You can also read our earlier post explaining what microscaling is.

We’re excited about this release because we’re running it in production on our MicroBadger site to retrieve container image metadata from Docker Hub.

Screenshot showing orchestrators supported by Microscaling Engine

The code is available on GitHub, and this post is a walkthrough of how to run our Microscaling in a Box demo on your own cluster. We’ll be doing a follow-up post on how we’re using microscaling with MicroBadger.

Demo overview

On Kubernetes the demo consists of 5 deployments and a service.

Demo overview diagram
  • Microscaling Engine: our scaling agent, written in Go. It uses the client-go library for Kubernetes to scale the consumer and remainder pods by calling the deployments API (see the sketch after this list).
  • NSQ: a lightweight message queue, also written in Go. The queue is exposed as a service with a cluster IP that is routable within the cluster. The other pods connect to it using the hostname nsq.default.svc.cluster.local.
  • Producer: a simple Ruby-based image that adds items to the queue. Scaled manually using the kubectl client.
  • Consumer: uses the same image but removes items from the queue. Autoscaled by the microscaling pod.
  • Background Task: autoscaled by the microscaling pod to use any spare capacity on the cluster. In our case this is an Alpine Linux image running a bash script with an infinite loop. A real-world example from Google is YouTube, where spare cluster capacity is used for video transcoding, which is CPU intensive but not time critical.
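
The engine calls the deployments API directly through client-go rather than shelling out, but the effect of its scaling decisions is roughly equivalent to running kubectl scale by hand. A minimal sketch (the replica counts here are just examples, not values the engine uses):

# roughly what the engine does when the queue is longer than the target
kubectl scale deployment consumer --replicas=4
# and when there is spare capacity, grow the background task instead
kubectl scale deployment remainder --replicas=6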

Bring Your Own Cluster

The demo should run on most k8s clusters; the kube-dns add-on is needed so the pods can find the NSQ service by name. We’ve tested the demo on k8s 1.3 upwards. If you don’t have a suitable cluster, the demo will also run on minikube, which has kube-dns enabled by default.
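
If you’re not sure whether kube-dns is running on your cluster, one way to check (assuming the standard add-on labels) is:

kubectl get pods --namespace kube-system -l k8s-app=kube-dns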

This is a great post on using minikube. Like the author, I’m also using it with VirtualBox because I had problems with the xhyve driver, possibly because I’m already using xhyve with Docker for Mac.
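
For reference, this is roughly how to start minikube with the VirtualBox driver and confirm kubectl is pointing at it (flag names may vary between minikube versions):

minikube start --vm-driver=virtualbox
kubectl config current-context   # should print "minikube"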

Setup

The first step is to sign up at app.microscaling.com. On the Start page choose Kubernetes and click Next. On the Configure page you can use the default settings, so click Next again.

Screenshot for the configure page

Later on you can use the Configure page to experiment with the demo. Changing the app name lets you use different k8s deployments. You can also edit the queue scaling rule, including changing the target queue length.

Note: Currently we only support scaling k8s deployments that contain a single container.

Deploying the demo

On the Run page you’ll download a YAML file that contains all the k8s objects. The first command below creates the k8s objects on the cluster. The second command starts the microscaling and producer pods, which also starts the demo.

kubectl create -f microscaling-k8s-demo.yaml
kubectl scale deployments microscaling producer --replicas 1

You’ll then see the results appear on the Run page.
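
You can also watch the replica counts change from the command line while the demo runs:

kubectl get deployments -w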

Screenshot of the running demo

Cleanup

Once you’ve finished running the demo, this command will delete all the k8s objects from your cluster.

kubectl delete deployments microscaling nsq producer consumer remainder && kubectl delete service nsq
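
To confirm everything has been removed, list the remaining deployments and services; only the default kubernetes service should be left:

kubectl get deployments,services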

Kubernetes Metadata

Metadata in Kubernetes has some major differences compared with Docker, including that the term label has a different meaning, which can be confusing! In k8s, labels are used to organise and select objects, e.g. which pods should be exposed by a service. Kubernetes also has annotations, which let you store additional metadata on objects.
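
For example, labels are what you select objects by, while annotations are free-form metadata attached to an object. The label key below is just an assumption about how the demo YAML might be written:

# select pods by label, e.g. the pods behind the nsq service (assuming they are labelled app=nsq)
kubectl get pods -l app=nsq
# attach an annotation to the consumer deployment and read it back
kubectl annotate deployment consumer example.com/owner=platform-team
kubectl get deployment consumer -o jsonpath='{.metadata.annotations}'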

It feels like our microscaling config could be stored as k8s annotations, meaning the scaling parameters could be specified in the k8s YAML alongside the rest of the deployment config. We also think there is a gap because Docker labels aren’t exposed as k8s annotations; a tool that did this could be useful.
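
Purely as a sketch of the idea, the scaling rule might be expressed as annotations on the deployment, either in the YAML or set with kubectl. These annotation keys are hypothetical and not something the engine reads today:

# hypothetical annotation keys, just to illustrate the idea
kubectl annotate deployment consumer microscaling.com/queue-name=microscaling-demo
kubectl annotate deployment consumer microscaling.com/target-queue-length=50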

Kelsey Hightower demo

We also think metadata and data-driven deployment will become increasingly important as container usage increases. The awesome Kelsey Hightower did a great demo showing how to use a Docker label and the MicroBadger API to deploy k8s objects. You can also watch the full webinar that Kelsey did with our own Anne Currie, which discusses a lot of these topics.


If you like this article, hit the green heart to help others find it. And follow Microscaling Systems here on Medium or on Twitter to keep up-to-date as we explore the use of metadata in containers.
