
Programmable Infrastructure in Production with Kubernetes

This week I’m going to give a talk on automation and programmable infrastructure, and I need a simple real-world example. I’ll use our Microbadger service, which we partially automated with our own custom scheduler. The code for our scheduler is open source and available at https://github.com/microscaling/microscaling. It scales by calling the k8s Deployments API.

What does our service do?

Microbadger is a SaaS that lets you navigate the metadata for any public image on Docker Hub. You give us the image name, we get the data from Docker Hub, tidy it up and show it to you. So far, so standard.

How does it work?

Imagine a user requests a new image that isn’t already cached. There’s lots of data we want to show. Docker Hub gives us some of it very quickly and some takes much longer (in particular, a deep inspection that gets the download size and reverse engineers the docker command for each layer).

We decided to display the quick stuff ASAP to keep the user happy and just fill in the slow stuff when we got it, so we split our data requests and processing.

Everything inside the orange box below is either

  • containerized, stateless & orchestrated in a k8s cluster (on AWS) OR
  • an AWS service (SQS for the queues).

So, what have we got?

  • The API service (Go) handles API requests from the web and either serves cached content or drops a request for new data on the Inspection Q. The web client polls the API until the request is fully complete.
  • A single-threaded Inspector service (Go) polls the Inspection Q for a new work item and requests the quick data from Docker Hub. It processes and caches the response, then drops a request for the slower data onto the Size Q.
  • A single-threaded Size service (Go) polls the Size Q for a new work item, requests the slower data from Docker Hub, and processes and caches the response.

Each Go service is single-threaded; we scale them up just by running more pods, which for us each contain a single container. The Size and Inspector services share the same physical infrastructure (they run on the same AWS instances/nodes).
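To make that concrete, here’s a minimal sketch of what one of these single-threaded pollers could look like in Go with the AWS SDK. The queue URL and the inspectImage helper are placeholders for illustration, not our actual Inspector code.

```go
// Minimal sketch of a single-threaded queue worker using the AWS SDK for Go.
// The queue URL and inspectImage are hypothetical placeholders.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

// Placeholder queue URL.
const inspectionQueueURL = "https://sqs.eu-west-1.amazonaws.com/123456789012/inspection"

func main() {
	svc := sqs.New(session.Must(session.NewSession()))

	for {
		// Long-poll the Inspection Q for a single work item.
		out, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(inspectionQueueURL),
			MaxNumberOfMessages: aws.Int64(1),
			WaitTimeSeconds:     aws.Int64(20),
		})
		if err != nil {
			log.Printf("receive failed: %v", err)
			continue
		}

		for _, msg := range out.Messages {
			// Fetch the quick metadata from Docker Hub, cache it, and queue the
			// slower size request (the real work happens in this placeholder).
			if err := inspectImage(aws.StringValue(msg.Body)); err != nil {
				log.Printf("inspection failed: %v", err)
				continue // leave the message to reappear after the visibility timeout
			}
			if _, err := svc.DeleteMessage(&sqs.DeleteMessageInput{
				QueueUrl:      aws.String(inspectionQueueURL),
				ReceiptHandle: msg.ReceiptHandle,
			}); err != nil {
				log.Printf("delete failed: %v", err)
			}
		}
	}
}

// inspectImage stands in for the real work: call Docker Hub, tidy the data,
// cache it, and drop a follow-up request onto the Size Q.
func inspectImage(imageName string) error {
	return nil
}
```

The Size service runs exactly the same loop against the Size Q, just with a slower job inside it.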

To start with, we just ran 10 Inspector pods and 10 Size pods split over two t2.medium AWS instances. Easy peasy, but a bit unexciting and massively overprovisioned most of the time so we could handle peaks. Could we use programmable infrastructure to be smarter and save ourselves some money?

Behaviourally, what are the key features of this system?

  • The API service is vital, so it needs to be fully resilient and fast (Go was useful for this).
  • The queues are also vital. We handle that by using a cloud queue service, which is expensive, but hey-ho. Note that using SQS for our saved state keeps our k8s cluster stateless, which makes it easier to set up.
  • The Inspector service is urgent for new images, which are vital for our valuable new users.
  • The Size service is less urgent — if it’s unavailable for a while Microbadger won’t be as good but it’ll still work OK.

Ideal for programmable infrastructure!

This fairly standard architecture is perfect for optimising programmatically. To do this we used our Microscaling custom scheduler for k8s, which takes 3 inputs

  • a demand metric (in this case, we used the lengths of the two queues; there’s a sketch of reading this from SQS after the list)
  • the high priority service container identifier (we used the inspector service)
  • the low priority service container identifier (the size service)
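As a rough illustration of the first input, reading a queue-length demand metric from SQS looks something like this (the queue URL is a placeholder, and the real Microscaling metric code may well differ):

```go
// Sketch of reading a queue-length demand metric from SQS with the AWS SDK for Go.
package main

import (
	"fmt"
	"strconv"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

// queueLength returns the approximate number of messages waiting on a queue,
// i.e. the backlog the scheduler tries to keep under control.
func queueLength(svc *sqs.SQS, queueURL string) (int, error) {
	out, err := svc.GetQueueAttributes(&sqs.GetQueueAttributesInput{
		QueueUrl:       aws.String(queueURL),
		AttributeNames: []*string{aws.String("ApproximateNumberOfMessages")},
	})
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(aws.StringValue(out.Attributes["ApproximateNumberOfMessages"]))
}

func main() {
	svc := sqs.New(session.Must(session.NewSession()))
	// Placeholder queue URL.
	n, err := queueLength(svc, "https://sqs.eu-west-1.amazonaws.com/123456789012/inspection")
	if err != nil {
		panic(err)
	}
	fmt.Println("inspection queue backlog:", n)
}
```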

The custom scheduler just creates inspector containers until the primary demand metric target is met (the inspection queue isn’t backed up). It frees up space for the new inspectors by killing off size service containers (remember they share the same machines). When the inspection queue length falls, the custom scheduler kills off inspectors and starts enough size services to keep the size queue under control.
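Very roughly, that rebalancing step looks like the sketch below: cap the inspectors at the shared capacity and hand whatever is left to the size service by updating the two Deployments’ replica counts. This is a simplified illustration assuming a recent client-go; the namespace, deployment names and capacity figure are made up, and the real Microscaling code is more involved.

```go
// Simplified sketch of rebalancing two Deployments that share the same nodes.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// totalCapacity is the number of pods the shared nodes can hold (assumed figure).
const totalCapacity = 20

func int32Ptr(i int32) *int32 { return &i }

// rebalance gives the high-priority inspector Deployment the replicas it needs
// and hands the remaining capacity to the low-priority size Deployment.
func rebalance(ctx context.Context, clientset kubernetes.Interface, inspectorsNeeded int32) error {
	if inspectorsNeeded > totalCapacity {
		inspectorsNeeded = totalCapacity
	}
	sizeReplicas := totalCapacity - inspectorsNeeded

	deployments := clientset.AppsV1().Deployments("default") // namespace assumed

	for name, replicas := range map[string]int32{
		"inspector": inspectorsNeeded, // hypothetical Deployment names
		"size":      sizeReplicas,
	} {
		d, err := deployments.Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = int32Ptr(replicas)
		if _, err := deployments.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	config, err := rest.InClusterConfig() // the scheduler runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// In the real loop this number would be derived from the queue-length
	// demand metric, polled on a short interval.
	if err := rebalance(context.Background(), clientset, 6); err != nil {
		log.Fatal(err)
	}
}
```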

This programmable infrastructure approach has several good features

  • It ensures we run only the minimum number of service instances, which reduced our expensive rest-state SQS polling operations by ~70%
  • It significantly reduced the total resources we used, which could cut our bill in future (in a multi-tenant cluster, for example).

Philosophy

Money saving is good, but what interests us is the change in philosophy. In a monolithic world all of these components would have been inside a single huge service, which would probably have done all of this custom scheduling itself internally (monitoring internal queues and starting/stopping threads).

We split our service apart for the usual small-scale Microservice reasons

  • Easier to manage and deploy (very CD-able: small, independent components and APIs mean small teams and fewer clashes, all of which worked well with k8s rolling deploys)
  • Division of labour (specialisation of people and their services is more efficient)
  • Improved ability to leverage specialist 3rd party services (e.g. SQS)

The downside of Microservices, however, is that you lose the omniscient internal view and control you can build into a monolith, which usually means over-provisioning. With custom schedulers and orchestrators, we got that oversight back.

Microservices can be an architecture for the resource-rich and time-poor. We used our custom scheduler to claw some of that resource back without having to re-architect our services, which was nice!

Thanks for reading.

Anne Currie & Ross Fairbanks