Announcing support for Microsoft Azure in the Kubernetes Cluster Auto Scaling Module

We’re excited to announce Microsoft Azure support for the cluster auto scaling module of Kubernetes, the open source system for automating the deployment, scaling, and management of containerized applications. This feature, now available in the Kubernetes GitHub repository, provides fully automated integration with Azure.

Auto scaling is an important feature that lets users maintain application availability while scaling their Azure capacity up or down as needed to meet spikes in demand. This release marks a step forward in our commitment to supporting integration and production-ready modules in the cloud.

Automatically Integrate Auto Scaling to Azure in Minutes

Until now, auto scaling has only been available in Amazon Web Services (AWS) and Google Cloud environments. This release now brings the same support to Microsoft Azure. The acceptance of this patch into the Kubernetes GitHub repository marks EastBanc Technologies’ first contribution to the Kubernetes open source project.

Auto Scale with Ease

As a software development firm with extensive experience deploying Kubernetes, we have seen a surge in demand for this capability from our clients. With this release, we’ve streamlined the deployment of Kubernetes on the Azure platform. Users don’t need to invest in costly development work to take advantage of this solution. We’ve made it easy: just a few clicks and it’s up and running, and the solution takes care of the integration automatically. The feature will ship as part of Kubernetes soon, but you can try it today.

How it Works

The auto scaler module automatically adjusts the size of the Kubernetes cluster under specific conditions, supporting both scale-up and scale-down operations. Scaling up occurs when a pod lacks the resources (CPU or RAM) to run anywhere in the cluster. Scaling down occurs when certain nodes in the cluster are underutilized for a prolonged period of time; those nodes are deleted and their pods are rescheduled onto the remaining nodes. Full documentation on the cluster auto scaler is available on GitHub.
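To make the scale-up trigger concrete, here is a minimal, hypothetical pod manifest: if no node in the cluster has enough free CPU or memory to satisfy these requests, the pod stays Pending and the auto scaler adds a node to the scale set. The pod name, image, and request values are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-workload          # hypothetical example pod
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "4"            # if no node has 4 free cores, the pod
          memory: 2Gi         # stays Pending and triggers a scale-up
```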

Set-Up Instructions

To get started, you’ll need to set up a scalable Kubernetes cluster that uses Azure Virtual Machine Scale Sets (VMSS) for its node groups. Currently, Kubernetes setup on Azure VMSS is supported only by the tooling included in the Kubernetes distribution. You may encounter issues with Kubernetes in such a setup due to current Azure limitations; read more here and here. While Azure support is currently limited, the Kubernetes team is actively working on improving it, and you can still test the Azure auto scaler with the latest version of Kubernetes. Note that the auto-scaler image used in the manifest, which includes the patch, is our build; this will not necessarily be the case after the next official auto-scaler release.
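If you are creating the scale set yourself, a VMSS can be provisioned with the Azure CLI along these lines. The resource group, scale set name, image, and VM size below are illustrative assumptions; adjust them to your environment.

```shell
# Example only: create a VM scale set to back a Kubernetes node group
# (names, image, and sizes are placeholders for your own values)
az vmss create \
  --resource-group my-k8s-rg \
  --name k8s-agent-scale-set \
  --image UbuntuLTS \
  --vm-sku Standard_DS2_v2 \
  --instance-count 1
```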

Below is an example deployment file for Kubernetes:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: kublr/cluster-autoscaler:0.5.0-alpha1
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          env:
            # Azure credentials; names follow the auto scaler's
            # Azure cloud provider conventions
            - name: ARM_SUBSCRIPTION_ID
              value: <subscription id>
            - name: ARM_RESOURCE_GROUP
              value: <resource group>
            - name: ARM_TENANT_ID
              value: <tenant id>
            - name: ARM_CLIENT_ID
              value: <client id>
            - name: ARM_CLIENT_SECRET
              value: <client secret>
          command:
            - ./cluster-autoscaler
            - --v=4
            - --cloud-provider=azure
            - --skip-nodes-with-local-storage=false
            - --nodes=1:10:<scale-set-name>
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
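Once the placeholders are filled in, the deployment can be applied and checked with standard kubectl commands. The file name `cluster-autoscaler.yaml` is an assumption; use whatever name you saved the manifest under.

```shell
# Apply the manifest and confirm the auto scaler pod is running
kubectl apply -f cluster-autoscaler.yaml
kubectl get pods -n kube-system -l app=cluster-autoscaler

# Watch the auto scaler's decisions in its logs
kubectl logs -n kube-system deployment/cluster-autoscaler
```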

Check out the module on the Kubernetes GitHub repository now and let us know your feedback.
