Your very own Kubernetes cluster on Azure (ACS)

From time to time, during experiments, demos and even some production work, I need to run a Docker payload. Of course, this is super easy on a local computer, but running it in the cloud is far more complicated by comparison. I tried a few options, such as Docker on AWS Beanstalk, AWS ECS and Azure ACS with DC/OS (Mesos), but all of them are complicated for my needs and have inherent limitations. So it was a good time to give Kubernetes a try, especially since Azure announced preview support for Kubernetes in Azure Container Service, with a matching announcement from the Kubernetes side.

TL;DR: You can skip all this stuff and just check out the demo deployment with instructions at


Well, so what is Kubernetes? In the simplest terms, it is an orchestrator for Docker containers. You pack your application as a Docker container, and then Kubernetes will deploy, run and scale it. To get into it, you could walk through the tutorials, but it is so simple that you can skip them and just try how it works. Spend a few minutes on the demo and you will know enough to start clicking.

Required infrastructure

For this article I will use the brand new Azure CLI. Installation is quite simple: in most cases you just need `pip install azure-cli` and that is it.
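Before provisioning anything, it is worth making sure the CLI actually works and that you are signed in to your subscription:

```shell
# Check the installation and authenticate against your subscription
az --version
az login
```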

So let’s provision our cluster. First of all, you will probably want a resource group to isolate your infrastructure.

az group create -n my-very-own-k8s-cluster -l westeurope

Next, actually provision the cluster.

az acs create -n my-very-own-k8s-cluster \
-g my-very-own-k8s-cluster \
--dns-prefix my-very-own-k8s-cluster \
--orchestrator-type kubernetes

While you are waiting for the command to complete, let me share a few comments.

  1. If you have problems with the command, for example it fails with some meaningless error, add the `--debug` parameter; it is a little verbose, but it will give you the actual error.
  2. While `--dns-prefix` is optional, I suggest adding it; otherwise the tool will derive it as ‘cluster-name + group-name’, and if that exceeds 90 characters in length you will get strange errors much later, during cluster operation.

By default ACS provisions a cluster with a single master and 3 agents. All of them use D2 VMs by default, so it will be quite an expensive cluster; be careful and clean up resources when you do not need them.
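When you are done experimenting, deleting the resource group is the quickest way to stop the charges; a sketch, assuming the group name used above:

```shell
# Deleting the resource group tears down the whole cluster and its VMs
az group delete -n my-very-own-k8s-cluster --yes
```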

Additionally, you are welcome to read what is under the hood of the ACS Engine for Kubernetes. It gives nice insights into how things are actually implemented. Please note that the ACS documentation does not use the new Azure tools, so it is a little more complicated than it should be :).

First payload

Now that you have all the infrastructure in place, move to the Kubernetes side. To manage your cluster you need `kubectl`. You can get it automatically with (you might need to add it to PATH):

az acs kubernetes install-cli

Next you need to authenticate `kubectl` with your cluster.

az acs kubernetes get-credentials -n my-very-own-k8s-cluster \
-g my-very-own-k8s-cluster

And check that all is good. This command will give you versions for both the client and the server side.

kubectl version

By this time you actually have everything you need to run your first payload. So let’s create the first definition and try to run it. Definitions in Kubernetes can be written in several formats; I will use YAML, so let’s create a hello.yml file with this content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello # Name of the deployment, just for reference purposes
spec:
  replicas: 1 # Number of instances for the given application
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: ner-uk-ms # Name of container, could be anything you like
        image: chaliy/ner-ms:uk # Docker image to run
        ports:
        - containerPort: 8080

For the moment it is important to understand some Kubernetes terminology.

Pod — an instance of a container

Deployment — something that ensures your pods run; a supervisor

Service — something that exposes a set of pods as a single endpoint, making them act as a system

So the definition we just created is actually a deployment for a single pod defined in the template. The command below will pull the Docker image `chaliy/ner-ms:uk`, start an instance of it and set up a supervisor:

kubectl create -f ./hello.yml

Now a few commands to play with it:

# Retrieve logs for a pod (take the name from `kubectl get pods`)
kubectl logs podid
# List Pods
kubectl get pods
# List deployments
kubectl get deployments
# Details about concrete pod, for example in case of errors
kubectl describe pods/podid
# Delete something
kubectl delete pods/podid

If you want to reconfigure your application, just change the definition file (for example set `replicas: 10`) and run:

kubectl apply -f ./hello.yml
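If you just want to change the replica count without touching the file, `kubectl` also has a scale command (shown here for the `hello` deployment from above):

```shell
# One-off scaling of the deployment; note it does not update hello.yml
kubectl scale deployment hello --replicas=10
```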

Technically, you are already running your payload. So let’s see how it is going. Kubernetes provides a UI to observe your cluster. It runs in the cluster the same way your applications will run. Of course, you do not want such a UI exposed outside of your cluster, so by default you can reach it only from inside the cluster. But wait, how do you access it then? It turns out to be quite simple. Kubernetes implements the Bastion pattern and provides a simple way to proxy it to your local computer. So you need to run the proxy first:

kubectl proxy --port=8000

And then you will have the Kubernetes Dashboard right on your computer. Navigate to http://localhost:8000/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard and walk through it.
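While the proxy is running, the raw Kubernetes API is reachable on the same local port as well, which is handy for quick checks; for example:

```shell
# Query the API server through the local proxy
curl http://localhost:8000/version     # cluster version info
curl http://localhost:8000/api/v1/pods # all pods, across namespaces
```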

One more thing. You would probably like to publish your service outside of the cluster. For this you need to create a service (in Kubernetes terms). To do so, use the expose command with type “LoadBalancer”.

kubectl expose deployment hello --type="LoadBalancer" --port=80 --target-port=8080

This command will start provisioning a new load balancer, so it will take some time. To check whether it is already functional, query information about the service:

kubectl get services/hello

Once you see an External IP address, the service is ready and you can use it to send requests. Something like:

curl http://EXTERNAL-IP/
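If you would rather script the wait, here is a minimal sketch (assuming the `hello` service from above) that polls until the load balancer gets an IP:

```shell
# Poll the service until Azure assigns an external IP, then hit it
IP=""
while [ -z "$IP" ]; do
  IP=$(kubectl get services/hello \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  [ -z "$IP" ] && sleep 10
done
curl "http://$IP/"
```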

This is pretty much it for building your simple cluster. Of course, it is just the beginning of the story :).

All scripts can be found at . There you can also find a more realistic example of exposing a few services using an nginx reverse proxy as a router.


For me it was quite a good journey; however, I am not really sure I will continue to use it. There are a few issues that will probably block me:

  1. ACS for Kubernetes is still in preview, and some functionality is just not implemented yet (e.g. you cannot scale your cluster yet).
  2. It is quite expensive: at least 4 D2 nodes will cost up to $1000 per month. It is possible to use smaller instances, but again, my type of load will not utilize them.
  3. It does not have facilities to build systems, something like docker-compose that would provide you with linked services.

Do not forget to cleanup your resources! Enjoy!