Windows containers in Kubernetes with acs-engine in Azure

Alessandro Vozza · Cooking with Azure · Apr 20, 2018
This is an article that I’ve been meaning to write for a long time and somehow never got around to. Well, today it’s quiet at the (virtual) office, so here it goes! Source is here.

I have to admit, in the beginning I looked at the whole business of running Windows containers/pods in Kubernetes with suspicion: why would anyone not want to migrate to Linux/.NET Core? Why insist on the full .NET Framework? Well, it turns out there are a ton of people out there with very good reasons not to migrate their applications; who am I to tell them what to do? I’d better help them where they are, achieving what they need to!

A great, easy tool to deploy hybrid (that is, comprising both Linux and Windows node pools) Kubernetes clusters in Azure is the almighty acs-engine open source project (the very first project on GitHub I encountered when I joined Microsoft); grab your (conveniently multi-OS) binary release here. Other than that, you’ll need the azure-cli 2.0, an SSH key and a valid subscription. Note that I’m using the B* VM sizes to save money; sometimes those sizes are restricted in specific regions and/or your account needs a higher quota for them (reach out to support for an increase). Last thing you need: a service principal scoped to a resource group, for Kubernetes to create Azure resources (load balancers, storage accounts, managed disks and so on). Create both like this:

az group create -n hybridk8s -l westeurope
az ad sp create-for-rbac --name hybrid \
--password "bestpasswordintheworld" \
--scopes "/subscriptions/<sub id>/resourceGroups/hybridk8s"

It will return an AppId; together with bestpasswordintheworld, you’ll need it later in the acs-engine template.
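The output looks roughly like this (the values here are obviously illustrative); appId is what you’ll paste into the template as the client ID:

{
  "appId": "11111111-2222-3333-4444-555555555555",
  "displayName": "hybrid",
  "name": "http://hybrid",
  "password": "bestpasswordintheworld",
  "tenant": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
}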

Great, now go grab this file from my GitHub repo and fill in the blanks (AppId and secret, SSH key, Windows password and DNS name for your cluster).
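For reference, here is a rough sketch of what such a hybrid apimodel looks like (field names follow acs-engine’s vlabs schema; counts, VM sizes and placeholder values are illustrative, the file in the repo is the source of truth):

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": { "orchestratorType": "Kubernetes" },
    "masterProfile": { "count": 1, "dnsPrefix": "hybridk8sb", "vmSize": "Standard_B2ms" },
    "agentPoolProfiles": [
      { "name": "linuxpool1", "count": 1, "vmSize": "Standard_B2ms", "availabilityProfile": "AvailabilitySet" },
      { "name": "windowspool", "count": 1, "vmSize": "Standard_B2ms", "osType": "Windows", "availabilityProfile": "AvailabilitySet" }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": { "publicKeys": [ { "keyData": "<your SSH public key>" } ] }
    },
    "windowsProfile": { "adminUsername": "azureuser", "adminPassword": "<windows password>" },
    "servicePrincipalProfile": { "clientId": "<AppId>", "secret": "bestpasswordintheworld" }
  }
}

Once the blanks are filled in and acs-engine is in your $PATH, you can generate the ARM templates that you need: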

acs-engine generate hybridk8sb.json

Feel free to inspect the generated _output/hybridk8sb/azuredeploy.json and _output/hybridk8sb/azuredeploy.parameters.json, but don’t change anything! acs-engine encodes all the best practices and the scripts to deploy a fully functional Kubernetes cluster, and unless you’re trying to extend or patch its functionality (which you’re very welcome to do! Start here), trust it to do its dirty job :)

The moment of truth (and billing!): deploying the ARM template to Azure

time az group deployment create \
--resource-group hybridk8s \
--template-file _output/hybridk8sb/azuredeploy.json \
--parameters @_output/hybridk8sb/azuredeploy.parameters.json

My deployment took 12 minutes to complete, which is not bad, all things considered (note that my cluster is 1+1+1: 1 master, 1 Windows node and 1 Linux node; but even if it were hundreds of nodes big, the deployment runs in parallel, so it would take more or less the same time).

So now what? Obviously, we need to talk to our newborn cluster! Most people use SSH to retrieve the kubeconfig file, but there’s no need, as it was conveniently generated and stored by acs-engine:

export KUBECONFIG=`pwd`/_output/hybridk8sb/kubeconfig/kubeconfig.westeurope.json:~/.kube/config # notice the colon delimiter
kubectl get nodes
NAME                        STATUS    ROLES     AGE       VERSION
23363k8s9010                Ready     <none>    29m       v1.10.1
k8s-linuxpool1-23363715-0   Ready     agent     34m       v1.10.1
k8s-master-23363715-0       Ready     master    34m       v1.10.1

Notice that I’m merging the new config with the existing ~/.kube/config (you can use kubectx to check contexts).
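A quick way to check which context is active (the context name below is what acs-engine typically derives from the DNS prefix, so yours may differ):

kubectl config get-contexts             # the current context is marked with a *
kubectl config use-context hybridk8sb   # switch to the new cluster if needed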

So now what (again!)? Let’s deploy an ingress controller to manage incoming traffic; we will use the almighty helm to do it in one command line. Let’s first populate a values.yaml file (I tried to do this on the command line using the --set flag, but failed miserably; hence the values file):

rbac:
  create: true
controller:
  nodeSelector:
    beta.kubernetes.io/os: linux
defaultBackend:
  nodeSelector:
    beta.kubernetes.io/os: linux

This will force the newly created pods to land on the one and only Linux node. Deploy the ingress controller like this:

kubectl create ns ingress
helm install stable/nginx-ingress \
  --namespace ingress \
  --name ingress -f values.yaml
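One assumption in the command above: Helm’s server side (Tiller) is already installed in the cluster. If it isn’t, something along these lines would set it up, pinned to Linux nodes as well (Helm 2 syntax; the service account name is my own choice):

kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --node-selectors "beta.kubernetes.io/os=linux"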

Voilà! You have an ingress controller in your cluster. After some time, you’ll see the public IP assigned to it with kubectl get svc -n ingress. You can use Azure DNS to map it to a wildcard A record:

az network dns record-set a add-record -n '*.apps' -g dns -z example.com --ipv4-address <IP of ingress service>

This will simply map any DNS name like amazingapp.apps.example.com to the ingress controller, which in turn will know (because we will instruct it) what to do with the request.
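A quick sanity check that the wildcard record resolves (the hostname is just an example):

nslookup amazingapp.apps.example.com
# should return the EXTERNAL-IP of the ingress controller service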

Now that we have everything in place (in the ingress namespace) to route requests into our cluster to the right pods, let’s deploy a Windows service and pods.

Let’s start with a service and ingress:

apiVersion: v1
kind: Service
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  ports:
    # the port that this service should serve on
    - port: 80
      targetPort: 80
  selector:
    app: win-webserver
  type: ClusterIP

Nothing fancy here, but note the ClusterIP type.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: win-webserver
spec:
  rules:
    - host: win.apps.example.com
      http:
        paths:
          - backend:
              serviceName: win-webserver
              servicePort: 80

This just tells the ingress controller (the nginx pod) to respond when called as “win.apps.example.com” by proxying to the service called “win-webserver” (effectively routing traffic to the pods behind the service). And lastly, our deployment:
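The full manifest lives in the repo; here’s a minimal sketch of what such a deployment could look like, loosely based on the Windows web server example from the Kubernetes docs (the PowerShell listener and names are illustrative, not necessarily the exact manifest I deployed):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
  labels:
    app: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      # make sure the pod lands on the Windows node
      nodeSelector:
        beta.kubernetes.io/os: windows
      containers:
        - name: win-webserver
          image: microsoft/windowsservercore:1709
          ports:
            - containerPort: 80
          command: ["powershell.exe", "-command"]
          args:
            - |
              $listener = New-Object System.Net.HttpListener;
              $listener.Prefixes.Add('http://*:80/');
              $listener.Start();
              while ($listener.IsListening) {
                $context = $listener.GetContext();
                $ip = $context.Request.RemoteEndPoint.Address.ToString();
                $bytes = [Text.Encoding]::ASCII.GetBytes('Client IP: ' + $ip);
                $context.Response.OutputStream.Write($bytes, 0, $bytes.Length);
                $context.Response.Close();
              }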

This is a simple Windows pod (with a single container) that replies to requests over port 80 with the client IP. Have some patience: it may take several minutes to pull down the microsoft/windowsservercore:1709 image from Docker Hub, but once it’s running, you can just hit the URL http://win.apps.example.com and get responses from your application running in Windows pods.
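A quick test from outside the cluster:

curl http://win.apps.example.com
# with the sketch above, the reply looks like: Client IP: <your public IP>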

Tips & Tricks

Say you want to actually access your Windows nodes (there are many cases where you’ll need this, for example to debug a failed provisioning): you can get RDP access to the Windows nodes by tunneling through an SSH connection:

ssh -f azureuser@hybridk8sb.westeurope.cloudapp.azure.com -L 3389:<windows node>:3389 -N

You can get a list of Windows nodes using:

kubectl get node -l beta.kubernetes.io/os=windows
NAME           STATUS    ROLES     AGE       VERSION
23363k8s9010   Ready     <none>    12h       v1.10.1
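If you need the node’s private IP for the SSH tunnel above, you can extract it like this (node name taken from the listing above):

kubectl get node 23363k8s9010 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'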

Not everything is nice and smooth though: there is, for example, a very nasty bug (the infamous #2027) that prevents proper DNS resolution from within a Windows pod. This is potentially a deal breaker if, for example, you need to reach an external service like a database outside the cluster via its DNS name; there are mitigations in place, though, that can alleviate the problem. Also, ConfigMaps cannot be mounted as volumes (but can be used as environment variables, as stated in the official documentation here). GMSA (Group Managed Service Accounts, a mechanism to authenticate against Active Directory using host credentials), while working in Docker, has yet to land in Kubernetes (tracking issue here).

Logging can still be a challenge; check out this great article to get the hang of it (the challenge comes from the fundamental difference in logging mechanisms between Windows and Linux). Last but not least, someone is working on support for Hyper-V isolated containers here, so stay tuned for very exciting new features (many of those are slated to arrive when 1809, a.k.a. Windows Server 2019, is released in preview).

As of acs-engine 0.13 this bug has been solved: by default, acs-engine adds the first listed pool in the template as the backend pool when it creates an Azure Load Balancer to expose services; either way, both Windows and Linux nodes now know how to route traffic (via kube-proxy) to the correct endpoint.

How much will it cost me?

Here’s an estimate of the monthly cost for the 1+1+1 cluster we just created: 194.20€. Consider that about a third (~64€) goes to non-productive workloads (running the Kubernetes master components); to address that, Microsoft is launching (now in preview) a managed Kubernetes service called AKS that will save you the hassle and the cost of running your own masters.

I hope you enjoyed the article; let me know in the comments what your use case is for Windows containers in Kubernetes. Here are some references:

The Windows-k8s official roadmap:

Some really cool blogs:
