Running a Tight Ship: Deploying Kubernetes to Run Windows Containers

João Valentim
OutSystems Engineering
11 min read · Mar 18, 2019

With OutSystems 11 we brought containers to low-code and changed how applications are packed, shipped, and run. Containers are lightweight, standalone, executable software packages that contain everything needed to deploy an application. Because the code is isolated inside its container, it's easier to change, update, and scale: as demand increases, copies of that container can be deployed and executed across other computers. As you scale, you'll need a way to efficiently distribute and schedule those containers across the machines that run them. That means you'll need to automate your resource management, and you'll want to have it running on-premises, on your Windows servers.

Orchestrating Your Containers With Kubernetes

So, you’ve switched to deploying your OutSystems apps in containers. Now, you need a way to automate the management of those containers. It’s time to take things up a notch and maximize the benefits of running containerized applications.

There are many container orchestration tools out there. Given that Kubernetes (K8s) is the gold standard for container orchestration, let’s go with it. Let’s have a look at your options when using Kubernetes to run Windows containers.

Kubernetes is available in Docker for Windows, but it consists of a single-node cluster, only fit for testing. To have a more robust and scalable solution, you need to have your own full-blown Kubernetes cluster. If you’re not familiar with the concepts that Kubernetes uses, before you begin the tutorial, you should start by reading this Kubernetes Concepts article.

Before You Begin: The Prerequisites

Hardware and Operating Systems

At the time of writing, the Kubernetes control plane must run on a Linux server. Depending on the type of containers you want to deploy, the worker nodes can run on Linux or Windows Server. The focus of this article is on Windows Server worker nodes fit to run OutSystems applications. To complete this tutorial, you should have the following installed:

  • Kubernetes Master: a recently updated Linux machine. We used Ubuntu 16.04 LTS (Xenial).
  • Windows Workers: Windows Server, version 1709 or later. We used Windows Server, version 1809.

Step 1: Configure the Kubernetes Network

Allocate Subnet IP Addresses

A Kubernetes cluster introduces new subnets for pods and services. To ensure that none of them collides with your existing networks, you must allocate them properly (see the example after the list):

  • Service subnet: a virtual, non-routable subnet, used by pods to access services. Each service will have an IP on this subnet, so you should allocate a broad enough range. Default value: 10.96.0.0/12
  • Cluster subnet: a global subnet that is used by all the pods. Each pod will have an IP on this subnet, so there must be enough IP addresses to accommodate all the pods. Each cluster node is assigned a smaller /24 subnet from this for their pods to use. Default value: 10.244.0.0/16
  • DNS Service IP: IP address of the kube-dns service, used for DNS resolution and service discovery. This address is taken from the service subnet. Default value: 10.96.0.10
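
To make these ranges concrete, here is how the defaults break down (illustrative arithmetic only; your allocation may differ):

# Cluster subnet 10.244.0.0/16 splits into 256 per-node /24 subnets:
#   node 1 pods: 10.244.0.0/24  (~254 usable pod IPs)
#   node 2 pods: 10.244.1.0/24
#   ...up to 10.244.255.0/24
# Service subnet 10.96.0.0/12 covers 10.96.0.0 - 10.111.255.255,
# so the DNS service IP 10.96.0.10 falls inside it, as required.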

Set Up Anti-Spoofing

If you are using virtual machines to deploy the Kubernetes workers, anti-spoofing protection must be disabled. To ensure that MAC address spoofing is enabled in Hyper-V, run the following PowerShell command as Administrator on the machine hosting the VMs:

Get-VMNetworkAdapter -VMName "<name>" | Set-VMNetworkAdapter -MacAddressSpoofing On

If you are using VMware, the guest adapter should have promiscuous mode enabled.
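
To double-check that the setting took effect on a Hyper-V host, you can inspect the adapter (replace <name> with your VM name, as above):

Get-VMNetworkAdapter -VMName "<name>" | Select-Object VMName, MacAddressSpoofing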

Choose a Networking Solution

Choose a networking solution that ensures the virtual cluster subnet is routable across all the cluster nodes. There are several ways to do this, namely:

  1. You could use a Host-Gateway topology, where a third-party Container Networking Interface (CNI) plugin sets up the routes on each host.
  2. You could use an Upstream L3 Routing topology, configuring a top-of-rack (ToR) switch to route the cluster subnet.
  3. A third option is to use Open vSwitch (OvS) with Open Virtual Network (OVN), which creates a logical network among containers running on multiple hosts.

For simplicity's sake, in this tutorial we are going to use the Flannel CNI plugin (more on that in step 2). It's a very simple overlay network, created by CoreOS, that satisfies the Kubernetes requirements.

Step 2: Set Up the Kubernetes Master

The official documentation for installing and initializing a Kubernetes master can be found here. This article describes the procedure for Kubernetes v1.13, which may change in the future. The following commands must be run in a bash shell as root. To install and update the Kubernetes master, do the following:

Update the Server

Start by updating the Linux machine:

apt-get update -y && apt-get upgrade -y

Install Docker

Kubernetes can use many container runtimes, but the most widely used is Docker. The installation procedure for Docker is outside the scope of this article, but it’s quite simple. When installing Docker, you should align the Docker version with the Kubernetes version. For each Kubernetes version, there’s a list of validated Docker versions. You can check the list of validated versions in the changelog file.

Now you should install Docker, following these steps for Docker CE or these for Docker EE.
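
For instance, on Ubuntu you can list the available Docker CE versions with apt and pin a validated one. The version string below is only an example; check the changelog for the version validated against your Kubernetes release:

# List the docker-ce versions available in the apt repository:
apt-cache madison docker-ce
# Install a specific validated version (example string; adjust as needed):
apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu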

Verify the Docker installation by running a hello-world container:

docker run hello-world

Install Kubeadm, Kubelet, and Kubectl

To bootstrap your Kubernetes cluster, install kubeadm. It’s a tool that sets up the best-practice defaults to get a viable Kubernetes cluster up and running.

The primary node agent that runs on each node is kubelet. It takes a set of pod specifications and ensures that the containers described in those specs are running and healthy.

Finally, kubectl is the command-line interface that you use to manage the Kubernetes cluster. Install all three packages:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update && apt-get install -y kubelet kubeadm kubectl
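
Optionally, you can hold these packages at the installed version so an unattended apt upgrade doesn't push them out of sync with the cluster:

apt-mark hold kubelet kubeadm kubectl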

Turn Off Swap Space

For Kubernetes to work properly, you must turn off the swap space:

vi /etc/fstab  # (remove any swap entry)
swapoff -a
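
You can verify that no swap remains active; if the following command prints nothing, swap is fully disabled:

swapon --show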

Initialize the Master

Assuming you are using the default cluster subnet (10.244.0.0/16) and service subnet (10.96.0.0/12), initialize the master using kubeadm:

kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

This operation may take a few minutes. Once the process has completed, you should get a confirmation message stating that the Kubernetes master was successfully initialized.

Change Permissions for Kubectl

To start using the cluster and to use kubectl as a regular user (not the root user), run the following commands in an unelevated, non-root shell:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Enable Mixed-OS Scheduling

Certain Kubernetes resources, like the kube-proxy pods, can be scheduled on all nodes. To guarantee that the kube-proxy DaemonSet targets only Linux nodes, it needs to be patched with a NodeSelector.

Download this Linux NodeSelector and apply it to target only Linux:

wget https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/roles/win_nodes/kubernetes_patch/files/nodeselector-os-linux-patch.json
kubectl patch ds kube-proxy -n=kube-system -p "$(cat nodeselector-os-linux-patch.json)"
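
For reference, the patch is nothing more than a nodeSelector added to the pod template; at the time of writing, the downloaded file looked roughly like this:

{
  "spec": {
    "template": {
      "spec": {
        "nodeSelector": {
          "beta.kubernetes.io/os": "linux"
        }
      }
    }
  }
}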

Verify the Kubernetes Master State

After completing the previous steps, you should check that the system is stable.

This means that the pods for all Kubernetes master components (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) must be in Running state. These pods live in the kube-system namespace.

$ kubectl get pods -n kube-system
NAME                         READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-gmt5d     0/1     Pending   0          9s
coredns-576cbf47c7-xn6sf     0/1     Pending   0          9s
etcd-n0                      1/1     Running   0          4m28s
kube-apiserver-n0            1/1     Running   0          4m23s
kube-controller-manager-n0   1/1     Running   0          4m26s
kube-proxy-fxv2r             1/1     Running   0          15s
kube-scheduler-n0            1/1     Running   0          4m24s

However, the coredns pods will remain in the Pending state until a networking solution is configured (this is done in the next step). Also, calling kubectl cluster-info will show where the Kubernetes master and KubeDNS are running.
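
The output of kubectl cluster-info should look something like this (with your master's address in place of <master-ip>):

$ kubectl cluster-info
Kubernetes master is running at https://<master-ip>:6443
KubeDNS is running at https://<master-ip>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy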

Step 3: Set Up Flannel Networking

As mentioned previously, we are using Flannel as a third-party CNI plugin to set up the routes for the virtual cluster subnet. Flannel will be configured in host-gateway mode, which defines static routes between the pod subnets on all nodes. To configure Flannel networking, follow these steps.

Enable Bridged IPv4 Traffic

In the master node, you need to enable bridged IPv4 traffic to iptables chains:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
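
This setting does not survive a reboot by itself; to make it persistent, you can drop it into a sysctl configuration file (the file name k8s.conf is just a convention):

echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system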

Configure Flannel

To configure Flannel, start by downloading the latest Flannel manifest:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

To enable host-gateway networking across Windows/Linux, you need to make some changes to the net-conf.json section of the downloaded kube-flannel.yml file:

  1. The type of network backend must be set to host-gw instead of vxlan.
  2. The cluster subnet must be properly defined (set to 10.244.0.0/16 in this example).
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }
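
If you prefer to script the backend change, a one-liner like this works, assuming the downloaded manifest contains the exact string "Type": "vxlan":

sed -i 's/"Type": "vxlan"/"Type": "host-gw"/' kube-flannel.yml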

Launch Flannel

Flannel runs as a DaemonSet in Kubernetes, defined in the kube-flannel.yml file. To launch it, run the following command:

kubectl apply -f kube-flannel.yml

The Flannel pods are Linux-based, so you need to make sure that they only run on Linux nodes. To achieve this, apply the Linux NodeSelector patch mentioned previously to the kube-flannel-ds DaemonSet. For the Windows worker nodes, Flannel will be launched via the flanneld host-agent process; there's more on this later, when we join the Windows nodes.

If you look carefully at the DaemonSets defined in kube-flannel.yml, or check the ones already deployed in the cluster, you will see that there are several kube-flannel-ds-*. Depending on your processor architecture, you need to target the right one. Assuming you're running on an AMD64/x86-64 server, you need to patch kube-flannel-ds-amd64:

kubectl patch ds kube-flannel-ds-amd64 -n kube-system -p "$(cat nodeselector-os-linux-patch.json)"

Validate the Flannel Configuration

After a few minutes, all pods should be Running.

$ kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-gmt5d      1/1     Running   0          12m
coredns-576cbf47c7-xn6sf      1/1     Running   0          12m
etcd-n0                       1/1     Running   0          12m
kube-apiserver-n0             1/1     Running   0          12m
kube-controller-manager-n0    1/1     Running   0          12m
kube-flannel-ds-amd64-wq779   1/1     Running   0          1m
kube-proxy-vbzdj              1/1     Running   0          12m
kube-scheduler-n0             1/1     Running   0          12m

The Flannel DaemonSet should have the Linux NodeSelector applied:

$ kubectl get ds kube-flannel-ds-amd64 -n kube-system
NAME                    ...   NODE SELECTOR                                                AGE
kube-flannel-ds-amd64   ...   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   12m

Step 4: Set Up the Windows Worker Nodes

Repeat the following steps for each worker node you add to the cluster. The commands must be run in an elevated PowerShell session.

Update the Server

Update the Windows Server machine with the following:

sconfig   # then select option 6 (Download and Install Updates)

Install Docker

The installation procedure for Docker on a Windows Server is described here. If the Docker service is not running after the reboot, you can start it manually:

Start-Service docker

Validate the Docker installation by running a hello-world container:

docker run hello-world:nanoserver

Create the Pause Image

The pause container holds the network namespace for the pod. It has few responsibilities: its job is to acquire the pod's IP address, set up the network namespace, and then go to sleep. This Kubernetes design allows a given container in a pod to die and come back to life without losing the network setup.

For Kubernetes to work, this pause image must be properly prepared. It's essential to pull and tag an image that matches the worker node's Windows version. Incompatible images will cause deployment problems, such as leaving the pod indefinitely in the ContainerCreating status.

To create the pause image of Windows Server 1809, follow these steps:

1. Pull the image that matches the Windows version on the worker node. Since we are using Windows Server 1809, we need the image nanoserver:1809:

docker pull mcr.microsoft.com/windows/nanoserver:1809

2. Tag the image. For the Dockerfiles to work, the pause image must be tagged with the latest tag:

docker tag mcr.microsoft.com/windows/nanoserver:1809 mcr.microsoft.com/windows/nanoserver:latest

3. Validate that the pause image was properly created and actually runs on the server by running the container and checking that a command prompt appears:

docker run mcr.microsoft.com/windows/nanoserver:latest

Copy the Kubernetes Certificate

Create a directory to store the Kubernetes binaries, deployment scripts, and config files:

mkdir C:\k

Copy the Kubernetes config file, which contains the cluster certificates (available on the master node at $HOME/.kube/config), to the previously created C:\k directory.
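
How you copy the file is up to you. As one possibility, if the optional OpenSSH client feature is installed on the Windows server, scp works (the user and master IP below are placeholders):

scp <user>@<master-ip>:~/.kube/config C:\k\config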

Download the Kubernetes Binaries

The Kubernetes binaries (kubectl, kubelet, and kube-proxy) can be found here. You should target the latest stable version. The binary download links are available in the CHANGELOG.md file. Look for kubernetes-node-windows-amd64.tar.gz.

Extract the archive and place the mentioned binaries into C:\k.

Set Up Kubectl

To be able to control the cluster from Windows using the kubectl command, you need to set some environment variables:

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)
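
To confirm the setup, open a new PowerShell session (so the updated Path and KUBECONFIG are picked up) and query the cluster; at this point, only the Linux master should be listed:

kubectl get nodes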

Join the Windows Worker to the Cluster

Microsoft provides a collection of Flannel deployment scripts in this Microsoft SDN repository that help you join a Windows worker node to the Kubernetes cluster.

You can download the ZIP file here and extract the contents of the Kubernetes\flannel directory into C:\k.

Ensure that your cluster subnet (e.g. 10.244.0.0/16) is correct in the file: l2bridge\net-conf.json.

Finally, you are ready to join the node. Assuming that you are using the default subnets and that the Windows node has the IP address 192.168.160.152:

cd C:\k
.\start.ps1 -ManagementIP 192.168.160.152 -ClusterCIDR 10.244.0.0/16 -ServiceCIDR 10.96.0.0/12 -KubeDnsServiceIP 10.96.0.10 -LogDir C:\k\log

Verify the Windows Worker State

After launching start.ps1, flanneld may be stuck in “Waiting for the network to be created.” As a workaround, you should relaunch start.ps1.

If everything went well, you should be able to:

  • View the joined Windows nodes by running the command kubectl get nodes on any node.
  • See host-agent processes for flanneld, kubelet, and kube-proxy running on the worker node(s), as shown below.
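
A quick way to check those host-agent processes is the following, run in PowerShell on the worker; it reports an error for any process that isn't running:

Get-Process flanneld, kubelet, kube-proxy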

Step 5: Test the Cluster and the Network

Now that you have a running Kubernetes cluster with Windows nodes, let's deploy a simple application to test the cluster and the network. The application is a very straightforward PowerShell web service that outputs the pod IP address and the number of times it was accessed. To run the test, do the following:

Validate the Cluster

Start by checking that all nodes are healthy:

$ kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
linuxmaster   Ready    master   8d     v1.13.1
winworker01   Ready    <none>   8d     v1.13.1
winworker02   Ready    <none>   7d5h   v1.13.1

Download and Deploy the Service

Download the web service Kubernetes configuration YAML file:

wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/simpleweb.yml -O simpleweb.yml

Important: Make sure that the container image in the simpleweb.yml file matches your Windows version. This article assumes you are using Windows Server 1809, so if needed, you must change the container image to mcr.microsoft.com/windows/servercore:1809.

Apply the following configuration:

kubectl apply -f simpleweb.yml

This creates a deployment and a service. After a few seconds, you should see two pods running, since two replicas were requested in the configuration file:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
win-webserver-6bbbf69569-8lfxx   1/1     Running   0          3m
win-webserver-6bbbf69569-d7tsx   1/1     Running   0          3m

To invoke the web service you need to know its IP address, which is provided by the service:

$ kubectl get service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        8d
win-webserver   NodePort    10.101.245.56   <none>        80:32092/TCP   3m

And now you can invoke the web service to check that it's running correctly:

$ curl 10.101.245.56
<html><body><H1>Windows Container Web Server</H1><p>IP 10.244.1.17 callerCount 1 </body></html>
$ curl 10.101.245.56
<html><body><H1>Windows Container Web Server</H1><p>IP 10.244.1.16 callerCount 2 </body></html>

The IP address returned by the web service is the pod IP address, confirming that the service is effectively load-balanced across the pods running on the Windows worker nodes.

Increasing or decreasing the number of pods is as simple as running a command. For instance, to increase the number of pods to three, run the scale command:

kubectl scale --replicas=3 -f simpleweb.yml
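
To confirm the new replica count, you can query the deployment (the win-webserver name is taken from the pod names shown above):

kubectl get deployment win-webserver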

Steer Your Containers With Kubernetes

So there you have it, your own on-premises Kubernetes cluster! Kubectl provides everything you need to manage and monitor the cluster, but if you prefer a web-based dashboard, you can install the Kubernetes dashboard.

Kubernetes is another piece of the puzzle. Accelerate your time-to-market developing in low-code, implement a full-fledged microservices architecture, deploy your applications into containers, and then sit back and let Kubernetes orchestrate, keeping everything running smoothly.

Go ahead and enjoy your brand new Windows Kubernetes cluster! We'd love to hear from you, so please let us know how Kubernetes is working for you and share your experience.


João Valentim
OutSystems Engineering

A System Owner, João focuses on the deepest levels of the OutSystems platform.