Hyper-V and Windows Kubernetes

Jung-Hyun Nam
Oct 2 · 16 min read
You can create a hybrid Kubernetes cluster with a Windows worker node on your MacBook via Boot Camp!

Starting with Kubernetes 1.14, Windows worker node support reached general availability. Since then, many improvements have landed to narrow the gap with Linux worker nodes.

IMHO, there are two significant problems to solve before using Windows Kubernetes in production environments.

  • Both Windows containers and Windows Kubernetes depend on networking features provided by the Host Network Service (a.k.a. HNS). When the Kubernetes cluster creates a new service, the Windows worker node routes traffic to the service via its local load balancer, and each service consumes an ephemeral port. Inevitably, all ephemeral ports are eventually exhausted, which blocks further connections.
  • Windows containers do not support elevated (privileged) mode, so the administrator had to install kube-proxy directly on the worker node rather than as a container, and configure all of the network parameters correctly by hand. This makes the node extremely difficult to manage, and every Kubernetes upgrade has to be performed on the worker node manually.

Kubernetes 1.19 finally resolves these two problems, which makes a local installation of Windows Kubernetes much more comfortable than before.

This article demonstrates how I installed a local Windows Kubernetes cluster with Hyper-V on my MacBook Pro.

I was pleased to find a great tutorial article written by Key Kim. Thank you for your excellent post.

Build a stable Windows 10 environment on MacBook Pro

It's not as well known as it should be, but macOS has a security feature called FileVault that protects disk data if your Mac is stolen or lost. Windows has a similar feature called BitLocker.

Unfortunately, with FileVault turned on, you may encounter a flaky Windows 10 installation experience, especially on the latest Intel-based Macs; this varies by model and configuration.

So my suggestion is to turn FileVault off before you install Windows 10 on your Mac, especially if you plan to use a complex feature like Hyper-V.

Should I turn off FileVault anyway?

There are some Plan-B options that let you keep FileVault enabled: you can choose virtualization software that supports "nested" virtualization.

However, in my personal experience, “nested virtualization” performance is generally dreadful, including VMware and Parallels.

Suppose you cannot turn off FileVault anyway. In that case, you can choose an alternative such as cloud computing with nested virtualization (e.g., Dv3 or Ev3 on Microsoft Azure, or a public cloud with bare-metal support).

From a cost perspective, IMHO the Dv3 or Ev3 instances on Microsoft Azure are the budget-friendly choice.

Before you choose Dv3 or Ev3 on Azure, please consult further information on this page.

First things first

To keep things concise, I assume you meet the following conditions. I'll explain why each condition matters later.

  1. You have a broadband internet connection, not a metered one. (In particular, avoid mobile tethering.)
  2. This article was tested with AMD64 Hyper-V and Windows containers.
  3. Your Hyper-V host has 16GiB or more memory and a 512GiB (0.5TiB) or larger SSD.
  4. You have installed the latest version of Windows 10 Pro.
  5. You have installed all Hyper-V components in the Add/Remove Windows Features dialog.
  6. You have a Windows Server 2019 Datacenter installation copy, English (United States) version.

Configuring Hyper-V Network

When everything is ready, you can start configuring the Hyper-V internal network in the Hyper-V administrative console. Open the virtual switch administration dialog and create an internal virtual switch named K8s.

Create an internal switch in the virtual switch administrative window.

All member computers of this network also need internet access. To achieve this, configure internet connection sharing: find an internet-connected network adapter, open the Sharing tab, and share it with the newly created virtual network adapter.

Image for post
Image for post
You can set internet connection sharing in your internet-connected adapter’s properties window.

Create Hyper-V Virtual Machines

To create a hybrid Kubernetes cluster, you will need two VMs with Ubuntu 18.04 LTS installed (a Linux master node and a Linux worker node) and one Windows Server 2019 VM.

You can build your Ubuntu Linux VMs any way you like, but I will use the Quick VM creation feature to save time and effort. Quick VM creation uses the OOBE (Out-Of-Box Experience, the same first-boot flow you see on a newly bought PC or laptop), so there is no additional file copy procedure; the wizard asks a few essential questions to initialize the environment, and you get your desktop instantly.

Please create three VMs that meet these conditions.

  • All VMs are 2nd generation Hyper-V VMs.
  • All VMs have at least two processor cores and 2GiB or more memory.
  • For the Windows VM, install without the Desktop Experience option during setup. This means you are using the Windows Server Core edition.
  • All VMs are updated with the latest security fixes. (In Windows Server Core, you can use the sconfig command for this.)
  • All VMs are configured with static IP addresses, not DHCP. (In Windows Server Core, you can use the sconfig command for this.)
  • Each VM's hosts file (/etc/hosts on Linux, C:\Windows\System32\Drivers\etc\hosts on Windows) contains hostname and address mappings for every Kubernetes node, including itself. When correctly registered, you can test the connections with the ping command.
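
As an illustration, the hosts file entries on each node might look like the following (the IP addresses and hostnames here are hypothetical; substitute your own static addresses and names):

```
# Example hosts file entries (hypothetical addresses and names)
192.168.137.10  k8s-control-plane
192.168.137.11  k8s-linux-worker
192.168.137.12  k8s-windows-worker
```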

Configuring SSH for client and nodes

For efficiency, you can work over multiple SSH connections. Here are some pointers for configuring SSH between your Windows 10 host and the VMs.

You can install the Windows Terminal App from Microsoft Store.

Please consult this post to configure the SSH client on your Windows 10 host.

On the Linux servers, you can follow this guide to configure the SSH server. Add your public key from Windows 10; the public key is located at %USERPROFILE%\.ssh\id_rsa.pub (in PowerShell, $env:USERPROFILE\.ssh\id_rsa.pub).

Lastly, you can configure your Windows Server to accept SSH connections by following this article. You can register your public key in the same manner.

Is everything ready? Let’s dive further to complete the configuration.

Linux VM Configuration (Common Steps)

This section contains commonly required steps for all Linux Kubernetes nodes.

Install Software

Let’s install additional components.

sudo apt -y install curl vim apt-transport-https
  • curl: used to download shell scripts from the internet and pipe them to a shell.
  • vim: I will use this tool to edit text files. You can use another editor such as nano.
  • apt-transport-https: allows adding third-party HTTPS package repositories to the apt package manager. I will use it to add the Kubernetes apt repository to this system.

Installing and Configuring Docker

Let's install the Docker engine. I'll use the automatic installation script from Rancher, which makes installing the Docker engine quick and easy.

curl https://releases.rancher.com/install-docker/19.03.sh | sh

After installing the Docker engine, add the current user to the docker group so you can connect to the local Docker engine without sudo. Then log out and log in again.

sudo usermod -aG docker $USER
logout

To use the Docker engine with Kubernetes, let’s edit the daemon.json file.

sudo vim /etc/docker/daemon.json

The configuration file goes like this.

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

Save the file and restart the docker engine with this command.

sudo systemctl restart docker
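
A malformed daemon.json will stop the Docker engine from starting, so it can be worth validating the JSON syntax first. A minimal sketch, assuming python3 is installed (the file is written to /tmp here for illustration; on a real node you would check /etc/docker/daemon.json):

```shell
# Write a sample daemon.json to a scratch location for checking
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# json.tool exits non-zero on a syntax error
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "valid JSON"
```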

Changing System Settings

To create a Kubernetes node, you should turn off swap. Also, to make the CNI plugins interoperate between Windows and Linux nodes, a kernel parameter needs to change.

First, we need to turn off swap.

sudo swapoff -a

Then, change the /etc/fstab file so that swap stays off after the system restarts.

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
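
To see what this sed expression does, you can run it against a sample fstab line; any line containing " swap " gets commented out:

```shell
# The address / swap / matches swap entries; the s command prefixes them with '#'
echo '/swapfile none swap sw 0 0' | sed '/ swap / s/^\(.*\)$/#\1/g'
# → #/swapfile none swap sw 0 0
```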

Lastly, we need to change the kernel parameter to integrate Windows and Linux Flannel CNI plugin. (Reference)

sudo sysctl net.bridge.bridge-nf-call-iptables=1
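
Note that a sysctl set this way does not survive a reboot. One way to persist it is to drop the setting into /etc/sysctl.d/ (a sketch; the file name below is a convention, not a requirement):

```
# Persist the bridge setting across reboots
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system   # reload all sysctl configuration files
```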

Installing Kubernetes CLI

Let’s install Kubernetes CLI to initialize, configure and administer the Kubernetes cluster.

We need to add Google’s apt catalog to the system. Let’s run the commands below.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update

Then, install the related tools and make versions fixed.

sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Verify that kubeadm installed correctly. We will use this tool on every Linux node, both the control plane and the workers.

kubeadm version

If the version information is displayed correctly, your setup is done.

Configuring Linux Control Plane Node

To initialize the cluster and the control-plane node, run the command below.

sudo kubeadm init --pod-network-cidr=<your pod CIDR>

Note that this command specifies the CIDR range via the --pod-network-cidr option. This CIDR range is virtually allocated on the overlay network, and the overlay network is recognized across all of the cluster's member nodes, so for a local Kubernetes cluster you can simply pick any private CIDR range you like.

⚠ Important: after you run this command, the cluster initializes shortly, and you will see the join command line for other nodes. Copy that command somewhere safe for future use.
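
For reference, the saved join command looks roughly like this (the address and token below are made-up placeholders; use the exact line kubeadm printed):

```
sudo kubeadm join 192.168.137.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash printed by kubeadm>
```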

Then, copy your client configuration file into your home directory, which contains the newly created cluster’s administrative credentials.

mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can test the configuration file with the below command. We can monitor all pods going green with the command line.

watch kubectl get pods -A

After seeing all pods going green, press Ctrl+C to exit.

Install Linux Flannel CNI

Let’s install the Flannel CNI for Linux nodes. We need to modify the distributed YAML file to achieve our goal.

Download the kube-flannel.yaml file first.

curl -L https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml -o kube-flannel.yaml

Then open the YAML file with your favorite editor, and find the line net-conf.json: | string. You may find the lines similar below.

net-conf.json: |
  {
    "Network": "",
    "Backend": {
      "Type": "vxlan"
    }
  }

Here, Network should contain the CIDR range you specified when initializing the cluster. Then, under the Backend property, you need to add two properties.

  • VNI: 4096
  • Port: 4789

For completeness, the modified contents will look like below.

net-conf.json: |
  {
    "Network": "",
    "Backend": {
      "Type": "vxlan",
      "VNI": 4096,
      "Port": 4789
    }
  }

Save the modified YAML file and apply the YAML file with the below command line.

kubectl apply -f kube-flannel.yaml

Then you need to monitor all of the Flannel pods going green. You can watch the status of each pod with the below command.

watch kubectl get pods -A

Install Linux Worker Node

Suppose you still have the join command line that kubeadm init printed. You can run it on the new node by prepending sudo.

But if you are running the command much later, or you lost the command line, you can re-issue the join command from the control-plane node's shell with the command below.

kubeadm token create --print-join-command

Alternatively, you can recover the token and CA certificate hash with these command lines.

MASTER_IP=<control-plane VM's IP address>
TOKEN=$(kubeadm token list -o jsonpath='{.token}')
CA_CERT_HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')

By combining recovered values, you can join your new node with below command line.

sudo kubeadm join $MASTER_IP:6443 --token $TOKEN --discovery-token-ca-cert-hash sha256:$CA_CERT_HASH
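
If you want to see how the hash pipeline behaves, you can exercise it on a throwaway key; on the control-plane node the input would be the cluster's /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a scratch RSA key purely for demonstration
openssl genrsa -out /tmp/demo-ca.key 2048 2>/dev/null

# Hash the DER-encoded public key and strip the "(stdin)= " prefix with sed
openssl rsa -in /tmp/demo-ca.key -pubout -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is a 64-character hexadecimal digest, which is exactly the format the --discovery-token-ca-cert-hash option expects after the sha256: prefix.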

After the node joins the Kubernetes cluster, you can watch the node status become Ready with the command below. The transition from NotReady to Ready takes some time.

watch kubectl get nodes -o wide

Configuring Windows Worker Node

Let's get to my favorite part, the Windows worker node!

Before you get started

Almost all CNI plugins written for Windows depend on the HNS API, and HNS configures the virtual network on behalf of the CNI plugin.

Windows exposes installed network adapters by display name or by a randomly generated GUID, so the CNI plugin finds the appropriate network adapter by its display string.

Because of this, you need to use the American English version of Windows Server. Otherwise, Windows translates network adapter names into the default display language, and the adapter will not be named Ethernet.

Also, every Windows worker node should have an identical hardware configuration. Starting with Kubernetes 1.19, WINS is used to process administration requests from the pod, and this approach makes it possible to install kube-proxy as a container.

When kube-proxy is installed via a DaemonSet, differing hardware configurations cause the DaemonSet deployment to fail, and you would have to modify the DaemonSet every time to work around it. So if you are in this situation, check the DaemonSet file every time you deploy.

Checking Kubernetes Cluster Version

Adding a Windows worker node to the cluster is a little different. First, check that all Kubernetes nodes run the same version, and note which version that is.

kubectl get nodes -o wide

Checking Windows Server Version

As I mentioned earlier, you need the latest version of Windows Server to enable all of the Kubernetes features. Let's check the Windows Server version with the command below.

cmd.exe /s /c ver

Make sure you are using a version equal to or later than the one below.

Microsoft Windows [Version 10.0.17763.1432]

If your OS version is lower than 10.0.17763 (10 and 0 are the version numbers, and 17763 is the build number), you are not using Windows Server 2019. You should perform an in-place upgrade or switch to a machine with Windows Server 2019 installed.

If the first three parts of the version match but the last part is lower than 1432, your OS does not support the DSR feature in HNS.

In this case, run Windows Update or download the hotfix numbered KB4571748 from the Microsoft Update Catalog website.

Deploying Flannel CNI for Windows

To check that your default adapter name is Ethernet, run the command below in PowerShell on the Windows node.

Get-NetAdapter
After that, deploy the kube-proxy DaemonSet into the cluster. Replace v1.19.2 with the version you are actually running. Run the command below in your control-plane node's shell.

curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/v1.19.2/g' | kubectl apply -f -

Then, deploy the overlay version of Flannel CNI daemon set on a cluster. Let’s run the below command line in your control-plane node’s shell again.

kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml

If your adapter name in Windows is not Ethernet, adjust the command line as below to resolve the issue. (In the sed command, the first part is the original string, and the latter part is the replacement string.)

curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml | sed 's/Ethernet/Ethernet0 2/g' | kubectl apply -f -
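
The sed substitution simply rewrites every occurrence of the adapter name inside the downloaded manifest before it is applied; for instance:

```shell
# Replace the default adapter name with the actual one ("Ethernet0 2" here)
echo 'value: "Ethernet"' | sed 's/Ethernet/Ethernet0 2/g'
# → value: "Ethernet0 2"
```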

At this point, we still have not joined our Windows worker node to the cluster, so let's continue our work.

Install Docker Enterprise Edition for Windows

You can use Docker Enterprise Edition on Windows Server at no extra cost because its license is included with your Windows Server license, so you can download Docker EE from Microsoft's package repository.

Caution: don't confuse this with Docker CE or Docker Toolbox. You can download Docker CE from the official Docker website, but Docker CE does not support production Windows containers on Windows Server, and Docker Toolbox does not provide Windows container features at all.

First, let’s install the prerequisites of Docker EE installation.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Then, install the latest version of Docker Enterprise Edition.

Install-Package -Name docker -ProviderName DockerMsftProvider

Change the Docker engine startup mode to start automatically when the system starts.

Set-Service -Name 'docker' -StartupType Automatic

Finally, restart the server to apply system-wide configuration changes.

Restart-Computer -Force

Install Windows Kubernetes Worker Node

Finally, we will install the Windows Kubernetes worker node. We can use the PowerShell script developed by the Kubernetes project's Windows SIG.

curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/PrepareNode.ps1

We used the curl.exe included in Windows Server 2019.

curl.exe works well and is nicer to use than Windows PowerShell's Invoke-WebRequest, because Invoke-WebRequest requires Internet Explorer unless you specify the -UseBasicParsing option. Moreover, you are already familiar with the curl utility from the Linux environment.

After that, recheck the running Kubernetes version, and run the PowerShell script with the exact version number, as in the command line below.

.\PrepareNode.ps1 -KubernetesVersion v1.19.2

After you run the script, the directory C:\k is created and kubeadm.exe and other related CLI tools are copied there. Change to that directory and run the join command you saved earlier.

cd c:\k
.\kubeadm.exe join <IP address of control-plane node>:6443 --token <Token> --discovery-token-ca-cert-hash sha256:<CA Cert Hash>

When the Windows worker node joins, the deployed CNI DaemonSet automatically schedules its pods onto the newly joined node.

The sigwindowstools/flannel image is built on the Windows Server Core base image, which leads to a long pull time. Because of this, your Windows node will take a while to transition to the Ready state.

You can monitor the pulling status on the Windows worker node by running the docker pull command; you can discover the exact image name and tag by inspecting the deployed DaemonSet and pod information.

Deploying Test Applications

Let's deploy a hybrid Kubernetes application that involves Linux and Windows containers at the same time!

I will deploy the modified version of the AzureVote application, consisting of Python Django and Redis containers.

Here, the Redis pod uses a custom image: Redis ported to Windows, on the Windows Server Core 1809 base image. (I used the Redis port that was developed by Microsoft Open Technologies, a former Microsoft subsidiary.)

If the application runs correctly, we can see the web page with a voting user interface.

Let's look at the YAML file that does it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: rkttu/redis-windows:3.0-1809
        ports:
        - containerPort: 6379
          name: redis
      nodeSelector:
        "beta.kubernetes.io/os": windows
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: REDIS
          value: "azure-vote-back"
      nodeSelector:
        "beta.kubernetes.io/os": linux
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

We can deploy the application with the below command line.

kubectl apply -f azurevote.yaml

Then, we need to wait for all of the pods to reach the Ready state. As before, the first deployment takes a long time, so wait patiently.

watch kubectl get pods -A

To check that the Python Django pod runs successfully, take note of the pod's IP address and fetch a response with the curl command. If the response contains an HTML page, the Linux container is connected to the Windows Redis container.
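
For example, on the control-plane node (look up the real pod IP first; the commands below leave it as a placeholder):

```
kubectl get pods -o wide         # note the azure-vote-front pod's IP address
curl -s http://<pod IP> | head   # an HTML response means the front end is serving
```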

Checking the pod runs correctly

We can then assign an external IP address to the service for external access. Unlike a public cloud's Kubernetes, Hyper-V has no load balancer concept, so we allocate a node's IP address to the service.

Check the control-plane or worker node’s IP address and run the below command line to set valid IP addresses.

kubectl patch svc azure-vote-front -p '{"spec":{"externalIPs":["<node IP address>"]}}'

You can then browse to your node IP in a web browser and see the Azure Voting App interface.

Running AzureVote app in the browser

It's a shame that you cannot use the Windows worker node's IP address as the service's external IP.

However, you can route web services running on the Windows worker node through a Linux node's IP address. For example, we can host an IIS container on the Windows node and route to it through the Linux node's IP address.

apiVersion: v1
kind: Pod
metadata:
  name: iis
  labels:
    name: iis
  namespace: default
spec:
  containers:
  - image: microsoft/iis
    imagePullPolicy: Always
    name: iis
    ports:
    - containerPort: 80
  nodeSelector:
    beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: iis
  labels:
    name: iis
spec:
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080
  selector:
    name: iis
  type: NodePort
The IIS container on Windows worker node, within a Kubernetes cluster.

Saving and restoring entire Kubernetes cluster

Unlike a managed Kubernetes cluster in the cloud, I built my private local Kubernetes cluster on my MacBook, and the cluster drains my battery even when I'm not using it.

To avoid this, we can save and restore each VM of the cluster, going from the control-plane node to the worker nodes in sequential order.

We can save our battery of MacBook by hibernating and restoring VMs from the control-plane to worker nodes


Initially, I wanted to write about the open-sourced Windows Calico CNI plugin, released in summer 2020. However, the Windows Calico CNI plugin still needs work to be configured with the new WINS architecture, which is beyond my effort budget.

Soon, Windows Kubernetes will move from Docker Enterprise Edition to containerd, and the Windows version of Docker will be decoupled into a front-end CLI and containerd. This will take some time, but when it arrives, we can relax the strict version-matching rule between the host OS and the container image via Hyper-V containers, which require virtualization or nested virtualization.

If you have any further questions or suggestions, please let me know. You can write a response to this article.

Beyond the Windows

DevOps Engineer’s Blog
