WSLinux+K8S: The Interop way

11 min read · Jul 21, 2018



First things first: I am a hobbyist who blogs about technology that I love and have fun learning.
So please, while reading, keep in mind that I might not know or apply potential best practices, as my blog posts are normally based on proofs of concept.
Use the following for your own tests, and if you read something that is really wrong, please contact/teach me (@nunixtech) and I will update the blog too.

Happy reading!

Special Thanks!

And before I go all technical: as you can see in the gif below, this blog post would never have been possible without all the knowledge shared in the different blog posts and official documentation accessible all over the internet.

All my sources just for this blog post

So thank you Liz Rice, @HashiCorp, @kubernetesio, @Docker and of course @docsmsft for sharing and documenting!


The Kubernetes (k8s) momentum seems to have no end, and the ways of having a local cluster as a dev environment are multiplying.
However, when it comes down to using WSLinux and k8s in a virtualized environment, it (almost) always ends in path edits inside the config file(s) or in relying exclusively on the Windows-based clients.

A couple of good examples are the following:
1. by @jstangroome
2. by @AbelSquidHead

So the question is: what other solution could be found in order to have a local k8s cluster, with the Docker daemon in VirtualBox and without all the path issues that we face with the Minikube installation?

The technologies used

Here is the list of the technologies used, which need to be fully integrated.
And while I will provide the setup for the installation of the k8s “environment”, I will link the documentation for all the points below in case you feel like doing it yourself:

In addition, kubectl also needs to be installed on WSLinux.

Building the environment

Now that we have the list of all the pieces needed to build our local cluster, here are the extra steps to make them fit together and have a nice and shiny dev environment.
I will start from the Vagrant Box step listed above and will assume that the first 4 points are done following the documentation.

Making our $HOME cozy

Before the config files are created, here is a small tweak that I now do all the time: link your WSLinux $HOME to your Windows one and copy over the content of the current home (or at least the dotfiles):

$ cd /home
# Renaming and linking inside /home requires root
$ sudo mv <username> <username>.bak
$ sudo ln -s /mnt/c/Users/<Windows username> <username>
# Use "/." rather than "/.*" so the glob cannot pick up "." and ".."
$ cp -r <username>.bak/. <username>/

This small tweak will allow us to modify the files in our home without any fear of breaking the Linux permissions (and @richturn_ms will not unleash the Dragons).
And for more information about this tweak, please read the excellent blog from @bketelsen:
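To see the sequence in action without touching /home, here is the same mv/ln/cp dance on throwaway paths (all paths below are illustrative stand-ins, not the real profile directories):

```shell
# Stand-ins for the Windows profile and the current Linux home (illustrative)
mkdir -p /tmp/demo-winprofile
mkdir -p /tmp/demo-home && touch /tmp/demo-home/.bashrc
# Same three steps as above: back up, link, copy the dotfiles over
mv /tmp/demo-home /tmp/demo-home.bak
ln -s /tmp/demo-winprofile /tmp/demo-home
cp -r /tmp/demo-home.bak/. /tmp/demo-home/
# The dotfile now lives behind the link, i.e. on the "Windows" side
ls -A /tmp/demo-home
```

Because the copy goes through the symlink, `.bashrc` physically ends up in the Windows-side directory while still being reachable at the Linux path.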

The builder: Vagrantfile

This file is actually what will help us create the Virtual Machine with the target OS and install Docker.

I followed Liz’s blog post thoroughly, and in order to make it work for Windows, here is the Vagrantfile with the changes highlighted:

$ mkdir -p ~/vagrant/kubebox
$ cd ~/vagrant/kubebox
$ vagrant init
$ vi Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# This script to install Kubernetes will get executed after we have provisioned the box
$script = <<-SCRIPT
# The user is vagrant, so switching to root
sudo -s
# Install kubernetes
apt-get update && apt-get install -y apt-transport-https
curl -s | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# kubelet requires swap off
swapoff -a
# keep swap off after reboot by commenting the line in /etc/fstab
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# sed -i '/ExecStart=/a Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sed -i '0,/ExecStart=/s//Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"\n&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Get the IP address that VirtualBox has given this VM
IPADDR=`ifconfig eth1 | grep netmask | awk '{print $2}' | cut -f2 -d:`
echo This VM has IP address $IPADDR
# Set up Kubernetes
NODENAME=$(hostname -s)
kubeadm init --apiserver-cert-extra-sans=$IPADDR --node-name $NODENAME
# Set up admin creds for the vagrant user
echo Copying credentials to /home/vagrant...
sudo --user=vagrant mkdir -p /home/vagrant/.kube
cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config
SCRIPT

Vagrant.configure("2") do |config|
  # Specify your hostname if you like
  config.vm.hostname = "kubebox"
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", type: "dhcp"
  # Setting a static IP for reaching the cluster from WSLinux
  config.vm.network "private_network", ip: ""
  config.vm.provision "docker"
  config.vm.provision "shell", inline: $script
end
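The IPADDR line in the script above is just text plumbing over ifconfig’s output. Here is the same pipeline run against a sample line (the address is illustrative); the trailing `cut -f2 -d:` also handles the older `inet addr:` output format:

```shell
# A sample ifconfig line for eth1 (illustrative address)
sample='        inet 172.28.128.3  netmask 255.255.255.0  broadcast 172.28.128.255'
# Same pipeline as in the provision script: keep the netmask line,
# take the second field, and strip an eventual "addr:" prefix
IPADDR=$(echo "$sample" | grep netmask | awk '{print $2}' | cut -f2 -d:)
echo "This VM has IP address $IPADDR"
```

This is the address that gets baked into the API server certificate via --apiserver-cert-extra-sans, which is what later lets us reach the cluster from WSLinux.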

Note: if you have VSCode installed on Windows, you can use it instead of vi by typing code Vagrantfile
Thanks to the $HOME tweak, we can leverage the best of both worlds.

Now that the file is created, the only remaining step is to actually create the Virtual Machine by typing vagrant up

VM creation with vagrant on WSLinux

Completing the setup

Now that you have a VM with Docker and k8s installed, it’s time to complete the setup, still based on Liz’s blog:

  • Copy the config file from the VM into WSLinux. You can either connect to the VM with vagrant ssh and then cat ~/.kube/config
    Or, as we did set up a static IP in the Vagrantfile, run:
    $ mkdir ~/.kube && scp vagrant@ ~/.kube/kubebox.config
  • Edit the config file in WSLinux to point at the VM static IP:
    $ vi ~/.kube/kubebox.config
  • Add the $KUBECONFIG environment variable in WSLinux:
    $ export KUBECONFIG=$HOME/.kube/kubebox.config
    (add this to your .bashrc to make it permanent)
  • Test your k8s config and ensure kubectl connects from WSLinux
Accessing k8s cluster from WSLinux
Finalizing k8s setup
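Before pointing kubectl at it, it never hurts to check that the server: entry of the copied config really targets the VM’s static IP rather than a loopback address. A sketch with an illustrative file and address:

```shell
# Illustrative kubebox.config fragment; the real one is copied from the VM
mkdir -p /tmp/demo-kube
cat > /tmp/demo-kube/kubebox.config <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://172.28.128.4:6443
  name: kubernetes
EOF
# The address after "server:" must be the VM's static IP, not 127.0.0.1
grep 'server:' /tmp/demo-kube/kubebox.config
# Point kubectl at this file, as in the $KUBECONFIG step above
export KUBECONFIG=/tmp/demo-kube/kubebox.config
```

If the server: line still shows the address kubeadm picked inside the VM, edit it as described in the second step above before running kubectl.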


At this point, you can manage your local k8s cluster from WSLinux and, as you could see, no path change was involved.
Thanks to Vagrant, it becomes almost as easy as Minikube to install and, more importantly, it can be reproduced as often as needed (I ran vagrant destroy at least 5 times during this blog post).

I really hope it will help you and, again, please do not hesitate to tweet me your feedback or, even better, an improvement.

>>> Nunix out <<<

Bonus 1: A beautiful Dashboard

As a first bonus, I will explain how I got the Dashboard working, without going into too much detail:

Steps in a nutshell

  1. Install the Dashboard
    $ kubectl apply -f “”
  2. Create a new account with the Cluster Admin role (not a best practice as I read, but again, it’s a Dev environment)
    $ kubectl create serviceaccount cluster-admin-dashboard-sa
    $ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
    --clusterrole=cluster-admin \
    --serviceaccount=default:cluster-admin-dashboard-sa
  3. Get the secret from the account; we will need it to log in
    $ kubectl get secret | grep cluster-admin-dashboard
    cluster-admin-dashboard-sa-token-xxxxx ...
    $ kubectl describe secret cluster-admin-dashboard-sa-token-xxxxx
    token: ...
  4. Start the proxy inside the VM with a port forwarding
    $ ssh vagrant@ -L 9090: 'kubectl proxy --port=9090 --disable-filter=true'
  5. Login to the portal with your preferred browser on the following URL:!/login
Paste the token you got in step 3 and log in
A visual overview can always help
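Step 3’s two commands can be chained into one token lookup. Against a sample `kubectl get secret` listing (names and ages below are illustrative), the name extraction works like this:

```shell
# Sample `kubectl get secret` output (names and ages illustrative)
secrets='cluster-admin-dashboard-sa-token-4fh2p   kubernetes.io/service-account-token   3   1m
default-token-9xk2l                       kubernetes.io/service-account-token   3   5m'
# Pick the dashboard secret's name, as in step 3; this is the name
# you would then feed to `kubectl describe secret`
name=$(printf '%s\n' "$secrets" | grep cluster-admin-dashboard | awk '{print $1}')
echo "$name"
```

On the real cluster the same grep/awk pair saves you from copy-pasting the random suffix by hand.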

Lessons learned

While the installation is quite straightforward, getting access to the dashboard was really complicated.
The reason I created a tunnel instead of reaching the static IP is an error that I could not get around.
Hopefully, someone will find a more “professional” and secure way.

Bonus 2: What is a boat without its Helm?

The k8s ecosystem is also renowned for its tooling, which helps with developing and provisioning applications on the k8s cluster.

One of them is called Helm and acts as a package manager.


Once again, let me go quickly through the installation steps:

Helm installed on WSLinux
  • Create an account for Helm with cluster admin permissions
$ vi ~/.kube/helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
  • Create the account from the file you just created
    $ kubectl create -f helm-rbac.yaml
  • Create Helm management Pod
    $ helm init --service-account tiller
  • Confirm the Pod has been successfully created
    $ kubectl get pods --all-namespaces | grep tiller
Helm Pod installed on k8s cluster
  • Create a cluster role binding for Tiller; otherwise you will get an error when trying to deploy applications with Helm
    $ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
  • Finally, try to deploy an application
    $ helm repo add brigade
    $ helm install brigade/brigade
Brigade chart installed with Helm

Lessons learned

Once more, RBAC and account management were critical, and I “lost” quite some time trying to understand rather than just running the first Google answer.

Bonus 3: Do the same with Minikube

While my blog post really helped me understand the basic low-level setup for a local single-node k8s cluster, the initial challenge was to have a solution with Minikube.

Here is what I could find and, while a bit hacky, it works with only 1 path change.
As in the blog post, you will need VirtualBox to be already installed:

Showing both Minikube *nix binary and the Windows client
Showing both Kubectl *nix binary and the Windows client
  • Also, as stated in the blog post, I linked my $HOME to my Windows profile directory.
    This allows me to have only one set of .kube and .minikube directories:
$HOME linked to Windows profile directory
  • Minikube uses the VBoxManage application in order to communicate with VirtualBox and be able to create/manage the Virtual Machines.
    Create a symlink to the Windows application:
    $ ln -s /mnt/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe /usr/local/bin/VBoxManage
Symlink to VBoxManage.exe
  • Run $ minikube start -> this will end up with an error, but that’s OK as it will still create a config.json file with the Linux paths:
WSL Minikube error and config.json file content
  • Backup the file to your home or /tmp and run $ minikube delete
Backup the config and delete the cluster
  • Run $ minikube.exe start to create the cluster and VM with the Windows client
Minikube cluster created
  • In order to have the Linux kubectl client connect to your k8s cluster, you need to embed the certificates into the $HOME/.kube/config file by running the following commands
    $ kubectl config set-credentials minikube \
    --client-certificate=$HOME/.minikube/apiserver.crt \
    --client-key=$HOME/.minikube/apiserver.key --embed-certs
    $ kubectl config set-cluster minikube \
    --certificate-authority=$HOME/.minikube/ca.crt --embed-certs
Kubectl config embed certs
  • Now comes the “only” edit of this setup. If we diff the config file we backed up before against the one created with the Windows client, you will notice that 4 values are missing:
Result of the diff: 4 values missing and all the paths translated
  • Add the 4 values to the backup file, with the SSHKeyPath value translated using the command $ wslpath -u c:\\Users\\NND\\.minikube\\machines\\minikube\\id_rsa
Edit backup config
Replace the “Windows config” with the “Linux config”
  • And it’s done: you can use the Linux minikube and kubectl clients to manage your single-node k8s cluster
Managing k8s cluster with the Linux clients
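For reference, the `wslpath -u` translation used above can be approximated with a small sed pipeline (GNU sed; this is only a rough equivalent for illustration, not a replacement for the real tool):

```shell
# The Windows path from the wslpath step above
winpath='c:\Users\NND\.minikube\machines\minikube\id_rsa'
# Rough equivalent of `wslpath -u`: lowercase the drive letter behind
# a /mnt/ prefix, then flip backslashes to forward slashes (GNU sed)
linuxpath=$(printf '%s\n' "$winpath" | sed -e 's|^\(.\):|/mnt/\L\1\E|' -e 's|\\|/|g')
echo "$linuxpath"
```

Seeing the transformation spelled out also makes it obvious why this is the one value the Windows-generated config cannot share with the Linux clients as-is.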

Bonus 4: Docker CE for lightning speed setup

Initially I was going to blog only about the Vagrant solution; then the Minikube solution had a breakthrough and, while tweeting (OK, I admit I stressed out), @jldeen asked if I had tested it with Docker CE.

While I have blogged a lot in the past about having Docker and WSLinux working together, I realized that, to be totally honest with this blog’s title, I should also mention Docker CE.

Again, here are the technical steps to get a Kubernetes environment working and, as you can already imagine from the title of the section, it’s the easiest one to get working!


If you followed this blog and now want to test this solution, you will need some preparations first:

  1. The very first thing is to uninstall VirtualBox, as Docker CE requires Hyper-V to work (both for Linux Containers and for LCOW)
  2. Just in case, “clean” your existing kubernetes environment by moving (or deleting) the following directories: .kube / .minikube / .docker
  3. Enable the Windows features Hyper-V and Containers in the Control Panel
  4. After the required reboot, install Docker CE
  5. Finally, switch to Linux Containers (if not done by default) in order to activate Kubernetes as the default orchestrator
Activating k8s in DockerCE
Installing k8s cluster


Once your cluster is installed, we need our WSLinux tooling to communicate with Docker CE:

$ ln -s /mnt/c/Users/<Windows username>/.docker $HOME/.docker
$ ln -s /mnt/c/Users/<Windows username>/.kube $HOME/.kube
Content of the config directories
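The two symlinks are all the sharing needed: the Linux clients simply read through them into the Windows profile. A throwaway illustration of the mechanism (paths invented for the demo):

```shell
# Stand-in for the Windows-side .kube directory (illustrative)
mkdir -p /tmp/demo-winuser/.kube
echo 'apiVersion: v1' > /tmp/demo-winuser/.kube/config
# Same shape as the two ln -s commands above
ln -s /tmp/demo-winuser/.kube /tmp/demo-kubelink
# A Linux kubectl would read straight through the link
cat /tmp/demo-kubelink/config
```

Since Docker CE already writes both config directories on the Windows side, no path in either file needs to change.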

Testing and enjoying your new setup

And voilà! You can now connect to your k8s cluster by running kubectl

Checking the Docker CE k8s cluster




WSL Corsair + Winsider + Docker fanboy and Cornerstone Client Advocate.