WSLinux+K8S: The Interop way

Nunix
Jul 21, 2018 · 11 min read

Forewords

First things first: I am a hobbyist who blogs about technology that I love and have fun learning. So please, while reading, always keep in mind that I might not know or apply potential best practices, as my blog posts are normally based on proofs of concept. Use the following for your own tests, and if you read something that is really wrong, please contact/teach me (@nunixtech) and I will update the blog too.

Happy reading!

Special Thanks!

And before I go all technical: as you can see in the image below, this blog post would never have been possible without all the knowledge shared in the different blog posts and official documentation accessible all over the internet.

All my sources just for this blog post

So thank you Liz Rice, @HashiCorp, @kubernetesio, @Docker and of course @docsmsft for sharing and documenting!

Introduction

The Kubernetes (k8s) momentum seems to have no end, and the ways of running a local cluster as a dev environment are multiplying. However, when it comes down to using WSLinux and k8s in a virtualized environment, it (almost) always ends in path edits inside the config file(s) or in relying exclusively on the Windows-based clients.

A couple of good examples are the following:
1. https://blog.stangroome.com/2018/06/25/minikube-and-wsl/ by @jstangroome
2. https://abelsquidhead.com/index.php/2018/04/25/containers-kubernetes-and-devops-for-an-old-as-dirt-developer-or-devops-nirvana-with-kubernetes-part-2many/ by @AbelSquidHead

So the question is: what other solution could be found in order to have a local k8s cluster, with the Docker daemon in VirtualBox, and without all the path issues that we face with the Minikube installation?

The technologies used

Here is the list of the technologies used, which need to be fully integrated. And while I will provide the setup for the installation of the k8s “environment”, I will link the documentation for all the points below in case you feel like doing it yourself:

In addition, kubectl also needs to be installed on WSLinux.

Building the environment

Now that we have the list of all the pieces needed to build our local cluster, here are the extra steps to make them fit together into a nice and shiny dev environment. I will start from the Vagrant box step listed above and assume that the first four points are done by following the documentation.

Making our $HOME cozy

Before the config files are created, here is a small tweak that I now do all the time: link your WSLinux $HOME to your Windows one and copy over the content of the current home (or at least the dotfiles):

$ cd /home
$ mv <username> <username>.bak
$ ln -s /mnt/c/Users/<Windows username> <username>
$ cp -r <username>.bak/.* <username>/

This small tweak allows us to modify the files in our home without any fear of breaking the Linux permissions (and @richturn_ms will not unleash the Dragons). For more information about this tweak, please read the excellent blog post from @bketelsen: https://brianketelsen.com/going-overboard-with-wsl-metadata/

The builder: Vagrantfile

This file is what will actually help us create the virtual machine with the target OS and install Docker.

I followed Liz’s blog post thoroughly, and in order to make it work for Windows, here is the Vagrantfile with the changes highlighted:

$ mkdir -p ~/vagrant/kubebox
$ cd ~/vagrant/kubebox
$ vagrant init
$ vi Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# This script to install Kubernetes will get executed after we have provisioned the box
$script = <<-SCRIPT
# The user is vagrant, so switching to root
sudo -s
# Install kubernetes
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# kubelet requires swap off
swapoff -a
# keep swap off after reboot by commenting the line in FSTAB
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# sed -i '/ExecStart=/a Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sed -i '0,/ExecStart=/s//Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"\n&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Get the IP address that VirtualBox has given this VM
IPADDR=`ifconfig eth1 | grep netmask | awk '{print $2}'| cut -f2 -d:`
echo This VM has IP address $IPADDR
# Set up Kubernetes
NODENAME=$(hostname -s)
kubeadm init --apiserver-cert-extra-sans=$IPADDR --node-name $NODENAME
# Set up admin creds for the vagrant user
echo Copying credentials to /home/vagrant...
sudo --user=vagrant mkdir -p /home/vagrant/.kube
cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config
SCRIPT

Vagrant.configure("2") do |config|
  # Specify your hostname if you like
  config.vm.hostname = "kubebox"
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", type: "dhcp"
  # Setting a static IP for reaching the cluster from WSLinux
  config.vm.network "private_network", ip: "11.8.19.5"
  config.vm.provision "docker"
  config.vm.provision "shell", inline: $script
end

Note: if you have VSCode installed on Windows, you can use it instead of vi by typing code Vagrantfile. Thanks to the $HOME tweak, we can leverage the best of both worlds.

Now that the file is created, the only remaining step is to actually create the virtual machine by typing vagrant up.

VM creation with vagrant on WSLinux

Completing the setup

Now that you have a VM with Docker and k8s installed, it’s time to complete the setup, still based on Liz’s blog:

Accessing k8s cluster from WSLinux
Finalizing k8s setup
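Since the captures only show the result, here is a rough sketch of what those two steps translate to in commands, following Liz’s post: the static IP comes from the Vagrantfile above, the vagrant user has the default password vagrant, and Weave Net is simply the pod network used in her walkthrough (any other would do):

```shell
# Copy the admin kubeconfig out of the VM (static IP set in the Vagrantfile)
mkdir -p ~/.kube
scp vagrant@11.8.19.5:/home/vagrant/.kube/config ~/.kube/config

# Finalize the cluster: install a pod network and allow workloads on the single (master) node
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
```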

Conclusion

At this point, you can manage your local k8s cluster from WSLinux and, as you could see, no path change was involved. Thanks to Vagrant, it becomes almost as easy as Minikube to install and, more importantly, it can be reproduced as much as needed (I did at least five vagrant destroy runs during this blog post).

I really hope it will help you and, again, please do not hesitate to tweet me feedback or, even better, an improvement.

>>> Nunix out <<<


Bonus 1: A beautiful Dashboard

As a first bonus, I will explain how I got the Dashboard working without going into too much detail:

Steps in a nutshell

Paste the token you got in step 3 and log in
A visual overview can always help
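For the record, here is approximately what those steps look like in commands; the dashboard YAML URL was the recommended one at the time of writing, and the SSH tunnel reuses the static IP from the Vagrantfile (treat all of this as lab-only, definitely not a secure setup):

```shell
# Deploy the dashboard (URL as of k8s 1.11; check the kubernetes/dashboard repo for the current one)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# List the secrets and copy the login token out of one with admin rights
kubectl -n kube-system get secret
kubectl -n kube-system describe secret <secret name>

# Tunnel to the VM, run kubectl proxy there, then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
ssh -L 8001:127.0.0.1:8001 vagrant@11.8.19.5 kubectl proxy
```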

Lessons learned

While the installation is quite straightforward, access to the dashboard was really complicated to achieve. The reason I created a tunnel instead of reaching the static IP is an error that I could not get around. Hopefully, someone will find a more “professional” and secure way.


Bonus 2: What’s a boat without its Helm?

The k8s ecosystem is also renowned for its tooling, which helps with developing and provisioning the applications on the k8s cluster.

One of them is called Helm and acts as a package manager.

Access: https://docs.helm.sh/using_helm/#role-based-access-control

Once again, let me go quickly on the installation steps:

Helm installed on WSLinux
$ vi ~/.kube/helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system 
Helm Pod installed on k8s cluster
Brigade chart installed with Helm
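Put together, the Helm v2 steps look roughly like this; the Brigade repository URL is my assumption based on the chart shown in the capture:

```shell
# Create the Tiller service account and cluster role binding from the file above
kubectl apply -f ~/.kube/helm-rbac.yaml

# Install Tiller into the cluster using that service account (Helm v2)
helm init --service-account tiller

# Example chart install once the Tiller pod is ready
helm repo add brigade https://azure.github.io/brigade
helm install brigade/brigade --name brigade-server
```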

Lessons learned

Once more, the RBAC and account management were critical, and I “lost” quite some time trying to understand them instead of just running the first Google answer.


Bonus 3: Do the same with Minikube

While writing this blog post really helped me understand the basic low-level setup of a local single-node k8s cluster, the initial challenge was to have a solution with Minikube.

Here is what I could find and, while a bit hacky, it works with only one path change. As in the blog post above, you will need VirtualBox to be already installed:

Showing both Minikube *nix binary and the Windows client
Showing both Kubectl *nix binary and the Windows client
$HOME linked to Windows profile directory
Symlink to VBoxManage.exe
WSL Minikube error and config.json file content
Backup the config and delete the cluster
Minikube cluster created
Kubectl config embed certs
Result of the diff: 4 values missing and all the paths translated
Edit backup config
Replace the “Windows config” with the “Linux config”
Managing k8s cluster with the Linux clients
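The heart of the trick is the path translation shown in the diff capture: every Windows path in the config files becomes its /mnt/c equivalent. Here is a tiny hypothetical helper, win2wsl, that illustrates the rewrite (it is my own sketch, not something Minikube or kubectl provides, and the username is just an example):

```shell
# Hypothetical helper: turn a Windows path into its WSL /mnt/c equivalent
win2wsl() {
  printf '%s\n' "$1" | sed -e 's|^C:|/mnt/c|' -e 's|\\|/|g'
}

# The kind of value found in the Windows .kube/config
win2wsl 'C:\Users\nunix\.minikube\client.crt'
# → /mnt/c/Users/nunix/.minikube/client.crt
```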

Bonus 4: Docker CE for lightning speed setup

Initially I was going to blog only about the Vagrant solution; then the Minikube solution got a breakthrough and, while tweeting, @jldeen (ok, I admit I stressed out) asked if I had tested it with Docker CE.

While I have blogged a lot in the past about getting Docker and WSLinux working together, I realized that, to be totally honest with this blog’s title, I should also mention Docker CE.

Again, here are the technical steps to get a Kubernetes environment working and, as you can already guess from the title of this section, it’s the easiest one to get working!

Prerequisites

If you followed this blog and now want to test this solution, you will need some preparation first:

Activating k8s in DockerCE
Installing k8s cluster

Connection

Once your cluster is installed, we need to have our tooling on WSLinux communicate with Docker CE:

$ ln -s /mnt/c/Users/<Windows username>/.docker $HOME/.docker
$ ln -s /mnt/c/Users/<Windows username>/.kube $HOME/.kube
Content of the config directories

Testing and enjoying your new setup

And voilà! You can now connect to your k8s cluster by running kubectl.

Checking the Docker CE k8s cluster
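If kubectl was previously pointing at another cluster, the only extra step should be switching to the context that Docker for Windows creates, named docker-for-desktop at the time of writing:

```shell
# Select the Docker CE cluster and check it answers
kubectl config use-context docker-for-desktop
kubectl get nodes
```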

Written by Nunix, WSL Corsair + Winsider + Docker fanboy and Cornerstone Client Advocate.