Foreword
First things first: I am a hobbyist who blogs about technology that I love and have fun learning.
So please, while reading, always keep in mind that I might not know or apply all the potential best practices, as my blog posts are normally based on proofs of concept.
Use the following for your own tests, and if you read something that is really wrong, please contact/teach me (@nunixtech) and I will update the blog too.
Happy reading!
Special Thanks!
And before I go all technical: as you can see in the gif below, this blog post would never have been possible without all the knowledge shared in the different blog posts and official documentation accessible all over the internet.
So thank you Liz Rice, @HashiCorp, @kubernetesio, @Docker and of course @docsmsft for sharing and documenting!
Introduction
The Kubernetes (k8s) momentum seems to have no end, and the ways of having a local cluster as a dev environment are multiplying.
However, when it comes down to using WSLinux and k8s in a virtualized environment, it (almost) always ends in path edits inside the config file(s) or in relying exclusively on the Windows based clients.
A couple of good examples are the following:
1. https://blog.stangroome.com/2018/06/25/minikube-and-wsl/ by @jstangroome
2. https://abelsquidhead.com/index.php/2018/04/25/containers-kubernetes-and-devops-for-an-old-as-dirt-developer-or-devops-nirvana-with-kubernetes-part-2many/ by @AbelSquidHead
So the question is: what other solution could be found in order to have a local k8s cluster, with the Docker daemon in VirtualBox and without all the path issues that we face with the Minikube installation?
The technologies used
Here is the list of the technologies used, which need to be fully integrated.
And while I will provide the setup for the installation of the k8s “environment”, I will link the documentation for all the points below in case you feel like doing it yourself:
- OS: Windows 10 (Insider Fast ring — v17713 at the time of writing)
- Shell: WSLinux Corsair (Ubuntu 18.04 based custom distro), installed on Windows 10
- Virtualization technology: VirtualBox 5.x, installed on Windows 10
- Virtualization orchestration: Vagrant 2.x, installed on WSLinux
- Vagrant Box: bento/ubuntu-18.04, installed on VirtualBox with Vagrant
- Container platform: Docker 18.x-ce, installed on the VirtualBox Virtual Machine from the Vagrantfile
- Container orchestration: Kubernetes 1.11, installed on the VirtualBox Virtual Machine from the Vagrantfile
In addition, kubectl also needs to be installed on WSLinux.
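A possible way to install kubectl inside WSLinux is sketched below, pinned to the cluster version used in this post. The DO_INSTALL guard is my own addition so the snippet can be dry-run safely; set DO_INSTALL=1 to actually download:

```shell
# Sketch: install kubectl on WSLinux, pinned to the k8s version of this post.
# The DO_INSTALL guard is an assumption/convenience, not part of the official docs.
KUBECTL_VERSION="v1.11.0"
KUBECTL_URL="https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
if [ "${DO_INSTALL:-0}" = "1" ]; then
  curl -LO "$KUBECTL_URL"          # download the static binary
  chmod +x kubectl                 # make it executable
  sudo mv kubectl /usr/local/bin/kubectl
else
  echo "dry run, would download: $KUBECTL_URL"
fi
```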
Building the environment
Now that we have the list of all the pieces needed to build our local cluster, here are the extra steps to make them fit together and have a nice and shiny dev environment.
I will start from the Vagrant Box step listed above and will assume that the first 4 points are done following the documentation.
Making our $HOME cozy
Before the config files are created, here is a small tweak that I now do all the time: link your WSLinux $HOME
to your Windows one and copy over the content of the current home (or at least the dot files):
$ cd /home
$ mv <username> <username>.bak
$ ln -s /mnt/c/Users/<Windows username> <username>
$ cp -r <username>.bak/.* <username>/
This small tweak will allow us to modify the files in our home without any fear of breaking the Linux permissions (and @richturn_ms will not unleash the Dragons).
And for more information about this tweak, please read the excellent blog post from @bketelsen: https://brianketelsen.com/going-overboard-with-wsl-metadata/
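To see what the move/symlink/copy sequence does before touching a real home directory, it can be rehearsed against scratch directories; in this sketch, winhome stands in for /mnt/c/Users/&lt;Windows username&gt; and linuxhome for /home/&lt;username&gt;:

```shell
# Rehearsal of the $HOME tweak against throwaway directories (stand-in paths)
scratch=$(mktemp -d)
mkdir -p "$scratch/winhome" "$scratch/linuxhome"
echo 'alias ll="ls -la"' > "$scratch/linuxhome/.bash_aliases"
mv "$scratch/linuxhome" "$scratch/linuxhome.bak"        # <username> -> <username>.bak
ln -s "$scratch/winhome" "$scratch/linuxhome"           # symlink to the "Windows" home
cp -r "$scratch/linuxhome.bak/." "$scratch/linuxhome/"  # copy the dot files over
ls -la "$scratch/linuxhome/"
```

Running it shows the symlinked home now contains the copied dot file, which is exactly the end state of the real tweak.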
The builder: Vagrantfile
This file is actually what will help us create the Virtual Machine with the target OS and install Docker.
I followed Liz’s blog post thoroughly, and in order to make it work for Windows, here is the Vagrantfile with the changes highlighted:
$ mkdir -p ~/vagrant/kubebox
$ cd ~/vagrant/kubebox
$ vagrant init
$ vi Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# This script to install Kubernetes will get executed after we have provisioned the box
$script = <<-SCRIPT
# The user is vagrant, so switching to root
sudo -s
# Install kubernetes
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# kubelet requires swap off
swapoff -a
# keep swap off after reboot by commenting the line in FSTAB
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# sed -i '/ExecStart=/a Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sed -i '0,/ExecStart=/s//Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"\n&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Get the IP address that VirtualBox has given this VM
IPADDR=`ifconfig eth1 | grep netmask | awk '{print $2}'| cut -f2 -d:`
echo This VM has IP address $IPADDR
# Set up Kubernetes
NODENAME=$(hostname -s)
kubeadm init --apiserver-cert-extra-sans=$IPADDR --node-name $NODENAME
# Set up admin creds for the vagrant user
echo Copying credentials to /home/vagrant...
sudo --user=vagrant mkdir -p /home/vagrant/.kube
cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config
SCRIPT

Vagrant.configure("2") do |config|
# Specify your hostname if you like
config.vm.hostname = "kubebox"
config.vm.box = "bento/ubuntu-18.04"
config.vm.network "private_network", type: "dhcp"
# Setting a static IP for reaching the cluster from WSLinux
config.vm.network "private_network", ip: "11.8.19.5"
config.vm.provision "docker"
config.vm.provision "shell", inline: $script
end
Note: if you have VSCode installed on Windows, you can use it instead of vi
by typing code Vagrantfile
Thanks to the $HOME tweak, we can leverage the best of both worlds.
Now that the file is created, the only remaining step is to actually create the Virtual Machine by typing vagrant up
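A side note on the provisioning script above: it extracts the VM's IP from ifconfig, which newer Ubuntu images may not ship by default. A hedged alternative using `ip` can be sketched, checked here against a canned sample of its output since no VM is involved:

```shell
# Parse the IPv4 address of eth1 from `ip -4 addr show eth1` style output.
# sample_output is a canned line standing in for the live command inside the VM.
sample_output="    inet 11.8.19.5/24 brd 11.8.19.255 scope global eth1"
IPADDR=$(echo "$sample_output" | awk '/inet /{print $2}' | cut -d/ -f1)
echo "This VM has IP address $IPADDR"
# Inside the VM you would instead run:
# IPADDR=$(ip -4 addr show eth1 | awk '/inet /{print $2}' | cut -d/ -f1)
```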
Completing the setup
Now that you have a VM with Docker and k8s installed, it’s time to complete the setup, still based on Liz’s blog:
- Copy the config file from the VM into WSLinux. You can either connect to the VM with vagrant ssh and then cat ~/.kube/config, or, as we set up a static IP in the Vagrantfile, run:
$ mkdir ~/.kube && scp vagrant@11.8.19.5:~/.kube/config ~/.kube/kubebox.config
- Edit the config file in WSLinux to point at the VM static IP:
$ vi ~/.kube/kubebox.config
...
server: https://11.8.19.5:6443
- Add the $KUBECONFIG environment variable in WSLinux:
$ export KUBECONFIG=$HOME/.kube/kubebox.config
(add this to your .bashrc to make it permanent)
- Test your k8s config and ensure kubectl connects from WSLinux
- Install a POD network
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
- Allow pods to run on the k8s master node
$ kubectl taint nodes --all node-role.kubernetes.io/master-
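Regarding the pod network step above: the long URL is built by base64-encoding the output of `kubectl version` so Weave can serve a manifest matching your cluster version. The construction can be sketched with a stand-in string, no live kubectl needed:

```shell
# Build the Weave Net URL from a kubectl version string; version_info is a
# stand-in for the real `kubectl version` output
version_info='Client Version: v1.11.0'
encoded=$(printf '%s' "$version_info" | base64 | tr -d '\n')
url="https://cloud.weave.works/k8s/net?k8s-version=${encoded}"
echo "$url"
```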
Conclusion
At this point, you can manage your local k8s cluster from WSLinux and, as you could see, no path change was involved.
Thanks to Vagrant, it becomes almost as easy as Minikube to install and, more importantly, it can be reproduced as much as needed (I did at least 5 vagrant destroy
during this blog post).
I really hope this will help you and, again, please do not hesitate to tweet me feedback or, even better, an improvement.
>>> Nunix out <<<
Bonus 1: A beautiful Dashboard
As a first bonus, I will explain how I got the Dashboard working without going too much into the details:
- Config: https://github.com/kubernetes/dashboard
- Access: https://docs.giantswarm.io/guides/install-kubernetes-dashboard/
Steps in a nutshell
- Install the Dashboard
$ kubectl apply -f "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml"
- Create a new account with the Cluster Admin role (not a best practice, as I read, but again, it’s a dev environment)
$ kubectl create serviceaccount cluster-admin-dashboard-sa
$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:cluster-admin-dashboard-sa
- Get the secret from the account; we will need it for login
$ kubectl get secret | grep cluster-admin-dashboard
cluster-admin-dashboard-sa-token-xxxxx ...
$ kubectl describe secret cluster-admin-dashboard-sa-token-xxxxx
...
token: ...
- Start the proxy inside the VM with port forwarding
$ ssh vagrant@11.8.19.5 -L 9090:127.0.0.1:9090 'kubectl proxy --port=9090 --disable-filter=true'
- Login to the portal with your preferred browser on the following URL:
http://127.0.0.1:9090/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
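A small detail on the token step above: if you fetch it with `kubectl get secret -o jsonpath='{.data.token}'` instead of `kubectl describe`, it comes back base64-encoded and must be decoded before pasting it into the login form. The decode step, shown here with a stand-in value instead of a live cluster:

```shell
# The .data fields of a k8s Secret are base64-encoded; demo with a stand-in token
encoded=$(printf 'my-demo-token' | base64)
token=$(printf '%s' "$encoded" | base64 -d)
echo "$token"
# Against the real cluster the equivalent would be (secret name from the step above):
# kubectl get secret cluster-admin-dashboard-sa-token-xxxxx \
#   -o jsonpath='{.data.token}' | base64 -d
```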
Lessons learned
While the installation is quite straightforward, the access to the dashboard was really complicated to achieve.
The reason I create a tunnel instead of reaching the static IP is an error that I could not get around.
Hopefully, someone will find a more “professional” and secure way.
Bonus 2: What is a boat without its Helm?
The k8s ecosystem is also renowned for its tooling, which helps with developing and provisioning applications on the k8s cluster.
One of them is called Helm and acts as a package manager.
Access: https://docs.helm.sh/using_helm/#role-based-access-control
Once again, let me go quickly over the installation steps:
- Install Helm on WSLinux
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
- Confirm that Helm has been correctly installed
$ helm version
- Create an account for Helm with cluster admin permissions
$ vi ~/.kube/helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
- Create the account from the file we just created
$ kubectl create -f helm-rbac.yaml
- Create Helm management Pod
$ helm init --service-account tiller
- Confirm the Pod has been successfully created
$ kubectl get pods --all-namespaces | grep tiller
- Create a cluster role binding for Tiller, otherwise you will get an error when trying to deploy applications with Helm
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
- Finally, try to deploy an application
$ helm repo add brigade https://azure.github.io/brigade
$ helm install brigade/brigade
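As a side note, the RBAC manifest above can also be written non-interactively with a heredoc, which is handy when scripting the whole setup. The content is identical to the file created with vi; the /tmp path is my own choice for this sketch:

```shell
# Alternative to vi: write the same RBAC manifest with a heredoc
cat > /tmp/helm-rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
grep -c 'kind:' /tmp/helm-rbac.yaml   # sanity check: 4 kind: lines expected
```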
Lessons learned
Once more, the RBAC and account management were critical, and I “lost” quite some time trying to understand them rather than just running the first Google answer.
Bonus 3: Do the same with Minikube
While writing this blog post really helped me understand the basic low-level setup for a local single-node k8s cluster, the initial challenge was to have a solution with Minikube.
Here is what I could find and, while a bit hacky, it works with only 1 path change.
As in the blog post, you will need VirtualBox to be already installed:
- Install Minikube cli in WSL and Windows
source: https://kubernetes.io/docs/tasks/tools/install-minikube/
- Install Kubectl cli in WSL and Windows
source: https://kubernetes.io/docs/tasks/tools/install-kubectl/
- Also, as stated in the blog post, I linked my $HOME to my Windows profile directory.
This allows me to have only one set of .kube and .minikube directories:
- Minikube uses the VBoxManage application in order to communicate with VirtualBox and be able to create/manage the Virtual Machines. Create a symlink to the Windows application:
$ ln -s /mnt/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe /usr/local/bin/VBoxManage
- Run
$ minikube start
-> this will end up with an error, but that’s OK as it will still create a config.json file with the Linux paths:
- Backup the file to your home or /tmp and run
$ minikube delete
- Run
$ minikube.exe start
to create the cluster and VM with the Windows client
- In order to have the Linux kubectl client connecting to your k8s cluster, you need to embed the certificates into the $HOME/.kube/config file by running the following commands
source: https://github.com/helm/monocular/blob/master/docs/development.md
$ kubectl config set-credentials minikube \
--client-certificate=$HOME/.minikube/apiserver.crt \
--client-key=$HOME/.minikube/apiserver.key --embed-certs
$ kubectl config set-cluster minikube \
--certificate-authority=$HOME/.minikube/ca.crt --embed-certs
- Now comes the “only” edit of this setup. If we do a diff between the config file we backed up before and the one created with the Windows client, you will notice that 4 values are missing:
- Add the 4 values to the backup file, with the path of the SSHKeyPath translated with the command:
$ wslpath -u c:\\Users\\NND\\.minikube\\machines\\minikube\\id_rsa
- And it’s done: you can use the Linux minikube and kubectl clients to manage your k8s single-node cluster
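For the curious, what `wslpath -u` does to that Windows path can be approximated in pure shell. This is a rough sketch for the C: drive only; the real tool handles many more cases:

```shell
# Translate a C:\ Windows path to its /mnt/c WSL equivalent; rough
# approximation of `wslpath -u` for illustration only
winpath='c:\Users\NND\.minikube\machines\minikube\id_rsa'
unixpath=$(printf '%s' "$winpath" | sed -e 's|\\|/|g' -e 's|^[Cc]:|/mnt/c|')
echo "$unixpath"
```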
Bonus 4: Docker CE for lightning speed setup
Initially I was going to blog only about the Vagrant solution; then the Minikube solution got a breakthrough and, while tweeting, @jldeen (ok, I admit I stressed out) asked if I had tested it with Docker CE.
While I have blogged a lot in the past about having Docker and WSLinux working together, I realized that, to be totally honest with this blog’s title, I should also mention Docker CE.
Again, here are the technical steps to get a Kubernetes environment working and, as you can already imagine from the title of this section: it’s the easiest one to get working!
Prerequisites
If you followed this blog and now want to test this solution, you will need some preparations first:
- The very first thing is to uninstall VirtualBox, as Docker CE requires Hyper-V to work (both for Linux Containers and for LCOW)
- Just in case, “clean” your existing Kubernetes environment by moving (or deleting) the following directories: .kube, .minikube, .docker
- Enable the Windows features Hyper-V and Containers in the Control Panel
- After the required reboot, install Docker CE
source: https://store.docker.com/editions/community/docker-ce-desktop-windows
- Finally, switch to Linux Containers (if not done by default) in order to activate Kubernetes as the default orchestrator
Connection
Once your cluster is installed, we need our tooling in WSLinux to communicate with Docker CE:
- Kubectl cli: if not done already, you can install the kubectl cli by following this howto: https://kubernetes.io/docs/tasks/tools/install-kubectl/
- Interop config: finally, create symlinks for the following directories (if you don’t have the $HOME hack as described above):
$ ln -s /mnt/c/Users/<Windows username>/.docker $HOME/.docker
$ ln -s /mnt/c/Users/<Windows username>/.kube $HOME/.kube
Testing and enjoying your new setup
And voilà! You can now connect to your k8s cluster by running kubectl.
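The last step can be sketched as below. The context name docker-for-desktop was the default for this Docker CE generation; verify yours with `kubectl config get-contexts` first. The guard makes the snippet safe to run on a machine where kubectl is not installed yet:

```shell
# Point kubectl at the Docker CE cluster and list the node; the context name
# "docker-for-desktop" is an assumption based on this Docker CE generation
context="docker-for-desktop"
if command -v kubectl >/dev/null 2>&1; then
  kubectl config use-context "$context"
  kubectl get nodes
else
  echo "kubectl not found; see the Kubectl cli step above"
fi
```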