K3d + Vagrant: The Fastest Way to Set Up a Local Multi-Node Kubernetes Cluster on Windows Host

This post originally appeared on my blog at: https://www.trendfx-dojo.com/tech/kubernetes/k3d-kubernetes-local-cluster/

Local clusters are most useful for developers who want quick edit-test-deploy-debug cycles on their machine before committing their changes. They are also handy for DevOps engineers and operators who want to play with Kubernetes locally without worrying about breaking a shared environment.

While Kubernetes is typically deployed on Linux in production, many developers work on Windows PCs or Macs.

Today, we will roll up our sleeves and build a local multi-node Kubernetes cluster using k3d and Vagrant.

Prerequisites

There are some prerequisites to install before you can create the cluster itself.

These include VirtualBox, Vagrant, and Visual Studio Code. Install the latest version of each available at the time of writing:

  • VirtualBox
  • Vagrant
  • Visual Studio Code

I prefer to create my work environment inside a Vagrant VM so that I can isolate the dependencies and configuration between projects. Furthermore, sharing the environment and configuration with collaborators is as simple as sharing a single Vagrantfile.

Step 1: Booting a Vagrant Ubuntu 18.04 VM

Instead of building out a complete operating system image and copying it, we use a Vagrantfile to specify the configuration and provision the VM. The usual disclaimer is in effect – make sure you understand it before copying and using it.

We start by creating a directory named k3d-cluster to store our Vagrantfile. A sketch of the file is shown below.
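
The full Vagrantfile isn't reproduced here; the sketch below shows the general shape of one that works for this setup. The box name, resource sizes, install commands, and URLs are illustrative assumptions rather than an exact copy, so adjust them to your environment and check the official install docs for Docker, k3d, and kubectl.

Vagrant.configure("2") do |config|
  # Ubuntu 18.04 base box
  config.vm.box = "ubuntu/bionic64"
  config.vm.hostname = "k3d-cluster"

  # Forward the ports we will use later:
  # 8502 -> the k3d load balancer, 8503 -> the Kubernetes Dashboard proxy
  config.vm.network "forwarded_port", guest: 8502, host: 8502
  config.vm.network "forwarded_port", guest: 8503, host: 8503

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus = 2
  end

  # Provisioning script: install Docker, k3d, and kubectl
  config.vm.provision "shell", inline: <<-SHELL
    # Docker
    curl -fsSL https://get.docker.com | sh
    usermod -aG docker vagrant

    # k3d (check the k3d docs for the current install command)
    curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

    # kubectl (pinned to the k3s version used later; adjust as needed)
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.8/bin/linux/amd64/kubectl
    install -m 0755 kubectl /usr/local/bin/kubectl
  SHELL
end

Then launch the new VM with the following command: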

$ vagrant up

On the first vagrant up, which creates the VM, our provisioning shell script runs and installs Docker, k3d, and kubectl for us.

After the VM is created, SSH into the machine with vagrant ssh and explore our work environment:

$ vagrant ssh

Let's verify that Docker, k3d, and kubectl are installed properly:

vagrant@k3d-cluster:~$ docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:02:36 2020
OS/Arch: linux/amd64
Experimental: false
vagrant@k3d-cluster:~$ k3d --version
k3d version v3.0.2
k3s version v1.18.8-k3s1 (default)

Step 2: Setting up VS Code Remote SSH with Vagrant VM

Although Vagrant allows us to sync a folder from the host machine to the VM, it's better to use the VS Code Remote SSH extension for remote development.

In my experience, synced folders sometimes produce confusing errors that take time to track down and fix. For example, you can't create a symbolic link inside a synced folder (a Windows permissions issue).

VS Code Remote SSH Architecture

By default, Vagrant sets up some SSH config so that it’s super easy to get into the VM. All you need to do is cd to the directory holding the Vagrantfile, and then simply run vagrant ssh.

This is great, but it obscures the parameters being used under the covers to open that SSH connection — and you’ll need that information to configure Remote-SSH.
It turns out it’s straightforward! Let’s run the following command:

$ vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2200
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile D:/Workspace/k3d-cluster/.vagrant/machines/default/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL

Okay, we have the SSH config in hand. Let's move to the next part: setting up Remote SSH.

■ Copy the vagrant ssh-config output into k3d-cluster/ssh-config, then change the host name from Host default to Host k3d-cluster, as shown below.
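
After the edit, k3d-cluster/ssh-config should look like the following sketch; the values come from the vagrant ssh-config output above, and your port and key path may differ:

Host k3d-cluster
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile D:/Workspace/k3d-cluster/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL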

■ Install the Remote SSH Extension in VS Code (Figure 1)​

(Figure 1) Install the Remote SSH Extension in VS Code

■ Press Ctrl+Shift+P, search for Remote-SSH: Open Configuration File, then change the SSH config path. In my case, it's D:\Workspace\k3d-cluster\ssh-config (Figure 2).

(Figure 2) Change SSH Config File path to your ssh-config File

■ Select the Remote Explorer icon in the left sidebar, right-click the host k3d-cluster, and select Connect to Host in New Window. Remember to select Linux as the OS when prompted (Figure 3).

Change the Hostname from default to k3d-cluster

■ After the new VS Code window opens, it will take a little while to download the VS Code server onto the VM (~/.vscode-server). Click Open Folder to select a remote folder. (Figure 4)

Remote SSH Connected. Select your remote folder.

■ At this point, we have not created the remote folder yet. Let’s create a folder named demo.

Open the integrated terminal in VS Code and create the folder. (It connects to the Vagrant VM automatically.)

■ Select the new demo folder. (Figure 6)​

Select the created folder

■ Create some Kubernetes configuration files for later use. (Figure 7)

Prepare Kubernetes Configuration File for later use.

You can find more detail in the official guide. Now you can start editing remote files!

Step 3: Setting Up a Local k3d Multi-Node Cluster

Rancher created k3s, which is a lightweight Kubernetes distribution. The basic idea is to remove features and capabilities that most people don’t need, such as:

  • Non-default features
  • Legacy features
  • Alpha features
  • In-tree storage drivers
  • In-tree cloud providers

The Rancher team did a great job reducing the binary to less than 40 MB, and it needs only 512 MB of memory.

Unlike Minikube, k3s is designed for production. The primary use case is for edge computing, IoT, and CI systems. It is optimized for ARM devices.


So, what is k3d? k3d takes all the goodness that is k3s and packages it in Docker. It provides a simple CLI to create, run, and delete a fully compliant Kubernetes cluster with 0 to n worker nodes.

Step 3–1: Start a K3d Cluster

Now, let's create a cluster with three worker nodes inside our Vagrant VM. That takes only about 5–10 seconds (a little longer on the first run, while images are pulled):

vagrant@k3d-cluster:~/demo$ k3d cluster create --agents 3 --servers 1 -p 8502:80@loadbalancer mycluster
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0003] Pulling image 'docker.io/rancher/k3s:v1.18.8-k3s1'
INFO[0013] Creating node 'k3d-mycluster-agent-0'
INFO[0015] Creating node 'k3d-mycluster-agent-1'
INFO[0017] Creating node 'k3d-mycluster-agent-2'
INFO[0020] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0023] Pulling image 'docker.io/rancher/k3d-proxy:v3.0.2'
INFO[0047] Cluster 'mycluster' created successfully!
INFO[0047] You can now use it like this:
kubectl cluster-info

-p 8502:80@loadbalancer: This port-mapping construct maps port 8502 on the host to port 80 on the container that matches the nodefilter loadbalancer. The loadbalancer nodefilter matches only the serverlb that's deployed in front of a cluster's server nodes.

We will cover many kubectl commands throughout the tutorial. First, let's verify the cluster works as expected using cluster-info:

vagrant@k3d-cluster:~/demo$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:40769
CoreDNS is running at https://0.0.0.0:40769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:40769/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can see that the master is running properly. If you want a much more detailed view of all the objects in the cluster as JSON, type kubectl cluster-info dump.

Next, let's check the cluster list using k3d cluster list and the nodes in the cluster using the kubectl get nodes command:

vagrant@k3d-cluster:~/demo$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-mycluster-agent-0 Ready <none> 5m33s v1.18.8+k3s1
k3d-mycluster-agent-1 Ready <none> 5m33s v1.18.8+k3s1
k3d-mycluster-agent-2 Ready <none> 5m32s v1.18.8+k3s1
k3d-mycluster-server-0 Ready master 5m28s v1.18.8+k3s1
vagrant@k3d-cluster:~/demo$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 1/1 3/3 true

So, we have one master node called k3d-mycluster-server-0 and 3 worker nodes. Looks good!

Step 3–2: Deploy Pods

Okay, we already have an empty multi-node cluster up and running. It is time to deploy some pods.

The following is our Deployment YAML file. It creates a ReplicaSet to bring up three echoserver Pods (a sketch of the file appears after this list):

  • .metadata.name: A Deployment named echoserver
  • .spec.replicas: The Deployment creates three replicated Pods
  • .spec.selector.matchLabels: The Deployment manages Pods with the label app: echoserver
  • .spec.template.metadata.labels: The Pods are labeled app: echoserver
  • .spec.template.spec.containers: The container that runs in our Pods
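
A sketch of deployment.yaml built from the fields above. The echoserver image and tag are assumptions (any image listening on port 8080 would work); the rest mirrors the list:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
        - name: echoserver
          # illustrative image; pick any echoserver image that listens on port 8080
          image: k8s.gcr.io/echoserver:1.4
          ports:
            - containerPort: 8080

Apply the Deployment:
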
vagrant@k3d-cluster:~/demo$ kubectl apply -f deployment.yaml 
deployment.apps/echoserver created

The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has three Pods, each of which runs the echoserver application.

Let's check out the Deployment, ReplicaSet, and Pods that were created:

vagrant@k3d-cluster:~/demo$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
echoserver 3/3 3 3 6m13s
vagrant@k3d-cluster:~/demo$ kubectl get rs
NAME DESIRED CURRENT READY AGE
echoserver-68666dcc9f 3 3 3 6m33s
vagrant@k3d-cluster:~$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
echoserver-68666dcc9f-55lqr 1/1 Running 0 7m27s app=echoserver,pod-template-hash=68666dcc9f
echoserver-68666dcc9f-rgtjm 1/1 Running 0 7m27s app=echoserver,pod-template-hash=68666dcc9f
echoserver-68666dcc9f-fpmt9 1/1 Running 0 7m27s app=echoserver,pod-template-hash=68666dcc9f

Step 3–3: Expose our Pods​

Next, we will create a Service to expose our Pods. The following is our Service YAML file (a sketch appears after this list):

  • .metadata.name: A Service named echoserver
  • .spec.selector: The Service targets Pods carrying the label app: echoserver
  • .spec.ports.port: The Service is exposed on port 80
  • .spec.ports.targetPort: The Service targets TCP port 8080 on the backend Pods
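
A sketch of service.yaml matching the fields above; the type is left at the default ClusterIP, which is what the output below shows:

apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080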

Create the Service by running the following command:

vagrant@k3d-cluster:~/demo$ kubectl apply -f service.yaml 
service/echoserver created
vagrant@k3d-cluster:~/demo$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 28m
echoserver ClusterIP 10.43.178.168 <none> 80/TCP 38s
vagrant@k3d-cluster:~$ kubectl describe service echoserver
Name: echoserver
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=echoserver
Type: ClusterIP
IP: 10.43.178.168
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.42.0.6:8080,10.42.1.6:8080,10.42.3.7:8080
Session Affinity: None
Events: <none>

Kubernetes assigns our Service a Cluster-IP address, 10.43.178.168, which is used by the Service proxies.

vagrant@k3d-cluster:~/demo$ kubectl get ep echoserver
NAME ENDPOINTS AGE
echoserver 10.42.0.4:8080,10.42.1.5:8080,10.42.3.4:8080 67s

Next, create an Ingress object (k3d deploys Traefik as the default ingress controller). A sketch of ingress.yaml follows.
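
This sketch of ingress.yaml is reconstructed from the kubectl describe output below; the extensions/v1beta1 apiVersion matches the ingress.extensions object that gets created:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: echoserver
              servicePort: 80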

vagrant@k3d-cluster:~/demo$ kubectl apply -f ingress.yaml 
ingress.extensions/nginx created

Let's check the Ingress:

vagrant@k3d-cluster:~/demo$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> * 172.18.0.5 80 45
vagrant@k3d-cluster:~/demo$ kubectl describe ingress nginx
Name: nginx
Namespace: default
Address: 172.18.0.5
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/ echoserver:80 (10.42.0.4:8080,10.42.1.5:8080,10.42.3.4:8080)
Annotations: ingress.kubernetes.io/ssl-redirect: false
Events: <none>

Finally, you can curl it via localhost:8502

vagrant@k3d-cluster:~/demo$ curl localhost:8502
CLIENT VALUES:
client_address=('10.42.1.3', 39638) (10.42.1.3)
command=GET
path=/
real path=/
query=
request_version=HTTP/1.1
SERVER VALUES:
server_version=BaseHTTP/0.6
sys_version=Python/3.5.0
protocol_version=HTTP/1.0
HEADERS RECEIVED:
Accept=*/*
Accept-Encoding=gzip
Host=localhost:8502
User-Agent=curl/7.58.0
X-Forwarded-For=10.42.1.4
X-Forwarded-Host=localhost:8502
X-Forwarded-Port=8502
X-Forwarded-Proto=http
X-Forwarded-Server=traefik-758cd5fc85-qfm5d
X-Real-Ip=10.42.1.4

Now we can access the echoserver from Google Chrome on our Windows host using the same URL.

Access Exposed Service From Your Windows Host (Google Chrome)

Step 4: Deploying the Kubernetes Dashboard UI

Kubernetes has a friendly web interface, which is deployed as a service in a pod. The Dashboard is well-designed and provides a high-level overview of your cluster. It also lets you drill down into individual resources, view logs, edit resource files, and more. It is the perfect weapon when you want to check out your cluster manually.

Check out the official Kubernetes Dashboard documentation for details.

Okay, let's deploy and configure the Kubernetes Dashboard on k3d.

Because the Dashboard UI is not deployed by default, we need to run the following command:

vagrant@k3d-cluster:~/demo$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Next, we need to create a new user using the Service Account mechanism of Kubernetes, grant this user admin permissions, and log in to the Dashboard using a bearer token tied to this user.

Copy both of the following snippets: service_account.yaml and clusterole_binding.yml.

  • .metadata.name: A ServiceAccount named admin
  • .metadata.namespace: The ServiceAccount will be created in the kubernetes-dashboard namespace
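
A sketch of service_account.yaml based on the two fields above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard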

Deploy the ServiceAccount configuration:

vagrant@k3d-cluster:~/demo$ kubectl apply -f service_account.yaml 
serviceaccount/admin created
vagrant@k3d-cluster:~/demo$ kubectl get serviceaccount --namespace kubernetes-dashboard
NAME SECRETS AGE
kubernetes-dashboard 1 7m
default 1 7m
admin 1 9m

Next, the ClusterRoleBinding described in clusterole_binding.yml:

  • .metadata.name: A ClusterRoleBinding named admin
  • .roleRef.name: Bind to the cluster-admin ClusterRole (which already exists in the cluster)
  • .subjects[].name: Bind the ClusterRole to our ServiceAccount admin
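
A sketch of clusterole_binding.yml based on the fields above:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin
    namespace: kubernetes-dashboard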

Check that the cluster-admin ClusterRole exists:

vagrant@k3d-cluster:~/demo$ kubectl get clusterrole | grep cluster-admin
cluster-admin 2020-09-23T01:21:02Z

Deploy the ClusterRoleBinding configuration:

vagrant@k3d-cluster:~/demo$ kubectl apply -f clusterole_binding.yml 
clusterrolebinding.rbac.authorization.k8s.io/admin created
vagrant@k3d-cluster:~/demo$ kubectl get clusterrolebinding | grep "cluster-admin"
cluster-admin ClusterRole/cluster-admin 10h
helm-kube-system-traefik ClusterRole/cluster-admin 10h
admin ClusterRole/cluster-admin 7m

Now we need to get the bearer token of the admin ServiceAccount to log in to the Dashboard. Execute the following command and copy the token:

vagrant@k3d-cluster:~/demo$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')
Name: admin-token-td77x
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: dfc12ee0-2255-44c6-862a-aae1a6f36815
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 526 bytes
namespace: 20 bytes
token: <YOUR_TOKEN>

We also need to run a proxy to the Kubernetes API server so we can access the Dashboard from our Windows host. (We're already forwarding port 8503 for the Dashboard in the Vagrantfile configuration.)

# Forwarded port mapping: 8503 -> 8503 for the Kubernetes Dashboard
config.vm.network "forwarded_port", guest: 8503, host: 8503
$ kubectl proxy --address='0.0.0.0' --port=8503 &
Starting to serve on [::]:8503

Then kubectl will make the Dashboard available at http://localhost:8503/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.

Copy and Paste the Bearer Token to Login Kubernetes Dashboard

You are now logged in as an admin!

Successfully logged in Kubernetes Dashboard

Congratulations!

You just created a local multi-node Kubernetes cluster, deployed a service, and exposed it to the world using k3d and Vagrant.

Summary

Today, we created a local multi-node Kubernetes cluster on Windows inside a Vagrant VM with k3d.
We also explored it using kubectl commands, deployed a service, exposed it, and set up the Kubernetes Dashboard UI.
If you are working in an environment with a tight resource pool or need a quick startup, k3s is undoubtedly a tool you should consider.
Please leave a message if you have any questions.

Written by

I'm currently a Software Engineer at Geniee Inc, where I'm mainly responsible for the SSP Platform's maintenance and development as Team Leader.
