Run Kubernetes Locally with k3d (& Helm)

Tawsif Aqib
5 min read · Feb 25, 2023

We can run a Kubernetes cluster on our local machine for development using Docker, k3d and Helm. Here is how —

Note:
This is an updated version of the previously posted article —
Local Kubernetes with kind, Helm & Dashboard

Objective

Create a Kubernetes cluster in Docker and use Helm to run Kubernetes Dashboard application.

Prerequisite

Docker, Terminal (I use Warp).

TL;DR

Scroll down to the TL;DR section.

k3d

k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in Docker.

k3s is fully compliant with "full" Kubernetes, but has many optional and legacy features removed. It —

  • Starts and deploys super fast (< 5 seconds).
  • Ships a built-in local registry that is explicitly designed to work well with Tilt (I will explain what Tilt does later).
  • Is not as widely used as kind, but still popular enough.
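As a quick sketch of the built-in registry feature, k3d can create a registry alongside a cluster in one step. This uses k3d v5 syntax; the cluster name demo and the registry name local-registry below are arbitrary choices for illustration, not names used elsewhere in this article.

```shell
# Create a cluster and a companion local image registry in one step
# (k3d v5 syntax; "local-registry" is a name we pick ourselves).
k3d cluster create demo --registry-create local-registry

# The registry runs as a Docker container; `docker ps` shows the host
# port it is exposed on, so locally built images can be pushed to it
# and pulled from inside the cluster.
```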

Installation

Check out the k3d official installation guide for instructions. But if you are using macOS and Homebrew, you can run the following command —

brew install k3d

After the installation is done, it is time to create our first Kubernetes node with —

k3d cluster create local-k8s

The cluster will be ready to use when we see the following message —

INFO[0041] Cluster 'local-k8s' created successfully!
INFO[0041] You can now use it like this:
kubectl cluster-info

Tips:
To delete a cluster — k3d cluster delete local-k8s
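Beyond delete, k3d can also stop and restart a cluster without losing its state; a few lifecycle subcommands worth knowing —

```shell
# List all k3d clusters on this machine
k3d cluster list

# Stop the cluster (containers stop, state is kept)
k3d cluster stop local-k8s

# Start it again later
k3d cluster start local-k8s
```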

Check the newly created node with —

kubectl get nodes

NAME                     STATUS   ROLES                  AGE   VERSION
k3d-local-k8s-server-0   Ready    control-plane,master   12s   v1.25.6+k3s1

Helm

Helm is a package manager for Kubernetes. We will use it to deploy our Kubernetes Dashboard. Follow the installation guide in the official documentation to install Helm.

For macOS and Homebrew users, we can run —

brew install helm

Cluster Admin Access

Before we deploy our Kubernetes Dashboard, we will create a service account and grant it admin access. Admin permission is required for that account to fetch information related to nodes, workloads, services, etc.

We do that by creating a YAML file admin-user.yml with the following contents —

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: default

Tips:
A service account is bound to its namespace. Kubernetes Dashboard must be deployed to the same namespace to use the admin user's access token for logging into the dashboard.

We apply the changes with —

kubectl -n default apply -f admin-user.yml

If the changes are applied successfully, we will see —

serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
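To double-check that the account and binding landed where we expect, we can list them back —

```shell
# The service account should exist in the default namespace
kubectl -n default get serviceaccount admin-user

# The cluster role binding is cluster-scoped, so no namespace flag
kubectl get clusterrolebinding admin-user
```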

Access Control Reference on Kubernetes Dashboard

Kubernetes Dashboard

Helm depends on chart files to deploy the necessary Kubernetes resources like deployments, services, etc. These charts are hosted in repositories.

The Kubernetes Dashboard Helm chart is located at https://kubernetes.github.io/dashboard.
Let's add that repository with —

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard

We should see the following message —

“kubernetes-dashboard” has been added to your repositories

Tips:
Find more packages/charts at —
https://artifacthub.io. Try searching by kubernetes-dashboard.

We now deploy a Kubernetes Dashboard application by creating a Helm release with the following command —

helm install dashboard kubernetes-dashboard/kubernetes-dashboard

Tips:
If we want to deploy the application under a specific namespace, we can add the -n <namespace> --create-namespace flags.
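For example, a namespaced install would look like this. The namespace name dashboard-ns is an arbitrary choice for illustration; note that, per the earlier tip, the admin-user service account would then also need to live in that namespace for its token to work with the dashboard.

```shell
# Hypothetical namespaced install; "dashboard-ns" is an arbitrary name.
helm install dashboard kubernetes-dashboard/kubernetes-dashboard \
  -n dashboard-ns --create-namespace
```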

We should see the following message once the deployment is finished —

NAME: dashboard
LAST DEPLOYED: Sat Feb 25 00:17:26 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
Get the Kubernetes Dashboard URL by running:
export POD_NAME=$(kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dashboard" -o jsonpath="{.items[0].metadata.name}")
echo https://127.0.0.1:8443/
kubectl -n default port-forward $POD_NAME 8443:8443

Every time we deploy a Kubernetes application with Helm, we will see similar instructions below "Get the Kubernetes Dashboard URL by running:". This block of commands port-forwards the application to a host (local machine) port.

Copy and paste them in the terminal —

export POD_NAME=$(kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dashboard" -o jsonpath="{.items[0].metadata.name}")
echo https://127.0.0.1:8443/
kubectl -n default port-forward $POD_NAME 8443:8443

We will see the following message if the port forward is successful —

https://127.0.0.1:8443/
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443

If we visit https://127.0.0.1:8443, we will be greeted by the following page —

We use the “Token” method to sign in to the dashboard.
We can collect the token using —

kubectl -n default create token admin-user

This will generate a JWT token with an expiry of 1 hour. Copy the token, paste it into the UI, and hit "Sign in".
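If the one-hour default is too short, kubectl can mint longer-lived tokens; the --duration flag is available in kubectl v1.24+, and the API server may cap the maximum lifetime it grants.

```shell
# Request a token valid for 24 hours instead of the default 1 hour
kubectl -n default create token admin-user --duration=24h
```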

And finally, we should be able to see the following dashboard page —

Our work is finally done: we have deployed a Kubernetes Dashboard application in k3d with Helm.

Final Tips

I use a zsh alias to automate creating a token, copying it to the clipboard, and starting the port-forward with a single command.

alias k8s-dash='kubectl -n default create token admin-user | pbcopy && \
echo "==> Token copied to clipboard" && \
echo "==> Dashboard available at https://127.0.0.1:8443\n" && \
kubectl -n default port-forward \
$(kubectl get pods -n default \
-l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dashboard" \
-o jsonpath="{.items[0].metadata.name}") \
$(kubectl get pods -n default \
-l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dashboard" \
-o jsonpath="{.items[0].spec.containers[0].ports[0].containerPort}"):8443'

The output of the command (alias) is —

k8s-dash

==> Token copied to clipboard
==> Dashboard available at https://127.0.0.1:8443

Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443

TL;DR

The list of shell commands for creating a cluster in k3d and running the Kubernetes Dashboard with Helm.
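Collected from the steps above, assuming macOS with Homebrew and the admin-user.yml file created in the "Cluster Admin Access" section —

```shell
# Install tooling
brew install k3d helm

# Create the cluster
k3d cluster create local-k8s

# Create the admin service account and its cluster role binding
kubectl -n default apply -f admin-user.yml

# Add the chart repository and deploy the dashboard
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard
helm install dashboard kubernetes-dashboard/kubernetes-dashboard

# Port-forward the dashboard to https://127.0.0.1:8443
export POD_NAME=$(kubectl get pods -n default \
  -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=dashboard" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl -n default port-forward $POD_NAME 8443:8443

# Generate a login token (run in a second terminal)
kubectl -n default create token admin-user
```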

Next up

One of the main reasons I moved to k3d from kind is to use Tilt to automatically build images and push them into Kubernetes for re-deployment.

A continuation of this article covering Tilt will be published soon.

Thanks for reading 🙏


Tawsif Aqib

Lead Engineer from Bangladesh, living in the Netherlands, with a decade of experience in software development, architecture and engineering leadership.