Local Kubernetes Development
In this blog, I want to help people develop with Kubernetes locally. You should be able to go from nothing to a fully running application in a couple of minutes.
I have been using Kubernetes for around 6 months and it is only recently that I have had a need to stop using Docker and docker-compose in order to test and develop my applications.
Unlike other tutorials, which build an unrealistic nginx app, we are going to build the most fabulous Flask application you have ever seen!
We will use the amazing k3d tool and standard helm and kubectl. Other solutions do exist but I found these to be the quickest way to get cracking.
Before I get into the heart of the post, please feel free to comment with questions; I might be able to help.
k3d Comparison
I have compared several tools for spinning up a local instance of Kubernetes, ordered from least favourite to favourite.
Minikube spins up a virtual machine in order to run Kubernetes, so it uses a lot of resources. It was an early solution and this shows.
k3s is a lightweight Kubernetes distribution built by Rancher. You can run it locally on a Linux system and it is lightning quick. Some things you have to configure yourself, but k3d takes care of those for you (see below).
microk8s is made by Canonical and looks really good. There are a lot of features you can install, such as DNS and a dashboard. It requires a lot of setup compared to k3d, though, and I couldn’t get the permissions working as I would have liked. I am sure I will re-review it when it matures some more.
k3d is a wrapper around k3s and is now owned by Rancher. It solves a lot of the common setup problems associated with k3s and takes very little effort to run, hence I really like it. Unlike the other solutions, it uses containers only.
Setup
k3d setup
It is super easy to install this tool. You will need to remove minikube, k3s and microk8s first.
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
Now start it up with the following
k3d create --enable-registry --api-port 6550 -p 8080:80 --workers 2 -v /home/user/Projects:/project
What this does:
create: starts the cluster and its server node, which is the entry point for all k8s things when you use kubectl.
--workers 2: adds 2 workers. If you want more, go for it! This is where your applications will run. I suggest more than 1 so you can better simulate your actual environment. You can also choose none.
--api-port 6550: override this if you have a port clash.
--enable-registry: starts a local Docker registry which is already hooked up to the k8s instance.
-p 8080:80: publishes the ingress controller on port 8080. Change this to your desired port number, or expose more ports if you want to use an external ingress like a local Nginx instance.
-v /home/user/Projects:/project: sets up a volume. In my case ~/Projects is where all my code lives. It is mounted on all the workers so you can do local development.
Now you have the cluster working, you need to set up kubectl so it will use it. Super-crazy-easy, just run
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
You will want to add this to your profile, e.g. .bashrc, .profile, .zshrc, or wherever your shell setup lives. Every time you do a k3d delete followed by a k3d create, you will need to re-run this command.
LOCAL DOCKER REGISTRY: whatever you do, do not forget to configure the registry so you can reference it easily. To do this, add the following to your /etc/hosts file:
127.0.0.1 registry.local
Now you can sit back and enjoy a quick local k8s instance. If you know everything about kubectl and helm you can stop right now! For the rest of us mortals, you still need to install some more things.
kubectl setup
kubectl is the command-line tool you will use to manage your Kubernetes cluster. In spite of the myriad of dashboards and tools, you really need to learn how to use it. I am not going to document how to use it here, but you will pick it up as you use Kubernetes more. What I do need to show you is how to install it.
First, you will need to know which version of kubectl you need. To do this, run k3d version and it will say something like
k3d version v1.7.0
k3s version v1.17.3-k3s.1
In this example, you are running Kubernetes version 1.17.3 and need that version of kubectl.
Then install kubectl with the following
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
To test it run
kubectl cluster-info
If you get a Config not found ... error, run export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')" again.
Now you are good to use Kubernetes. To make life quicker and easier, enable tab completion by adding the following to your profile, e.g. your ~/.bashrc
source <(kubectl completion bash)
You can replace bash with zsh if that is your shell.
helm setup
helm is a tool for managing Kubernetes manifest files and it allows templating. It also helps manage applications which have been deployed. You can think of it as what docker-compose is to Docker, or Vagrant is to virtual machines. It isn’t quite as advanced as tools like Puppet or Ansible, but it can be used instead of, or alongside, them.
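To give a flavour of what templating means, a chart template contains placeholders which helm fills in from a values file. The fragment below is a generic illustration of that idea, not taken from the example chart:
# templates/deployment.yaml (template fragment)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml (supplies the values for the placeholders above)
image:
  repository: registry.local:5000/flask-helm-example
  tag: latest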
You will need helm 3 or above, so remove any other version you might have installed. To install, run the following
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm repo add stable https://kubernetes-charts.storage.googleapis.com
Do you need to run helm repo add ...? Maybe not; I can’t remember, as I was playing with it and accidentally removed everything.
Just like kubectl, you can add bash completion for helm with helm completion bash. Here is the line for your profile: source <(helm completion bash)
Now you are ready to create an application, run it locally and do all the fancy things you wanted to do but were too afraid to ask.
The end is in sight. You have all the tools available and they are all configured for quick development.
Run an Application
Quick note: the example is a Python-based Flask application that uses gunicorn. Don’t panic, you do not need to know Python, Flask, gunicorn or any other tool in order to run the application.
The reason I chose to do a Flask app is that the standard nginx apps are not realistic and do not answer all the questions I had about starting up an application.
I am not going to do a deep dive into charts or Docker in this blog. Whilst this would be a good thing to do, I really want people to be able to spin up an application, make some minor changes and understand how it is hooked up.
First, clone the repo. I have added the path I cloned it into, but you may need to change it depending on where you keep your code and where you set up the volume for your k3d instance; we did k3d create ... -v /home/user/Projects:/project
git clone https://github.com/lukejpreston/flask-helm-example.git ~/Projects/lukejpreston/flask-helm-example
App Code
The app folder is where the application code lives. This is the folder where a dev would be writing the majority of their code. In this case, it is very small and not very interesting.
The Chart
The chart folder is the helm chart, created using the helm create chart command.
There are two files which I will reference: chart/values.yaml and devvalues.yaml. The idea is that you use devvalues.yaml for local development, overriding anything you want to in the chart, whereas values.yaml is used for non-dev environments.
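To make that concrete, here is a minimal sketch of the kind of overrides devvalues.yaml can carry. The exact keys in the repo may differ; local, image and ingress here are illustrative assumptions:
# devvalues.yaml (illustrative sketch, key names are assumptions)
local: true                                            # turns on the dev-only volume and related bits in the templates
image:
  repository: registry.local:5000/flask-helm-example   # pull from the local k3d registry
  tag: latest
ingress:
  path: /flask-helm-example                            # every app shares the single local hostname, so each gets its own path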
chart/templates/flask-helm-example.yaml
I have put all of the application’s Kubernetes resources in a single template file to make it easier to follow in the blog. This is not to say this is the best approach (though I quite like chunking the YAML per application).
Lines 1-46 set up a local volume which links your local app code to the container, so any changes you make will be reflected in the container. This is wrapped in an {{ if ... }} block, and .Values.local is set in devvalues.yaml.
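As an illustration of that pattern, a hostPath volume wrapped in the .Values.local check might look something like the sketch below. This is a hedged sketch rather than the repo’s exact file, and the resource names and path are assumptions based on the k3d create ... -v mount:
{{ if .Values.local }}
# Dev-only volume: exposes the code mounted into the k3d workers under /project
apiVersion: v1
kind: PersistentVolume
metadata:
  name: flask-helm-example-code
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /project/lukejpreston/flask-helm-example/app   # illustrative path under the k3d volume mount
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flask-helm-example-code
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: flask-helm-example-code
  resources:
    requests:
      storage: 1Gi
{{ end }}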
We then have the Service, which exposes the pods created by the Deployment.
The Deployment is largely uninteresting, with the following exceptions:
Line 79 is overridden in the devvalues.yaml file to be your local registry. This assumes you ran k3d create ... --enable-registry and set up your /etc/hosts as described before.
Lines 83-85 turn on reload for gunicorn, so when you update some code the app will automatically reload. A great alternative to this is pm2.
Lines 99-102 and 104-108 set up the volumes for your local code; this is defined in devvalues.yaml.
Line 118 is the ingress path, which is not / in devvalues.yaml. This is because the local ingress uses the same hostname for all your applications. Perhaps this is the case for your real applications too, but typically I have several hostnames for different applications, and hence they can all run on /.
There are also various parts around setting up tls, which are an example of how you might have TLS set up for non-dev environments.
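Pulling those pieces together, the interesting parts of the pod spec and ingress look roughly like the fragment below. This is a hedged sketch under the same assumptions as before (the image, local and ingress value names, plus the gunicorn entry point and port), not a copy of the repo’s template:
# Deployment: the interesting parts of the pod spec (fragment)
containers:
  - name: flask-helm-example
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"   # devvalues.yaml points this at the local registry
    {{- if .Values.local }}
    # --reload makes gunicorn restart its workers whenever the mounted code changes
    args: ["gunicorn", "--reload", "--bind", "0.0.0.0:8000", "main:app"]
    volumeMounts:
      - name: code
        mountPath: /app
    {{- end }}
{{- if .Values.local }}
volumes:
  - name: code
    persistentVolumeClaim:
      claimName: flask-helm-example-code
{{- end }}

# Ingress: the path comes from values, so devvalues.yaml can set it to /flask-helm-example
- path: {{ .Values.ingress.path }}
  backend:
    serviceName: flask-helm-example
    servicePort: 80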
That is pretty much everything in this file worth noting. I have seen several examples using an environment variable instead, but I feel that avoiding this makes the code cleaner; it also means you can use the “dev” behaviour for debugging just by turning it on when deploying.
Running and Developing the App
Here’s hoping that all makes sense to you. The key thing is that you have a devvalues.yaml file which overrides chart/values.yaml for local development.
You need to build and publish the container to your local registry
docker build . -t flask-helm-example:latest --network host
docker tag flask-helm-example:latest registry.local:5000/flask-helm-example
docker push registry.local:5000/flask-helm-example
It shouldn’t take too long to do the above. If you change the Dockerfile, you will need to re-run these lines.
Once that is done run
helm install flask-helm-example -f devvalues.yaml ./chart
Notice that I have provided -f devvalues.yaml; without this, it will not mount the volume for local development and will not set up the ingress as you would have expected.
Then you can go to http://localhost:8080/flask-helm-example and see it working. Try changing the app/main.py file to return a different string; gunicorn should reload, and when you refresh the browser it will show you the new value.
Quick Summary
Amazing! You have an application which you can run locally and in a non-dev environment. Now go spread the word: developing Kubernetes applications is easy! Get set up with just a handful of commands:
# install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# install helm 3
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm repo add stable https://kubernetes-charts.storage.googleapis.com
# install k3d
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
# start k3d
k3d create --enable-registry --api-port 6550 --publish 8080:80 --workers 2 -v /home/user/Projects:/project
# get kubectl to use your k8s instance
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
Then to install and run the application using local values
# build and push docker image
docker build . -t flask-helm-example:latest --network host
docker tag flask-helm-example:latest registry.local:5000/flask-helm-example
docker push registry.local:5000/flask-helm-example
# start the helm chart with local values
helm install flask-helm-example -f devvalues.yaml ./chart
Useful Commands
Here is a cheat sheet of commands you may find useful. There are loads more, but looking through my history I seem to use these a lot:
kubectl get events
kubectl get pods
kubectl logs <pod-name> --follow
kubectl cluster-info dump
kubectl exec -it <pod-name> -- /bin/bash
helm install <name> -f <values>.yaml <chart-location>
helm uninstall <name>
helm status <name>
Questions
As I said at the beginning, feel free to ask questions. Note this blog was written in Feb 2020 and may be outdated by the time you are reading it. I don’t want to claim I am an expert, but if I don’t have the answer I will try to find a solution to any problems you might have; this will likely benefit me as well as you!
Happy coding!