Skaffold and Kaniko: Bringing Kubernetes to Developers
There’s a lot (maybe too much) going on in terms of the developer ecosystem when it comes to Kubernetes (k8s). There is no shortage of tools. Don’t worry, this post isn’t going to compare them all for the 58th time. What I’d like to outline for you here is a rather prescriptive, lightweight, and incredibly flexible way of working with Kubernetes as a developer. So let’s shed some weight.
Why We Should Care
Let me start by saying that without Docker, Kubernetes would not be where it is today. Whatever your opinions of the company, the technology itself has been absolutely instrumental to the success of Kubernetes. We find ourselves, however, at a bit of a crossroads: the technology that Docker popularized (the Linux container) has been almost entirely commoditized. A Linux container is simply an aggregation of several Linux kernel constructs (namespaces, cgroups, a layered filesystem), so for everyday container building and management we suddenly don't need Docker the way we used to. The daemon that Docker presents is, in light of recent container-building tools, just too heavy for most development needs.
This shift tracked Docker's evolution as a company. I noticed it on my own workstation: Docker went from being a cool little utility for working with containers to a fully fledged platform, most of which I didn't need or want given the ubiquity and utility of Kubernetes. The daemon simply got too large.
I recently set out to prune my workflow significantly. I wanted to be able to develop locally, but I also wanted to leverage a deployment environment in Kubernetes to maintain consistency with how the rest of, well, everyone is deploying things. Consistency that translates seamlessly from my local workstation through to production is nothing short of monumental, frankly. And I want in. Two tools get me there:
- Skaffold: A command line utility that allows for iterative local development against a Kubernetes cluster.
- Kaniko: A build mechanism by which container images can be built and pushed to a registry without the use of Docker. This has been recently integrated into Skaffold.
You can integrate these into existing pipelines at whatever level you'd like (alongside your existing CI/CD tooling), but they are not replacements for it.
This toolchain lets me focus on my code. And, oh by the way, I don't have Docker installed on my local workstation, because as a developer I really just want to output a resulting image to deploy, test, and iterate against. I also don't have to maintain a local VM running Kubernetes (or a distribution thereof) unless I have hard requirements for offline development; even then, Skaffold and Kaniko can be run locally if you want the full offline experience. Here, however, I'm using GKE because it fits with how I want to develop quickly and effectively. Let's start by looking at my code tree:
UPDATE: You can find the code at https://gitlab.com/tariqislam/hello-go
The tree is small: a Dockerfile, three Kubernetes manifests, main.go, and skaffold.yaml. A quick rundown:
- Dockerfile: Yep. This might be the single largest lasting legacy of Docker.
- hello-go-ingress.yaml: This is a regular ingress manifest. It’ll cause a Google Cloud Load Balancer (best in the business) to be auto-provisioned and globally distributed.
- hello-go-ss.yaml: This is my stateful set manifest (this is a stateful application written in Go).
- hello-go-svc.yaml: This is my service manifest for my deployment
- main.go: Here’s my app
- skaffold.yaml: This is where I define my local dev “pipeline” of the image I want built, where I want it to go, and what files to include in the build. This is also where I specify that I want to use Kaniko to build the container image instead of Docker.
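For reference, the Dockerfile for a small Go service like this one is typically just a multi-stage build that compiles a static binary and copies it into a minimal base image. Here's a sketch, not the repository's actual file; the Go version, binary name, and base image are all assumptions:

```dockerfile
# Build stage: compile a static binary so the final image needs no libc
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /hello-go .

# Final stage: ship only the binary on a minimal base image
FROM gcr.io/distroless/static
COPY --from=build /hello-go /hello-go
ENTRYPOINT ["/hello-go"]
```

Kaniko understands multi-stage builds, so nothing about the Dockerfile needs to change when the Docker daemon is swapped out of the build path.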
Get Up And Running
In the Skaffold manifest, I define three things, and then I can mostly forget about it:
- The image name (fully qualified to include the destination registry)
- Kaniko as the build mechanism, including the GCS bucket into which my build context is uploaded, as well as the service account secret Kaniko uses to push the image to the registry
- A wildcard pattern to detect the files to include as part of my iterative rollout
Here’s what that looks like:
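Here's a sketch of such a skaffold.yaml (the apiVersion and exact field names shift between Skaffold releases, and the project, bucket, and key path below are placeholders, so check the schema for your version):

```yaml
apiVersion: skaffold/v1beta4        # schema version varies by Skaffold release
kind: Config
build:
  artifacts:
    # Fully qualified image name, including the destination registry
    - image: gcr.io/my-project/hello-go
  kaniko:
    buildContext:
      gcsBucket: my-kaniko-bucket   # bucket the build context is uploaded to
    pullSecret: /path/to/kaniko-secret.json   # service-account key used to push
deploy:
  kubectl:
    manifests:
      - "*.yaml"                    # wildcard pattern for the manifests to roll out
```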
This file is literally all I need to get Skaffold and Kaniko working for me. Everything else is exactly what you would expect in terms of Kubernetes manifests, Dockerfile, and code. If you haven’t already, you can quickly create a bucket with the following:
gsutil mb gs://<bucket name>
Next, you'll need to create a regular service account that Kaniko can use to push the built image to your Google Container Registry. This service account should be given Storage Admin permissions (roles/storage.admin), and you'll need to download the corresponding key for the service account that's referenced in the file above. You can do this with the following commands:
gcloud iam service-accounts create <SA-NAME>
gcloud projects add-iam-policy-binding <PROJECT-ID> --member serviceAccount:<SA-NAME>@<PROJECT-ID>.iam.gserviceaccount.com --role roles/storage.admin
Retrieve the JSON key with the following (then move it to the path specified in your skaffold.yaml file):
gcloud iam service-accounts keys create /download/path/key.json --iam-account <SA-NAME>@<PROJECT-ID>.iam.gserviceaccount.com
At this point I can set it and forget it. Skaffold and Kaniko both move themselves into the background for me to just use seamlessly. Now I just need to provision a Kubernetes environment that, as a developer, I can treat as my own dev workspace without having to worry about multi-tenancy issues:
gcloud container clusters create my-dev-cluster --zone=us-east4-a
Wait a few minutes for the cluster to come up, and then kick off a one-time build and deploy:
skaffold run
This will inject Kaniko into my cluster for me as a temporary Pod, which will then build my container image, push it into the specified registry, and Skaffold will then deploy that image to my cluster using my context from my k8s config. This is a single, one-time build.
If I want to develop iteratively, Skaffold can automatically detect changes in my code, rebuild the image, push it, and redeploy to my own k8s environment. For that we use:
skaffold dev
Skaffold will then watch for changes and stream each subsequent build and deploy to the terminal.
And that's really about it. I have what I need as a developer to iterate without distraction or overhead. If my needs or environments change, Skaffold itself is flexible enough to use different build environments (local Docker, Google Cloud Build, etc.), which can be specified as profiles in a single Skaffold config. For example, if I decide to use Google Cloud Build instead of Kaniko, I could simply add the following snippet to my config file:
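A profile snippet like that can look something like the following (again a sketch, and my-project is a placeholder for your GCP project ID):

```yaml
profiles:
  - name: gcb
    build:
      googleCloudBuild:
        projectId: my-project   # placeholder: GCP project that runs the builds
```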
And run the following:
skaffold run -p gcb
I hope this was helpful in getting you started developing effectively against Kubernetes environments. These projects are still young, and their potential is vast.