Developing on Kubernetes: The Inner & Outer Loop

Fabian Deifuss
Jun 10

This article is geared towards developers with some experience in Kubernetes or general systems administration, although hopefully anyone can take something away from these practices.

Kubernetes (“K8s”) has evolved into the standard way to deploy your services to the cloud. And rightly so! Using a container orchestration engine like K8s provides many niceties, including but not limited to:

  • Scalability: never outgrow your infrastructure
  • Modularity: adapt K8s to your needs without patching upstream source code
  • Self-healing: restart failed containers
  • Simple service discovery
  • Standardisation of infrastructure abstractions
  • Crisp atomicity of deployments

Having evolved into an industry standard, K8s gives teams a common denominator for reasoning about deployment strategies. Even though it did not reinvent general concepts of software architecture and deployment, these methodologies are easier to grasp than ever before thanks to the declarative nature of K8s resources. This standardisation is critical, as it decouples one’s expertise from a company’s potentially legacy infrastructure and allows for more rapid iteration and innovation.

Though K8s has its flaws in terms of complexity and overhead in production, these issues rarely outweigh the benefits it provides. Thus, I will provide a concise overview of the development workflows I am fond of when developing on K8s. This includes:

  • (bootstrapping a project)
  • the inner loop: local development
  • the outer loop: deployment strategies & configuration management

In my humble opinion, these stages are not disconnected but rather cascading. Hence, the transition and interoperability between each stage is critical. If done right, lifecycle management is a breeze. Also, I firmly believe that complexity should only be introduced if there is a specific reason to do so.

Keep it as simple as possible, but no simpler.

All project files can be found on GitHub. Note that these practices are my personal favourites. Feel free to start a discussion if you agree/disagree with something stated.

Prerequisites

To avoid breaking the flow later on, let’s install all the necessary packages upfront. Note that you can find up-to-date installation instructions on each project’s website:

# nix
nix-env -bi docker kompose minikube kubectl skaffold
# brew
brew install --cask docker
brew install docker kompose minikube kubernetes-cli skaffold
# start docker desktop manually
# openSUSE
sudo zypper in docker kompose minikube kubernetes-client
sudo systemctl enable --now docker
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
sudo install skaffold /usr/local/bin/
# debian
sudo apt install docker.io
sudo systemctl enable --now docker
# kompose
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
# minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# skaffold
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
sudo install skaffold /usr/local/bin/
# fedora
sudo dnf install moby-engine kompose
sudo systemctl enable --now docker
# minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# skaffold
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
sudo install skaffold /usr/local/bin/

Bootstrapping a Project

Skip this part entirely if you are comfortable writing K8s resources or already have a working service. This section is meant to ease the transition from a dockerised setup to deploying to a K8s cluster.

TL;DR: think agile — do not get caught up in the various processes. Focus on the core problem to solve. The simplest solution is the correct one.

So you have gathered all the requirements and designed the system’s architecture. Now on to the fun part: coding the actual business logic. But how shall we start out? We could clone a minimal scaffold that matches our designed architecture. We could use tools like helm create to scaffold some baseline K8s resource files, or just write those sweet yaml files by hand. Personally, I dislike introducing K8s in a brand new project for two reasons:

  1. When starting out, you are working locally. There is nothing to deploy and there is nothing to orchestrate just yet. So why introduce K8s at this point?
  2. Software development includes a process of discovery and learning. More often than not you will find yourself adjusting parts of your design during the actual development. Let’s not get caught up in the process and keep things flexible and simple for now.

Furthermore, when starting out and testing your software on your machine, you can save yourself the extra headache of troubleshooting additional intermediaries by eliminating them altogether (looking at you, DNS). Just run your containers the way you are used to. Chances are you are familiar with docker run and docker compose. Personally, I like a minimal docker-compose.yml to kick things off as I really like the clarity of its descriptive format. You do you.

Let’s assume this was your fancy service:

// main.go
package main

import (
    "log"
    "os"
    "time"
)

func main() {
    for {
        log.Println("Hello from service " + os.Getenv("NAME"))
        time.Sleep(time.Second)
    }
}

Which can be containerised using a Dockerfile like this:

# Dockerfile
FROM golang:1.16
COPY main.go .
RUN go build -o myservice main.go
CMD ./myservice

A minimal docker-compose.yml then looks like:

# docker-compose.yml
version: "3"
services:
  myservice: # call this anything you like
    build: .
    environment:
      NAME: "kube"

This is all you need to run docker compose up --build, which will build and run your service. Assuming your service needs a database like postgres, just extend docker-compose.yml:

# docker-compose.yml
version: "3"
services:
  myservice: # call this anything you like
    build: .
    environment:
      NAME: "kube"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: "postgres"

Running docker compose up --build will build your service and start it after the postgres container has started (note that depends_on only controls start order, not whether postgres is actually ready to accept connections). Straightforward, isn’t it?
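If start order alone is not enough, for instance if your service crashes while postgres is still initialising, newer Compose implementations support healthcheck-gated dependencies. A sketch, assuming Docker Compose v2 (the Compose Specification), which honours the long depends_on syntax:

# docker-compose.yml (a sketch; requires the long depends_on syntax
# from the Compose Specification / Docker Compose v2)
services:
  myservice:
    build: .
    environment:
      NAME: "kube"
    depends_on:
      db:
        condition: service_healthy # wait until the healthcheck passes
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: "postgres"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 10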

Once your service is ready to be deployed, assuming you are deploying to K8s, you will finally want to ditch docker compose and switch over to developing against K8s. Luckily, we can convert our docker-compose.yml to its K8s representation, smoothing the transition significantly:

mkdir k8s
kompose convert --out k8s/
rm docker-compose.yml

This will create two Deployment resources within the k8s directory, allowing us to take a look at the inner dev loop.
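The generated myservice manifest looks roughly like this (an abridged sketch; the exact kompose output varies by version and carries additional io.kompose.* labels and annotations):

# k8s/myservice-deployment.yaml (abridged sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: myservice
  template:
    metadata:
      labels:
        io.kompose.service: myservice
    spec:
      containers:
      - name: myservice
        image: myservice
        env:
        - name: NAME
          value: kube

But wait, we need a local K8s cluster first.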

Local K8s Cluster

Choosing a local K8s cluster can be overwhelming, as there are lots of solutions. Generally speaking, you want to get as close to configuration parity with your production cluster as possible. For example, if you deploy to EKS, consider the EKS snap. Minikube is an easy-to-use and extensible high-fidelity local cluster that has never failed to amaze me. Unless you have a specific reason not to use Minikube, you cannot go wrong with it. To fire up a local K8s cluster with docker using Minikube, run minikube start --driver=docker
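Once the cluster is up, a quick sanity check confirms it is ready to use (both are plain kubectl commands, nothing project-specific):

kubectl get nodes    # the single "minikube" node should report Ready
kubectl get pods -A  # system pods in kube-system should be Running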

The Inner Loop

Depending on your setup, your local K8s cluster probably cannot access your local Docker image registry. Hence, we cannot create our resources using kubectl apply -f k8s/ until we have built our service image and made it available to the cluster (obviously, we do not want to push every dev iteration to Docker Hub or the GitHub Container Registry). Manually triggering this process on every change would be tedious! Luckily, there are tools that help with exactly this issue. The most popular ones are Skaffold, Garden & Tilt. I will be showcasing Skaffold as it is my personal favourite. Check out this awesome, comprehensive comparison by Liran Haimovitch if you want to know more about how these tools differ.
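For a sense of what these tools automate, here is roughly what the manual loop looks like on Minikube (a sketch; minikube docker-env points your Docker CLI at the cluster’s internal daemon so locally built images become visible to it):

eval $(minikube docker-env)   # build straight into the cluster's image store
docker build -t myservice .
kubectl apply -f k8s/
# edit code, rebuild, re-apply... on every single change

Note that the deployment may additionally need imagePullPolicy: IfNotPresent so K8s does not try to pull the locally built image from a remote registry.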

Our current directory looks like this:

$ tree
.
├── Dockerfile
├── k8s
│   ├── db-deployment.yaml
│   └── myservice-deployment.yaml
└── main.go

Configuring skaffold is rather simple thanks to its interactive walkthrough. skaffold init will guide you:

$ skaffold init
? Choose the builder to build image myservice [Use arrows to move, type to filter]
> Docker (Dockerfile)
  None (image not built from these sources)

myservice shall be built from our own Dockerfile. Hit Enter to confirm the selection.

$ skaffold init
? Choose the builder to build image myservice Docker (Dockerfile)
? Choose the builder to build image postgres [Use arrows to move, type to filter]
> None (image not built from these sources)

postgres shall be pulled from Docker Hub, so None sounds reasonable. Hit Enter to confirm the selection.

$ skaffold init
? Choose the builder to build image myservice Docker (Dockerfile)
? Choose the builder to build image postgres None (image not built from these sources)
apiVersion: skaffold/v2beta17
kind: Config
metadata:
  name: dev-on-k-s
build:
  artifacts:
  - image: myservice
    docker:
      dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/db-deployment.yaml
    - k8s/myservice-deployment.yaml
? Do you want to write this configuration to skaffold.yaml? (y/N)

The deploy section picked up the two K8s resources we generated earlier. However, only myservice will be rebuilt on changes using our Dockerfile. This looks reasonable. Confirm with y.

And that is it! Just like that, we, or anyone we work with, can fire up our services using skaffold dev without any prior setup. skaffold watches the source files for changes. Once a change occurs, skaffold rebuilds the image using the corresponding Dockerfile and triggers a fresh deployment. Try it out! Change the logged message in main.go and watch skaffold pick up the change in no time. If you need some specific customisation, like port-forwards, chances are you will find an option for it in the skaffold.yaml reference.
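For example, forwarding a container port to localhost boils down to a few extra lines (a sketch; it assumes a service listening on port 8080, which our toy service does not, and older skaffold versions additionally require the --port-forward flag on skaffold dev):

# skaffold.yaml (excerpt)
portForward:
- resourceType: deployment
  resourceName: myservice
  port: 8080
  localPort: 8080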

The Outer Loop

Assuming we are happy with our service, we ultimately want to deploy it to a live cluster. Most likely we want the resource configuration to be ever so slightly different between deployment environments like local, dev, staging, production. This is where a robust configuration management system comes into play.

Note that we are talking about configuration management, not secrets management, which is typically handled externally to a cluster. If you want to upgrade your secrets management, take a look at Vault.

Kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. Kustomize uses patches to introduce environment specific changes on an already existing standard config file without disturbing it.
- https://github.com/kubernetes-sigs/kustomize

In other words, Kustomize applies patches specified in an environment specific configuration file called kustomization.yaml to the raw K8s resource files. This makes infrastructure as code principles more flexible and transparent. The desired file structure that Kustomize expects looks like this:

~/someApp
├── base                  # the raw, unmodified k8s resource files
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays              # the deployment environment specific patches to apply
    ├── development
    │   ├── cpu_count.yaml
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── production
        ├── cpu_count.yaml
        ├── kustomization.yaml
        └── replica_count.yaml

Let’s kustomize our simple service. Currently our project layout looks like this:

$ tree
.
├── Dockerfile
├── k8s
│   ├── db-deployment.yaml
│   └── myservice-deployment.yaml
├── main.go
└── skaffold.yaml

To make it conform to Kustomize’s expected structure, we need to move the K8s resource files into a base directory and register them in a kustomization.yaml:

mkdir k8s/base
mv k8s/*.yaml k8s/base/
echo "apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- db-deployment.yaml
- myservice-deployment.yaml
" > k8s/base/kustomization.yaml

Creating environment-specific patches is as simple as creating overlays that contain only the patches to apply. For example, to patch the environment variable’s value for the local development environment, we specify only that piece of the YAML and register it in a kustomization.yaml file referencing the base files the patches should be applied to:

# local environment patches
mkdir -p k8s/overlay/local
echo "
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  template:
    spec:
      containers:
      - name: myservice
        env:
        - name: NAME
          value: local
" > k8s/overlay/local/myservice-deployment.yaml
echo "
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesStrategicMerge:
- myservice-deployment.yaml
resources:
- ../../base
" > k8s/overlay/local/kustomization.yaml

Similarly, we could patch up the YAML for staging and production environments if so desired.
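For instance, a hypothetical production overlay could raise the replica count using the very same pattern, just with a different patch (its kustomization.yaml would mirror the local one):

# k8s/overlay/production/myservice-deployment.yaml (a sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 3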

Now that we have changed our file structure, the previously generated skaffold.yaml is broken. Thankfully, skaffold and kustomize are interoperable, so re-generating skaffold.yaml is as simple as running skaffold init once again:

rm skaffold.yaml
skaffold init

Notice how skaffold configured its deploy stage to use kustomize’s base directory and appended a profile for the local environment:

# skaffold.yaml
apiVersion: skaffold/v2beta17
kind: Config
metadata:
  name: dev-on-k-s
build:
  artifacts:
  - image: myservice
    docker:
      dockerfile: Dockerfile
deploy:
  kustomize:
    paths:
    - k8s/base
profiles:
- name: local
  deploy:
    kustomize:
      paths:
      - k8s/overlay/local

skaffold dev will use the base path by default. To use the local configuration, specify the local profile as a CLI flag: skaffold dev -p local.
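Profiles can also activate themselves based on the current kube-context, which prevents accidentally applying the local overlay to a remote cluster. A sketch, assuming your local context is named minikube:

# skaffold.yaml (excerpt)
profiles:
- name: local
  activation:
  - kubeContext: minikube
  deploy:
    kustomize:
      paths:
      - k8s/overlay/local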

The core idea of GitOps is having a Git repository that always contains declarative descriptions of the infrastructure currently desired in the production environment and an automated process to make the production environment match the described state in the repository. If you want to deploy a new application or update an existing one, you only need to update the repository — the automated process handles everything else. It’s like having cruise control for managing your applications in production.
- https://www.gitops.tech

Configuring different deployment environments using Kustomize’s YAML resources allows us to version-control every single configuration. This means the CD part of our CI/CD pipelines contains no more code than:

skaffold run -p <profile>
# you will probably specify a dev/staging/production
# profile that you want to use here respectively

which will build and test the artifacts, tag them, and deploy the K8s manifests to your cluster. For a more sophisticated, fine-grained CD pipeline, consider separate skaffold invocations like:

skaffold build --push
skaffold deploy
# or to separate the configuration from
# the application's source code
skaffold render --output render.yaml
skaffold apply render.yaml

To learn more about Skaffold’s CI/CD workflows, check out their awesome CI/CD documentation.

Wrapping Up

To clean up your local environment, minikube delete will shut down and prune your K8s cluster in a single motion.
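If you only want to tear down the deployed resources while keeping the cluster alive, skaffold ships a matching cleanup command:

skaffold delete   # removes the resources deployed by skaffold run / skaffold dev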

To reiterate: reproducibility and application lifecycle management on Kubernetes can be a breeze when using the right tools for the job. Skaffold and Kustomize provide a clean, easy-to-grasp workflow.

I hope this was helpful. Feel free to hit me up in the comments section or find me on Twitter.
