Building and Extending Kubernetes: Writing My First Custom Controller with Go

Disha Virk
9 min read · Feb 5, 2024


The world of Kubernetes (K8s) is a playground for those who love to explore the depths of cloud-native technologies. It offers endless possibilities for customization and optimization, a feature that has always intrigued me! 😍

Among its many brilliant features, the concept of controllers in particular has always stood out to me: they act as the silent orchestrators that keep applications in our clusters at their desired state. The more I explored, the more I realized that understanding controllers is key to unlocking advanced K8s capabilities. So, this weekend, I decided it was time to roll up my sleeves and embark on a mini project that had been on my mind: writing my first custom Kubernetes controller. 🛠️

As someone who had only dabbled in Go through Helm chart templating until now, the idea of engaging directly with Go and core K8s libraries to extend the platform’s capabilities was both daunting and exhilarating.

Through this article, I aim to share the insights and knowledge gained from this experience, offering a step-by-step guide on developing a custom Kubernetes controller from scratch.

Kubernetes Controller

A Kubernetes Controller is a control loop that watches the state of your cluster through the Kubernetes API. It makes changes attempting to move the current state closer to the desired state. 🔄
Controllers are the core mechanisms through which Kubernetes operates and maintains cluster resources. Each controller is focused on a specific responsibility within the cluster. 🏗️
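Stripped of all Kubernetes machinery, the idea is just a loop: observe the current state, compare it with the desired state, and act to close the gap. Here is a tiny, runnable Go sketch of that idea (a toy with an integer standing in for the cluster, not real controller code):

package main

import (
    "fmt"
    "time"
)

func main() {
    desiredReplicas := 3
    currentReplicas := 1 // pretend one replica is running right now

    // a real controller loops forever and watches the API server;
    // here we just iterate a few times so the program terminates
    for i := 0; i < 5; i++ {
        if currentReplicas < desiredReplicas {
            currentReplicas++ // "create" a replica to move toward the desired state
            fmt.Println("reconcile: created a replica, now running", currentReplicas)
        } else {
            fmt.Println("reconcile: current state already matches desired state")
        }
        time.Sleep(100 * time.Millisecond)
    }
}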

Some examples of Kubernetes Controllers include:

  • ReplicaSet Controller: Ensures that the specified number of pod replicas are running at any given time. If a pod crashes, the ReplicaSet controller notices the discrepancy and starts a new pod to maintain the desired state.
  • Deployment Controller: Manages deployments of applications, handling updates to your application or its configuration over time. It enables you to describe an application’s lifecycle, such as which images to use for the app, the number of pods, and how to update them.
  • Service Controller: Interacts with the cloud provider to create and manage load balancers for Services of type LoadBalancer, keeping them in sync with the Services defined in the cluster.
  • Namespace Controller: Manages the lifecycle of namespaces, ensuring that resources within the namespace are deleted when the namespace is.

“Hey, wait a minute! Are Kubernetes Controllers and the Kubernetes Controller Manager the same thing? Absolutely not!”❌

Kubernetes Controller Manager

The Kubernetes Controller Manager is a daemon that embeds these controllers. Rather than running each controller as a separate process, Kubernetes bundles them into a single process, the Controller Manager, to streamline their management. It runs and manages the lifecycle of the built-in controllers, including the Node Controller, Job Controller, Endpoints Controller, and Service Account & Token Controllers, among others.

It is responsible for:

  • Orchestrating and managing the individual controllers.
  • Ensuring that controllers communicate with the API server to watch for changes in their resources and to make the necessary adjustments. 📡
  • Managing the overall control logic that impacts the state of the cluster. ⚙️

Think of the Kubernetes Controller Manager as a manager in an office who oversees various departments (controllers).
Each department (controller) has its specific job, such as handling customer service (Service Controller), managing the workforce (ReplicaSet Controller), or overseeing new projects (Deployment Controller).
The manager (Controller Manager) doesn’t do these jobs directly but ensures that each department works effectively towards the company’s goals (desired state of the cluster).
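
On clusters where the control plane is visible (for example a kind or kubeadm cluster; managed offerings usually hide it), we can actually see this single manager process running as a pod in the kube-system namespace:

kubectl get pods -n kube-system | grep controller-manager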

Hope that clarifies!

Let's get our hands dirty now…

Prerequisites ✅

Before we start, let's make sure we have the following:

  • Go programming language installed.
  • Access to a Kubernetes cluster where we can test the controller.
  • client-go library, which is the official Go client for Kubernetes.

Step 1: Setting Up The Cluster

I used kind to set up my local cluster due to its simplicity and the minimal requirements it imposes on the development environment. 🖥️

Prerequisites

  • Docker 🐳: Ensure Docker is installed and running on your machine.
  • Kind: Install kind on your machine. Installation instructions can be found on the official kind website or GitHub repository.

Step 1: Install Kind

On macOS, we use Homebrew:

brew install kind

For Windows or Linux, please follow the installation instructions provided in the kind documentation.
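
Alternatively, since we already need Go for this project, kind can also be installed with go install (Go 1.16 or newer); the version tag below is just an example, so check the kind releases page for the current one:

go install sigs.k8s.io/kind@v0.20.0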

Step 2: Create a Cluster Configuration File (Optional)

While this step is optional, creating a cluster configuration file allows us to customize the cluster.

# kind-config.yaml - defines a cluster with one control plane node and two worker nodes.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Step 3: Create the 'Kind' Cluster

Run the following command to create a cluster. If you’re using a configuration file, specify it with the --config flag; otherwise, kind will create a default cluster for you.

kind create cluster --name my-cluster

If you have a configuration file:

kind create cluster --name my-cluster --config kind-config.yaml

Step 4: Verify the Cluster

Once the cluster creation process is complete, we can check the status of our nodes: 🚀

kubectl get nodes
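
kind writes a kubeconfig context named kind-<cluster-name>, so if kubectl is not already pointing at the new cluster, we can target it explicitly:

kubectl cluster-info --context kind-my-cluster
kubectl get nodes --context kind-my-cluster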

Step 2: Define the CRD

First, we need to define our CRD (CustomResourceDefinition). This YAML file specifies the structure of our custom resource, TheFooTheBar.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: thefoosthebars.myk8s.io
spec:
  group: myk8s.io
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                message:
                  type: string
  scope: Namespaced
  names:
    plural: thefoosthebars
    singular: thefoothebar
    kind: TheFooTheBar
    shortNames:
      - tfb

This CRD defines a new kind of resource named TheFooTheBar with a single field message in its spec.

Apply this CRD to the cluster:

kubectl apply -f custom-resource-definition.yaml
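
As an optional sanity check, we can confirm that the API server now knows about the new resource type:

kubectl get crd thefoosthebars.myk8s.io
kubectl api-resources | grep thefoosthebars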

Step 3: Define the Custom Resource

After the CRD is created, we can define our custom resource.

apiVersion: myk8s.io/v1
kind: TheFooTheBar
metadata:
  name: the-foo-the-bar-sample
spec:
  message: "Hello, World!"

Apply this resource to the cluster:

kubectl apply -f custom-resource.yaml
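
We can confirm the object exists using the tfb short name we declared in the CRD:

kubectl get tfb
kubectl get tfb the-foo-the-bar-sample -o yaml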

Step 4: Set Up Go Environment for the Controller

Create a new directory for our project and initialize a Go module:

mkdir k8s-controller
cd k8s-controller
go mod init k8s-controller

Add dependencies 📦:

go get k8s.io/apimachinery@v0.22.0 k8s.io/client-go@v0.22.0 sigs.k8s.io/controller-runtime@v0.9.0

Step 5: Writing the Real, Actual Custom Controller 🛠️

Create a file main.go. Our controller will watch for changes to TheFooTheBar instances and log the message field from the spec.

Setup and Configuration

The import section sets up the controller program with the necessary K8s client libraries and standard Go utilities to interact with K8s resources dynamically and flexibly.

The import statement in Go serves a similar purpose to the import statement in Python, with both being used to include packages or modules in code.

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {

    // define a variable called "kubeconfig" to store the path to the kubeconfig file
    var kubeconfig string

    // check if the home directory path is not empty;
    // if it isn't, construct the path to the kubeconfig file
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = filepath.Join(home, ".kube", "config")
    }

    // build the configuration to connect to a K8s cluster from the provided kubeconfig path
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)

    // if the external kubeconfig file wasn't found, wasn't accessible, or was invalid,
    // fall back to in-cluster configuration
    if err != nil {
        fmt.Println("Falling back to in-cluster config")

        // retrieve configuration from environment variables and the service account token available within the pod
        config, err = rest.InClusterConfig()

        // if even the in-cluster configuration setup fails, panic
        if err != nil {
            panic(err.Error())
        }
    }

Initialize a Dynamic Client 🤖

    // create a new dynamic client for interacting with the K8s API
    dynClient, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

Unlike the static client, which requires predefined structs for each K8s resource type, the dynamic client works with any resource type at runtime. That makes it especially useful for working with CRDs, or when we don't know in advance all the resource types our application might interact with.
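
To make the contrast concrete, here is a minimal, self-contained sketch (separate from our controller) that uses the typed clientset instead. It only knows about built-in types such as Pods, which is exactly why we reach for the dynamic client when dealing with our CRD:

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // assumes a kubeconfig at the default location, as in the controller above
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err.Error())
    }

    // the typed ("static") clientset: one strongly typed method set per built-in API group
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    // list pods across all namespaces using generated, compile-time-checked types
    pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Println("pods in the cluster:", len(pods.Items))
}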

Identify the Custom Resource (GroupVersionResource)

    // define a variable, "thefoothebar", holding the identity of our custom resource:
    // the API group "myk8s.io", version "v1", and the plural name of the resource
    thefoothebar := schema.GroupVersionResource{Group: "myk8s.io", Version: "v1", Resource: "thefoosthebars"}

Setup Informer 📡

    // create a new informer for our specific resource, i.e. "thefoothebar"
    // this informer watches for changes to the resource and maintains a local cache of all objects of this type
    informer := cache.NewSharedIndexInformer(

        // a ListWatch bundles the callbacks defining how to list and watch the resource, respectively
        &cache.ListWatch{
            // ListFunc initially populates the informer's cache
            ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
                return dynClient.Resource(thefoothebar).Namespace("").List(context.TODO(), options)
            },
            // WatchFunc keeps the cache updated with any changes
            WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
                return dynClient.Resource(thefoothebar).Namespace("").Watch(context.TODO(), options)
            },
        },

        // specifies that the informer works with unstructured data,
        // which can represent any K8s resource without needing a predefined struct
        &unstructured.Unstructured{},

        // a resync period of 0 means the informer will not resync resources unless explicitly triggered
        0,
        cache.Indexers{},
    )

Event Handling

    // register callbacks for the different types of events related to our resource, "thefoothebar"
    informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        // called when a new resource is created
        AddFunc: func(obj interface{}) {
            fmt.Println("Add event detected:", obj)
        },
        // called when an existing resource is updated
        UpdateFunc: func(oldObj, newObj interface{}) {
            fmt.Println("Update event detected:", newObj)
        },
        // called when an existing resource is deleted
        DeleteFunc: func(obj interface{}) {
            fmt.Println("Delete event detected:", obj)
        },
    })
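
The handlers above print the whole unstructured object. Since the goal stated earlier was to log the message field from the spec, one way to do that (a sketch using the unstructured package we already import; the helper name is ours) is a small package-level function we could call from AddFunc and UpdateFunc:

// hypothetical helper, not part of the original controller:
// extracts spec.message from an unstructured object
func messageFromObject(obj interface{}) string {
    u, ok := obj.(*unstructured.Unstructured)
    if !ok {
        return ""
    }
    // NestedString walks the nested map; "found" is false if the field is absent
    msg, found, err := unstructured.NestedString(u.Object, "spec", "message")
    if err != nil || !found {
        return ""
    }
    return msg
}

Inside AddFunc, for instance, fmt.Println("Add event, message:", messageFromObject(obj)) would then print just the message instead of the whole object.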

Running the Informer

    // stop channel used to signal the informer to stop watching for resource changes
    stop := make(chan struct{})
    defer close(stop)

    // start the informer in a separate goroutine, allowing it to begin processing events
    go informer.Run(stop)

    // block until the informer's local cache is initially synced with the current state of the cluster
    // if it times out, the program panics
    if !cache.WaitForCacheSync(stop, informer.HasSynced) {
        panic("Timeout waiting for cache sync")
    }

    fmt.Println("Custom Resource Controller started successfully")

    // block here so the informer keeps running
    <-stop
}
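
As written, the final <-stop blocks forever, because nothing outside the program ever closes that channel. A common variation (a sketch, not from the original code; it needs the os, os/signal, and syscall imports) is to wait for Ctrl-C or SIGTERM instead, so the deferred close(stop) shuts the informer down cleanly as main returns:

    // instead of <-stop at the end of main:
    sigCh := make(chan os.Signal, 1)
    signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
    <-sigCh
    fmt.Println("Shutting down controller")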

Building and Running Our Controller 🚀

go build -o k8s-controller .
./k8s-controller

Let's edit our custom resource 📝

Edit the custom resource, the-foo-the-bar-sample.
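
Any way of updating the object works (kubectl edit included). One quick option is a merge patch that rewrites the message field; the new message text here is arbitrary:

kubectl patch tfb the-foo-the-bar-sample --type merge -p '{"spec":{"message":"Hello again, World!"}}'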

As soon as we apply the change, the update event shows up in the log messages of our controller's output.

Now, when we delete our custom resource 🗑️, the delete event appears in the controller's output as well.
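
Deleting it is a one-liner; any deletion method triggers the DeleteFunc registered above:

kubectl delete tfb the-foo-the-bar-sample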

Conclusion

In wrapping up this exploration into K8s custom controllers, it's clear that what we've accomplished here is merely the tip of the iceberg; we've only begun to scratch the surface of what's possible.

The experience was immensely rewarding, not just in terms of technical learning but also in igniting a deeper curiosity to delve further into K8s’ capabilities.
The client-go library, a cornerstone of this mini project, stands out as a testament to the power at our disposal for developing Kubernetes custom controllers. With its comprehensive suite of components, it equips developers with the tools necessary to write efficient and effective solutions. However, it requires developers to learn low-level details about how K8s libraries are implemented and write some boilerplate code.

I cannot wait to go deeper into this interesting topic and share more insights about it.

Well, until then. Happy coding! 🚀💻
