Writing your first Kubernetes Operator: A Definitive Guide to Starting Strong

Sonu Jose
Published in DevelopingNodes
7 min read · Nov 11, 2023

Have you ever considered the thrill of designing and crafting your very own Kubernetes Operator? Welcome to the intricate yet powerful world of Kubernetes Operators!

In this comprehensive guide, we’ll start our journey through the fundamentals of controllers, operators, CRDs, and the best practices surrounding Operators. We will also create a custom operator named ConfigmapSync to sync ConfigMaps between namespaces.

By the end of this blog, you will be able to write and deploy your own custom operator using Kubebuilder.

What are Kubernetes Operators?

In its simplest technical form, an operator adds an endpoint to the Kubernetes API, called a custom resource (CR), along with a control plane component (a controller) that monitors and maintains resources of the new type. In other words, Operators are software extensions that use custom resources to manage applications and their components.

An operator has three components: a controller, a custom resource (CR), and state. The controller is the logic for whatever is being managed, and it is usually visualized as an observe-and-adjust loop: observe the current state, compare it to the desired state, and adjust. The state holds the information about what the desired state of the resource is, and the resource is the thing you are managing.
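To make the observe-and-adjust loop concrete, here is a toy sketch in Go that uses no Kubernetes libraries at all; the `State` type and replica counts are invented purely for illustration:

```go
package main

import "fmt"

// State is a toy stand-in for a managed resource's state.
type State struct{ Replicas int }

// reconcile nudges the actual state one step toward the desired
// state and reports whether anything changed.
func reconcile(desired, actual *State) bool {
	if actual.Replicas < desired.Replicas {
		actual.Replicas++ // scale up
		return true
	}
	if actual.Replicas > desired.Replicas {
		actual.Replicas-- // scale down
		return true
	}
	return false // already converged
}

func main() {
	desired := &State{Replicas: 3}
	actual := &State{Replicas: 0}

	// The control loop: observe, compare, adjust until converged.
	for reconcile(desired, actual) {
		fmt.Println("adjusted replicas to", actual.Replicas)
	}
	fmt.Println("converged at", actual.Replicas)
}
```

A real controller works the same way conceptually, except the "observe" step reads live cluster state through the API server and the loop is re-triggered by watch events rather than running to completion once.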

Operator Lifecycle

The operator’s role is to reconcile the actual state of the application with the desired state declared in the custom resource, using a control loop through which it can automatically scale, update, or restart the application.

CRDs are used to extend Kubernetes by introducing new types of resources that are not part of the core Kubernetes API. By defining a CRD, users or operators can create their own custom resources and define how those resources should be managed.

Why do we need a Kubernetes operator?

Operators allow developers to define custom resources and the associated controllers to manage those resources. This enables the automation of complex tasks, such as managing stateful applications, rolling out custom updates, and scaling services automatically based on demand.

  • Using operators, developers can create custom abstractions that encapsulate their knowledge of how to manage specific applications or infrastructure components.
  • This allows for greater productivity and consistency in managing complex applications and infrastructure, as well as easier automation and better control over resources.

The advantage of Operators is that they expand the automation possibilities of Kubernetes. The presence of a broad ecosystem of K8s Operators makes it possible to find ready-made solutions for most use cases.

Some Real-world examples of Kubernetes operators

With the Istio Operator, we can install, update, and troubleshoot Istio automatically. The Operator has very few prerequisites (only istioctl), and it allows both installation and validation of all APIs.

The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services and deployment and management of Prometheus instances.

The Elastic Kubernetes Operator — officially implemented by the Elastic project — enables the automation of Elasticsearch and Kibana on Kubernetes with greatly simplified deployment configurations and easy management.

Introduction to Kubebuilder: an SDK for building Kubernetes APIs using CRDs.

Kubebuilder is a framework for building Kubernetes APIs using the controller-runtime library. It provides tools for generating code, building images, and deploying controllers as Kubernetes resources.

To create a custom operator using Kubebuilder, follow these steps. They bootstrap a Kubebuilder project and generate the scaffolding for a custom controller.

1. Install Kubebuilder: You can follow the instructions here: https://book.kubebuilder.io/quick-start.html#installation.

2. Create a new project: Run the following commands to create a new project with a sample API and controller:

$ mkdir -p $GOPATH/src/github.com/example/
$ cd $GOPATH/src/github.com/example/
$ kubebuilder init --domain example.com --repo=github.com/example/my-operator
$ kubebuilder create api --group=mygroup --version=v1alpha1 --kind=MyKind

This generates the basic scaffolding for your operator, including a sample API and controller.

3. Define your custom resource: Open the api/v1alpha1/mykind_types.go file and modify the MyKindSpec and MyKindStatus structs to define the desired fields and status of your custom resource.

4. Generate the CRD: Run the following command to generate the CRD for your custom resource:

$ make manifests

This generates the config/crd/bases/mygroup.example.com_mykinds.yaml file, which contains the CRD for your custom resource.

5. Write your controller logic: Open the controllers/mykind_controller.go file and write the logic for your controller. This typically involves watching for changes to your custom resource, reconciling the desired state with the actual state, and updating the status of your resource as necessary.

6. Build and deploy your operator: Run the following commands:

$ make docker-build docker-push IMG=example/my-operator:latest
$ make deploy IMG=example/my-operator:latest

This builds a Docker image for your operator, pushes it to a registry, and deploys it to your Kubernetes cluster.

Let’s build the ConfigmapSync Operator

With these tools and resources in hand, let’s use them to build a specialized operator called `ConfigmapSync`. This operator is designed to seamlessly synchronize ConfigMaps from one namespace to others within a cluster.

Here’s an example of a Kubernetes operator built using Kubebuilder to sync a ConfigMap from one namespace to another. This is a basic outline to illustrate the concept.

1. Create the API:
Use Kubebuilder to create a new API called `ConfigMapSync`:

kubebuilder create api --group apps --version v1 --kind ConfigMapSync

2. Define the Spec and Status:
Update the API’s `types.go` to include fields for source and destination namespaces. The types.go file defines the ConfigMapSync custom resource with its specification and status.

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ConfigMapSyncSpec defines the desired state of ConfigMapSync
type ConfigMapSyncSpec struct {
	SourceNamespace      string `json:"sourceNamespace"`
	DestinationNamespace string `json:"destinationNamespace"`
	ConfigMapName        string `json:"configMapName"`
}

// ConfigMapSyncStatus defines the observed state of ConfigMapSync
type ConfigMapSyncStatus struct {
	LastSyncTime metav1.Time `json:"lastSyncTime"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// ConfigMapSync is the Schema for the configmapsyncs API
type ConfigMapSync struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ConfigMapSyncSpec   `json:"spec,omitempty"`
	Status ConfigMapSyncStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// ConfigMapSyncList contains a list of ConfigMapSync
type ConfigMapSyncList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ConfigMapSync `json:"items"`
}

func init() {
	SchemeBuilder.Register(&ConfigMapSync{}, &ConfigMapSyncList{})
}

3. Write the Reconcile Logic:
Implement the reconciliation loop in the controller.

// Imports needed by the controller (the module path github.com/example/my-operator
// matches the earlier kubebuilder init and is illustrative):
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	logf "sigs.k8s.io/controller-runtime/pkg/log"

	appsv1 "github.com/example/my-operator/api/v1"
)

// Reconcile syncs the source ConfigMap into the destination namespace.
func (r *ConfigMapSyncReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := logf.FromContext(ctx)

	// Fetch the ConfigMapSync instance
	configMapSync := &appsv1.ConfigMapSync{}
	if err := r.Get(ctx, req.NamespacedName, configMapSync); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Fetch the source ConfigMap
	sourceConfigMap := &corev1.ConfigMap{}
	sourceConfigMapName := types.NamespacedName{
		Namespace: configMapSync.Spec.SourceNamespace,
		Name:      configMapSync.Spec.ConfigMapName,
	}
	if err := r.Get(ctx, sourceConfigMapName, sourceConfigMap); err != nil {
		return ctrl.Result{}, err
	}

	// Create or update the destination ConfigMap in the target namespace
	destinationConfigMap := &corev1.ConfigMap{}
	destinationConfigMapName := types.NamespacedName{
		Namespace: configMapSync.Spec.DestinationNamespace,
		Name:      configMapSync.Spec.ConfigMapName,
	}
	if err := r.Get(ctx, destinationConfigMapName, destinationConfigMap); err != nil {
		if !errors.IsNotFound(err) {
			return ctrl.Result{}, err
		}
		log.Info("Creating ConfigMap in destination namespace", "Namespace", configMapSync.Spec.DestinationNamespace)
		destinationConfigMap = &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{
				Name:      configMapSync.Spec.ConfigMapName,
				Namespace: configMapSync.Spec.DestinationNamespace,
			},
			Data: sourceConfigMap.Data, // Copy data from source to destination
		}
		if err := r.Create(ctx, destinationConfigMap); err != nil {
			return ctrl.Result{}, err
		}
	} else {
		log.Info("Updating ConfigMap in destination namespace", "Namespace", configMapSync.Spec.DestinationNamespace)
		destinationConfigMap.Data = sourceConfigMap.Data // Update data from source to destination
		if err := r.Update(ctx, destinationConfigMap); err != nil {
			return ctrl.Result{}, err
		}
	}

	// Record the successful sync in the status subresource
	configMapSync.Status.LastSyncTime = metav1.Now()
	if err := r.Status().Update(ctx, configMapSync); err != nil {
		return ctrl.Result{}, err
	}

	return ctrl.Result{}, nil
}

// SetupWithManager registers the controller so it watches ConfigMapSync resources.
func (r *ConfigMapSyncReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.ConfigMapSync{}).
		Complete(r)
}

4. Build and deploy your operator: Run the following commands to build and deploy your operator:

make docker-build docker-push IMG=registry/configmapsync-operator:latest
make deploy IMG=registry/configmapsync-operator:latest

5. Alternatively, you can deploy the operator using the manifest below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmapsync-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configmapsync-operator
  template:
    metadata:
      labels:
        app: configmapsync-operator
    spec:
      containers:
        - name: configmapsync-operator
          image: registry/configmapsync-operator:latest # Replace with your image location

6. Apply a sample manifest based on the new operator CRD to the cluster

apiVersion: apps.bixbite.io/v1
kind: ConfigMapSync
metadata:
  labels:
    app.kubernetes.io/name: configmapsync
    app.kubernetes.io/instance: configmapsync-sample
    app.kubernetes.io/part-of: configmapsync
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: configmapsync
  name: configmapsync-sample
spec:
  sourceNamespace: "sourceNamespace"
  destinationNamespace: "destinationNamespace"
  configMapName: "configMapName"

By leveraging the power of operators, we’ve explored how the ConfigMapSync operator simplifies the management of ConfigMaps, facilitating their synchronization across namespaces within a Kubernetes cluster. This hands-on process involved defining the CRD, writing reconciliation logic, and deploying the operator to ensure smooth and efficient synchronization.

The journey doesn’t end here. There’s a wealth of opportunities to explore, innovate, and fine-tune operators, amplifying the potential of Kubernetes to optimize your workflows and achieve operational excellence. So, roll up your sleeves, dive deeper into the Kubernetes operator landscape, and unleash the full potential of your cluster with tailored, purpose-built operators.
