Designing a Controller for Custom Resources from Scratch for Absolute Beginners

Senjuti De
11 min read · Feb 24, 2023


CRD (CUSTOM RESOURCE DEFINITION)

The Custom Resource Definition API resource allows you to define custom resources and their schema. Defining a CRD object creates a new custom resource with a name and schema that you specify. This can be a powerful way to extend the Kubernetes API and add custom functionality to your cluster. The Kubernetes API serves and handles the storage of your custom resource. The name of a CRD object must be a valid DNS subdomain name.

The CRD enables engineers to plug in their own objects and applications as if they were native Kubernetes components. This is extremely powerful for creating tools and services built on Kubernetes.

By doing this, we can build out the custom resources for our application as well as use Kubernetes RBAC to provide security and authentication to our application. These custom resources will be stored in the integrated etcd repository with replication and proper lifecycle management. They will also leverage all the built-in cluster management features which come with Kubernetes. CRDs are really amazing extensions of the Kubernetes API and allow a lot of flexibility in the creation of K8s native applications.

Use a custom resource (via a CRD or an Aggregated API) if most of the following apply:

  • We want to use Kubernetes client libraries and CLIs to create and update a new resource.
  • We want top-level support from kubectl (for example: kubectl get my-object object-name).
  • We want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
  • We want to write automation that handles updates to the object.
  • We want to use Kubernetes API conventions like .spec, .status, and .metadata.
  • We want the object to be an abstraction over a collection of controlled resources or a summation of other resources.

Everyone has different goals in mind and context surrounding those goals; however, if our project needs a flexible way to extend Kubernetes and tries to stick closely to the “Kubernetes-native” way of doing things, then CRDs are right up our alley.

CUSTOM RESOURCES

A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind; for example, the built-in pods resource contains a collection of Pod objects.

A custom resource is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation. It represents a customization of a particular Kubernetes installation. However, many core Kubernetes functions are now built using custom resources, making Kubernetes more modular. Once a custom resource is installed, users can create and access its objects using kubectl, just as they do for built-in resources like Pods.

CUSTOM CONTROLLERS

On their own, custom resources let us store and retrieve structured data. When we combine a custom resource with a custom controller, custom resources provide a true declarative API.

The Kubernetes declarative API enforces a separation of responsibilities. We declare the desired state of our resource. The Kubernetes controller keeps the current state of Kubernetes objects in sync with our declared desired state. This is in contrast to an imperative API, where we instruct a server what to do.

We can deploy and update a custom controller on a running cluster, independently of the cluster’s lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The Operator pattern combines custom resources and custom controllers.

ASSIGNMENT GIVEN TO US

The task was basically to create a CRD comprising the fields message and count. The message field represents text data in the cluster for resources of that kind. The count field represents the number of pods that should be up and running for that resource. So the tasks can be summarized as follows:

  • Write the Custom Resource Definition file.
  • Write main.go, which will be the entry point of the controller.
  • Create the API with types.go, doc.go, and register.go.
  • Generate the DeepCopy functions, listers, informers, and clientset using code-generator after installing it.
  • Write the controller file so that when we create a custom resource, that many pods are up and running; when we update the message or count of the CR, the change is reflected; and when we delete the CR, all the associated pods are deleted.

First and foremost, we have to ensure that we are in the correct directory, so we will create a folder at /home/{user}/go/src/github.com/{github username}/customcluster and move into this directory.
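
For orientation, the project layout we will end up with looks roughly like this (reconstructed from the paths used later in this post):

```
customcluster/
├── main.go
├── manifests/
│   ├── crdefinition.yaml
│   └── cr.yaml
└── pkg/
    ├── apis/
    │   └── {group}/
    │       └── v1alpha1/
    │           ├── types.go
    │           ├── register.go
    │           └── doc.go
    ├── client/        # generated clientset, informers, listers
    └── controller/
        └── controller.go
```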

Let us look at the process step by step:

  1. Create the Custom Resource Definition (CRD) YAML file.

Any Custom Resource Definition comprises 3 fields:
  • Kind: the name of the custom resource type
  • Group: the API group it belongs to
  • Version: the API version
Combining these 3 fields, we can create our Custom Resource API endpoint.

  • apiVersion and kind define that this is a CRD.
  • metadata contains information about the CRD, including its name.
  • spec defines the properties of our CRD.
  • group is a string that defines the API group of the CRD.
  • version is a string that defines the API version of the CRD.
  • scope defines the scope of the CRD, which in our case is Namespaced.
  • names define the names and kind of our CRD. They are basically the aliases that can be used to access the custom resource.
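
Putting these fields together, manifests/crdefinition.yaml might look roughly like this (a sketch: the group sde.dev, kind Customcluster, and short name cpod come from the kubectl api-resources output shown in step 3; the exact schema details are assumptions):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # must match <plural>.<group>
  name: customclusters.sde.dev
spec:
  group: sde.dev
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                message:
                  type: string
                count:
                  type: integer
  scope: Namespaced
  names:
    plural: customclusters
    singular: customcluster
    kind: Customcluster
    shortNames:
      - cpod
```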

2. Create the CRD object

With the CRD definition in place, we can create a CRD object in our Kubernetes cluster. We can do this by running the following commands:

$ KIND_EXPERIMENTAL_PROVIDER=podman
$ minikube config set rootless true
$ minikube start --driver=podman --container-runtime=containerd
$ kubectl apply -f manifests/crdefinition.yaml

This will first start a Kubernetes cluster and then create the CRD object in it.

3. Create our CR (Custom Resource) and its object in Kubernetes

Now that we have our CRD in place, we can create a Custom resource. We can do this by creating a YAML file with the following content:
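
A minimal manifests/cr.yaml might look like this (the resource name and the message/count values are illustrative assumptions):

```yaml
apiVersion: sde.dev/v1alpha1
kind: Customcluster
metadata:
  name: example-customcluster   # illustrative name
spec:
  message: "Hello from the custom controller"
  count: 3
```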

Let’s break down what’s happening here:

  • apiVersion specifies the group and version of our CR.
  • kind defines the type of our Custom resource.
  • metadata contains information about the resource, including its name.
  • spec defines the properties of our resource.
  • message is a string that will be logged in the pods using echo.
  • count is an integer that defines how many pods to create.

With the CR in place, we can create a CR object in our Kubernetes cluster. We can do this by running the following command:

$ kubectl apply -f manifests/cr.yaml

Check whether the object is running with $ kubectl get customcluster, or $ kubectl get customcluster -o wide for more details.

  • $ kubectl api-resources | grep customcluster
customclusters      cpod   sde.dev/v1alpha1      true         Customcluster

4. We need to run $ cd /home/{user}/go/src/github.com/{github_username}, create the folder customcluster/pkg/apis/{Group}/{version}, and add the three essential files.

i) The types.go file contains the definitions of the Customcluster type and its fields.

  • Customcluster is the schema for our Customcluster resource. It contains the TypeMeta and ObjectMeta fields that are required for all Kubernetes resources, as well as the Spec and Status. The TypeMeta and ObjectMeta are used by the Kubernetes API to provide metadata about the custom resource. The metav1.TypeMeta provides information about the type of the custom resource, including the API group and version as well as the kind of the resource. The metav1.ObjectMeta provides information about the object itself, including its name, namespace, labels, and annotations.
  • CustomclusterStatus is a struct that defines the current state of our Customcluster resource.
  • CustomclusterSpec is a struct that defines the desired state of our Customcluster resource. It has two fields, Message and Count. It basically tells us about the fields that we need to supply as input to our controller.
  • CustomclusterList is a list of Customcluster resources. This struct ensures that we can list all our custom resources in the same way that we get the list of all pods when we execute kubectl get pods command.

This metadata is used by the Kubernetes API to manage and organize the CRs within the cluster. The `json:",inline"` and `json:"metadata,omitempty"` tags control how the fields are encoded when they are marshalled to or from JSON.

ii) register.go is the file where we register our CRD types with the Kubernetes API. Using it, we register our type in the Kubernetes scheme so that Kubernetes knows it should respond to a type named Customcluster. In other words, to make a type recognisable by the Kubernetes cluster, we provide this register.go file.

  • SchemeGroupVersion is a variable that defines the group and version of our CRD.
  • SchemeBuilder is a runtime.SchemeBuilder object that helps us build up the scheme.
  • AddToScheme is a function that adds our types to a runtime.Scheme object.
  • addKnownTypes is a function that adds our types to the scheme.
  • We use metav1.AddToGroupVersion to add our group version to the scheme.

iii) The doc.go file is used for declaring global tags for our v1alpha1 API. Tags are a way to control the behaviour of the code generator and are basically used to apply a particular instruction to all valid instances across the codebase. For example, +k8s:deepcopy-gen=package means that deepcopy code must be generated for every type in the package. This is global; if we declare it anywhere else, it will be local. The doc.go file is also where we provide documentation for our package. This is essential for other developers who might use our package or maintain it.
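
A minimal doc.go might look like this (a sketch; the group name sde.dev is taken from the api-resources output in step 3):

```go
// +k8s:deepcopy-gen=package
// +groupName=sde.dev

// Package v1alpha1 contains the v1alpha1 API types for the customcluster resource.
package v1alpha1
```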

With these three files, we can define, register, and use our custom resource definition in Kubernetes.

5. Generate DeepCopy, Listers, Informers and clientSet using code-generator.

Clients of the Kubernetes API use code generated from the CRD types to create, update, and delete resources of that type. The code-generator tool, maintained by the Kubernetes project (k8s.io/code-generator), helps generate this code automatically.

Every Kubernetes resource should implement certain behaviours: for example, we should be able to deepcopy an object of that type, and to get and set the group version of that type. code-generator is needed to generate this scaffolding around our type.

When creating a CRD, it is important to have well-formed Go code. This includes having correct deepcopy and clientset code. deepcopy code provides a way to make deep copies of objects, while clientset code provides a way to interact with the Kubernetes API server and get, delete, update and list our CRs just like Kubernetes native resources like pods.

The code-generator tool can generate informer, lister, deepcopy, and clientset code for our CRD, which is why we use it. This saves us time and ensures that the generated code is well-formed and conforms to best practices.

To generate the code, the code-generator tool uses the deepcopy-gen, client-gen, lister-gen, and informer-gen tools. These generate the necessary code based on the types defined in types.go.

  • Install code-generator with $ go get k8s.io/code-generator (path: ~/go/pkg/mod/k8s.io/code-generator@v0.26.1)
  • Create the deepcopy, lister, clientset, and informer code by running:
$ execDir=~/go/pkg/mod/k8s.io/code-generator@v0.26.1
$ "${execDir}"/generate-groups.sh all github.com/{username}/{project-name}/pkg/client github.com/{username}/{project-name}/pkg/apis {group-name}:v1alpha1 --go-header-file "${execDir}"/hack/boilerplate.go.txt

Now the deepcopy, clientset, informer, and lister code will have been added to the local directory.

6. Writing the main.go file

The main function is responsible for setting up the Kubernetes clientset, the custom clientset, and the informer factory for the custom resource definition. It also creates the controller and starts the informer factory to monitor changes to the custom resources. The controller’s run method handles the reconciliation of the custom resources with the desired state. The kubeclient helps to interact with Kubernetes native resources like pods and the klientset helps to interact with custom resources.

  • To add all dependencies and then build and run, use:
$ go mod tidy
$ go mod vendor
$ go build
$ ./customcluster

or:
$ go build -o main .
$ ./main

In the Go programming language, the go mod vendor command creates a local copy of all the dependencies of a Go module in a vendor directory within the module’s root directory. The purpose of using go mod vendor is to ensure that the module’s dependencies are available even if the network is not accessible, or if the dependencies are no longer available online. By having a local copy of the dependencies, you can build and test the project without relying on external resources. Additionally, when the vendor directory is present in a module, Go prioritizes the local copy of the dependencies over any globally installed ones, ensuring that the project always uses the same versions. Overall, go mod vendor provides more control and stability over a project’s dependencies, making it a best practice for projects that require reliable builds and deployments.

7. Writing controller.go and starting the controller.

Now we need to write the main logic of the controller, so create a pkg/controller/controller.go file. In this file, we will write the controller struct, the pod creation/deletion logic, and the syncing logic.

When the controller is started, our informer will watch each and every change to our custom resource and take action accordingly.

  • If we want to create a new object, we specify the name of the CR and the desired pod count, and run $ kubectl create -f manifests/cr.yaml. When a new CR is created with this command, the controller checks how many pods are running (0 at the start) and creates or deletes pods as required to sync the running pods with the count the new CR specifies, so that number of pods will start running for that CR.
  • If we want to update the number of pods or the message, we just make the changes in manifests/cr.yaml and then run $ kubectl apply -f manifests/cr.yaml. Depending on the currently running pods and the desired count, the required number of pods will be created or deleted accordingly.
  • To see the pods that are running or terminating and to check their names, we use $ kubectl get pods.
  • As we echo the message in the pods, you can check the logs of these pods by using $ kubectl logs {podname}.
  • If we delete the CR, the controller takes the name and namespace of the CR, finds all of its running pods through the kubeClient, and deletes them. The command for this is $ kubectl delete -f manifests/cr.yaml.
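
The create-or-delete arithmetic described above can be sketched as a small pure function (a simplification; the real controller also has to list the pods via the clientset and handle errors):

```go
package main

import "fmt"

// reconcileCount decides how many pods to create or delete so that the
// number of running pods matches the count declared in the CR spec.
func reconcileCount(desired, running int) (toCreate, toDelete int) {
	if desired > running {
		return desired - running, 0
	}
	return 0, running - desired
}

func main() {
	// A fresh CR with count: 3 and no pods yet -> create 3.
	c, d := reconcileCount(3, 0)
	fmt.Println(c, d) // 3 0

	// The CR's count was lowered from 3 to 1 -> delete 2.
	c, d = reconcileCount(1, 3)
	fmt.Println(c, d) // 0 2
}
```

The controller runs this comparison on every sync, which is what makes the API declarative: we only ever state the desired count, never the individual create/delete steps.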

Repository: https://github.com/Senjuti256/customcluster

The link to my next blog containing detailed explanation of my custom controller: https://medium.com/@senjutide2000/workflow-and-implementation-of-a-custom-controller-f75ee0fc7c6a

Thank you for reading my blog. I hope it was helpful in creating your first CRD and CR, and in getting a basic idea of the workflow of a controller :)
