Leveraging K8s CRDs & Kubebuilder to create a Telegram message resource.

Paolo Gallina
Harbur Cloud Solutions
6 min read · Jan 14, 2019


Do you find introductions boring? Jump to the Demo Session!

Kubernetes comes shipped with several highly configurable resources and APIs that cover the majority of use cases and scenarios. However, its real power lies in a design that not only allows users to extend its functionality, but also makes doing so as easy as possible.

Many companies are taking advantage of this design to build their open-source or proprietary solutions, providing new approaches to problems that are too specific to find a place inside Kubernetes itself.

The open-source community, on the other hand, is creating hundreds of projects that extend the Kubernetes APIs to cover new use cases and to prototype implementations of functionality intended to be added to the official APIs in the future.

To get an idea, check out the approach of istio.io, the Aquasec Kubernetes implementation, or the prototyping done by Tomas Smetana in the following video.

Kubernetes Feature Prototyping with External Controllers and Custom Resource Definitions

Choose the right tool with the right approach

Currently, there are two different ways to create Custom Resources, characterized by different scopes and learning curves: CRDs with custom controllers, and API Aggregation. Here you can find a very useful comparison from the official documentation to guide your choice.

Today we explore the “straightforward” approach using CRDs and custom controllers. We are going to leverage a tool called Kubebuilder to automatically generate the required folder structure and to introduce additional layers of abstraction that spare you from managing the controller queue and maintaining Lister and Client references.

We use Kubebuilder, but keep in mind that there are other ways to implement CRDs and custom controllers: see, for example, the comparison with the from-scratch approach and another one covering the Operator SDK.

The objective is to create a CRD for a TelegramMessage resource and a controller that, upon detecting a new resource or an update, will be in charge of delivering the message through a Telegram Bot to a channel or a chat.

This short demo obviously does not claim to show best practices regarding extra-cluster communication, and it does not cover essential aspects such as security or performance.

Demo Session

The starting point is having kubebuilder installed and a single-node minikube cluster running. Also, note that the whole code is available here.

By leveraging kubebuilder we can easily create a collection of files that will serve as a starting point for implementing our CR and the corresponding controller. Basically, we need a CRD defining and validating our new resource, and a controller to manage its lifecycle once created. In this demo, I’ve chosen myself as the owner and my company as the domain, and the new resource will be called TelegramMessage.

Let’s run the required commands:

$ kubebuilder init --domain harbur.io --license apache2 --owner "Paolo.Gallina"
$ kubebuilder create api --group harbur --version v1beta1 --kind TelegramMessage

Kubebuilder automatically populates your working directory with a structure of files and folders. The main ones are the following:

  1. The cmd/... package contains the manager main program. The manager is responsible for initializing shared dependencies and starting/stopping Controllers. Users typically will not need to edit this package and can rely on the scaffolding.
  2. The pkg/apis/... packages contain the API resource definitions. Users have to edit the *_types.go files under this directory to implement their API definitions. Each resource lives in a similar file.
  3. The pkg/controller/... packages contain the Controller implementations. Users have to edit the *_controller.go files under this directory to implement their Controllers.
  4. The bin/ folder, which will contain the controller binaries once the code is compiled.
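Assuming the default kubebuilder (v1) scaffolding, the generated layout looks roughly like this (exact file names may differ slightly from project to project):

```
.
├── cmd/manager/main.go       # manager entry point
├── pkg/apis/harbur/v1beta1/  # TelegramMessage type definitions
├── pkg/controller/           # controller implementations
├── config/                   # CRD manifests and sample resources
└── bin/                      # compiled binaries
```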

One of the core elements is the Go definition of our new resource, which the controller, once created, will have to manage. In our case, the resource is defined in telegram_message.go, and we need to modify it to reach our goal, for example as follows:

Our TelegramMessage struct is composed of two sub-structs: one representing the message specification defined by the user and the other its status, populated by the controller. Please note that the CRD with the corresponding validation fields will be generated automatically by kubebuilder. We also added the token and the chatID as fields in order to support multiple chats and bots at the same time: different instances can trigger messages to different bots/chats.

The most complex part is building the controller for the new resource. However, since this is just a proof of concept, I will keep it as simple as possible. We are interested in a controller that sends the message through the Telegram APIs when a new resource is created or modified, or if, for some reason, the message has not yet been sent. In particular, we need to modify the Reconcile function which, as the documentation states:

“reads that state of the cluster for a TelegramMessage object and makes changes based on the state read and what is in the TelegramMessage.Spec”

First of all, we fetch the current state of the resource through the ReconcileTelegramMessage receiver r, saving it in a variable called instance. A function sending the message through the Telegram APIs is triggered when Status.Delivered is “No” or when the current Spec.MessageToDeliver differs from the one sent previously, i.e. MessageDelivered. The encoded message is sent through an HTTP POST that uses the Bot token and the ChatID as parameters.

In this release, a change of chatID or token will not trigger sending any message, in order to keep the demo simpler and the code more understandable. However, this can easily be introduced by adding an additional field in the status, for example a hash of the Message, the chatID, and the token, triggering the APIs in case any of those fields changes. For the sake of simplicity, I also decided to skip the creation of automated tests for both the resource and the controller, leaving a return true instead, even though this is not best practice.

Let’s now verify that the controller is working.

To do so, we pass the CRD to the cluster and run the controller locally:

$ make
$ make install
$ make run

These three commands build the code, create and install the CRD in the cluster, and run the controller binary. We can check that everything is OK by retrieving the APIs currently supported by the cluster through kubectl api-versions, which shows (among others) harbur.harbur.io/v1beta1.

Let’s test now the controller by creating a TelegramMessage object:

$ kubectl apply -f config/samples/test.yaml

where test.yaml is a classical YAML file with the following structure:
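The embedded manifest is missing from this copy; based on the spec fields and the kubectl describe output later in the post, it presumably looks like this:

```yaml
apiVersion: harbur.harbur.io/v1beta1
kind: TelegramMessage
metadata:
  name: test
  labels:
    controller-tools.k8s.io: "1.0"
spec:
  messagetodeliver: Hello Medium
  chatid: your-chat-id
  token: your-token
```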

If we look at the logs of the controller in our terminal, we see:

10:52:31 Checking status of resource test
10:52:31 Sending message Hello Medium
10:52:32 Checking status of resource test

Now it is time to check whether the bot has sent the message to the specified chatID and yes, it worked!

If you are wondering how to create a Telegram bot, obtain its token, and retrieve a chatID, you can check the following resources:

Just a spoiler: to create a bot you have to talk to the BotFather, which is the father of all bots. No kidding!

Let’s describe the resource to verify if it has been created correctly and if its status has been populated by the controller:

Name: test
Namespace: default
Labels: controller-tools.k8s.io=1.0
Annotations: ...
API Version: harbur.harbur.io/v1beta1
Kind: TelegramMessage
Metadata:
Creation Timestamp: ...
Generation: 1
Resource Version: 133709
Self Link: ...
UID: ...
Spec:
Chatid: your-chat-id
Messagetodeliver: Hello Medium
Token: your-token
Status:
Delivered: Yes
Messagedelivered: Hello Medium
Events: <none>

Since everything is working, we only need to push the controller into a container and run it in the cluster. Once again, kubebuilder helps us to perform this task:

$ make docker-build
$ docker tag controller:latest your-repository/controller:latest
$ make docker-push

Let’s modify the controller YAML file by adding the URL of the image just pushed to Docker Hub, paologallinaharbur/controller:latest, as follows:
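The edit itself is not shown in this copy; in a default kubebuilder (v1) layout the manager manifest lives under config/, and the relevant fragment (file path and container name approximate) would be something like:

```yaml
# config/manager/manager.yaml (fragment)
containers:
- name: manager
  image: paologallinaharbur/controller:latest
```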

Only one step is missing: deploying the controller into the cluster and starting to use it!

$ make deploy
$ kubectl apply -f config/samples/test2.yaml

Message received!

Some interesting links:
