You can configure traffic for a service to be routed 10% to version 1.0 and 90% to version 2.0 of an application in Istio with a definition as simple as the one below.
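A sketch of such a VirtualService might look like the following. The service name `my-service` and the subset names `v1`/`v2` are illustrative; in practice the subsets would be defined in a matching DestinationRule.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    # 10% of traffic goes to version 1.0 ...
    - destination:
        host: my-service
        subset: v1
      weight: 10
    # ... and 90% to version 2.0
    - destination:
        host: my-service
        subset: v2
      weight: 90
```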
A task to clone a repository can be specified in Tekton by applying the following YAML.
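A minimal sketch of such a Task follows; the name `mytask`, the image, and the repository URL are illustrative.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mytask
spec:
  steps:
  # A single step that clones a repository into the workspace
  - name: clone
    image: alpine/git
    script: |
      git clone https://github.com/example/repo.git /workspace/repo
```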
The above YAMLs represent VirtualService and Task custom resources. The kind field in the definitions above (VirtualService and Task) defines the custom resource type. We are already familiar with types such as Pod, Deployment, Service, and Job; those are built-in resource types. To define what can go into the YAML of a custom resource, you create a CustomResourceDefinition (CRD), which acts as a schema for the custom resources (CRs). For example, here is Tekton's Task CRD.
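Tekton's actual Task CRD is quite large; a heavily trimmed sketch of its shape would look roughly like this (the schema here is loosened to accept any fields, which is not how the real CRD validates):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # CRD names follow the pattern <plural>.<group>
  name: tasks.tekton.dev
spec:
  group: tekton.dev
  names:
    kind: Task
    plural: tasks
    singular: task
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Placeholder: the real CRD constrains the spec fields
        x-kubernetes-preserve-unknown-fields: true
```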
We can see that the apiVersion of the Task CR mytask is made up of the group (tekton.dev) and a version (v1beta1), both of which are defined in the corresponding CRD. This means we can have another resource named like Pod, Service, or Deployment whose group and version are different.
For example, in Knative a Service is defined as below. There is already a built-in Service resource in Kubernetes that exposes the pods behind it, but it doesn't clash with the one below because its apiVersion is different.
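A Knative Service sketch, using the well-known helloworld-go sample image (the name and env values are illustrative):

```yaml
# Note the apiVersion: serving.knative.dev/v1, not the core v1
# that the built-in Service uses
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "World"
```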
A custom resource is CRUD'ed (kubectl apply/patch/delete, etc.) the same way any built-in resource is. The resources created thus far (the task and the virtual service) are just pieces of data stored as objects by Kubernetes; think of a built-in ConfigMap resource, for example, or even a deployment YAML. In the case of a ConfigMap, we expose it as environment variables in the pod, and the actual application knows how to use that data. If all we needed were dumb data like this, a ConfigMap could be used instead.
Typically a CR serves as the data on which a custom controller operates to achieve the desired business function. Think of built-in Job resources controlled by the built-in Job controller. A controller looks at the YAML (the desired state) and reconciles the resource by watching it. For example, when a new ReplicaSet is created, the built-in ReplicaSet controller sees that n replicas are required and creates pods to match that state. If you edit the resource and reduce the replica count to n-1, the controller's reconciliation loop brings the number of pods down to match, and so on.
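As a concrete (illustrative) example, with the ReplicaSet below the controller keeps three pods running; changing replicas from 3 to 2 and re-applying causes it to delete one pod to match the new desired state:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3          # the desired state the controller reconciles toward
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx
```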
Extending this to our Istio example: when we change the VirtualService CR (say with kubectl edit), a custom controller in the Istio control plane picks up the new desired state (for example, when we update the YAML to send 20% of traffic to version 1.0 and 80% to version 2.0) and takes the necessary steps, such as mapping these new rules to Envoy configuration and pushing it to the relevant proxies. The VirtualService definition in itself is just a piece of data.
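The edit amounts to nothing more than changing two weight fields in the VirtualService's route section. Assuming an illustrative service named my-service with subsets v1 and v2, the updated route would read:

```yaml
    - destination:
        host: my-service
        subset: v1
      weight: 20   # was 10
    - destination:
        host: my-service
        subset: v2
      weight: 80   # was 90
```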
You extend K8s by defining custom resources and implementing controllers to operate on those resources.
In a nutshell, leveraging K8s custom resources and controllers has the advantage of shipping software on a platform known to be robust, one that provides all the machinery for running and managing your application: a consistent API, familiar command lines, and a declarative approach where you say "I need 3 replicas" rather than how to get there. For example, resource creation is still done via
kubectl create -f your_awesome_resource.yaml
End users don't need to learn new ways of interacting with them. Even discovering which CRDs are installed in a cluster uses the familiar interface:
$ kubectl get crd
More importantly, they are all open source. You can read the controller implementations, contribute to the community, and along the way roll out your own next awesome platform extension that is native to K8s.