Kubernetes: Kubectl Vs API

Somnath Mani · epiphani · Oct 26, 2020

Multiple ways to skin it

Kubernetes provides two ways to access and manage the system: via the CLI using kubectl, and directly via the HTTP API. While kubectl is the “go to” approach for anything Kubernetes, API adoption appears to be languishing.

As with everything else, one needs the right tool for the job. kubectl is meant for people and is expressly designed to be used as a CLI, while the API is meant for code to programmatically manipulate the system.

Oftentimes, though, one finds kubectl wrapped in extensive ‘awks’ and ‘greps’, shrink-wrapped and masquerading as code, as in the sketch below. Why so?
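
To make that concrete, here is a hypothetical example of the pattern: shelling out to kubectl from Python and scraping its columnar text output. The function and its arguments are illustrative, not from any real codebase.

```python
# Hypothetical example of kubectl "masquerading as code":
# shelling out and scraping text instead of calling the API.
import subprocess

def find_pods(name_fragment, namespace="default"):
    # Equivalent of: kubectl get pods -n default | grep <fragment> | awk '{print $1}'
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace],
        capture_output=True, text=True, check=True,
    ).stdout
    # Fragile: depends on the column layout and header line of kubectl's output
    return [line.split()[0] for line in out.splitlines()[1:]
            if name_fragment in line]
```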

Which begs the question: why kubectl?

  • Awesome as a CLI tool
  • Developers have familiarity with the tool
  • Extensive documentation
  • Stack Overflow for everything else that one doesn’t know

Why not HTTP API?

  • Unfamiliarity and learning curve
  • Documentation exists but is perhaps difficult to digest
  • Input parameters and outputs: the strength of the API is perhaps also its biggest weakness. More on this below

The great part about using an API is the structured input to an API call and the structured output returned in the response. You don’t have to parse text to figure out the output; it is provided in a dictionary. But does one know the many (believe me, there are many!) inputs and outputs? The API calls are well documented, and the schema for parsing the output is extensively documented as well, but who has the bandwidth, and more importantly the patience, to painstakingly work through the rather enormous documentation?
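
By contrast, here is a minimal sketch of the same pod lookup through the API, using the official Kubernetes Python client (assuming a reachable cluster and a local kubeconfig):

```python
# Minimal sketch of the lookup via the API, using the official
# Python client (assumes a local kubeconfig pointing at a cluster).
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

resp = v1.list_namespaced_pod(namespace="default")
for pod in resp.items:
    # Structured access: no text parsing, no awk/grep
    print(pod.metadata.name, pod.status.phase)
```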

Would it then not be great if there were some means of selecting these input and output parameters from a pre-compiled list and using them in the API calls?

The missing link…Kubernetes Connector

Input Parameters

A Kubernetes connector can be used to pre-compile the list of input parameters and output variables. The user simply selects the appropriate parameters, executes the API (as part of the connector execution), and uses the pre-compiled list to parse the output. What’s more, the connector lets the user save the output in variables to be reused as input in a subsequent API call, as sketched below.
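
Conceptually, that output-to-input chaining might look like the following; the `variables` dict and the two-step flow are illustrative, not the connector’s actual internals.

```python
# Illustrative sketch of output-to-input chaining between two API calls.
# The `variables` dict and step structure are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
variables = {}

# Step 1: list pods and save a selected output field into a variable
resp = v1.list_namespaced_pod(namespace="default")
variables["pod"] = resp.items[0].metadata.name   # illustrative choice

# Step 2: reuse the saved variable as an input parameter
v1.delete_namespaced_pod(name=variables["pod"], namespace="default")
```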

Output Fields

The figures on the left depict the selection of input and output fields in the Kubernetes connector as implemented by Epiphani. We parse the Kubernetes Swagger API document to generate the lists, and on the backend we use the Python client to execute the API calls.
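
As a rough illustration of that generation step, the sketch below pulls the input parameters and the response type for list_namespaced_pod out of a copy of the Kubernetes swagger.json (the local file path is an assumption; the spec ships with Kubernetes releases as api/openapi-spec/swagger.json).

```python
# Rough sketch of generating parameter lists from the Kubernetes
# OpenAPI/Swagger document. The local file path is an assumption.
import json

with open("swagger.json") as f:
    spec = json.load(f)

path = "/api/v1/namespaces/{namespace}/pods"
op = spec["paths"][path]["get"]          # operationId: listCoreV1NamespacedPod

# Input parameters: operation-level plus path-level (e.g. namespace)
params = op.get("parameters", []) + spec["paths"][path].get("parameters", [])
for p in params:
    print(p["name"], p.get("type", "object"))

# Output schema: follow the $ref to the response definition
ref = op["responses"]["200"]["schema"]["$ref"]   # ...io.k8s.api.core.v1.PodList
pod_list = spec["definitions"][ref.split("/")[-1]]
print(sorted(pod_list["properties"]))            # apiVersion, items, kind, metadata
```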

Use case: Auto Remediating Kubernetes Alerts

We demonstrate how we can auto remediate Kubernetes alerts using the Kubernetes connector and the Epiphani playbook engine. In an earlier post, I explained how the alerts themselves are generated and how Epiphani playbooks handle such alerts; that article can be found here. In this post, we focus on the Kubernetes connector itself.

The figure on the left shows a sample playbook that would get triggered when a pod needs to be restarted; for instance, if for some reason pod CPU breaches a threshold and the pod needs to be deleted and subsequently redeployed. Each node in the playbook is a connector. Let’s quickly run through the logic this playbook implements (a Python sketch of the same flow follows the list).

  • Look for the pod
  • If the pod is found, delete it (which triggers a redeploy)
  • Get the new list of pods
  • Format the list and send it as a page to the on-call person using the PagerDuty connector
  • If the pod was not found at the very beginning, simply skip to the end
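
Under the hood, equivalent logic might look like this minimal sketch, assuming the official Kubernetes Python client and PagerDuty’s Events API v2; POD_NAME and ROUTING_KEY are placeholders, not values from the playbook.

```python
# Minimal sketch of the playbook's logic, assuming the official
# Kubernetes Python client and PagerDuty's Events API v2.
# POD_NAME and ROUTING_KEY are placeholders.
import requests
from kubernetes import client, config

POD_NAME = "nginx-deployment"     # substring to look for
NAMESPACE = "default"
ROUTING_KEY = "<pagerduty-routing-key>"

config.load_kube_config()
v1 = client.CoreV1Api()

# Look for the pod
pods = v1.list_namespaced_pod(namespace=NAMESPACE).items
match = next((p for p in pods if POD_NAME in p.metadata.name), None)

if match:
    # Delete it; the Deployment controller re-creates the pod
    v1.delete_namespaced_pod(name=match.metadata.name, namespace=NAMESPACE)

    # Get the new list of pods and page the on-call person
    new_pods = [p.metadata.name
                for p in v1.list_namespaced_pod(namespace=NAMESPACE).items]
    requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": f"Restarted {match.metadata.name}; pods now: {new_pods}",
                "source": "k8s-playbook",
                "severity": "info",
            },
        },
    )
# If the pod was not found, fall through to the end
```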

“Find Pod” node

This figure shows the input form for the “Find Pod” node, which implements the “list_namespaced_pod” API of the Kubernetes connector. The only parameter of interest here is the namespace name, which is “default” in our case.

The form on the left shows that, as part of configuring the connector, we specify a rule: if the result of the list_namespaced_pod API contains a pod with a specific name (given by the variable argument {{pod}}), then execution should follow a certain path (green in our case). If you look closely at the “match” criterion above, “items metadata name” is actually selected from a list containing all the possible output context paths in the response to the API call; a sketch of how such a path might be evaluated follows.
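
As a guess at the mechanism (not the connector’s actual code), evaluating a context path like “items metadata name” against the response could look like this:

```python
# Hypothetical sketch of evaluating an output context path such as
# "items.metadata.name" against an API response dictionary.
def resolve(path, obj):
    """Walk a dotted path, fanning out over any lists along the way."""
    for key in path.split("."):
        if isinstance(obj, list):
            obj = [item[key] for item in obj]
        else:
            obj = obj[key]
    return obj

# resp as returned by list_namespaced_pod, serialized to a dict
resp = {"items": [{"metadata": {"name": "nginx-deployment-abc"}},
                  {"metadata": {"name": "coredns-xyz"}}]}

pod = "nginx-deployment"  # the {{pod}} variable
names = resolve("items.metadata.name", resp)
matched = any(pod in name for name in names)  # drives the green path
```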

Similarly, the Pod Restart node in the sample playbook implements the “delete_namespaced_pod” API. This node takes the pod name and the namespace as input. Executing this node results in the redeployment of the service deployed in the pod.

Finally, executing the above playbook with the nginx-deployment pod (shown in the figure on the left) as the argument results in the pod getting deleted and a new nginx-deployment pod getting deployed.

In the figure above, we see that the playbook executed successfully. The small blue triangle in the top left of each node shows the execution path, and as you can see, the pod was found and subsequently restarted.

This is confirmed by the enriched page from PagerDuty, which lists the new set of pods (above).

In Conclusion

The Kubernetes API can be powerful for programmatic remediation and operations use cases, and the Kubernetes Connector can greatly facilitate its use.

Epiphani implements a playbook engine that makes this connector available for auto remediation and other automation use cases.

If you are curious and want to try out the Kubernetes Connector and the Epiphani playbook engine, it’s free to download and use!

Please check out this link for instructions on how to download the Epiphani playbook engine with the connector.

Also, if you have any feedback, questions, or comments, we would love to hear ‘em! feedback@epiphani.ai
