Kibana objects auto-importing

Gilad Neiger
Develeap
Mar 10, 2021

Introduction

The well-known EFK stack (Elasticsearch, Fluentd, and Kibana) has become a very popular logging stack. If you have deployed it on your Kubernetes cluster, at some point you might ask yourself: “Can I export these dashboards?” or “Can I import ‘built-in’ dashboards into a brand-new Kubernetes cluster?” Yes, you can.

In this article, I’ll show you how to export and import Kibana dashboards and the index-pattern that points Kibana at your log indices. Lastly, I’ll show you how to run the import process as part of your Helm deployment, so that when you deploy your applications and logging tools, you also deploy ‘built-in’ Kibana dashboards.

Kibana API

Kibana exposes some of its features through a REST API. This is perfect for configuring Kibana in an automated way, as we love to do in DevOps. Be aware of the warning in Kibana’s documentation: “Each API is experimental and can include breaking changes in any version of Kibana, or might be entirely removed from Kibana.”

Of course, there’s documentation for the Kibana API, but I find it a bit confusing at times, so I hope this article makes your life much easier.

Moreover, note that the curl commands below contain placeholder variables, such as ${KIBANA_PORT} or ‘user:pwd’. Make sure you replace them with your own values.

Export index-pattern using Kibana API

So, first of all, we’ll need an index pattern so Kibana can read our logs; therefore, we need to export our current index-pattern using the Kibana API. This is pretty straightforward: just run the command below (make sure you replace ${KIBANA_URL}, ${KIBANA_PORT}, user:pwd, and lastly ${INDEX_ID}):
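A minimal sketch of that export call, assuming a Kibana 7.x cluster and its saved objects export API (the output file name index-pattern.ndjson is my own placeholder):

    # Export the index-pattern saved object as NDJSON (kbn-xsrf is required by Kibana)
    curl -u user:pwd -X POST \
      "${KIBANA_URL}:${KIBANA_PORT}/api/saved_objects/_export" \
      -H "kbn-xsrf: true" \
      -H "Content-Type: application/json" \
      -d "{\"objects\": [{\"type\": \"index-pattern\", \"id\": \"${INDEX_ID}\"}]}" \
      -o index-pattern.ndjson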

You can get your index ID by navigating to the index pattern’s page in Kibana; you’ll see the ID in the address bar as part of the URI.

You should get NDJSON output. Save it to a file; you’ll use it later.

Export dashboards using Kibana API

Now, after exporting the index-pattern, we’d like to export our dashboards in a similar way (make sure you replace ${KIBANA_URL}, ${KIBANA_PORT}, user:pwd, and lastly ${DASHBOARD_ID}):
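A sketch of the dashboard export. Here I assume the legacy dashboard export endpoint (GET /api/kibana/dashboards/export), which returns JSON and was still available in Kibana 7.x; alternatively, the saved objects export API shown above works with type ‘dashboard’:

    # Export the dashboard as JSON (dashboard.json is my placeholder file name)
    curl -u user:pwd -X GET \
      "${KIBANA_URL}:${KIBANA_PORT}/api/kibana/dashboards/export?dashboard=${DASHBOARD_ID}" \
      -o dashboard.json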

You can get the ID by navigating to the dashboard in the Kibana UI; you’ll find it in the address bar as part of the URI.

Import index-pattern using Kibana API

After exporting your index-pattern with the command above, you can now import it into the target Kibana using the following command:
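A sketch, assuming the saved objects import API, which expects the NDJSON file as a multipart form field (overwrite=true replaces an existing object with the same ID):

    # Import the NDJSON file; --form sends it as multipart/form-data
    curl -u user:pwd -X POST \
      "${KIBANA_URL}:${KIBANA_PORT}/api/saved_objects/_import?overwrite=true" \
      -H "kbn-xsrf: true" \
      --form file=@index-pattern.ndjson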

index-pattern.ndjson is the file you exported the index-pattern to earlier.

Import dashboards using Kibana API

Of course, you’d like to show some data in dashboards, and fortunately you have the dashboard you exported in the steps above. Now you just need to run the command:
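A sketch, again assuming the legacy dashboard import endpoint (POST /api/kibana/dashboards/import), which accepts the JSON produced by the matching export endpoint:

    # Import the dashboard JSON exported earlier
    curl -u user:pwd -X POST \
      "${KIBANA_URL}:${KIBANA_PORT}/api/kibana/dashboards/import" \
      -H "kbn-xsrf: true" \
      -H "Content-Type: application/json" \
      -d @dashboard.json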

*Optionally, you can run the curl command with -d ‘<JSON_DATA>’ (the raw JSON inline) instead of a file path.

Import as a part of your Helm chart

So the exciting part is here: sometimes we’d like to import index-patterns and dashboards as part of the Helm chart itself, or perhaps as a post-process that runs after the Helm chart installation. This can be done with a Kubernetes Job managed by Helm (for example, as a post-install hook). We’ll also use a Kubernetes ConfigMap to bring our objects’ JSON data into the container.

Dashboards & index-patterns as ConfigMap

Firstly, we will create a ConfigMap which includes:

  1. A shell script that curls the Kibana API (to import the index-pattern and the dashboards)
  2. A JSON file of the dashboard
  3. An NDJSON file of the index-pattern
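A sketch of such a ConfigMap, with my own placeholder names (kibana-objects, import.sh, the /opt/kibana-objects mount path) and the same API endpoints as above; the dashboard and index-pattern entries hold the files you exported earlier:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kibana-objects
    data:
      import.sh: |
        #!/bin/sh
        # Import the index-pattern (NDJSON) through the saved objects API
        curl -u user:pwd -X POST \
          "${KIBANA_URL}:${KIBANA_PORT}/api/saved_objects/_import?overwrite=true" \
          -H "kbn-xsrf: true" \
          --form file=@/opt/kibana-objects/index-pattern.ndjson
        # Import the dashboard (JSON) through the legacy dashboards API
        curl -u user:pwd -X POST \
          "${KIBANA_URL}:${KIBANA_PORT}/api/kibana/dashboards/import" \
          -H "kbn-xsrf: true" \
          -H "Content-Type: application/json" \
          -d @/opt/kibana-objects/dashboard.json
      dashboard.json: |
        ...paste the dashboard JSON you exported earlier...
      index-pattern.ndjson: |
        ...paste the NDJSON you exported earlier...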

The next step is to mount this ConfigMap as three different files inside the Helm Job’s container. Follow me.

Helm job as your handler!

We’ll now use a Helm job in order to:

  1. Mount this ConfigMap into a container
  2. Run the shell script that sends the API requests to Kibana

Please note:

  • Obviously, you need a ready Kubernetes cluster.
  • You can add the ConfigMap and the Job to your EFK stack Helm chart, or simply apply them directly to your cluster; the choice depends on your needs.

This Job will run a single pod that terminates right after it finishes its task: importing the Kibana index-pattern and dashboards.
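A sketch of such a Job. The image, the in-cluster Kibana address, and the hook annotation are assumptions of mine; the helm.sh/hook annotation is only needed if you want Helm to run the Job as a post-install hook rather than as a regular chart resource:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kibana-objects-import
      annotations:
        "helm.sh/hook": post-install   # optional: run after the chart installs
    spec:
      backoffLimit: 3
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: import
              image: curlimages/curl   # assumption: any small image with sh and curl
              command: ["sh", "/opt/kibana-objects/import.sh"]
              env:
                - name: KIBANA_URL
                  value: "http://kibana"   # assumption: in-cluster Kibana service name
                - name: KIBANA_PORT
                  value: "5601"
              volumeMounts:
                - name: kibana-objects
                  mountPath: /opt/kibana-objects
          volumes:
            - name: kibana-objects
              configMap:
                name: kibana-objects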

Summing up

To sum up, this is a very useful way to add import functionality to your Helm chart and let your developers control their built-in dashboards and index-patterns from the ConfigMap. You give them the control, but do the import process for them using the Helm Job described above. You can also decide how you want to run this Job: in my case, I added it to my ECK Helm chart, so when I deployed the EFK stack I already had built-in dashboards and an index-pattern in place, but you can obviously decide how to use it.
