Introduction to Kibana — as a dev tool

Arun Mohan
Published in elasticsearch · 6 min read · Oct 5, 2019

Phase 03 — Kibana 7.x, installation, basics — Blog 10

It has been almost 2 years since I last wrote a blog on Medium about the ELK stack, and a lot has changed during this time. Right from its name, from the ELK stack to the Elastic stack, almost every member of the stack has received an update.
But the change that mattered most to me was my shift away from the elasticsearch-head plugin for day-to-day operations in Elasticsearch. I have completely moved from the head plugin to the world of Kibana.
In the last 2 years, from Kibana 5.x to Kibana 7.x, much has changed, and these are the changes users and developers were dying to see. They range from simple JSON prettifying in the dev tools to bulk data indexing via Kibana (by supplying a JSON or CSV file).

So let me introduce the basic and most useful features of the power-packed Kibana 7.x in this series, starting right from the installation and setup.

Installation

The installation described here is only for the Ubuntu flavor of Linux.
Step 1 — Download the setup file
You can download the Debian installation file from this link.
Here we are using version 7.2.0 of Kibana.
Also, please make sure you have Elasticsearch version 7.2.0 or above installed on your system. The Elasticsearch installation is pretty much the same as we have seen in my earlier blog.
Via the command line, you can just type the following:

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-amd64.deb

Step 2 — Install the Debian package

sudo dpkg -i kibana-7.2.0-amd64.deb

Step 3 — Configure the kibana.yml file
By default, the configuration file for Kibana is located at

/etc/kibana/kibana.yml

In this file, we can specify many settings, such as the Elasticsearch host and the port on which Kibana runs.
By default, the Elasticsearch host address is localhost and Kibana runs on port 5601. So if you are running Elasticsearch on the same machine as Kibana, you should not need to change anything.
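For reference, here is a minimal sketch of a kibana.yml with those defaults made explicit. Both values shown are the out-of-the-box defaults, so editing this file is only necessary if you want to change them:

```yaml
# Port on which the Kibana server listens (default: 5601)
server.port: 5601

# Elasticsearch instance(s) Kibana should connect to
# (default: http://localhost:9200)
elasticsearch.hosts: ["http://localhost:9200"]
```

If your Elasticsearch runs on a different host or port, update elasticsearch.hosts accordingly and restart Kibana for the change to take effect.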

Step 4 — Start Kibana
You can start the Kibana service using the command below:

sudo service kibana start

After this, go to your browser and visit localhost:5601. Kibana will load, and the landing page will appear as below:

In the above page, as of now, we are interested in only two sections.
1. The Data Loading Section
2. Dev tools section

We are restricting our areas of interest to only the above sections because, in Phase 03 and Phase 04 of this blog series, we are not delving into the details of how visualizations and dashboards are created. Rather, we will be working on indexing/loading sample data into Elasticsearch and querying the indexed data.
The other sections of Kibana will be explored after Phase 04 in much more detail.

The Dev Tools Section

The dev tools section in Kibana serves pretty much the same purpose as the elasticsearch-head plugin we have seen before, but with greater flexibility and additional features. Let us create an index named testindex from the dev console. Open the dev console (by clicking box 2) and type in the following:

PUT testindex

This will look as below in the dev console

After typing in the PUT request, press the play button (indicated by box 1) in the above image. The index will be created, and the response will appear in the right section, marked as red box 2.
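If the index is created successfully, the response in the right panel should look roughly like this (an acknowledgement from Elasticsearch; the exact formatting may vary slightly by version):

```
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "testindex"
}
```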

Likewise, in the console we can try most of the REST APIs of Elasticsearch. We will be dealing primarily with the query APIs in the next two phases.
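As a small taste of what the console can do, here are a few other standard Elasticsearch requests you can try from the same place (shown purely as illustrative examples):

```
# List all indices with basic stats
GET _cat/indices?v

# Check the overall cluster health
GET _cluster/health

# Delete the test index when you are done with it
DELETE testindex
```

The console auto-completes API paths and request bodies as you type, which makes it much more convenient than crafting curl commands by hand.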

The data loading section

In the above picture, click on box 1, which says “Import a CSV, NDJSON, or log file”, and a screen like the one below will appear:

Now download the sample data from here and upload it using the above screen. After that, the next screen will look like below:

After pressing “Import” on the above screen, the next screen will ask for the name of the index into which the data should be loaded, as below:

In the screen shown on the left, select the “Advanced” tab in order to edit the mapping.
Since our sample data contains a date field, it would be helpful to change its mapping, which can be done in the “Advanced” section. Upon clicking the “Advanced” tab, the following screen will appear:

In the above screen, in the 1st section (red box 01), I have provided a unique index name (testindex-01); then, in the “Mappings” section, I have edited the mapping type of the field “joiningDate” to the type “date”. After this, click on the 3rd box, namely “Import”. This will start uploading the file and indexing the data. A progress bar like the one below will indicate the completion of the data indexing process.
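To illustrate the kind of edit made above, the relevant part of the mappings object for the date field would look something like this (only “joiningDate” is the field changed in this walkthrough; any other fields in the sample data would appear alongside it with their auto-detected types):

```
{
  "joiningDate": {
    "type": "date"
  }
}
```

Mapping the field as “date” instead of leaving it as text is what later allows date-based range queries and time-series visualizations on this field.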

Now, from the left navigation bar, click “Dev tools”, type the query as shown below, and press the play button to run it; in the response we can see the indexed data.

The query in the left panel is a simple search request, which will return up to 10 documents from the index. The response in the right panel shows the indexed documents; the red box marks one such document. The indexed document itself is the object under the “_source” field. The “_index”, “_type”, “_id”, and “_score” fields are called metadata.
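For reference, the request typed in the left panel is just the default search on our new index:

```
GET testindex-01/_search
```

and a single hit in the response has the shape sketched below. The field values here are illustrative placeholders, not the actual sample data; the point is the separation between the metadata fields and the original document under “_source”:

```
{
  "_index" : "testindex-01",
  "_type" : "_doc",
  "_id" : "abc123",
  "_score" : 1.0,
  "_source" : {
    "name" : "John",
    "joiningDate" : "2019-01-15"
  }
}
```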

Conclusion

In this blog, we have seen how to install and run Kibana and how to execute simple requests with it. We have also seen how to load CSV data using the Kibana console. Now, I reiterate: these two make up only 10–20% of Kibana usage; the other 80% is building visualizations/dashboards, which will be covered later.

The 2 sections we saw will be helpful for the blogs on queries and many other APIs coming in the next phases. So let's gear up for the next lessons on queries and aggregations.
