Introducing Kibana

Franco Martin
Getting started with the ELK Stack
6 min read · Oct 20, 2020

If you are following along with the Getting started with the ELK Stack series, by now you should have three Elasticsearch nodes with Filebeat and Metricbeat feeding data into your cluster.

In this write-up I will give you a brief introduction to what Kibana is and what we can use it for. Then we will install Open Distro for Elasticsearch’s (ODFE) version of Kibana and create a basic monitoring dashboard.

What is Kibana?

Kibana is a web-based user interface that we can leverage to visualize data; it connects to an Elasticsearch cluster and queries the data for us. Kibana will also help us in the future to set up alerting, index policies and templates.

Requirements

Kibana’s requirements are much more forgiving than Elasticsearch’s. We will create a single Ubuntu 20.04 instance with 1 vCPU and 1 GB of memory.

Installing Kibana

In the Other useful information section, you will find a link to Amazon’s Debian package installation instructions. Just make sure you do not start the service yet.
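For reference, once the repository from the linked instructions is configured, the install usually boils down to something like this. The package name here is taken from the ODFE documentation and may have changed, so treat it as a sketch and defer to the linked instructions:

    # Install the ODFE Kibana package (repository setup per the linked instructions)
    sudo apt update
    sudo apt install opendistroforelasticsearch-kibana
    # Do not start the service yet; we still need to configure it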

Configuring Kibana

Just a few minor modifications to /etc/kibana/kibana.yml will let us connect to the Elasticsearch cluster.

ODFE sets basically everything for us, albeit not in the most secure way. We only need to set the “elasticsearch.hosts” key to the same YAML array we used for Metricbeat, and the “server.host” key to “0.0.0.0” so Kibana will listen on all interfaces. Once you are done, go ahead and run systemctl start kibana to start the service.
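Here is a minimal sketch of the relevant part of /etc/kibana/kibana.yml. The node addresses are placeholders; use the same three hosts you configured for Metricbeat:

    # /etc/kibana/kibana.yml
    # Listen on all interfaces so the UI is reachable from outside the instance
    server.host: "0.0.0.0"
    # Same array of nodes you used in the Metricbeat configuration (placeholder IPs)
    elasticsearch.hosts: ["https://10.0.0.1:9200", "https://10.0.0.2:9200", "https://10.0.0.3:9200"]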

If everything went well, you should be able to browse to the server’s IP address on port 5601 and log in with admin as both the username and the password.
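If you would rather check from the terminal first, a request like this should print a 200 status code once Kibana is up (placeholder IP, default ODFE credentials):

    curl -s -o /dev/null -w "%{http_code}\n" -u admin:admin http://10.0.0.4:5601/api/status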

Creating index patterns

Alright, now that the boring part is over, let’s start playing with our data. Index patterns allow us to query data that belongs to indices whose names share a common pattern. We will discuss later why using a single index is a bad idea, but now is not the time.

In the top left corner there’s a “hamburger” icon that displays all the sections in Kibana.

Follow these steps to create an index pattern:

· Navigate to the “Stack Management” section

· Click on the “Index Patterns” option

· Click on the “Create index pattern” button

· Type “metricbeat-*” in the text box (check that a green success message is shown; if it isn’t, Metricbeat is not running properly) and click on “Next step”

· Select @timestamp from the dropdown and click on “Create index pattern”
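As an aside, you can create the same index pattern from the command line through Kibana’s saved objects API. A hedged sketch, assuming the default ODFE credentials and a placeholder server IP:

    # Create the metricbeat-* index pattern with @timestamp as its time field
    curl -X POST "http://10.0.0.4:5601/api/saved_objects/index-pattern" \
      -u admin:admin \
      -H "kbn-xsrf: true" \
      -H "Content-Type: application/json" \
      -d '{"attributes": {"title": "metricbeat-*", "timeFieldName": "@timestamp"}}'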

Visualizing the raw data

Kibana now knows which indices you are interested in. Now let’s see what data we have so we can turn it into information.

Go to the Discover section. Since we only have one index pattern, you will be seeing all your Metricbeat data.

On the left side of your screen you will see all the fields present in your indices. If you click on a field, you’ll get a small list of its most common values and the percentage of documents that contain each value. At the top of the list there is a search box to look for specific fields.

These fields are the ones we are going to use to create visualizations.

Creating a metric visualization

The Discover section is good for troubleshooting and for deciding what is useful and what is not, but it is not the best way to present data.

For that, let’s create a visualization. Go to the Visualize section and click on the “Create visualization” button. For the first visualization we will stick to the basics: a metric visualization that shows us the number of hosts that are sending information to Elasticsearch.

After clicking the “metric” visualization type, select the “metricbeat-*” index pattern. At first, you’ll see a big number representing the number of documents that match your index pattern within the timeframe defined in the top right corner.

To display the number of hosts, we need to change the metric. Right now we only have one, which uses the “count” aggregation; change it to “unique count”. This will add a “field” dropdown where you will select “host.name” (you can type into that dropdown to narrow the list down). After you pick the desired field, click on “Update” at the bottom right corner. The visualization should display 3 now. Go ahead and click on “Save” at the top left corner and give your visualization a name.
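Under the hood, “unique count” maps to Elasticsearch’s cardinality aggregation. This is roughly the query Kibana builds, shown here against a placeholder node with the default ODFE credentials (-k skips verification of the self-signed demo certificates):

    curl -k -u admin:admin "https://10.0.0.1:9200/metricbeat-*/_search" \
      -H "Content-Type: application/json" \
      -d '{
        "size": 0,
        "aggs": {
          "unique_hosts": { "cardinality": { "field": "host.name" } }
        }
      }'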

If you see a lower number, you can go back to the Discover section and look at the values in the “host.name” field. There you will see which hosts are missing.

Metric visualization with 3 hosts

Creating a line visualization

Now let’s create another visualization, a more complicated one.

Go back to the Visualize section and create a line visualization using the same index pattern. For this visualization we want to see the history of CPU usage over a certain period.

To begin, let’s put dates on our X-axis. For this we will need bucket aggregations, which separate documents into buckets based on a criterion. Then we will use metrics to aggregate the data inside each bucket and create the datapoints for our graph.

To add a bucket aggregation, go to the Buckets section on the right side of the screen and click on Add. Click on “X-axis”, select “Date Histogram” as the aggregation type, then click on Update. The visualization will change to display the number of documents per bucket, using the default interval chosen by the Date Histogram aggregation.

Now let’s change the count metric to display the average CPU usage. Scroll up and click on the “Y-axis” metric to expand its contents. Change the aggregation to “average”; that will add a field dropdown where you will select “system.cpu.total.norm.pct” (remember you can type the name of the field). If you click on Update, your graph will now display a single line. Each datapoint in that line is the average CPU usage in one bucket. Now that I think about it, this is not enough: we need to know the average CPU usage per host. To do so, we need to split our buckets for every host.

Go back to the Buckets section and add another bucket, but this time select “Split series” and choose the “Terms” aggregation to create a bucket for every term in the “host.hostname” field. When you are done, click “Update” and, if you are happy with what you see, save the visualization.
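Again, this is roughly the query behind the finished visualization: a terms split with a date_histogram inside, and an avg metric at the bottom. The 30-second interval is an assumption; Kibana picks one automatically based on the selected time range:

    curl -k -u admin:admin "https://10.0.0.1:9200/metricbeat-*/_search" \
      -H "Content-Type: application/json" \
      -d '{
        "size": 0,
        "aggs": {
          "per_host": {
            "terms": { "field": "host.hostname" },
            "aggs": {
              "over_time": {
                "date_histogram": { "field": "@timestamp", "fixed_interval": "30s" },
                "aggs": {
                  "avg_cpu": { "avg": { "field": "system.cpu.total.norm.pct" } }
                }
              }
            }
          }
        }
      }'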

Line visualization

Now it’s time to get creative

You may have noticed we only used a couple of visualization types and only half of our data. We didn’t even touch the Filebeat indices! Go ahead and create the “filebeat-*” index pattern and get some visualizations going. Alternatively, you can create other visualizations using the metricbeat index pattern. I suggest trying memory usage, and filesystem usage if you want to go the extra mile.

Memory usage is stored in the “system.memory.actual.used.pct” field and filesystem usage in the “system.filesystem.used.pct” field. I also suggest creating another “Split series” aggregation using the terms in the “system.filesystem.mount_point” field so you can visualize how much occupied space you have in the “/” filesystem per host.
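If you only care about the root filesystem, the equivalent query can also filter on the mount point before aggregating. A sketch, with the same placeholder node and credentials as before:

    curl -k -u admin:admin "https://10.0.0.1:9200/metricbeat-*/_search" \
      -H "Content-Type: application/json" \
      -d '{
        "size": 0,
        "query": { "term": { "system.filesystem.mount_point": "/" } },
        "aggs": {
          "per_host": {
            "terms": { "field": "host.hostname" },
            "aggs": {
              "used_pct": { "avg": { "field": "system.filesystem.used.pct" } }
            }
          }
        }
      }'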

Grouping visualizations into a dashboard

Visualizations are cool, but they don’t necessarily show the whole picture. Dashboards give us the ability to display data from different sources (indices) side by side. For example, I want to see if the number of errors in the Filebeat indices over time correlates with the CPU or memory usage stored in the Metricbeat indices.

To create a dashboard, go to the Dashboards section and click on the “Create dashboard” button. In the top left corner you will have an “Add” button to add existing visualizations. Click on both of your visualizations to add them to the dashboard. Once they are added, you can resize them to fit your needs.

Once you are happy with the results, go ahead and save the dashboard with a descriptive title.

Dashboard with CPU, Memory, Network and Filesystem Usage

Next Steps

In the next article we will set up alerts that notify Slack based on Elasticsearch queries.


Other useful information

Installing Kibana (Amazon’s Debian package installation instructions)
