Why and how of centralized logging with ELK stack for Rasa - Part 1.

Simran Kaur Kahlon · Gray Matrix · Nov 29, 2020 · 4 min read

(Image source: Logz.io)

ELK is a big topic, so I am dividing it into a 3-part series, but I promise to keep each part short and to the point.

ELK stack gives you the ability to aggregate logs from all your systems and applications, analyze these logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.

So let's start by understanding what each letter in ELK stands for, looking at each piece as an individual component and also at where it fits in the broader picture.

E: Elasticsearch

  • Elasticsearch is a search and analytics engine. It is an open-source, distributed, RESTful, JSON-based search engine that is easy to use, scalable, and flexible. In simpler terms, it's where we store logs.
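To give you a feel for the "RESTful, JSON-based" part, here is a minimal sketch of talking to Elasticsearch with curl. It assumes Elasticsearch is running locally on its default port 9200; the logfile index and the document fields are just examples, not from this setup:

```bash
# Store a log entry as a JSON document in a hypothetical "logfile" index
curl -X POST "localhost:9200/logfile/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"@timestamp": "2020-11-29T10:00:00Z", "level": "INFO", "message": "User logged in"}'

# Search it back out
curl -X GET "localhost:9200/logfile/_search?q=message:logged"
```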

L: Logstash

  • Logstash is a server-side data processing pipeline that ingests data from multiple sources, transforms it, and sends it to Elasticsearch.
  • So, you get your logs, massage them, apply filters, and send them on to Elasticsearch (a minimal pipeline sketch follows below).
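To make the "ingest, transform, send" idea concrete, here is a minimal Logstash pipeline sketch. The port, grok pattern, and Elasticsearch host are placeholder assumptions for illustration, not the exact pipeline used in this series:

```conf
# logstash.conf — a minimal sketch, not a complete production pipeline
input {
  beats {
    port => 5044              # listen for events shipped by Filebeat
  }
}

filter {
  grok {
    # assumption: log lines look like "2020-11-29 10:00:00 INFO User logged in"
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:level} %{GREEDYDATA:log_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder Elasticsearch host
    index => "logfile"            # optional: name the index explicitly
  }
}
```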

K: Kibana

  • Kibana is a data visualization dashboard for Elasticsearch. It provides us with data visualization capabilities on top of the content indexed on an Elasticsearch cluster.

Still, there’s one more component left, which I would like to talk about. It goes by the name of Beats.

Beats are lightweight shippers used for gathering logs. They sit on your servers, with your containers, or deploy as functions, and then centralize logs in Elasticsearch. Beats can send logs directly to Elasticsearch, but if you want more processing muscle, they can forward them to Logstash for transformation and parsing before passing them on to Elasticsearch.

I think I have covered it all, so now let's move on to actually trying it out and seeing how the pieces work together.

I have used all 4 components. Filebeat is installed on the servers from which I need to collect logs; the other 3 components are installed on a single server.

So the flow is:

Filebeat -> Logstash -> Elasticsearch -> Kibana

Assuming you have everything installed, the first example I'll share with you is collecting logs from a single file.

  • Edit the /etc/filebeat/filebeat.yml file to enter the path of the log file you want to collect.
  • Next, edit the Logstash output section in the same file, since we first want Filebeat to publish data to Logstash.
  • Specify the host details of where your Logstash instance is running.
  • If you want your Elasticsearch index to have a specific name, you can add an index setting (see the sketch after this list).
  • If you don’t specify an index value, a default index named filebeat will be created.
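Putting those points together, a minimal filebeat.yml sketch could look like this. The log path, Logstash host, and index name are placeholders you would replace with your own:

```yaml
# /etc/filebeat/filebeat.yml — a minimal sketch, not a complete config
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/logfile.log    # placeholder: path of the log file to ship

# Point Filebeat at Logstash instead of Elasticsearch.
output.logstash:
  hosts: ["10.0.0.5:5044"]            # placeholder: host:port where Logstash is listening
  index: "logfile"                    # optional: index root name; defaults to "filebeat" if omitted
```

Remember to comment out the default output.elasticsearch section, since Filebeat allows only one output to be enabled at a time.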

Now you can create an index pattern using Kibana and see your logs.

Index patterns tell Kibana which Elasticsearch indices you want to explore. An index pattern can match the name of a single index, or it can also include a wildcard (*) to match multiple indices.

  • Go to Kibana -> Index Patterns and click on Create Index Pattern.
  • You can see that an index named logfile has been created, so we create an index pattern for it as log*.
  • Next, we select a primary field that can serve as a global time filter. For me, it is the timestamp field.

You are now ready to view your logs by going to Kibana -> Discover.

Next up in this series, I will share how to collect logs from multiple files and how to send them to different indexes.

Please get in touch in case of any queries.

Thanks.
