Why and how of centralized logging with ELK stack for Rasa - Part 2.

Simran Kaur Kahlon · Gray Matrix · Nov 29, 2020

(Image source: Logz.io)

In Part 1 of this series, I covered the what and how of the ELK stack, and we followed the log flow from Filebeat to Kibana.

This article is more technical: it covers how we can ship logs from two or more locations and give each of them a different index name.

I have implemented logging for Rasa. Rasa is an open-source machine learning framework for automating text- and voice-based assistants.

Rasa has two components, the Action server and the Core server, and each of them writes logs to a different location.

In addition to this, we want them to be indexed as:

  • Action: rasa-action
  • Core: rasa-core

Also, we will create an extra field called ‘component’ that will be available to us in Kibana for further filtering.

Let’s get started.

  • We begin by editing the /etc/filebeat/filebeat.yml file to specify the log locations.
  • I added a component key under fields for each of the two log inputs. With that in place, I no longer need to set an index field in the Logstash output section of this file.
  • We just specify the host details of Logstash, as in the sketch below.
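
Here is a minimal sketch of the relevant filebeat.yml sections. The log paths and the Logstash host are placeholders, not Rasa defaults; substitute the locations where your Core and Action servers actually write their logs:

```yaml
filebeat.inputs:
  # Rasa Core server logs (path is illustrative)
  - type: log
    enabled: true
    paths:
      - /var/log/rasa/core/*.log
    fields:
      component: rasa-core

  # Rasa Action server logs (path is illustrative)
  - type: log
    enabled: true
    paths:
      - /var/log/rasa/action/*.log
    fields:
      component: rasa-action

# No index setting here; the index name is decided in Logstash
output.logstash:
  hosts: ["localhost:5044"]
```

Because fields_under_root is left at its default (false), the custom key lands under the fields namespace, which is why it shows up as fields.component in Kibana later on.
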
  • Up next, we edit /etc/logstash/conf.d/30-elasticsearch-output.conf to handle the index change: the index name must now be picked from the component key we just defined in filebeat.yml.
  • In the second “if” condition, we check whether the fields.component field exists. If it does, we build the index name from the component’s value and the Filebeat version, as in the sketch below.
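
Here is a simplified sketch of the output section, showing only the conditional that matters here. The Elasticsearch host is an assumption; adapt it to your setup:

```conf
output {
  if [fields][component] {
    # Events from our Rasa inputs: index named after the component,
    # e.g. rasa-core-7.9.3-2020.11.29
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[fields][component]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  } else {
    # Everything else keeps the default Filebeat-style index name
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```

Restart Logstash (and Filebeat) after the change so the new pipeline takes effect.
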
  • Next, we create the index pattern in Kibana and check the logs with our additional component field.
  • The index pattern is created as rasa* so that it covers the logs of both action and core.
  • Select the timestamp field as the global time filter and create the index pattern.
  • Let's visualize the logs now.
  • Adding the “fields.component” column to the Discover view shows, for each log entry, which component it came from.

So that's it: we are now shipping logs from multiple locations into Elasticsearch under different indexes.
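To verify, you can ask Elasticsearch to list the matching indexes via its cat API (assuming a local instance):

```
curl "localhost:9200/_cat/indices/rasa*?v"
```

Both the rasa-action-* and rasa-core-* indexes should appear once each component has written at least one log line.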

As a follow-up to this article and the last one, I will write about Logstash filters: how to add more fields, as we did above, and how to use the actual log timestamp as the time filter instead of the time at which the logs get stored in Elasticsearch.

Please get in touch in case of any queries.

Thanks.
