Setting up an ELK logger and alert system in Node.js

Sankalpa Timilsina · Published in The Startup · 5 min read · Jul 9, 2019

After several hours of research, I couldn't find a single article that covered the full setup for such a logging pipeline. Although I did find a pretty good article, it still didn't cover all the configuration needed in the packages it uses. So, after working through the hassle myself, I thought it might be helpful to write an article on the topic.

Before diving into the procedure, let’s briefly discuss the packages we will be using. The best part is that all of them are free.

  • Filebeat: This is a data shipper. It acts as a watcher for our log files. Whenever something new is added to our logs, Filebeat transports the log to Logstash for processing.
  • Logstash: This tool accepts logs from Filebeat, processes/transforms them, and then feeds the output to Elasticsearch for indexing.
  • Elasticsearch: This is a database that stores our logs from Logstash.
  • Kibana: A visualization tool for our Elasticsearch data. It allows you to query the data, build graphs, and do a lot of other fancy stuff.
  • Elastalert: Finally, this is an alert/notification system. You can configure it to monitor changes to Elasticsearch data based on your patterns of interest and send alert messages via email, Slack, and many other channels.

Below is an illustration that describes this flow:

The packages mentioned above will serve as the services running continuously on our system. Therefore, make sure to download and install the packages before proceeding further. We will explore how to configure them shortly.

In our Node.js server, we need to set up a logger that logs to a file. You can choose any logger you prefer, such as Winston or Bunyan. In this article, I will be using Bunyan.
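
To make this concrete, here is a minimal sketch of a Bunyan logger that writes to a file. The app name, file path, and logger.js module name are placeholders for illustration, not part of any prescribed setup:

// logger.js -- a minimal Bunyan logger writing to a file (illustrative names and paths)
const bunyan = require('bunyan');

const log = bunyan.createLogger({
  name: 'myapp',                         // appears as "name" in every log line
  streams: [
    {
      level: 'info',                     // log info and above
      path: '/path/to/your/logs/app.log' // the same file Filebeat will watch later
    }
  ]
});

module.exports = log;

// elsewhere in the app:
// const log = require('./logger');
// log.error('This is an error!');       // produces a level-50 line like the one shown below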

By default, the Bunyan log line will look something like this:

{"name":"myapp","hostname":"DESKTOP-LS5NT1L","pid":8712,"level":50,"msg":"This is an error!","time":"2019-07-08T14:40:21.220Z","v":0}

I show this line because we will process this exact output in Logstash, so it will serve as a reference. This completes the Node.js part; that's it, nothing more. Essentially, one or more files will contain lines like the above, and we will monitor them with Filebeat.

Now, we will configure the packages we downloaded earlier.

Note: We are setting up everything on localhost, so we are only adding the minimal configurations needed. We are assuming the default port configurations for all services and are not changing any of them. If this does not apply to your case, make sure to adjust the configurations, such as hosts, ports, and SSL, in the files accordingly.

  • Filebeat: We will configure it to watch the log file we generated. Inside your Filebeat package, edit filebeat.yml. Under the inputs section, make your configuration similar to the following:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/your/logs # this is your logs path to watch

Now, under the output section, comment out the Elasticsearch output and enable Logstash output.

#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["localhost:5044"]

Then fire up Filebeat using:

cd path/to/filebeat/dir
filebeat -c filebeat.yml -e

That’s it for Filebeat. Now it is watching our logs.

  • Logstash: For simplicity, in /path/to/logstash/bin, create a file called logstash.conf and add the following contents:
input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
    target => "message"
  }
  translate {
    field => "[message][level]"
    destination => "[message][level]"
    dictionary => {
      "10" => "trace"
      "20" => "debug"
      "30" => "info"
      "40" => "warn"
      "50" => "error"
      "60" => "fatal"
    }
    override => true
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "filebeat"
  }
  stdout { codec => rubydebug }
}

The configuration is fairly straightforward. We take input from Filebeat and output to Elasticsearch. In between, I am using the json and translate filters, which may not be necessary if your log format differs from mine. Here, I am simply parsing the JSON and mapping the numeric level values to their corresponding string names. You might use other filters, such as grok. The essential purpose of a filter is to transform the data to suit your needs. Now, start Logstash using:

cd path/to/logstash/bin/dir
logstash -f logstash.conf
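
For reference, here is roughly what the indexed document should contain after these filters run, derived from the sample Bunyan line shown earlier (Filebeat and Logstash add their own metadata fields alongside this; they are omitted here):

{
  "message": {
    "name": "myapp",
    "hostname": "DESKTOP-LS5NT1L",
    "pid": 8712,
    "level": "error",
    "msg": "This is an error!",
    "time": "2019-07-08T14:40:21.220Z",
    "v": 0
  }
}

The message.level field holding the string "error" is the one we will filter on later, both in Kibana and in the Elastalert rule.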

We are done with Logstash as well.

  • Elasticsearch and Kibana: These two packages require no further configuration. So, go ahead and start them up.

Now, we will check if everything is working correctly. Go ahead, trigger an error in your Node.js app and let the logger record it to your file. This log should now be tracked by Filebeat, forwarded to Logstash, and finally pushed to Elasticsearch for indexing. If you navigate to localhost:9200/filebeat/_search in your browser, you should see that your log has been indexed! Don't be hasty; it might take a few seconds for your data to appear in Elasticsearch.
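
If you prefer checking from code rather than the browser, here is a small sketch that queries the same index for error-level entries. It assumes Node 18+ (for the built-in fetch) and the message.level field produced by the translate filter; this is just an optional convenience, not part of the setup itself:

// check-logs.js -- query the filebeat index for error-level entries (assumes Node 18+ with built-in fetch)
const query = {
  query: {
    match: { 'message.level': 'error' } // the string level produced by the translate filter
  }
};

fetch('http://localhost:9200/filebeat/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(query)
})
  .then((res) => res.json())
  .then((data) => {
    console.log('Total matches:', JSON.stringify(data.hits.total));
    data.hits.hits.forEach((hit) => console.log(hit._source.message));
  })
  .catch(console.error);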

Now, you can query this data in Kibana for detailed processing and visualizations. This is incredibly useful for filtering your data, for example, by log levels, to identify which entries are fatal errors, warnings, and so on.

Figure: Filtering the log by message level in Kibana

This completes the Kibana section as well. You can delve deeper and create more filters, visualizations, and so on.

  • Elastalert: Now we move on to the sweet alert system. In this article, we will set up alerts for Slack. However, you can customize it for other notification channels, such as email. Inside your Elastalert package, create a folder called rules. This is where we will store our settings for different alerts. Also, make a copy of the file config.yaml.example in the same directory and rename it to config.yaml. We will modify this file for the base-level configuration of Elastalert. Now, open config.yaml and adjust it to look similar to the following:

rules_folder: rules
run_every:
  minutes: 1
es_host: localhost
es_port: 9200

You can leave the other configurations as they are. We are setting it up to check Elasticsearch every minute and evaluate the rules in the rules folder against the new data. If a rule finds a match, it triggers the configured alert. Now, inside the rules folder, create a file called slack.yaml and insert the following contents into it:

name: Slack rule
type: frequency
index: filebeat
num_events: 1
timeframe:
  minutes: 1
filter:
- term:
    "message.level": "error"
alert:
- "slack"
slack_webhook_url: "<webhook-url-of-the-slack-channel>"
slack_channel_override: "#<channel-name>"
slack_username_override: "@<user-name>"

Here, we are configuring it to send a Slack alert if we have any message in our logs with a level of error. Now, start the Elastalert service using:

cd /path/to/Elastalert/dir
elastalert --verbose
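
To test the whole chain end to end, you can log an error from the Node.js app, using the illustrative logger module sketched earlier, and wait for the Slack message. With run_every and the rule timeframe both set to 1 minute, the alert should arrive within a minute or two:

// trigger-alert.js -- illustrative end-to-end test using the logger sketched earlier
const log = require('./logger'); // the Bunyan logger writing to the file Filebeat watches

log.error('This is an error!');  // level 50 -> translated to "error" -> matched by the Slack rule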

And that’s it, guys. We now have a complete logging system with an alert service. I hope this helped you. Thanks!
