Centralized log monitoring using Elastic Stack

Agnel Nandapurapu · Published in Tensult Blogs · 4 min read · Apr 16, 2019


Ref: https://bit.ly/2UFcnXn

I implemented this setup to monitor the logs of multiple EC2 instances in an AWS account. Every instance generates several logs at runtime: system logs, application logs, and so on. So how can we monitor all of these logs in one place? I couldn’t find an AWS service for centralized monitoring of instance logs, so I built a customized solution using the Elastic Stack and AWS services (Lambda and CloudWatch).

Note: To keep it simple and understandable, I use the apache2 server logs of a single instance, but the same process works for multiple instances with different logs.

Prerequisites:

  1. FileBeat is the main application for shipping and filtering the logs, so we need to install FileBeat on the instance. To download and install it, follow the steps here.
  2. We use Elasticsearch to store the filtered log data; here I use an AWS Elasticsearch cluster endpoint. To configure an AWS Elasticsearch cluster, follow the steps here.
  3. AWS Lambda is used to run the script that fetches error log data from the Elasticsearch index.

FileBeat Configuration:

FileBeat is one of the core applications in the Elastic Stack, and it ships logs to other Elastic Stack services such as Elasticsearch and Logstash. In my implementation, logs are shipped to Elasticsearch, which requires modifying the FileBeat configuration. filebeat.yml is the main configuration file of FileBeat; it lives in the FileBeat installation directory (for example, ~/filebeat-6.6.2-linux-x86_64/filebeat.yml).

Once you have opened the file, follow the steps below:

Step 1: Fetch logs from the system: We need to set the paths from which FileBeat should fetch logs. Below we set the path of the apache2 server logs, so FileBeat will pick up apache2 logs from there.
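A minimal sketch of that inputs section, assuming apache2 writes to the default /var/log/apache2 directory (adjust the path for your distribution):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # apache2 access and error logs (default location on Ubuntu/Debian)
    - /var/log/apache2/*.log
```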

Step 2: Filter fetched logs: FileBeat can ignore unwanted event messages from the logs. Processors perform actions according to the conditions defined under them; here we tell FileBeat to ignore messages that contain the string “HTTP/1.0\” 200".
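One way to express this in filebeat.yml is with the drop_event processor; the matched string here comes from the apache2 access-log line mentioned above:

```yaml
processors:
- drop_event:
    when:
      contains:
        # Drop successful HTTP/1.0 responses so only interesting events ship
        message: 'HTTP/1.0" 200'
```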

Step 3: Send logs to Elasticsearch: To send logs to Elasticsearch, we need to set up the Elasticsearch output section in filebeat.yml.
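A minimal sketch of that section; the endpoint and index name are placeholders for your own AWS Elasticsearch domain, and Filebeat 6.x also requires a template name and pattern whenever the index is overridden:

```yaml
output.elasticsearch:
  # AWS Elasticsearch cluster endpoint (placeholder -- use your domain's URL)
  hosts: ["https://my-domain.us-east-1.es.amazonaws.com:443"]
  index: "apache2-logs-%{+yyyy.MM.dd}"

# Required when a custom index name is used
setup.template.name: "apache2-logs"
setup.template.pattern: "apache2-logs-*"
```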

Once all configurations are done, start the FileBeat application with the below command.

~/filebeat-6.6.2-linux-x86_64$ ./filebeat -e -v -c filebeat.yml

Now FileBeat will start ingesting apache2 server logs into the Elasticsearch cluster index.

Now that the FileBeat configuration is done, we need to configure Lambda and CloudWatch to generate time-based CloudWatch metrics for error logs.

Lambda Configuration:

Lambda is a serverless computing platform for running applications, and it is used here to execute the script below. The script, written in JavaScript, fetches log data from Elasticsearch based on queries and publishes custom metric data to CloudWatch. Here I fetch error log data by checking the “response_code” field.
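A minimal sketch of such a handler in Node.js; the endpoint, index pattern, namespace, and metric name are placeholders, it treats 5xx response codes as errors, and it assumes the Elasticsearch domain’s access policy allows the Lambda role to call it directly (otherwise the request must be signed with SigV4):

```javascript
const https = require('https');
const AWS = require('aws-sdk');

const cloudwatch = new AWS.CloudWatch();
// Placeholder -- replace with your AWS Elasticsearch domain endpoint
const ES_ENDPOINT = 'my-domain.us-east-1.es.amazonaws.com';

// Count documents from the last five minutes whose response_code is 5xx
const query = JSON.stringify({
  query: {
    bool: {
      must: [
        { range: { response_code: { gte: 500 } } },
        { range: { '@timestamp': { gte: 'now-5m' } } }
      ]
    }
  }
});

exports.handler = async () => {
  // Ask Elasticsearch how many matching error documents exist
  const count = await new Promise((resolve, reject) => {
    const req = https.request({
      host: ES_ENDPOINT,
      path: '/apache2-logs-*/_count',
      method: 'POST',
      headers: { 'Content-Type': 'application/json' }
    }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(JSON.parse(body).count));
    });
    req.on('error', reject);
    req.end(query);
  });

  // Publish the error count as a custom CloudWatch metric
  await cloudwatch.putMetricData({
    Namespace: 'Apache2Logs',
    MetricData: [{
      MetricName: 'ErrorLogCount',
      Value: count,
      Unit: 'Count'
    }]
  }).promise();

  return { errorCount: count };
};
```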

Whenever Lambda executes the script, Elasticsearch returns the error log data and the script publishes custom metric data to CloudWatch.

Note: Attach a role to the Lambda function with the below policy to access Elasticsearch.
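A sketch of an IAM policy that grants the Elasticsearch read access the script needs, plus permission to publish the custom metric; the region, account ID, and domain name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["es:ESHttpGet", "es:ESHttpPost"],
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    }
  ]
}
```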

CloudWatch Configuration:

Lambda should execute the script every five minutes. To trigger the function on that schedule, configure a CloudWatch Events rule with the schedule expression rate(5 minutes). Please refer to the image below:

[Image: CloudWatch rule configuration]
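If you prefer the CLI to the console, the same rule can be created as follows; the rule and function names, region, and account ID are placeholders:

```bash
# Create a rule that fires every five minutes
aws events put-rule \
  --name error-log-metric-schedule \
  --schedule-expression "rate(5 minutes)"

# Allow CloudWatch Events to invoke the Lambda function
aws lambda add-permission \
  --function-name error-log-metric \
  --statement-id cloudwatch-events-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/error-log-metric-schedule

# Attach the function as the rule's target
aws events put-targets \
  --rule error-log-metric-schedule \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:error-log-metric"
```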

The script will now run continuously on that schedule and generate CloudWatch metric data, as shown in the image below. By setting up an alarm on this metric, we can get notifications whenever the error count crosses a threshold.

[Image: CloudWatch metric for error logs]
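For example, an alarm on the custom metric could be created like this; the namespace and metric name match the Lambda sketch above, and the threshold and SNS topic are placeholders:

```bash
# Alarm when 10 or more error logs are recorded in a 5-minute period
aws cloudwatch put-metric-alarm \
  --alarm-name apache2-error-logs \
  --namespace Apache2Logs \
  --metric-name ErrorLogCount \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 10 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:error-log-alerts
```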

Conclusion

This implementation was done in a single AWS account with one Elasticsearch cluster, and since we ship only the filtered data, it reduces unnecessary storage in Elasticsearch. So this is how we ship log data to Elasticsearch using FileBeat and generate metrics with Lambda. Let me know your thoughts in the comments section, and also try implementing this across multiple accounts.
