Centralized Logging System for Lambda Functions

Mohamed Labouardy
5 min read · Jan 23, 2019


Photo by Scott Webb on Unsplash

AWS customers have access to service-specific metrics and log files through Amazon CloudWatch, which gives insight into how each AWS service is operating. CloudWatch can serve as a centralized logging platform for smaller, less complex applications, but if your environment is large and complex you will be left with little choice but a third-party solution such as Splunk, Datadog, or the ELK (Elasticsearch, Logstash, Kibana) stack.

At Foxintelligence, we use the famous ELK stack as the logging platform for our Dockerized microservices, and CloudWatch Logs for AWS managed services (API Gateway, Lambda, Lambda@Edge, etc.). However, with the rise of serverless architecture, we needed a single place where we could troubleshoot and debug our Lambda functions, spun up in multiple AWS regions, using interactive and dynamic dashboards. Therefore, in this post, I will walk you through the process we followed to deliver a near real-time feed of logs from CloudWatch to ELK. The workflow is described in the schema below:

Real-time Lambda Logging with Amazon Kinesis, Amazon CloudWatch and AWS Lambda

We will be using CloudWatch subscriptions to get access to a real-time feed of log events from Lambda functions and have it delivered to Amazon Kinesis Data Streams. From there, an AWS Lambda function will be triggered for custom processing, analysis, and business logic (log enrichment) before loading the log data into Logstash.

To begin subscribing to log events, create the receiving source where the events will be delivered. Before you create the Kinesis stream, calculate the volume of log data (throughput) that will be generated. Be sure to create a Kinesis stream with enough shards to handle this volume. If the stream does not have enough shards, the log stream will be throttled:
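A minimal sketch with the AWS CLI; the stream name lambda-logs and the single shard are assumptions, so size the shard count to your own throughput:

```sh
# Create the Kinesis stream that will receive the log events
# (stream name and shard count are examples).
aws kinesis create-stream --stream-name lambda-logs --shard-count 1

# Wait until the stream is ACTIVE before subscribing anything to it.
aws kinesis describe-stream --stream-name lambda-logs \
  --query 'StreamDescription.StreamStatus'
```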

Next, create an IAM role that will grant CloudWatch Logs permissions to insert logs into your Kinesis stream:
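Something along these lines should work; the role name, region, and account ID below are placeholders:

```sh
# Trust policy letting the CloudWatch Logs service assume the role
# (replace the region with your own).
cat > trust-policy.json <<'EOF'
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.eu-west-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
EOF

aws iam create-role --role-name CWLtoKinesisRole \
  --assume-role-policy-document file://trust-policy.json
```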

The role also needs a permissions policy that lets CloudWatch Logs put records into the stream:
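For example, a policy scoped to the stream created earlier (the ARN is a placeholder):

```sh
# Permissions policy allowing the role to write into the stream.
cat > permissions-policy.json <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:eu-west-1:123456789012:stream/lambda-logs"
    }
  ]
}
EOF

aws iam put-role-policy --role-name CWLtoKinesisRole \
  --policy-name CWLtoKinesisPolicy \
  --policy-document file://permissions-policy.json
```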

Issue the command below to create a CloudWatch Logs destination:
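The destination name and ARNs below are examples; destinations are what allow log groups in other accounts or regions to feed the same stream (cross-account senders additionally need an access policy set with put-destination-policy):

```sh
aws logs put-destination \
  --destination-name "lambdaLogsDestination" \
  --target-arn "arn:aws:kinesis:eu-west-1:123456789012:stream/lambda-logs" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
```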

To deliver a near real-time feed of log events to Kinesis, you must create a CloudWatch Logs subscription filter with the following command:
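An empty filter pattern matches every log event; the log group and names below are examples:

```sh
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/my-function" \
  --filter-name "lambda-to-kinesis" \
  --filter-pattern "" \
  --destination-arn "arn:aws:logs:eu-west-1:123456789012:destination:lambdaLogsDestination"
```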

The subscription filter immediately starts the flow of real-time log data from the chosen log group to your Kinesis stream.

After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to your Kinesis stream. Now you need to deploy your logs consumer.

The consumer is a Node-based Lambda function that processes the records arriving from the Kinesis data stream:
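A minimal sketch of the entry point; the file layout and names are assumptions, and decodeRecord and shipToLogstash are fleshed out further below:

```js
// handler.js - entry point of the consumer.
const { decodeRecord } = require('./processor');   // sketched further below
const { shipToLogstash } = require('./shipper');   // sketched further below

exports.handler = async (event) => {
  for (const record of event.Records) {
    const logs = decodeRecord(record); // Base64-decode + gunzip the payload
    await shipToLogstash(logs);        // forward to Logstash over UDP
  }
};
```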

To that end, you need to assign an execution role with permissions to fetch records from Kinesis, as in the following example:
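A policy along these lines; the actions mirror the AWSLambdaKinesisExecutionRole managed policy, and the stream ARN is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:DescribeStream",
        "kinesis:ListStreams"
      ],
      "Resource": "arn:aws:kinesis:eu-west-1:123456789012:stream/lambda-logs"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```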

You also need to configure the batch size that will be read from your stream at once:
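This is set on the event source mapping between the stream and the function; the function name and the batch size of 100 are examples:

```sh
aws lambda create-event-source-mapping \
  --function-name logs-consumer \
  --event-source-arn "arn:aws:kinesis:eu-west-1:123456789012:stream/lambda-logs" \
  --batch-size 100 \
  --starting-position LATEST
```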

As a result, AWS Lambda reads the records from the data stream and invokes your function handler synchronously with an event that contains the stream records. The following is an example of an event published by Kinesis Data Streams:
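The shape below is what Lambda receives for a Kinesis event source; every field value is a placeholder, and the data attribute is truncated for readability:

```json
{
  "Records": [
    {
      "kinesis": {
        "kinesisSchemaVersion": "1.0",
        "partitionKey": "3b1ca9a0-example",
        "sequenceNumber": "49545115243490985018280067714973144582180062593244200961",
        "data": "H4sIAAAAAAAAADWOwQqCQBCG32XOHtzVtbRbQd...",
        "approximateArrivalTimestamp": 1548234000.789
      },
      "eventSource": "aws:kinesis",
      "eventVersion": "1.0",
      "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961",
      "eventName": "aws:kinesis:record",
      "invokeIdentityArn": "arn:aws:iam::123456789012:role/logs-consumer-role",
      "awsRegion": "eu-west-1",
      "eventSourceARN": "arn:aws:kinesis:eu-west-1:123456789012:stream/lambda-logs"
    }
  ]
}
```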

The function handler is self-explanatory: it receives the Amazon Kinesis event data as input, decodes each record's data attribute (which is Base64-encoded and compressed in gzip format), and writes the logs to Logstash:
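A minimal sketch of that decoding step, assuming the module layout from the skeleton above:

```js
// processor.js - decodes a Kinesis record into enriched log events.
const zlib = require('zlib');

function decodeRecord(record) {
  // The data attribute is Base64-encoded and gzip-compressed.
  const compressed = Buffer.from(record.kinesis.data, 'base64');
  const payload = JSON.parse(zlib.gunzipSync(compressed).toString('utf8'));

  // CloudWatch Logs also emits CONTROL_MESSAGE records to test the
  // subscription; only DATA_MESSAGE payloads carry log events.
  if (payload.messageType !== 'DATA_MESSAGE') {
    return [];
  }

  // Enrich each log event with the metadata CloudWatch Logs provides.
  return payload.logEvents.map((logEvent) => ({
    timestamp: logEvent.timestamp,
    message: logEvent.message,
    logGroup: payload.logGroup,
    logStream: payload.logStream,
    owner: payload.owner,
  }));
}

module.exports = { decodeRecord };
```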

The logs are then shipped over UDP, which keeps the function fast and non-blocking at the cost of delivery guarantees; for log data, that is a reasonable trade-off:
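A sketch of the shipper, assuming Logstash exposes a UDP input on a host and port passed in through environment variables:

```js
// shipper.js - sends each log event to Logstash as a UDP datagram.
const dgram = require('dgram');

const LOGSTASH_HOST = process.env.LOGSTASH_HOST || 'logstash.internal';
const LOGSTASH_PORT = parseInt(process.env.LOGSTASH_PORT || '5000', 10);

function shipToLogstash(logs) {
  const socket = dgram.createSocket('udp4');
  const sends = logs.map((log) => new Promise((resolve, reject) => {
    const message = Buffer.from(JSON.stringify(log));
    socket.send(message, LOGSTASH_PORT, LOGSTASH_HOST, (err) =>
      (err ? reject(err) : resolve()));
  }));
  // Close the socket whether the sends succeed or fail.
  return Promise.all(sends).then(
    () => socket.close(),
    (err) => { socket.close(); throw err; }
  );
}

module.exports = { shipToLogstash };
```

On the Logstash side, a matching `udp` input with a `json` codec is enough to pick these datagrams up.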

Shipping some logs

For the purpose of this tutorial, I’ve written a simple Lambda function that calculates the Fibonacci value of a given number.
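Something like the following; the deliberately naive recursive implementation is fine here, since its only job is to produce some log lines:

```js
// fibonacci.js - a toy function whose only job is to generate logs.
exports.handler = async (event) => {
  const n = parseInt(event.number, 10) || 10;

  const fib = (k) => (k <= 1 ? k : fib(k - 1) + fib(k - 2));
  const result = fib(n);

  // console.log output lands in CloudWatch Logs and, through the
  // subscription filter, flows down to the ELK stack.
  console.log(`fibonacci(${n}) = ${result}`);
  return { number: n, result };
};
```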

If all goes well, a new index will be created in Elasticsearch, and you can now define its pattern in Kibana.

Hit “Create index pattern”, and you are ready to analyze the logs.

Right now, there won't be much in there because you are only gathering logs from your Lambda function. You can construct a view or dashboard highlighting the information that helps inform decisions about the implementation and behavior of your Lambda functions, so feel free to poke around!

If you already have lots of existing log groups, you can use a shell script like the following to subscribe them to the Kinesis destination:
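A sketch of such a script; the prefix, filter name, and destination ARN are placeholders:

```sh
#!/bin/bash
# Subscribe every existing Lambda log group to the Kinesis destination.
DESTINATION_ARN="arn:aws:logs:eu-west-1:123456789012:destination:lambdaLogsDestination"

aws logs describe-log-groups \
  --log-group-name-prefix "/aws/lambda/" \
  --query 'logGroups[].logGroupName' --output text | tr '\t' '\n' |
while read -r LOG_GROUP; do
  echo "Subscribing ${LOG_GROUP}"
  aws logs put-subscription-filter \
    --log-group-name "${LOG_GROUP}" \
    --filter-name "lambda-to-kinesis" \
    --filter-pattern "" \
    --destination-arn "${DESTINATION_ARN}"
done
```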

As a result, all your Lambda functions' logs will be delivered to the Kinesis data stream.

You can take this further and leverage the power of CloudTrail to invoke a Lambda function based on the CreateLogGroup event to auto-subscribe new log groups.
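For instance, a CloudWatch Events rule along these lines would match CreateLogGroup calls recorded by CloudTrail; its target would be a small Lambda function that calls put-subscription-filter on the new log group (the rule name is an example):

```sh
aws events put-rule --name auto-subscribe-log-groups --event-pattern '{
  "source": ["aws.logs"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["logs.amazonaws.com"],
    "eventName": ["CreateLogGroup"]
  }
}'
```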

We’re not sharing this just to make noise

We're sharing this because we're looking for people who want to help us solve some of these problems. There's only so much insight we can fit into a job advert, so we hope this has given a bit more and whetted your appetite. If you're keeping an open mind about a new role, or just want a chat, get in touch or apply. We'd love to hear from you!


Mohamed Labouardy

CTO & Co-Founder @Tailwarden — Maker of @Komiser.io and Author — Newsletter: https://devopsbulletin.com