Lambda Logs in ELK

How to ingest AWS Lambda Log Streams from CloudWatch into an ELK monitoring stack

Nik Rahmel
BBC Product & Technology
4 min read · Jan 18, 2019


Anybody who uses AWS Lambda might have come across something like the screenshot below when trying to access any kind of log, and felt emotions such as confusion, anger, and annoyance.

A Log Group in the AWS Console

Those emotions were probably even stronger if you have been spoilt by having an ELK stack for all your centralised logging needs across all of your microservices — maybe even with some neat Kibana dashboards that allow you to dig into the data, filter it, and visualise it.

That’s how we felt anyway on the iBL team (the iPlayer API team) — for the past 3.5 years all of our microservices have sent their application and access logs into our Elasticsearch cluster, making monitoring and operational investigations a breeze. Since then we’ve started utilising AWS Lambda a lot more for various tasks — be it for serverless, decoupled integration with other AWS services, or to handle certain traffic requirements.

A lot of Elks — see elastic.co for the whole story

But with serverless environments there comes one issue: not having access to the server environment. There is no journalbeat or filebeat for us to ship logs off to logstash; instead, AWS just puts everything into CloudWatch Logs, and by default not in the most accessible way. There are some teams in iPlayer who are fully utilising CloudWatch for their logs, and I’m sure they enjoy just having Lambda logs delivered without much work, but after reviewing our requirements for iBL, we decided to stick with ELK as our main logging tool — among other things, it is a lot cheaper to run our own Elasticsearch cluster.

In order to tie Lambda logs into that, we have… created another Lambda!

The Pipeline

CloudWatch Logs has a feature called Subscriptions for “Real-time Processing of Log Data”, which is just what we want. It allows sending log events to a Kinesis Stream, a Kinesis Firehose Stream, or a Lambda. A subscription is created on a log group, and each log group contains a number of log streams — the screenshot above shows the log streams for one log group, and each Lambda gets its own log group created.

So what do we do now with this feature? For each of our existing Lambdas’ log groups, we create a subscription to send the Logs to the new Lambda, the iBL CloudWatch Logs Processor. As per the AWS Example Guide, the event the Lambda gets invoked with looks something like this after parsing:
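(A reconstruction based on the example in the AWS documentation; names, IDs and timestamps here are illustrative. The raw event actually arrives with this object gzipped and base64-encoded inside an awslogs.data field.)

```json
{
  "messageType": "DATA_MESSAGE",
  "owner": "123456789012",
  "logGroup": "/aws/lambda/ibl-tag-extractor-live",
  "logStream": "2019/01/18/[$LATEST]abcdef1234567890abcdef1234567890",
  "subscriptionFilters": ["ibl-cloudwatch-logs-processor"],
  "logEvents": [
    {
      "id": "34567890123456789012345678901234",
      "timestamp": 1547810000000,
      "message": "INFO Something interesting happened\n"
    }
  ]
}
```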

From the logGroup key, we extract the function name, and create our payload:
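(A minimal sketch of that step in Node.js-flavoured TypeScript; the payload field names, the logstash host, and the choice of newline-delimited JSON over TCP are illustrative assumptions rather than the exact production code.)

```typescript
import * as net from 'net';
import * as zlib from 'zlib';

// Shape of the decoded CloudWatch Logs subscription payload
interface LogsPayload {
  logGroup: string;
  logStream: string;
  logEvents: { id: string; timestamp: number; message: string }[];
}

// Hypothetical address of the dedicated logstash input described below
const LOGSTASH_HOST = 'logstash.ibl.example';
const LOGSTASH_PORT = 5090;

export const handler = async (event: { awslogs: { data: string } }): Promise<void> => {
  // awslogs.data is base64-encoded, gzipped JSON
  const decoded: LogsPayload = JSON.parse(
    zlib.gunzipSync(Buffer.from(event.awslogs.data, 'base64')).toString('utf8')
  );

  // Lambda log groups are named /aws/lambda/<functionName>
  const functionName = decoded.logGroup.split('/').pop();

  // One JSON document per log event, newline-delimited for logstash
  const lines = decoded.logEvents
    .map((logEvent) =>
      JSON.stringify({
        '@timestamp': new Date(logEvent.timestamp).toISOString(),
        function: functionName,
        logStream: decoded.logStream,
        message: logEvent.message.trim(),
      })
    )
    .join('\n');

  // Ship the batch to the dedicated logstash TCP input
  await new Promise<void>((resolve, reject) => {
    const socket = net.createConnection(LOGSTASH_PORT, LOGSTASH_HOST, () => {
      socket.end(lines + '\n', () => resolve());
    });
    socket.on('error', reject);
  });
};
```

Sending one newline-delimited batch per invocation keeps the connection overhead down and matches a json_lines-style input on the logstash side.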

On our logstash server, we have created a dedicated input on port 5090 to add a lambda type to the events. They get stored in our Elasticsearch cluster, alongside all other application logs, and are easily accessible through Kibana, so we end up with the following architecture:
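
That dedicated input is only a few lines of logstash pipeline configuration, something along these lines (the json_lines codec is an assumption; it depends on how the processor writes its events):

```
input {
  tcp {
    port  => 5090
    codec => json_lines
    type  => "lambda"
  }
}
```

The type field is what lets us tell Lambda logs apart from our other application logs in Kibana.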

Automating Subscriptions

So far so good — but what happens when we create a new Lambda? We could add the Subscription to our CloudFormation templates that our yeoman generator creates when we start a new lambda project… or we can go a bit meta and have the CloudWatch Logs Processor create those by itself!

Of course, we go meta.

We can use CloudWatch Events to invoke the Logs Processor every 30 minutes to check for any new Log Groups that need a subscription created. Using the AWS SDK, we get a list of all Log Groups and their existing subscriptions, filter out those that already have one, and create subscriptions for the remaining ones (a sketch of this follows below). Our new architecture diagram does not look too different now:

That font is BBC Reith Sans if anybody is wondering

One thing to note is that we must not create a subscription for the Log Group of the Logs Processor Lambda itself — every invocation would generate a log entry saying it has been invoked, which would in turn invoke it again. A never-ending cataclysm of Lambda invocations would not make for a pleasant experience!
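A sketch of that housekeeping step, in TypeScript with the AWS SDK for JavaScript (the filter name, environment variable and log group prefix are illustrative assumptions, and the resource-based permission that lets CloudWatch Logs invoke the processor is needed too but omitted here):

```typescript
import * as AWS from 'aws-sdk';

const logs = new AWS.CloudWatchLogs();

// ARN of the Logs Processor itself, e.g. injected via an environment variable
const PROCESSOR_ARN = process.env.PROCESSOR_ARN as string;
const PROCESSOR_LOG_GROUP = '/aws/lambda/ibl-cloudwatch-logs-processor';

// Run every 30 minutes by the scheduled CloudWatch Event
export const ensureSubscriptions = async (): Promise<void> => {
  let nextToken: string | undefined;

  do {
    // List Lambda log groups one page at a time
    const page = await logs
      .describeLogGroups({ logGroupNamePrefix: '/aws/lambda/', nextToken })
      .promise();

    for (const group of page.logGroups || []) {
      const logGroupName = group.logGroupName as string;

      // Never subscribe the processor to its own log group: every invocation
      // would log something, which would invoke it again, forever
      if (logGroupName === PROCESSOR_LOG_GROUP) continue;

      // Skip log groups that already have a subscription
      const existing = await logs
        .describeSubscriptionFilters({ logGroupName })
        .promise();
      if ((existing.subscriptionFilters || []).length > 0) continue;

      // Subscribe the log group to the Logs Processor, matching every log event
      await logs
        .putSubscriptionFilter({
          logGroupName,
          filterName: 'ibl-cloudwatch-logs-processor',
          filterPattern: '',
          destinationArn: PROCESSOR_ARN,
        })
        .promise();
    }

    nextToken = page.nextToken;
  } while (nextToken);
};
```

At the time of writing a log group can only have a single subscription filter, so it is enough to check whether any filter exists at all before creating ours.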

The End Result

Getting all log events pumped straight into Elasticsearch allows us to create dashboards in Kibana like this:

A Kibana visualisation of the number of all log entries of selected Lambdas

Or we can find all log messages from a selected Lambda containing the name of a service we’re investigating:

Kibana Discovery of all log entries for the Lambda ‘ibl-tag-extractor-live’ containing ‘mediaselector’ in the past 24 hours
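
Assuming the field names from the payload sketch above, that Discovery search boils down to a Lucene query along the lines of:

```
type:lambda AND function:"ibl-tag-extractor-live" AND message:*mediaselector*
```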

Of course, there are various alternatives to this — AWS CloudWatch now supports Logs Insights for more effective querying, and Grafana is about to release Loki, which is much simpler and does not require an Elasticsearch cluster.

Let us know if you have any thoughts about the above. Or, if you fancy using or improving it yourself, why not join us?
