Exporting AWS CloudWatch Logs to S3 Using Automation

Parag Poddar
Published in Tensult Blogs · 4 min read · Jun 21, 2018

This Blog has moved from Medium to blogs.tensult.com. All the latest content will be available there. Subscribe to our newsletter to stay updated.

In my previous blog, “Managing AWS CloudWatch Log Group Retention using Automation”, which I recommend reading too, I discussed why we need to set retention to manage the cost of CloudWatch log storage: once we set retention to a certain time period, CloudWatch logs are automatically deleted after that period. Sometimes we might still need those logs for debugging or analysis, but since we do not access them very frequently, we can move them to more cost-effective storage like S3 and keep them for some more time before permanently deleting them. This way we can reduce the cost of storing logs long term without losing the flexibility of debugging live issues directly from the CloudWatch Logs console.

What this automation accomplishes

Every day, the CloudWatch logs of the previous day are exported to an S3 bucket.

Prerequisites

  • AWS account
  • IAM user of that AWS account (it is best practice to do everything as an IAM user, not from the root account)
  • The IAM user should be authorised to access the services needed to create this automation task.

How this automation works

On a daily basis, at a certain time, a CloudWatch Events rule triggers an AWS Step Functions state machine. The state machine works with an AWS Lambda function, and together they export the CloudWatch logs to S3.

Create IAM role

Here we are creating an IAM role for the AWS Lambda service. This role allows the Lambda function to access other AWS resources.

Create an IAM role and add the following IAM inline policy to it. To learn how to create an IAM role and attach a policy for a service, please refer to this documentation.
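The exact inline policy from the original post is linked rather than reproduced here; the sketch below is a minimal assumption of the permissions the Lambda function needs for this task (the CloudWatch Logs export APIs plus write access to the destination bucket). Tighten the `Resource` fields to your own ARNs in a real setup.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:CreateExportTask",
        "logs:DescribeExportTasks"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketAcl",
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}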

Create Lambda function

Create an AWS Lambda function and place this code into it. While creating the Lambda function, the runtime should be Node.js 8.10, and choose the previously created role under Existing role. To learn how to create an AWS Lambda function, refer to this document.

Create Step Function State Machine

A Step Functions state machine is needed here because only one export task can run at a time, and we have to export the data of multiple log groups.

  • Go to services → Step Functions → click on Get started
  • Choose Author from scratch → give the state machine a name → choose Create a role for me under IAM role, and put the following JSON in the State machine definition (put the previously created Lambda function's ARN into the JSON) → click Create state machine.
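The original state machine definition is referenced but not reproduced in this copy of the post; the Amazon States Language sketch below is one plausible shape for it, assuming the Lambda function returns a `done` flag as in the sketch above. The ARN placeholders (`<region>`, `<account-id>`, `<function-name>`) must be filled in with your own values.

```json
{
  "Comment": "Export CloudWatch log groups to S3, one export task at a time (sketch)",
  "StartAt": "ExportNextLogGroup",
  "States": {
    "ExportNextLogGroup": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:<function-name>",
      "Next": "AllGroupsExported"
    },
    "AllGroupsExported": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.done", "BooleanEquals": true, "Next": "Done" }
      ],
      "Default": "WaitForExportTask"
    },
    "WaitForExportTask": {
      "Type": "Wait",
      "Seconds": 120,
      "Next": "ExportNextLogGroup"
    },
    "Done": { "Type": "Succeed" }
  }
}
```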

Create CloudWatch event rule

  • Go to Services → CloudWatch → Rules → click Create rule.
  • Under Event Source, choose Schedule → select Cron expression and enter 0 10 * * ? * (to learn how to write cron expressions, refer to this document) → under Targets, select Step Functions state machine → select the previously created state machine → choose Constant (JSON text) and put the following JSON.

Here region is the region where you set up this automation. Only log groups whose names contain the logGroupFilter value will be exported. s3BucketName is the destination bucket, and a folder named with the logFolderName value will be created in it to store the logs. → Choose Create a new role for this specific resource (this role is needed because CloudWatch Events needs permission to send events to your Step Functions state machine) → click Configure details.
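Putting the four keys described above together, the Constant (JSON text) input would look something like the sketch below; the values shown are placeholder examples, not the ones from the original post.

```json
{
  "region": "us-east-1",
  "logGroupFilter": "prod",
  "s3BucketName": "my-cloudwatch-logs-archive",
  "logFolderName": "cloudwatch-exports"
}
```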

  • Give a Name and Description, State should be Enabled → click Create rule.

Conclusion

Now we have learnt how to export logs to S3 automatically. If you have observed carefully, the logs in the S3 bucket are going to be stored forever. What if we want to store the logs for only 3 months, how do we do that? If you know the solution, post it as a comment on this blog.

And stay tuned for my next blogs.
