Automatic Log Pushing from an EC2 Instance to an S3 Bucket Using a Lifecycle Hook

Deepak Surendran
Apr 25

Auto Scaling is one of the salient features of AWS, which lets us scale out and scale in according to our needs. Once an Auto Scaling instance has been scaled in, we lose all the logs that were created inside it. This is where a lifecycle hook helps: it enables you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them. While paused, the instance remains in a wait state for a configurable amount of time until you resume it. Here we utilize this pause to push logs to an S3 bucket before termination.

Steps:

1. First, create an SNS topic to receive the backup report notifications and note its ARN; it will be used as the SNS target later.

2. Now we need to create IAM roles for the EC2 instances and the AWS Lambda function, enabling them to run SSM commands and upload files to the S3 bucket. Create a custom policy that allows your EC2 instances and AWS Lambda function to complete Auto Scaling lifecycle hooks and publish to the SNS topic.

Log in to the AWS console and open the IAM service. Go to Policies and select Create Policy, then paste a policy document like the JSON sketch below; it allows the lifecycle hook actions and SNS publishing. Finally, give the policy a name and create it.
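
A minimal sketch of such a policy (the exact actions are an assumption based on what the scripts below call; in production, scope Resource down to your bucket, topic, and Auto Scaling group):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:CompleteLifecycleAction",
                "autoscaling:RecordLifecycleActionHeartbeat",
                "sns:Publish",
                "s3:PutObject"
            ],
            "Resource": "*"
        }
    ]
}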

3. Next, we need to create the EC2 instance role. Go to the IAM service page, choose Roles, click Create Role, give the role a name, select the Amazon EC2 service and click Next Step. Attach the AmazonEC2RoleforSSM policy (an AWS-managed policy) and the custom policy we created above, then create the role.
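
If you prefer the AWS CLI, a rough equivalent looks like this (the role, profile, and policy names are placeholders; replace <account-id> and <custom-policy-name> with your own):

aws iam create-role --role-name ec2-log-backup-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name ec2-log-backup-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam attach-role-policy --role-name ec2-log-backup-role \
    --policy-arn arn:aws:iam::<account-id>:policy/<custom-policy-name>
# An instance profile is needed to attach the role to EC2 instances
aws iam create-instance-profile --instance-profile-name ec2-log-backup-profile
aws iam add-role-to-instance-profile --instance-profile-name ec2-log-backup-profile \
    --role-name ec2-log-backup-role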

4. Now we need to create the IAM role for the AWS Lambda function. On the IAM service page, choose Roles, click Create Role, give the role a name, choose the AWS Lambda service, attach the AmazonSSMFullAccess and AWSLambdaBasicExecutionRole policies along with the custom policy we created above, and then create the role.
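
The CLI version differs from the EC2 role above only in the trusted service and the attached managed policies (the role name is again a placeholder; attach the custom policy as in the previous step):

aws iam create-role --role-name lambda-log-backup-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name lambda-log-backup-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess
aws iam attach-role-policy --role-name lambda-log-backup-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole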

5. Now we are going to create the Lifecycle hook.

Choose Auto Scaling Groups, select your Auto Scaling group, and open Lifecycle Hooks in the configuration panel. Select Create Lifecycle Hook, give the hook a name, select Instance terminate as the lifecycle transition, keep the default Heartbeat Timeout of 300 seconds (adjust it to your needs), and create the lifecycle hook.
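
The same hook can be created from the CLI (the group and hook names are placeholders):

aws autoscaling put-lifecycle-hook \
    --auto-scaling-group-name my-asg \
    --lifecycle-hook-name log-backup-hook \
    --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
    --heartbeat-timeout 300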

6. Now we are going to create an S3 bucket to store the backup files. If you already have an S3 bucket you can use it; otherwise, create a new one.
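
Creating one from the CLI is a single command (the bucket name is a placeholder and must be globally unique):

aws s3 mb s3://my-log-backup-bucket --region us-east-1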

7. Now we need to create an SSM document, which runs on the instances and pushes the logs to the S3 bucket.

From the AWS EC2 console, go to Systems Manager Shared Resources and choose Documents; click Create document, enter the document name, keep the default document type Command, copy the JSON below into the content field, and create the document.

{"schemaVersion": "1.2","description": "Backup logs to S3","parameters": {"ASGNAME":{"type":"String","description":"Auto Scaling group name"},"LIFECYCLEHOOKNAME":{"type":"String","description":"LIFECYCLEHOOK name"},"BACKUPDIRECTORY":{"type":"String","description":"BACKUPDIRECTORY localtion in server"},"S3BUCKET":{"type":"String","description":"S3BUCKET backup logs"},"SNSTARGET":{"type":"String","description":"SNSTARGET"}},"runtimeConfig": {"aws:runShellScript": {"properties": [{"id": "0.aws:runShellScript","runCommand": ["","#!/bin/bash ","INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)","HOOKRESULT='CONTINUE'","REGION=$(curl -s 169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')","dt=*.`date '+%Y-%m-%d'`.log","tf=`date '+%Y-%m-%d'`","MESSAGE=''","","if [ -d \"{{BACKUPDIRECTORY}}\" ];","then","for i in `find \"{{BACKUPDIRECTORY}}\" -name \"$dt\"`; do echo $i; sudo /usr/local/bin/aws s3 cp $i s3://s3bucketname/\"$tf\"/\"$INSTANCEID\"/; done","else"," MESSAGE= \"{{BACKUPDIRECTORY}}\" directory Not exits in this server ","echo $MESSAGE","fi","","/usr/local/bin/aws sns publish --subject ' Report-Logs_backup-{{ASGNAME}} ' --message \"$MESSAGE\"  --target-arn {{SNSTARGET}} --region ${REGION}","/usr/local/bin/aws autoscaling complete-lifecycle-action --lifecycle-hook-name {{LIFECYCLEHOOKNAME}} --auto-scaling-group-name {{ASGNAME}} --lifecycle-action-result ${HOOKRESULT} --instance-id ${INSTANCEID}  --region ${REGION}"]}]}}}

8. Now we need to create the Lambda function.

Go to the Lambda page from the AWS console, click Create function, select Author from scratch, give the function a name, and select the Node.js 8.10 runtime (or a newer supported Node.js runtime). Click Create function and, once it is created, paste the code below:

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// Configuration is injected through environment variables (see below).
const SSM_DOCUMENT_NAME = process.env.SSM_DOCUMENT_NAME;
const S3_BUCKET = process.env.S3_BUCKET;
const SNS_TARGET = process.env.SNS_TARGET;
const BACKUP_DIRECTORY = process.env.BACKUP_DIRECTORY;

// Run the SSM document on the terminating instance.
const sendCommand = (instanceId, autoScalingGroup, lifecycleHook) => {
    const params = {
        DocumentName: SSM_DOCUMENT_NAME,
        InstanceIds: [instanceId],
        Parameters: {
            'ASGNAME': [autoScalingGroup],
            'LIFECYCLEHOOKNAME': [lifecycleHook],
            'BACKUPDIRECTORY': [BACKUP_DIRECTORY],
            'S3BUCKET': [S3_BUCKET],
            'SNSTARGET': [SNS_TARGET],
        },
        TimeoutSeconds: 300
    };
    return ssm.sendCommand(params).promise();
};

exports.handler = async (event) => {
    console.log('Received event ', JSON.stringify(event));
    // The lifecycle notification arrives either directly from a CloudWatch
    // Events rule (event.detail) or wrapped in SNS records, depending on how
    // the trigger is configured; handle both shapes.
    const messages = [];
    if (event.detail) {
        messages.push(event.detail);
    } else if (event.Records) {
        for (const record of event.Records) {
            if (record.EventSource !== 'aws:sns') {
                console.log('Record skipped because EventSource is not aws:sns');
                continue;
            }
            messages.push(JSON.parse(record.Sns.Message));
        }
    }
    for (const message of messages) {
        if (message.LifecycleTransition !== 'autoscaling:EC2_INSTANCE_TERMINATING') {
            console.log('Message skipped because LifecycleTransition is not autoscaling:EC2_INSTANCE_TERMINATING');
            continue;
        }
        console.log('Processing autoscaling termination event');
        await sendCommand(message.EC2InstanceId, message.AutoScalingGroupName, message.LifecycleHookName);
        console.log('Sent command');
    }
};

Set the following environment variables on the function and provide their values:

BACKUP_DIRECTORY
S3_BUCKET
SNS_TARGET
SSM_DOCUMENT_NAME

Then save the Lambda function.
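
To test the function from the Lambda console, you can use a sample event shaped like the CloudWatch Events lifecycle notification (all names and IDs here are made up):

{
    "source": "aws.autoscaling",
    "detail-type": "EC2 Instance-terminate Lifecycle Action",
    "detail": {
        "LifecycleHookName": "log-backup-hook",
        "AutoScalingGroupName": "my-asg",
        "EC2InstanceId": "i-0123456789abcdef0",
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING"
    }
}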

9. Now we need to create a CloudWatch Events rule to trigger the Lambda function. Open the CloudWatch page from the AWS console, select Events and click Create Rule, select Auto Scaling as the event source (the instance-terminate lifecycle action event), choose your Lambda function as the target, give the rule a name and click Create Rule.
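
Equivalently, the rule can be defined with an event pattern like this one (my-asg is a placeholder; drop the detail block to match every Auto Scaling group):

{
    "source": ["aws.autoscaling"],
    "detail-type": ["EC2 Instance-terminate Lifecycle Action"],
    "detail": {
        "AutoScalingGroupName": ["my-asg"]
    }
}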

Everything is set now. Go to the Auto Scaling page and change the Desired capacity to 0. This marks an instance for termination; the lifecycle hook pauses it, the CloudWatch event triggers the Lambda function, and the Lambda function runs the SSM document, which pushes the logs from the instance to the S3 bucket.
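
The same scale-in and a quick verification can also be done from the CLI (the group and bucket names are placeholders):

aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 0
aws s3 ls s3://my-log-backup-bucket/ --recursive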

Conclusion

So today we saw how to send logs from an Auto Scaling EC2 instance to an S3 bucket before termination. I took advantage of lifecycle hooks to do this job. Hope this blog was informative; if you have an alternate method for doing the same, please ping me in the comments.
