Automating AWS EC2 Management with AWS SDK Python Boto3, Lambda and CloudWatch Rule
A comprehensive guide to EC2 management with Boto3 and Lambda
What is the AWS SDK for Python (Boto3)?
You use the AWS SDK for Python (Boto3) to create, configure, and manage AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK provides an object-oriented API as well as low-level access to AWS services.
Why use it?
To take full advantage of Python for automating AWS EC2 management, we'll be exploring the power of Boto3.
In this project, we will touch on almost all aspects of EC2 management. We'll start with single tasks, such as starting and stopping your EC2 instances, work our way up to using a Linux crontab to automate the whole process, and go even further by deploying a Lambda function triggered by a CloudWatch rule, and more…
Prerequisites
For this walkthrough, you need the following:
- An AWS account, with a non-root user (take security into consideration)
- RHEL 8.3 running in Oracle VirtualBox on Windows 10, accessed via PuTTY
- The AWS CLI installed
- Terraform installed
Let us work on them one by one.
Creating a non-root user
Based on AWS best practices, the root user should not be used to perform everyday tasks, even administrative ones. Instead, the root user is used to create your first IAM users, groups, and roles. You then securely lock away the root user credentials and use them to perform only a few account and service management tasks.
Notes: If you would like to learn more about why we should not use root user for operations and more about AWS account, please find more here.
Set up RHEL 8.3 by Oracle Virtual Box on Windows 10 using putty
First, we will download Oracle VirtualBox for Windows 10; please click Windows hosts
Second, we will also download the RHEL ISO
Let us make it work now!
Open Oracle VirtualBox and follow the instructions here; you will install RHEL 8.3 as shown below
Notes: In case you are unable to install RHEL 8.3 successfully, please find solutions here. Also, after you create your developer account with Red Hat, you have to wait for some time before registering it; otherwise, you may receive errors.
Now it’s time for us to connect to RHEL 8.3 from Windows 10 using VirtualBox.
Click activities and open terminal
Notes: In order to be able to connect to RHEL 8.3 from Windows 10 using PuTTY later, we must enable the setting shown below.
Now we will get the IP address that we will use to connect to RHEL 8.3 from Windows 10 using PuTTY (the highlighted IP address for enp0s3 is the right one to use)
Then we will install PuTTY.
ssh-keygen with a password
Creating a password-protected key looks something like this:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pzhao/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pzhao/.ssh/id_rsa.
Your public key has been saved in /home/pzhao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RXPnUZg/fGgRGTOxEfbo3VOMo/Yp4Gi80has/iR4m/A pzhao@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| o . %X.|
| . o +=@ |
| . B++|
| . oo==|
| .S . o...=|
| . .oo o . ..|
| o oo=.. . o |
| +o*o. . |
| .E+o |
+----[SHA256]-----+
To view the private key
$ cat .ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAwoavXHvZCYPO/sbMD0ibtkvF+9/NmSm2m/Z8wRy7O2A012YS98ap
8aq18PXfKPyyAMNF3hdG3xi1KMD7DSIb/C1gunjTREEJRfYjydOjFBFtZWY78Mj4eQkrPJ
.
.
.
-----END OPENSSH PRIVATE KEY-----
Notes: You may take advantage of the RHEL GUI to send the private key to yourself as an email, then open the mail and copy the private key from it
Open Notepad in Windows 10 and save the private key as an ansiblekey.pem file
Then open PuTTY Key Generator and load the private key ansiblekey.pem
Then save it as a private key named ansible.ppk
We now open PuTTY and input the IP address we saved previously as the Host Name (or IP address), 192.168.0.18
We then move on to Session and input IP address
For convenience, we may save it as a predefined session as shown below
You should see the pop-up below if you are logging in for the very first time
Then input your username and password to log in. You will see the image below after logging in.
Installing AWS CLI
To install the AWS CLI after logging into RHEL 8
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
To verify the installation
$ aws --version
aws-cli/2.0.46 Python/3.7.4 Darwin/19.6.0 exe/x86_64
To use the AWS CLI, we need to configure it with an AWS access key, AWS secret access key, AWS region, and AWS output format
$ aws configure
AWS Access Key ID [****************46P7]:
AWS Secret Access Key [****************SoXF]:
Default region name [us-east-1]:
Default output format [json]:
Installing Terraform
To install Terraform, simply use the following commands:
Install yum-config-manager to manage your repositories.
$ sudo yum install -y yum-utils
Use yum-config-manager to add the official HashiCorp Linux repository.
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
Install Terraform
$ sudo yum -y install terraform
Notes: In case of a wrong symbolic link setup, please check out this link. Also, you may need to log in again after changing the symbolic link.
To check the Terraform installation
$ terraform version
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/aws v3.21.0
Advanced users may adopt either CloudFormation or Terraform as infrastructure as code (IaC) to create EC2 instances
First, we will be terraforming one instance
Terraforming EC2
After logging into RHEL 8.3 via PuTTY on Windows 10, we will make a folder named ubuntu-instance
$ mkdir ubuntu-instance
$ cd ubuntu-instance/
vim ubuntu.tf
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_key_pair" "ubuntu" {
  key_name   = "ubuntu"
  public_key = file("key.pub")
}

resource "aws_security_group" "ubuntu" {
  name        = "ubuntu-security-group"
  description = "Allow HTTP, HTTPS and SSH traffic"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "terraform"
  }
}

resource "aws_instance" "ubuntu" {
  key_name      = aws_key_pair.ubuntu.key_name
  ami           = "ami-00ddb0e5626798373" ### the AMI ID must exist in the region where you create your instance
  instance_type = "t2.micro"

  tags = {
    Name = "ubuntu"
  }

  vpc_security_group_ids = [
    aws_security_group.ubuntu.id
  ]

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("key")
    host        = self.public_ip
  }

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_type = "gp2"
    volume_size = 30
  }
}

resource "aws_eip" "ubuntu" {
  vpc      = true
  instance = aws_instance.ubuntu.id
}
Then we need to generate a keypair to use from our RHEL 8.3
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pzhao/.ssh/id_rsa):
/home/pzhao/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pzhao/.ssh/id_rsa.
Your public key has been saved in /home/pzhao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qt91snVaus9YsohMlvX942s8Xn9Clz9KRJ7ig8EyYfM pzhao@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| |
| |
| + . |
| . = o . |
| oSE.. + .|
| .oo+.o....|
| . +.oo=o*.o|
| . = o *o@.B=|
| ... + o *+B=O|
+----[SHA256]-----+
To use this key pair to log into our instance later, we need to copy it from the default path to our current folder
$ cp ~/.ssh/id_rsa.pub ./
$ ls
id_rsa.pub ubuntu.tf
Since our configuration references the public key as public_key = file("key.pub") in the ubuntu.tf file, we need to rename our id_rsa.pub to key.pub
$ mv id_rsa.pub key.pub
$ ls
key.pub ubuntu.tf
Next, terraform init
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.22.0...
- Installed hashicorp/aws v3.22.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
After that, terraform validate
$ terraform validate
Success! The configuration is valid.
terraform plan to plan our infrastructure
$ terraform plan
.
.
.
Plan: 4 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Lastly, terraform apply
$ terraform apply --auto-approve
.
.
.
aws_key_pair.ubuntu: Creating...
aws_security_group.ubuntu: Creating...
aws_key_pair.ubuntu: Creation complete after 0s [id=ubuntu]
aws_security_group.ubuntu: Creation complete after 2s [id=sg-0d3ced0f09c4ec84d]
aws_instance.ubuntu: Creating...
aws_instance.ubuntu: Still creating... [10s elapsed]
aws_instance.ubuntu: Still creating... [20s elapsed]
aws_instance.ubuntu: Still creating... [30s elapsed]
aws_instance.ubuntu: Creation complete after 34s [id=i-0d3d62d5378073c9c]
aws_eip.ubuntu: Creating...
aws_eip.ubuntu: Creation complete after 2s [id=eipalloc-000f08d08e1a2aefd]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Double check that the ubuntu instance was created in the AWS console
Notes: Terraform may not be able to provision Amazon Linux instances at this point. I tested multiple Amazon Linux instances; as soon as they were provisioned, they stopped spontaneously. So please don't Terraform your Amazon Linux instances
Now we may begin working on our project in this EC2 instance
To describe all EC2 instances, create the file awssutils.py
vim awssutils.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
client = session.client('ec2')
pprint.pprint(client.describe_instances())
Execute the file awssutils.py
$ python3 awssutils.py
{'Reservations': [{'Groups': [],
'Instances': [{'AmiLaunchIndex': 0,
'Architecture': 'x86_64',
'BlockDeviceMappings': [],
'CapacityReservationSpecification': {'CapacityReservationPreference': 'open'},
'ClientToken': 'aws-c-Insta-TWDW4QZCUGX6',
'CpuOptions': {'CoreCount': 1,
'ThreadsPerCore': 1},
'EbsOptimized': False,
'EnaSupport': True,
'HibernationOptions': {'Configured': False},
'Hypervisor': 'xen',
'IamInstanceProfile': {'Arn': 'arn:aws:iam::464392538707:instance-profile/AmazonSSMRoleForInstancesQuickSetup',
'Id': 'AIPAWYH7TZJJTSDS3OHDU'},
'ImageId': 'ami-0fea2201f10665f7a',
'InstanceId': 'i-03527d7cb6999a401',
'InstanceType': 't2.micro',
'LaunchTime': datetime.datetime(2020, 8, 6, 3, 12, 10, tzinfo=tzutc()),
'Monitoring': {'State': 'disabled'},
'NetworkInterfaces': [{'Attachment': {'AttachTime': datetime.datetime(2020, 8, 5, 23, 28, 33, tzinfo=tzutc()),
'AttachmentId': 'eni-attach-0421e9ffa81b87a49',
'DeleteOnTermination': True,
'DeviceIndex': 0,
'Status': 'attached'},
'Description': '',
'Groups': [{'GroupId': 'sg-03c91c35255b8dafa',
'GroupName': 'aws-cloud9-Cloud9-1cdcc239aaf541f29d792553a59d2cc0-InstanceSecurityGroup-QPD1ACX5DW6D'}],
'InterfaceType': 'interface',
'Ipv6Addresses': [],
'MacAddress': '12:02:ad:fc:c8:ed',
'NetworkInterfaceId': 'eni-0c6375b4caf7e81cd',
'OwnerId': '464392538707',
'PrivateDnsName': 'ip-10-0-0-243.ec2.internal',
'PrivateIpAddress': '10.0.0.243',
'PrivateIpAddresses': [{'Primary': True,
'PrivateDnsName': 'ip-10-0-0-243.ec2.internal',
'PrivateIpAddress': '10.0.0.243'}],
'SourceDestCheck': True,
'Status': 'in-use',
'SubnetId': 'subnet-044f3f18cfe57810a',
'VpcId': 'vpc-00daa42070c76f919'}],
'Placement': {'AvailabilityZone': 'us-east-1a',
'GroupName': '',
'Tenancy': 'default'},
'PrivateDnsName': 'ip-10-0-0-243.ec2.internal',
'PrivateIpAddress': '10.0.0.243',
'ProductCodes': [],
'PublicDnsName': '',
'RootDeviceName': '/dev/xvda',
'RootDeviceType': 'ebs',
'SecurityGroups': [{'GroupId': 'sg-03c91c35255b8dafa',
'GroupName': 'aws-cloud9-Cloud9-1cdcc239aaf541f29d792553a59d2cc0-InstanceSecurityGroup-QPD1ACX5DW6D'}],
'SourceDestCheck': True,
'State': {'Code': 80, 'Name': 'stopped'},
'StateReason': {'Code': 'Client.InstanceInitiatedShutdown',
'Message': 'Client.InstanceInitiatedShutdown: '
'Instance '
'initiated '
'shutdown'},
'StateTransitionReason': 'User initiated',
'SubnetId': 'subnet-044f3f18cfe57810a',
'Tags': [{'Key': 'aws:cloudformation:stack-name',
'Value': 'aws-cloud9-Cloud9-1cdcc239aaf541f29d792553a59d2cc0'},
{'Key': 'Name',
'Value': 'aws-cloud9-Cloud9-1cdcc239aaf541f29d792553a59d2cc0'},
{'Key': 'aws:cloudformation:stack-id',
'Value': 'arn:aws:cloudformation:us-east-1:464392538707:stack/aws-cloud9-Cloud9-1cdcc239aaf541f29d792553a59d2cc0/58aac960-d773-11ea-bdac-1245bb6cedee'},
{'Key': 'aws:cloud9:environment',
'Value': '1cdcc239aaf541f29d792553a59d2cc0'},
{'Key': 'aws:cloud9:owner',
'Value': '464392538707'},
{'Key': 'aws:cloudformation:logical-id',
'Value': 'Instance'}],
'VirtualizationType': 'hvm',
'VpcId': 'vpc-00daa42070c76f919'}],
'OwnerId': '464392538707',
'RequesterId': '043234062703',
'ReservationId': 'r-0d89a99b5620aed91'},
{'Groups': [],
'Instances': [{'AmiLaunchIndex': 0,
'Architecture': 'x86_64',
'BlockDeviceMappings': [{'DeviceName': '/dev/sda1',
'Ebs': {'AttachTime': datetime.datetime(2021, 2, 22, 19, 30, 49, tzinfo=tzutc()),
'DeleteOnTermination': True,
'Status': 'attached',
'VolumeId': 'vol-0c9c23c50368884ef'}}],
'CapacityReservationSpecification': {'CapacityReservationPreference': 'open'},
'ClientToken': '7DA1B831-4770-4B64-B7E0-7E83357B1B9C',
'CpuOptions': {'CoreCount': 1,
'ThreadsPerCore': 1},
'EbsOptimized': False,
'EnaSupport': True,
'HibernationOptions': {'Configured': False},
'Hypervisor': 'xen',
'ImageId': 'ami-00ddb0e5626798373',
'InstanceId': 'i-09f7c70f5ad21663c',
'InstanceType': 't2.micro',
'KeyName': 'ubuntu',
'LaunchTime': datetime.datetime(2021, 2, 22, 23, 33, 31, tzinfo=tzutc()),
'Monitoring': {'State': 'disabled'},
'NetworkInterfaces': [{'Association': {'IpOwnerId': '464392538707',
'PublicDnsName': 'ec2-54-210-217-199.compute-1.amazonaws.com',
'PublicIp': '54.210.217.199'},
'Attachment': {'AttachTime': datetime.datetime(2021, 2, 22, 19, 30, 48, tzinfo=tzutc()),
'AttachmentId': 'eni-attach-060480e5082ad3503',
'DeleteOnTermination': True,
'DeviceIndex': 0,
'Status': 'attached'},
'Description': '',
'Groups': [{'GroupId': 'sg-04fa2afcf238bdd24',
'GroupName': 'ubuntu-security-group'}],
'InterfaceType': 'interface',
'Ipv6Addresses': [],
'MacAddress': '12:84:42:93:17:47',
'NetworkInterfaceId': 'eni-0d13357bda5ada3fb',
'OwnerId': '464392538707',
'PrivateDnsName': 'ip-172-31-4-194.ec2.internal',
'PrivateIpAddress': '172.31.4.194',
'PrivateIpAddresses': [{'Association': {'IpOwnerId': '464392538707',
'PublicDnsName': 'ec2-54-210-217-199.compute-1.amazonaws.com',
'PublicIp': '54.210.217.199'},
'Primary': True,
'PrivateDnsName': 'ip-172-31-4-194.ec2.internal',
'PrivateIpAddress': '172.31.4.194'}],
'SourceDestCheck': True,
'Status': 'in-use',
'SubnetId': 'subnet-06315eb9149798d01',
'VpcId': 'vpc-346c6f4e'}],
'Placement': {'AvailabilityZone': 'us-east-1a',
'GroupName': '',
'Tenancy': 'default'},
'PrivateDnsName': 'ip-172-31-4-194.ec2.internal',
'PrivateIpAddress': '172.31.4.194',
'ProductCodes': [],
'PublicDnsName': 'ec2-54-210-217-199.compute-1.amazonaws.com',
'PublicIpAddress': '54.210.217.199',
'RootDeviceName': '/dev/sda1',
'RootDeviceType': 'ebs',
'SecurityGroups': [{'GroupId': 'sg-04fa2afcf238bdd24',
'GroupName': 'ubuntu-security-group'}],
'SourceDestCheck': True,
'State': {'Code': 16, 'Name': 'running'},
'StateTransitionReason': '',
'SubnetId': 'subnet-06315eb9149798d01',
'Tags': [{'Key': 'Name', 'Value': 'ubuntu'}],
'VirtualizationType': 'hvm',
'VpcId': 'vpc-346c6f4e'}],
'OwnerId': '464392538707',
'ReservationId': 'r-057eff8f366ae22f2'}],
'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Tue, 23 Feb 2021 00:03:23 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'transfer-encoding': 'chunked',
'vary': 'accept-encoding',
'x-amzn-requestid': 'c522b73a-7044-4add-aeac-2da31c89255c'},
'HTTPStatusCode': 200,
'RequestId': 'c522b73a-7044-4add-aeac-2da31c89255c',
'RetryAttempts': 0}}
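The describe_instances response above is deeply nested. A small helper can flatten it into the fields we usually care about; this is a sketch of my own (the summarize_instances name is not part of the original scripts):

```python
def summarize_instances(response):
    """Flatten a describe_instances response into one dict per instance."""
    summary = []
    for reservation in response.get('Reservations', []):
        for inst in reservation.get('Instances', []):
            # Tags arrive as a list of {'Key': ..., 'Value': ...} pairs
            tags = {t['Key']: t['Value'] for t in inst.get('Tags', [])}
            summary.append({
                'InstanceId': inst['InstanceId'],
                'InstanceType': inst['InstanceType'],
                'State': inst['State']['Name'],
                'Name': tags.get('Name', ''),
            })
    return summary
```

Calling summarize_instances(client.describe_instances()) would then yield one compact dict per instance instead of the full dump.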
Retrieving EC2 Instance Details
Create ec2_instance.py
vim ec2_instance.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)
session = get_session('us-east-1')
client = session.client('ec2')
demo = client.describe_instances(Filters=[{'Name': 'tag:Name', 'Values': ['demo-instance']}])
pprint.pprint(demo)
Execute Python ec2_instance.py
python3 ec2_instance.py
{'Reservations': [],
'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '230',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Tue, 23 Feb 2021 00:53:05 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': '8589c651-f41a-4c7a-bcd1-7064822a49ce'},
'HTTPStatusCode': 200,
'RequestId': '8589c651-f41a-4c7a-bcd1-7064822a49ce',
'RetryAttempts': 0}}
To find out the instance AMI ID in case you need it
Create the ec2_instance_ami_id.py file
vim ec2_instance_ami_id.py
import boto3

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
ec2 = session.resource('ec2')
ami_name = input('Enter your AMI Name: ')
ami_filter = {'Name': 'name', 'Values': [ami_name]}
for i in ec2.images.filter(Filters=[ami_filter]):
    print(i)
Execute Python file ec2_instance_ami_id.py
$ python3 ec2_instance_ami_id.py
Enter your AMI Name: InstanceID_i-09f7c70f5ad21663c_Image_Backup_20210223
ec2.Image(id='ami-059181256b80d0ec0')
To stop the EC2 instance created previously
Create the ec2_instance_stop.py file
vim ec2_instance_stop.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
client = session.client('ec2')
instance_id = input("Enter your instance_id to stop ")
pprint.pprint(client.stop_instances(InstanceIds=[instance_id]))
Execute Python file ec2_instance_stop.py
$ python3 ec2_instance_stop.py
Enter your instance_id to stop i-09f7c70f5ad21663c ### input your instance ID to stop
{'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '579',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Mon, 22 Feb 2021 23:24:47 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': '5b9f2620-3771-448b-9b46-a21b81bd9a31'},
'HTTPStatusCode': 200,
'RequestId': '5b9f2620-3771-448b-9b46-a21b81bd9a31',
'RetryAttempts': 0},
'StoppingInstances': [{'CurrentState': {'Code': 64, 'Name': 'stopping'},
'InstanceId': 'i-09f7c70f5ad21663c',
'PreviousState': {'Code': 16, 'Name': 'running'}}]}
Cross check in AWS Console
To start the EC2 instance created previously
Create the ec2_instance_start.py file
vim ec2_instance_start.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
client = session.client('ec2')
instance_id = input("Enter your instance_id to start ")
pprint.pprint(client.start_instances(InstanceIds=[instance_id]))
Execute Python file ec2_instance_start.py
$ python3 ec2_instance_start.py
Enter your instance_id to start i-09f7c70f5ad21663c ### input your instance ID to start
{'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '579',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Mon, 22 Feb 2021 23:33:30 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': 'ac2e293a-3003-49c7-9732-a5da9af481e6'},
'HTTPStatusCode': 200,
'RequestId': 'ac2e293a-3003-49c7-9732-a5da9af481e6',
'RetryAttempts': 0},
'StartingInstances': [{'CurrentState': {'Code': 0, 'Name': 'pending'},
'InstanceId': 'i-09f7c70f5ad21663c',
'PreviousState': {'Code': 80, 'Name': 'stopped'}}]}
Cross check in the AWS console
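One caveat: start_instances and stop_instances return while the instance is still in a pending or stopping state. If a script needs to block until the transition finishes, boto3 waiters can do the polling; here is a minimal sketch (the wait_until_running helper and its optional client parameter are my own additions, not from the original scripts):

```python
def wait_until_running(instance_id, region='us-east-1', client=None):
    """Block until the given instance reaches the 'running' state."""
    if client is None:
        import boto3  # imported lazily so the helper is easy to exercise with a stub client
        client = boto3.session.Session(region_name=region).client('ec2')
    # The waiter polls describe_instances until the state is 'running' (or it times out)
    waiter = client.get_waiter('instance_running')
    waiter.wait(InstanceIds=[instance_id])
```

boto3 also provides 'instance_stopped' and 'instance_terminated' waiters for the other transitions.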
Alternative Approach to Fetching, Starting, and Stopping
In addition to the EC2.Client
class that we've been working with thus far, there is also an EC2.Instance class, which is useful in cases such as this one where we only need to be concerned with one instance at a time
To stop the EC2 instance created previously
Create the ec2_instance_stop_alternative.py file
vim ec2_instance_stop_alternative.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
ec2 = session.resource('ec2')
instance_id = input("Enter your instance_id to stop ")
instance = ec2.Instance(instance_id)
pprint.pprint(instance.state)  # current state before stopping
pprint.pprint(instance.stop())
Execute Python file ec2_instance_stop_alternative.py
$ python3 ec2_instance_stop_alternative.py
Enter your instance_id to stop i-09f7c70f5ad21663c
{'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '579',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Tue, 23 Feb 2021 16:10:10 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': 'c872170e-ed65-45ad-aaea-7cf26bc266ae'},
'HTTPStatusCode': 200,
'RequestId': 'c872170e-ed65-45ad-aaea-7cf26bc266ae',
'RetryAttempts': 0},
'StoppingInstances': [{'CurrentState': {'Code': 64, 'Name': 'stopping'},
'InstanceId': 'i-09f7c70f5ad21663c',
'PreviousState': {'Code': 16, 'Name': 'running'}}]}
Cross check in AWS Console
To start the EC2 instance created previously
Create the ec2_instance_start_alternative.py file
vim ec2_instance_start_alternative.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
ec2 = session.resource('ec2')
instance_id = input("Enter your instance_id to start ")
instance = ec2.Instance(instance_id)
pprint.pprint(instance.state)  # current state before starting
pprint.pprint(instance.start())
Execute Python file ec2_instance_start_alternative.py
$ python3 ec2_instance_start_alternative.py
Enter your instance_id to start i-09f7c70f5ad21663c
{'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '579',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Tue, 23 Feb 2021 16:12:29 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': 'eef32dff-b2c4-47f8-817a-6afb4df46fcc'},
'HTTPStatusCode': 200,
'RequestId': 'eef32dff-b2c4-47f8-817a-6afb4df46fcc',
'RetryAttempts': 0},
'StartingInstances': [{'CurrentState': {'Code': 0, 'Name': 'pending'},
'InstanceId': 'i-09f7c70f5ad21663c',
'PreviousState': {'Code': 80, 'Name': 'stopped'}}]}
Cross check in the AWS console
Creating a Backup Image of an EC2.Instance
Create the ec2_instance_ami.py file
vim ec2_instance_ami.py
import boto3
import pprint
import datetime

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
client = session.client('ec2')
date = datetime.datetime.utcnow().strftime('%Y%m%d')
instance_id = input("Enter your instance_id to copy ")
name = f"InstanceID_{instance_id}_Image_Backup_{date}"
pprint.pprint(client.create_image(InstanceId=instance_id, Name=name))
Execute Python file ec2_instance_ami.py
$ python3 ec2_instance_ami.py
Enter your instance_id to copy i-09f7c70f5ad21663c
{'ImageId': 'ami-059181256b80d0ec0',
'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '242',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Tue, 23 Feb 2021 16:56:11 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': 'e3789391-4be7-4f0b-9a5a-8757e77c6ed7'},
'HTTPStatusCode': 200,
'RequestId': 'e3789391-4be7-4f0b-9a5a-8757e77c6ed7',
'RetryAttempts': 0}}
Cross check in the AWS console
Alternative Approach to Copying an EC2 Instance
Create ec2_instance_ami_alternative.py
vim ec2_instance_ami_alternative.py
import boto3
import pprint
import datetime
def get_session(region):
    return boto3.session.Session(region_name=region)
session = get_session('us-east-1')
ec2 = session.resource('ec2')
date = datetime.datetime.utcnow().strftime('%Y%m%d')
instance_id = input("Enter your instance_id to start ")
instance = ec2.Instance(instance_id)
name = f"InstanceID_{instance_id}_Image_Backup_{date}"
image = instance.create_image(Name=name + '_2')
pprint.pprint(image)
Execute Python file ec2_instance_ami_alternative.py
$ python3 ec2_instance_ami_alternative.py
Enter your instance_id to start i-09f7c70f5ad21663c
ec2.Image(id='ami-0e427f424d7dc7684')
Cross check in the AWS console
Tagging Images of EC2 Instances
Create ec2_instance_tagging.py
vim ec2_instance_tagging.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)
session = get_session('us-east-1')
client = session.client('ec2')
ami_id = input('Enter your ami_id: ')
remove_on = '20210223'
pprint.pprint(client.create_tags(Resources=[ami_id], Tags=[{'Key': 'RemoveOn', 'Value': remove_on}]))
Execute Python file ec2_instance_tagging.py
$ python3 ec2_instance_tagging.py
Enter your ami_id: ami-0e427f424d7dc7684
{'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '221',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Wed, 24 Feb 2021 17:37:14 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'x-amzn-requestid': 'ce180fc4-d26c-4f65-afa1-e97905813d42'},
'HTTPStatusCode': 200,
'RequestId': 'ce180fc4-d26c-4f65-afa1-e97905813d42',
'RetryAttempts': 0}}
Cross check in the AWS console
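The remove_on value in the script above is hardcoded to '20210223'. In practice you would compute it from today's date plus a retention period; a small sketch, assuming a 3-day retention like the one mentioned later in this article (remove_on_date is my own helper name):

```python
import datetime

def remove_on_date(retention_days=3, today=None):
    """Return the YYYYMMDD string marking when a backup image should be removed."""
    if today is None:
        today = datetime.date.today()
    return (today + datetime.timedelta(days=retention_days)).strftime('%Y%m%d')

# An image tagged on 2021-02-20 with 3-day retention is due for removal on 20210223
print(remove_on_date(today=datetime.date(2021, 2, 20)))  # 20210223
```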
Creating an EC2 Instance from a Backup Image
Create ec2_instance_new_from_ami.py
vim ec2_instance_new_from_ami.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)
session = get_session('us-east-1')
client = session.client('ec2')
ami_id = input('Enter your ami_id: ')
pprint.pprint(client.run_instances(ImageId=ami_id, MinCount=1, MaxCount=1, InstanceType='t2.micro'))
Execute Python file ec2_instance_new_from_ami.py
$ python3 ec2_instance_new_from_ami.py
Enter your ami_id: ami-0e427f424d7dc7684
{'Groups': [],
'Instances': [{'AmiLaunchIndex': 0,
'Architecture': 'x86_64',
'BlockDeviceMappings': [],
'CapacityReservationSpecification': {'CapacityReservationPreference': 'open'},
'ClientToken': '',
'CpuOptions': {'CoreCount': 1, 'ThreadsPerCore': 1},
'EbsOptimized': False,
'EnaSupport': True,
'Hypervisor': 'xen',
'ImageId': 'ami-0e427f424d7dc7684',
'InstanceId': 'i-018a92f243a56fa32',
'InstanceType': 't2.micro',
'LaunchTime': datetime.datetime(2021, 2, 24, 17, 19, 3, tzinfo=tzutc()),
'Monitoring': {'State': 'disabled'},
'NetworkInterfaces': [{'Attachment': {'AttachTime': datetime.datetime(2021, 2, 24, 17, 19, 3, tzinfo=tzutc()),
'AttachmentId': 'eni-attach-0a9734be86e332eca',
'DeleteOnTermination': True,
'DeviceIndex': 0,
'Status': 'attaching'},
'Description': '',
'Groups': [{'GroupId': 'sg-2dc14503',
'GroupName': 'default'}],
'InterfaceType': 'interface',
'Ipv6Addresses': [],
'MacAddress': '12:a0:61:01:8f:53',
'NetworkInterfaceId': 'eni-0b6eb7d91052b20ef',
'OwnerId': '464392538707',
'PrivateDnsName': 'ip-172-31-0-147.ec2.internal',
'PrivateIpAddress': '172.31.0.147',
'PrivateIpAddresses': [{'Primary': True,
'PrivateDnsName': 'ip-172-31-0-147.ec2.internal',
'PrivateIpAddress': '172.31.0.147'}],
'SourceDestCheck': True,
'Status': 'in-use',
'SubnetId': 'subnet-06315eb9149798d01',
'VpcId': 'vpc-346c6f4e'}],
'Placement': {'AvailabilityZone': 'us-east-1a',
'GroupName': '',
'Tenancy': 'default'},
'PrivateDnsName': 'ip-172-31-0-147.ec2.internal',
'PrivateIpAddress': '172.31.0.147',
'ProductCodes': [],
'PublicDnsName': '',
'RootDeviceName': '/dev/sda1',
'RootDeviceType': 'ebs',
'SecurityGroups': [{'GroupId': 'sg-2dc14503',
'GroupName': 'default'}],
'SourceDestCheck': True,
'State': {'Code': 0, 'Name': 'pending'},
'StateReason': {'Code': 'pending', 'Message': 'pending'},
'StateTransitionReason': '',
'SubnetId': 'subnet-06315eb9149798d01',
'VirtualizationType': 'hvm',
'VpcId': 'vpc-346c6f4e'}],
'OwnerId': '464392538707',
'ReservationId': 'r-012c5661804d0212a',
'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-length': '4780',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Wed, 24 Feb 2021 17:19:03 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'vary': 'accept-encoding',
'x-amzn-requestid': 'ac286fed-1552-4ad2-b6d2-f870ac304015'},
'HTTPStatusCode': 200,
'RequestId': 'ac286fed-1552-4ad2-b6d2-f870ac304015',
'RetryAttempts': 0}}
Cross check in the AWS console
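Note that run_instances above launches the restored instance without a Name tag, so it shows up unnamed in the console. The TagSpecifications parameter of run_instances can apply tags in the same call; a sketch (the helper name and the example tag value are my own):

```python
def name_tag_spec(name):
    """Build a run_instances TagSpecifications entry that names the instance at launch."""
    return [{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': name}],
    }]

# Passed as an extra argument, e.g.:
# client.run_instances(ImageId=ami_id, MinCount=1, MaxCount=1,
#                      InstanceType='t2.micro',
#                      TagSpecifications=name_tag_spec('ubuntu-restored'))
```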
Removing Backup Images
Create ec2_instance_remove_ami.py
vim ec2_instance_remove_ami.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
client = session.client('ec2')
remove_on = '20210223'
images = client.describe_images(Filters=[{'Name': 'tag:RemoveOn', 'Values': [remove_on]}])
for img in images['Images']:
    # deregister every image whose RemoveOn tag matches, not just a single entered AMI ID
    client.deregister_image(ImageId=img['ImageId'])
Execute Python file ec2_instance_remove_ami.py
$ python3 ec2_instance_remove_ami.py
Cross check in the AWS console
Terminating EC2 Instance
Create ec2_instance_terminate.py
vim ec2_instance_terminate.py
import boto3
import pprint

def get_session(region):
    return boto3.session.Session(region_name=region)

session = get_session('us-east-1')
client = session.client('ec2')
instance_id = input("Enter your instance_id to terminate ")
pprint.pprint(client.terminate_instances(InstanceIds=[instance_id]))
Execute Python file ec2_instance_terminate.py
$ python3 ec2_instance_terminate.py
Enter your instance_id to terminate i-09f7c70f5ad21663c
{'ResponseMetadata': {'HTTPHeaders': {'cache-control': 'no-cache, no-store',
'content-type': 'text/xml;charset=UTF-8',
'date': 'Wed, 24 Feb 2021 21:10:17 GMT',
'server': 'AmazonEC2',
'strict-transport-security': 'max-age=31536000; '
'includeSubDomains',
'transfer-encoding': 'chunked',
'vary': 'accept-encoding',
'x-amzn-requestid': 'e72aa61d-5dec-4c20-9a08-c6763213a999'},
'HTTPStatusCode': 200,
'RequestId': 'e72aa61d-5dec-4c20-9a08-c6763213a999',
'RetryAttempts': 0},
'TerminatingInstances': [{'CurrentState': {'Code': 48, 'Name': 'terminated'},
'InstanceId': 'i-09f7c70f5ad21663c',
'PreviousState': {'Code': 80, 'Name': 'stopped'}}]}
Cross check in the AWS console
Here is the script that backs up core EC2 instances for disaster recovery (DR). We can run a script that calls the AWS API; it checks the tags of instances to decide whether to take a snapshot for backup, and cleans up expired snapshots.
Usage of this script: set up tags on the instances that need to be backed up, for example: key — backup, value — 7. This means back up the server every day and keep at most 7 snapshots. Here we will be using our local environment, which is RHEL 8. (However, I'll be diving deep into other crontab options in a later post, stay tuned!)
Notes: We could certainly put our AWS_ACCESS_KEY and AWS_SECRET_KEY straight into this file. However, for security and reusability, we read them with os.environ['AWS_ACCESS_KEY'] and os.environ['AWS_SECRET_KEY'] respectively.
In case you would like to execute this .py file directly, export the variables in the Red Hat 8 command line first (note: no spaces around the = sign)
$ export AWS_ACCESS_KEY=Your_Access_Key ### Provide your own AWS_ACCESS_KEY
$ export AWS_SECRET_KEY=Your_Secret_Key ### Provide your own AWS_SECRET_KEY
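For reference, here is one way a script can pick those exported variables up at runtime. The helper names below are illustrative, not taken from the original script; boto3 is imported lazily so the environment helper stays testable on its own.

```python
import os

def get_aws_env():
    """Read the credentials exported in the shell above."""
    return {
        "access_key": os.environ["AWS_ACCESS_KEY"],
        "secret_key": os.environ["AWS_SECRET_KEY"],
    }

def get_session(region="us-east-1"):
    # boto3 is imported here so get_aws_env() is usable without it installed.
    import boto3
    creds = get_aws_env()
    return boto3.session.Session(
        region_name=region,
        aws_access_key_id=creds["access_key"],
        aws_secret_access_key=creds["secret_key"],
    )
```

Keeping the keys in the environment rather than in the file means the same script can be committed to version control and reused across accounts.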
Here is the real meat: a cron job using crontab
First, we need to open our crontab in Red Hat 8
$ crontab -e
Notes: In case this command line doesn’t work, please refer to this post as reference
Using a Linux cron job to execute this file at 15:16 every day
AWS_ACCESS_KEY=Your_Access_Key ### Provide your own AWS_ACCESS_KEY
AWS_SECRET_KEY=Your_Secret_Key ### Provide your own AWS_SECRET_KEY
16 15 * * * /usr/bin/python3 /home/pzhao/ubuntu-instance/Ec2BackUpCleanUp.py
Above is what we need to type into the file opened by crontab -e in Red Hat 8
Notes: AWS credentials must be provided in this file to make it work. Also, for more about how to set up the scheduled time in crontab, please refer to this post
Finally, the cron job is set up
$ crontab -e
no crontab for pzhao - using an empty one
crontab: installing new crontab
Cross check in the AWS console at 15:16
Notes: As introduced previously, we set the tag key of our instance to backup and its value to 1 in my case. This means at most one backup is kept at a time. Snapshot retention was configured to the default of 3 days, so each snapshot is deleted after 3 days.
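The backup script itself is not reproduced here, but its core retention rule can be sketched as below, assuming snapshots carry a StartTime field (as describe_snapshots returns) and instances carry the backup tag described above. The function names are illustrative.

```python
def backup_count(instance_tags):
    """Retention count from the instance's 'backup' tag, or 0 if absent."""
    for tag in instance_tags:
        if tag["Key"] == "backup":
            return int(tag["Value"])
    return 0

def snapshots_to_delete(snapshots, keep):
    """Keep the newest `keep` snapshots (by 'StartTime') and return the
    rest, which are due for deletion."""
    ordered = sorted(snapshots, key=lambda s: s["StartTime"], reverse=True)
    return ordered[keep:]
```

With backup set to 1 on an instance, snapshots_to_delete(..., keep=1) retains only the newest snapshot, matching the behaviour described above.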
AWS Lambda Implementation
Now let us perform our EC2 backup and cleanup with AWS Lambda Functions to achieve serverless EC2 management
To start with, create a file named ec2backup.py
vim ec2backup.py
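The body of ec2backup.py is not reproduced here; a minimal sketch, assuming the backup/RemoveOn tag scheme used earlier in this post and the REGION, ACCESS_KEY and SECRET_KEY environment variables passed when the function is created, could look like the following. The RemoveOn date format is an assumption.

```python
import datetime
import os

def remove_on(today=None, retention_days=3):
    """Date tag after which a backup AMI may be deleted (format assumed)."""
    today = today or datetime.date.today()
    return (today + datetime.timedelta(days=retention_days)).strftime("%Y.%m.%d")

def lambda_handler(event, context):
    # boto3 ships with the Lambda runtime; imported here so the helper
    # above can be unit-tested locally without it.
    import boto3
    client = boto3.client(
        "ec2",
        region_name=os.environ["REGION"],
        aws_access_key_id=os.environ["ACCESS_KEY"],
        aws_secret_access_key=os.environ["SECRET_KEY"],
    )
    tag_value = remove_on()
    # Back up every instance carrying a 'backup' tag.
    reservations = client.describe_instances(
        Filters=[{"Name": "tag-key", "Values": ["backup"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            image = client.create_image(
                InstanceId=inst["InstanceId"],
                Name="backup-{}-{}".format(inst["InstanceId"], tag_value),
                NoReboot=True,
            )
            client.create_tags(
                Resources=[image["ImageId"]],
                Tags=[{"Key": "RemoveOn", "Value": tag_value}],
            )
```

Tagging each AMI with RemoveOn at creation time is what lets the cleanup function later decide what to deregister without any shared state.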
Then, we need to create an IAM role and attach the AWSLambdaBasicExecutionRole policy to it as shown below
$ aws iam create-role --role-name lambda-ex --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
{
"Role": {
"Path": "/",
"RoleName": "lambda-ex",
"RoleId": "AROAWYH7TZJJVV7TA2ESB",
"Arn": "arn:aws:iam::464392538707:role/lambda-ex",
"CreateDate": "2021-02-26T04:45:03Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
}
$ aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Since a Lambda function can't execute a .py file directly, we need to zip it as shown below
$ zip ec2backup.zip ec2backup.py
adding: ec2backup.py (deflated 71%)
Now we will create a Lambda function using AWS CLI shown below
$ aws lambda create-function --region us-east-1 --function-name ec2backup --zip-file fileb://ec2backup.zip --role arn:aws:iam::464392538707:role/lambda-ex --environment '{"Variables":{"REGION":"us-east-1", "ACCESS_KEY":"AKIAWYH7TZJJWTR7QNCK", "SECRET_KEY":"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"}}' --handler ec2backup.lambda_handler --runtime python3.6
{
"FunctionName": "ec2backup",
"FunctionArn": "arn:aws:lambda:us-east-1:464392538707:function:ec2backup",
"Runtime": "python3.6",
"Role": "arn:aws:iam::464392538707:role/lambda-ex",
"Handler": "ec2backup.lambda_handler",
"CodeSize": 826,
"Description": "",
"Timeout": 3,
"MemorySize": 128,
"LastModified": "2021-02-26T21:44:36.923+0000",
"CodeSha256": "MD7NgrewJo11morzN5RZGwhvUU7H4IvgE0RaeLxsUkU=",
"Version": "$LATEST",
"Environment": {
"Variables": {
"SECRET_KEY": "TtAaF20YsPIrOBYBTqiiXWKO5YfdMiSSOJLsZH2m",
"ACCESS_KEY": "AKIAWYH7TZJJWTR7QNCK",
"REGION": "us-east-1"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "9bfa06b3-5577-44e5-82b8-48387127a91c"
}
Notes: For python, lambda handler should be set up as <file name>.lambda_handler. For more reference, please visit here
Moving along, we need to create our trigger. Here we will be using the AWS CLI as we follow along with IaC; however, you are free to use the AWS console
We create a CloudWatch rule as shown below
$ aws events put-rule --name "DailyLambdaFunction-backup" --schedule-expression "cron(25 22 * * ? *)"
{
"RuleArn": "arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-backup"
}
Notes: For more about CloudWatch Cronjob, please refer to this AWS document
Notes: Permission must be added to allow Lambda Function to be triggered by CloudWatch since we created our Lambda Function using AWSCLI. For more reference, please visit here
$ aws lambda add-permission --function-name ec2backup --statement-id MyId --action 'lambda:InvokeFunction' --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-backup
{
"Statement": "{\"Sid\":\"MyId\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:464392538707:function:ec2backup\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-backup\"}}}"
}
Then we need to set a target for the rule
$ aws events put-targets --rule DailyLambdaFunction-backup --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:464392538707:function:ec2backup"
{
"FailedEntryCount": 0,
"FailedEntries": []
}
Notes: For the above IAM role and Lambda Function creation, you may refer to AWS documents here in case of any issues
Cross check in AWS console
Notes: If your Lambda Function fails to create the EC2 AMI, adjust the timeout under Basic settings; the default of 3 seconds should be increased, for example to 1 minute 3 seconds.
Now we will be kicking off our AMI cleanup process by creating a file named ec2cleanup.py
vim ec2cleanup.py
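The body of ec2cleanup.py is not reproduced here either; a minimal sketch that deregisters AMIs whose RemoveOn tag has passed might look like this (the YYYY.MM.DD tag format and helper names are assumptions):

```python
import datetime
import os

def is_expired(remove_on_value, today=None):
    """True when an image's RemoveOn tag date (assumed YYYY.MM.DD) has passed."""
    today = today or datetime.date.today()
    return datetime.datetime.strptime(remove_on_value, "%Y.%m.%d").date() <= today

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    client = boto3.client(
        "ec2",
        region_name=os.environ["REGION"],
        aws_access_key_id=os.environ["ACCESS_KEY"],
        aws_secret_access_key=os.environ["SECRET_KEY"],
    )
    # Only look at our own AMIs that carry a RemoveOn tag.
    images = client.describe_images(
        Owners=["self"],
        Filters=[{"Name": "tag-key", "Values": ["RemoveOn"]}],
    )["Images"]
    for img in images:
        tags = {t["Key"]: t["Value"] for t in img.get("Tags", [])}
        if is_expired(tags["RemoveOn"]):
            client.deregister_image(ImageId=img["ImageId"])
```

Note that deregistering an AMI does not delete its backing snapshots, which is why a separate snapshot cleanup function appears later in this post.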
Then, we need to create an IAM role and attach the AWSLambdaBasicExecutionRole policy to it as shown below (you may not need to repeat this, since we already created it for ec2backup.py)
$ aws iam create-role --role-name lambda-ex --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
{
"Role": {
"Path": "/",
"RoleName": "lambda-ex",
"RoleId": "AROAWYH7TZJJVV7TA2ESB",
"Arn": "arn:aws:iam::464392538707:role/lambda-ex",
"CreateDate": "2021-02-26T04:45:03Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
}
$ aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Since a Lambda function can't execute a .py file directly, we need to zip it as shown below
$ zip ec2cleanup.zip ec2cleanup.py
adding: ec2cleanup.py (deflated 53%)
Now we will create a Lambda function using AWS CLI shown below
$ aws lambda create-function --region us-east-1 --function-name ec2cleanup --zip-file fileb://ec2cleanup.zip --role arn:aws:iam::464392538707:role/lambda-ex --environment '{"Variables":{"REGION":"us-east-1", "ACCESS_KEY":"AKIAWYH7TZJJWTR7QNCK", "SECRET_KEY":"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"}}' --handler ec2cleanup.lambda_handler --runtime python3.6
{
"FunctionName": "ec2cleanup",
"FunctionArn": "arn:aws:lambda:us-east-1:464392538707:function:ec2cleanup",
"Runtime": "python3.6",
"Role": "arn:aws:iam::464392538707:role/lambda-ex",
"Handler": "ec2cleanup.lambda_handler",
"CodeSize": 695,
"Description": "",
"Timeout": 3,
"MemorySize": 128,
"LastModified": "2021-02-26T22:51:59.634+0000",
"CodeSha256": "RUHnl6N7GS7Wt1I5UW531VQ9h/bOhMEaFcmbUlv9nZM=",
"Version": "$LATEST",
"Environment": {
"Variables": {
"SECRET_KEY": "TtAaF20YsPIrOBYBTqiiXWKO5YfdMiSSOJLsZH2m",
"ACCESS_KEY": "AKIAWYH7TZJJWTR7QNCK",
"REGION": "us-east-1"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "b3fbb4d1-fea4-4625-8fa0-915cf7b206b4"
}
Notes: For python, lambda handler should be set up as <file name>.lambda_handler. For more reference, please visit here
Moving along, we need to create our trigger. Here we will be using the AWS CLI as we follow along with IaC; however, you are free to use the AWS console
We create a CloudWatch rule as shown below
$ aws events put-rule --name "DailyLambdaFunction-cleanup" --schedule-expression "cron(25 22 * * ? *)"
{
"RuleArn": "arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-cleanup"
}
Notes: For more about CloudWatch cron expressions, please refer to this AWS document. Also, when updating your schedule, use the AWS CLI; I tried to adjust it manually in the console and it wouldn't take effect
Notes: Permission must be added to allow Lambda Function to be triggered by CloudWatch since we created our Lambda Function using AWSCLI. For more reference, please visit here
$ aws lambda add-permission --function-name ec2cleanup --statement-id MyId --action 'lambda:InvokeFunction' --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-cleanup
{
"Statement": "{\"Sid\":\"MyId\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:464392538707:function:ec2cleanup\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-cleanup\"}}}"
}
Then we need to set a target for the rule
$ aws events put-targets --rule DailyLambdaFunction-cleanup --targets "Id"="1.1","Arn"="arn:aws:lambda:us-east-1:464392538707:function:ec2cleanup"
{
"FailedEntryCount": 0,
"FailedEntries": []
}
Notes: For the above IAM role and Lambda Function creation, you may refer to AWS documents here in case of any issues
Cross check in AWS console
Notes: If your Lambda Function fails to clean up AMIs on schedule, adjust the timeout under Basic settings; the default of 3 seconds should be increased, for example to 1 minute 3 seconds.
As a bonus, we will be touching upon our Snapshot cleanup process by creating a file named snapshotcleanup.py
vim snapshotcleanup.py
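The body of snapshotcleanup.py is likewise not reproduced here; a minimal sketch that deletes snapshots older than the 3-day retention window mentioned earlier could look like this (the retention constant and helper names are assumptions):

```python
import datetime
import os

RETENTION_DAYS = 3  # assumed default, matching the 3-day window mentioned earlier

def older_than(start_time, days, now=None):
    """True when a snapshot's StartTime is more than `days` days old."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return (now - start_time) > datetime.timedelta(days=days)

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    client = boto3.client(
        "ec2",
        region_name=os.environ["REGION"],
        aws_access_key_id=os.environ["ACCESS_KEY"],
        aws_secret_access_key=os.environ["SECRET_KEY"],
    )
    # Only consider snapshots we own, never public or shared ones.
    snaps = client.describe_snapshots(OwnerIds=["self"])["Snapshots"]
    for snap in snaps:
        if older_than(snap["StartTime"], RETENTION_DAYS):
            client.delete_snapshot(SnapshotId=snap["SnapshotId"])
```

In a real account you would likely also filter by a tag so the function cannot delete snapshots unrelated to this backup scheme.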
Then, we need to create an IAM role and attach the AWSLambdaBasicExecutionRole policy to it as shown below (you may not need to repeat this, since we already created it for ec2backup.py)
$ aws iam create-role --role-name lambda-ex --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
{
"Role": {
"Path": "/",
"RoleName": "lambda-ex",
"RoleId": "AROAWYH7TZJJVV7TA2ESB",
"Arn": "arn:aws:iam::464392538707:role/lambda-ex",
"CreateDate": "2021-02-26T04:45:03Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
}
$ aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Since a Lambda function can't execute a .py file directly, we need to zip it as shown below
$ zip snapshotcleanup.zip snapshotcleanup.py
adding: snapshotcleanup.py (deflated 49%)
Now we will create a Lambda function using AWS CLI shown below
$ aws lambda create-function --region us-east-1 --function-name snapshotcleanup --zip-file fileb://snapshotcleanup.zip --role arn:aws:iam::464392538707:role/lambda-ex --environment '{"Variables":{"REGION":"us-east-1", "ACCESS_KEY":"AKIAWYH7TZJJWTR7QNCK", "SECRET_KEY":"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"}}' --handler snapshotcleanup.lambda_handler --runtime python3.6
{
"FunctionName": "snapshotcleanup",
"FunctionArn": "arn:aws:lambda:us-east-1:464392538707:function:snapshotcleanup",
"Runtime": "python3.6",
"Role": "arn:aws:iam::464392538707:role/lambda-ex",
"Handler": "snapshotcleanup.lambda_handler",
"CodeSize": 817,
"Description": "",
"Timeout": 3,
"MemorySize": 128,
"LastModified": "2021-02-27T18:45:22.934+0000",
"CodeSha256": "XTSjZcK9VirpcYYkMWB6lChEvzUskJgNx6f2u/HUarE=",
"Version": "$LATEST",
"Environment": {
"Variables": {
"SECRET_KEY": "TtAaF20YsPIrOBYBTqiiXWKO5YfdMiSSOJLsZH2m",
"ACCESS_KEY": "AKIAWYH7TZJJWTR7QNCK",
"REGION": "us-east-1"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "26f3474b-c297-4e66-92e9-c8ed7b8c68e1"
}
Notes: For python, lambda handler should be set up as <file name>.lambda_handler. For more reference, please visit here
Moving along, we need to create our trigger. Here we will be using the AWS CLI as we follow along with IaC; however, you are free to use the AWS console
We create a CloudWatch rule as shown below
$ aws events put-rule --name "DailyLambdaFunction-snapshotcleanup" --schedule-expression "cron(0 19 * * ? *)"
{
"RuleArn": "arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-snapshotcleanup"
}
Notes: For more about CloudWatch cron expressions, please refer to this AWS document. Also, when updating your schedule, use the AWS CLI; I tried to adjust it manually in the console and it wouldn't take effect
Notes: Permission must be added to allow Lambda Function to be triggered by CloudWatch since we created our Lambda Function using AWSCLI. For more reference, please visit here
$ aws lambda add-permission --function-name snapshotcleanup --statement-id MyId --action 'lambda:InvokeFunction' --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-snapshotcleanup
{
"Statement": "{\"Sid\":\"MyId\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:464392538707:function:snapshotcleanup\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-snapshotcleanup\"}}}"
}
Then we need to set a target for the rule
$ aws events put-targets --rule DailyLambdaFunction-snapshotcleanup --targets "Id"="1.1","Arn"="arn:aws:lambda:us-east-1:464392538707:function:snapshotcleanup"
{
"FailedEntryCount": 0,
"FailedEntries": []
}
Notes: For the above IAM role and Lambda Function creation, you may refer to AWS documents here in case of any issues
Cross check in AWS console
Notes: If your Lambda Function fails to clean up snapshots, adjust the timeout under Basic settings; the default of 3 seconds should be increased, for example to 1 minute 3 seconds.
Last but not least, a free giveaway: automating your EC2 server updates using SSM in Boto3 and a CloudWatch cron expression
Firstly, since we need access to an EC2 server to check the installations and updates, let us provision a new Linux EC2 instance using CloudFormation
Create Amazon Linux 2 AMI instance using CloudFormation
CloudFormation yaml file is provided below
vim amazonlinux2ami.yml
Resources:
EC2SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Allow http to client host
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 443
ToPort: 443
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
Ec2Instance:
Type: AWS::EC2::Instance
Properties:
InstanceType: t2.micro
ImageId: ami-0915bcb5fa77e4892 # Amazon Linux 2 ami in us-east-1
KeyName: Amazonlinux2KeyPair
UserData:
'Fn::Base64':
!Sub |
#!/bin/bash
yum -y update
# install apache (Amazon Linux 2 packages it as httpd)
yum install httpd -y
# start server
service httpd start
chkconfig httpd on
SecurityGroups:
- !Ref EC2SecurityGroup
Tags: ### Our Tags here are intended for further use
- Key: Name
Value: amazonlinux2
- Key: Type
Value: Worker Instance
- Key: Instance state
Value: running
Notes: In order to gain access to the Amazon Linux 2 instance, we need to create a key pair using the AWS CLI before creating the CloudFormation stack
$ aws ec2 create-key-pair --key-name Amazonlinux2KeyPair
{
"KeyFingerprint": "e5:47:a6:9b:9d:96:3c:24:87:d8:6d:1e:11:2f:25:2f:76:3f:d9:2c",
"KeyMaterial": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpQIBAAKCAQEApfSiHI/rJCUD/mrTNBYdzoPkgZDIBjYvL3QxEvizZ1vC1aO7\nQ+czDS6fVX2KPNAF+xOrUuJ0Bd4Quqf7Qiz+4bdXp7K+aG4YgKRYes1vfhIgldu+\nulbpCabsXaUmkbS8qxWywTTeaez8ZimzXtmMxSLZAnCqjYq8tRWNT5J9RkBU1/J5\nazO708UGyzFbK9KSHA5j0wOP553utVekc0QT7xBrzYLgGsx5jxvYf0tEMUIepHCZ\n/zGdueR0n6JrGe9aTp8L+30O/VMNaZRYPi+QUtOx9pESk4hhNQPX2Ue0qThgW1Pg\nOLC1sMZTc3b0A5XQKKj07/UG6da8w0USCGaeiwIDAQABAoIBAQCSXmMkoeKYbHVL\nTVieFeuQG0/M3q8sm1melvI5c1R4ErySxOgDicTDGZ26PxFPdYHw4nY2kjgWfLdw\niXvX7+uVlKkg5Ut+u6usuka3eL2fCcnnonpjyweaVbkfFuwfkrLcijSwpzqLXlN2\nn8zuGR5JOOUBe/FRCU5KwIlz5xXKgLr+7sR94Hh+TB9+CPF/7z7YMwmsxbI7+t6q\nbhuGVQis7N6J+0T2Bq/pZmpKNoTvXKCee68HdYf0mvqi/gEldlFtRtzII/7tHeVY\n0jOqD3aqX9uD2e0W0KoCoxVhoHKYhxz0GavEr65iSTX/r+qirUOTW7nfg5Z2/mWX\n81PHwc+xAoGBAP7vhvv81Ix6SORwXnQtS72LFndlNx/UCAtQCSTLOcHjCabGUT6N\n0r/0F4WAz1QLdKyzGBINavOz4qhK9gRbpAKXyK8IqsbWDv+OjPykgc1q1NgRYGny\ncMjziYz3VFpzJpFTmMlz1Yl/JzxC6Rro0LyN7OU65d8LzgyPPPJdmhqvAoGBAKam\nAUT8eUB8bSFF/0XyPNHbthe+l/WnaKqMn4M02cZwYPZfC+/UlW2a4fCCj0B9GSu0\nQsvMlqF5nX7Z+xo9nDXsY/bJap9zgGH5pLTAzsDns21hHQ+f8RYMgMkD0JfiSOFO\nsCZtktW7lXg/7oDTKeLpA6mdhr7NVUeAkXBbb0DlAoGBAP5ULaEUoWMn17J2W/Sg\n/6+vo5EnW7AYEpJenVCkohFIk+dab9DtIfQ36oNYdv4Mk7B61yejVTCdJCDq77Z3\nSg8AJ8he6CiHgtz29LZS09//lSmdZEcuA9CmDXKhh/jYagCPmpxXQA/010qqIe2j\nmCKToGMruAolt8EV4SKVuNinAoGBAJiij6MaBAy2alYBgLAWEPK95GiXHyPW85zS\nM/++1oBUydqeb5Z5BWxgYfUiAAc3DWjkMBHuD0FS3JglG0KLj5osK9sL3GazKbGT\nL/KGblhtYAAc5Ls86VoilYqHvfR/Q/VpVxm7XrP4ngyHsfG4dzLn7XzbVWFJPITD\nE3LvrFSpAoGAFntQrakouC+i2I8epb90Vqmdszggo80DkFyil9+6Lc2zGi2WXm7Y\nCm5Gjyo014rKcunzuHQObVm7G86woJKef5k7LYjGP3n4ba4dajP4iqV8AQyW33Ik\n+NxkhG44TfPIhMqKLikhFsElczjBuWTlO4+6xtxpYweFLY/IonGl0cM=\n-----END RSA PRIVATE KEY-----",
"KeyName": "Amazonlinux2KeyPair"
}
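Note that the CLI returns KeyMaterial as one escaped JSON string with embedded \n characters. One way to save it directly as a usable .pem file, using the CLI's --query and --output flags, is sketched below (the file name is just an example):

```shell
# Write the private key straight to a .pem file instead of copying it
# out of the JSON response by hand.
aws ec2 create-key-pair \
    --key-name Amazonlinux2KeyPair \
    --query 'KeyMaterial' \
    --output text > Amazonlinux2KeyPair.pem

# SSH refuses keys that are readable by other users.
chmod 400 Amazonlinux2KeyPair.pem
```

This avoids the manual step of unescaping the newlines before the key can be used with ssh or PuTTYgen.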
Using AWS CLI to provision Amazonlinux2 EC2 instance
$ aws cloudformation create-stack --stack-name Amazonlinux2-instance --template-body file://amazonlinux2ami.yml
{
"StackId": "arn:aws:cloudformation:us-east-1:464392538707:stack/Amazonlinux2-instance/ed4d97c0-7997-11eb-a1f6-12638e32c80b"
}
Double check that the CloudFormation stack was created
Let us now log in to this server to check whether ansible and docker are preinstalled.
Notes: Here we’ll be applying 3 command lines to install both ansible and docker as well as update our server so that we can easily see that our Lambda Function is working
Notes: If using PuTTY, you need to convert the .pem file to a .ppk file via PuTTYgen in order to gain access to the EC2 server.
Load the key under Connection, then SSH, then Auth. Thereafter, input your hostname on port 22 to log in to the EC2 server as shown below
$ docker --version
-bash: docker: command not found
$ ansible --version
-bash: ansible: command not found
Neither ansible nor docker was preinstalled
vim ec2_daily_update_targets.py
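The body of ec2_daily_update_targets.py is not reproduced here; a minimal sketch of the SSM call such a function could make is below. The target tag and the exact install commands are assumptions that line up with the tags in the CloudFormation template above and the ansible/docker checks in this walkthrough.

```python
import os

# Shell commands the scheduled run executes on the target instance
# (illustrative; amazon-linux-extras provides ansible2 on Amazon Linux 2).
UPDATE_COMMANDS = [
    "sudo yum -y update",
    "sudo amazon-linux-extras install -y ansible2",
    "sudo yum install -y docker",
]

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    ssm = boto3.client(
        "ssm",
        region_name=os.environ["REGION"],
        aws_access_key_id=os.environ["ACCESS_KEY"],
        aws_secret_access_key=os.environ["SECRET_KEY"],
    )
    # Run the commands on every instance tagged Name=amazonlinux2 via the
    # built-in AWS-RunShellScript document.
    return ssm.send_command(
        Targets=[{"Key": "tag:Name", "Values": ["amazonlinux2"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": UPDATE_COMMANDS},
    )
```

Because SSM targets instances by tag, adding more servers to the nightly update run is just a matter of tagging them; the Lambda code never changes.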
Notes: In case you want to dive deep into how to run two or more terminal commands at once in Linux, please visit the blog; it can be handy!
Then, we need to create an IAM role and attach the AWSLambdaBasicExecutionRole policy to it as shown below (you may not need to repeat this, since we already created it for ec2backup.py)
$ aws iam create-role --role-name lambda-ex --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
{
"Role": {
"Path": "/",
"RoleName": "lambda-ex",
"RoleId": "AROAWYH7TZJJVV7TA2ESB",
"Arn": "arn:aws:iam::464392538707:role/lambda-ex",
"CreateDate": "2021-02-26T04:45:03Z",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
}
$ aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Since a Lambda function can't execute a .py file directly, we need to zip it as shown below
$ zip ec2_daily_update_targets.zip ec2_daily_update_targets.py
adding: ec2_daily_update_targets.py (deflated 64%)
Now we will create a Lambda function using AWS CLI shown below
$ aws lambda create-function --region us-east-1 --function-name Ec2DailyUpdateTargets --zip-file fileb://ec2_daily_update_targets.zip --role arn:aws:iam::464392538707:role/lambda-ex --environment '{"Variables":{"REGION":"us-east-1", "ACCESS_KEY":"AKIAWYH7TZJJWTR7QNCK", "SECRET_KEY":"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"}}' --handler ec2_daily_update_targets.lambda_handler --runtime python3.6
{
"FunctionName": "Ec2DailyUpdateTargets",
"FunctionArn": "arn:aws:lambda:us-east-1:464392538707:function:Ec2DailyUpdateTargets",
"Runtime": "python3.6",
"Role": "arn:aws:iam::464392538707:role/lambda-ex",
"Handler": "ec2_daily_update_targets.lambda_handler",
"CodeSize": 592,
"Description": "",
"Timeout": 3,
"MemorySize": 128,
"LastModified": "2021-02-28T07:54:37.545+0000",
"CodeSha256": "4LVixPNvsx6W3JDCBXvyK/7HqL9RpS2e62HGKjSpKwo=",
"Version": "$LATEST",
"Environment": {
"Variables": {
"SECRET_KEY": "TtAaF20YsPIrOBYBTqiiXWKO5YfdMiSSOJLsZH2m",
"ACCESS_KEY": "AKIAWYH7TZJJWTR7QNCK",
"REGION": "us-east-1"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "9a622778-f7d1-499b-8219-501ba8cca8c8"
}
Notes: For python, lambda handler should be set up as <file name>.lambda_handler. For more reference, please visit here
Moving along, we need to create our trigger. Here we will be using the AWS CLI as we follow along with IaC; however, you are free to use the AWS console
We create a CloudWatch rule as shown below
$ aws events put-rule --name "DailyLambdaFunction-ec2dailyupdatetargets" --schedule-expression "cron(0 8 * * ? *)"
{
"RuleArn": "arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-ec2dailyupdatetargets"
}
Notes: For more about CloudWatch cron expressions, please refer to this AWS document. Also, when updating your schedule, use the AWS CLI; I tried to adjust it manually in the console and it wouldn't take effect
Notes: Permission must be added to allow Lambda Function to be triggered by CloudWatch since we created our Lambda Function using AWSCLI. For more reference, please visit here
$ aws lambda add-permission --function-name Ec2DailyUpdateTargets --statement-id MyId --action 'lambda:InvokeFunction' --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-ec2dailyupdatetargets
{
"Statement": "{\"Sid\":\"MyId\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:464392538707:function:Ec2DailyUpdateTargets\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:events:us-east-1:464392538707:rule/DailyLambdaFunction-ec2dailyupdatetargets\"}}}"
}
Then we need to set a target for the rule
$ aws events put-targets --rule DailyLambdaFunction-ec2dailyupdatetargets --targets "Id"="1.1","Arn"="arn:aws:lambda:us-east-1:464392538707:function:Ec2DailyUpdateTargets"
{
"FailedEntryCount": 0,
"FailedEntries": []
}
Notes: For the above IAM role and Lambda Function creation, you may refer to AWS documents here in case of any issues
At this point, we are not done yet. To make it work, we need to install the SSM agent on our EC2 server, and we need to grant permissions for our resource group. Both are used by the Lambda Function when the scheduled time arrives
Firstly, install the SSM agent on our EC2 server (skip this step if the SSM agent is already preinstalled)
$ sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amazon-ssm-agent.rpm | 30 MB 00:00
Examining /var/tmp/yum-root-JpLfi3/amazon-ssm-agent.rpm: amazon-ssm-agent-3.0.655.0-1.x86_64
Marking /var/tmp/yum-root-JpLfi3/amazon-ssm-agent.rpm as an update to amazon-ssm-agent-3.0.161.0-1.amzn2.x86_64
Resolving Dependencies
--> Running transaction check
---> Package amazon-ssm-agent.x86_64 0:3.0.161.0-1.amzn2 will be updated
---> Package amazon-ssm-agent.x86_64 0:3.0.655.0-1 will be an update
--> Finished Dependency Resolution
amzn2-core/2/x86_64 | 3.7 kB 00:00
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Updating:
amazon-ssm-agent x86_64 3.0.655.0-1 /amazon-ssm-agent 104 M
Transaction Summary
================================================================================
Upgrade 1 Package
Total size: 104 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : amazon-ssm-agent-3.0.655.0-1.x86_64 1/2
Cleanup : amazon-ssm-agent-3.0.161.0-1.amzn2.x86_64 2/2
Failed to execute operation: File exists
Verifying : amazon-ssm-agent-3.0.655.0-1.x86_64 1/2
Verifying : amazon-ssm-agent-3.0.161.0-1.amzn2.x86_64 2/2
Updated:
 amazon-ssm-agent.x86_64 0:3.0.655.0-1
Complete!
Secondly, we need to create a policy so our user can access resource groups, since our Lambda Function relies on SSM and resource groups are used as its targets
Create a policy file named resourcegrouppolicy.json
vim resourcegrouppolicy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"resource-groups:*",
"cloudformation:DescribeStacks",
"cloudformation:ListStackResources",
"tag:GetResources",
"tag:TagResources",
"tag:UntagResources",
"tag:getTagKeys",
"tag:getTagValues",
"resource-explorer:*"
],
"Resource": "*"
}
]
}
Create this policy using the file
$ aws iam create-policy --policy-name resourcegrouppolicy --policy-document file://resourcegrouppolicy.json
{
"Policy": {
"PolicyName": "resourcegrouppolicy",
"PolicyId": "ANPAWYH7TZJJ33HEJB2A5",
"Arn": "arn:aws:iam::464392538707:policy/resourcegrouppolicy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2021-03-01T01:51:23Z",
"UpdateDate": "2021-03-01T01:51:23Z"
}
}
We need to attach this policy to our user
$ aws iam attach-user-policy --policy-arn arn:aws:iam::464392538707:policy/resourcegrouppolicy --user-name adminuser
Lastly, we need to attach IAM role with SSM access to EC2 instance in order to allow SSM to manage our EC2 instance (in our case, we’ll be installing a few tools and updating our server)
Notes: Here I attempted to use the AWS CLI to create the role and attach it to the EC2 instance. However, an error occurred while creating the role, so I used the AWS console to bypass the issue for now.
Please create the role and attach those 2 policies shown below
At the scheduled time, we ssh into our EC2 server to check whether the commands in the Lambda Function were executed
$ docker --version
Docker version 19.03.13-ce, build 4484c46
$ ansible --version
ansible 2.9.13
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.18 (default, Feb 18 2021, 06:07:59) [GCC 7.3.1 20180712 (Red Hat 7.3.1-12)]
$ yum list installed | grep -i git
crontabs.noarch 1.11-6.20121102git.amzn2 installed
git.x86_64 2.23.3-1.amzn2.0.1 @amzn2-core
git-core.x86_64 2.23.3-1.amzn2.0.1 @amzn2-core
git-core-doc.noarch 2.23.3-1.amzn2.0.1 @amzn2-core
lm_sensors-libs.x86_64 3.4.0-8.20160601gitf9185e5.amzn2
net-tools.x86_64 2.0-0.22.20131004git.amzn2.0.2 installed
perl-Git.noarch 2.23.3-1.amzn2.0.1 @amzn2-core
python-pillow.x86_64 2.0.0-21.gitd1c6db8.amzn2.0.1 installed
runc.x86_64 1.0.0-0.1.20200826.gitff819c7.amzn2
screen.x86_64 4.1.0-0.25.20120314git3c2946.amzn2
All commands were successfully executed!
Notes: If your Lambda Function fails to run the update commands, adjust the timeout under Basic settings; the default of 3 seconds should be increased, for example to 1 minute 3 seconds.
Cleanup:
Before jumping to the conclusion, do not forget to tear down our infrastructure. For the infrastructure created using CloudFormation, we can easily delete the stack using the AWS CLI or the AWS console, and the resources CloudFormation created are removed accordingly. The EC2 instance provisioned by Terraform, on the other hand, is terminated by simply running terraform destroy. Last but not least, infrastructure created with the AWS CLI needs to be removed either via the AWS CLI or inside the AWS console. With that said, we can easily see the power of IaC with Terraform as well as CloudFormation: provision without hassle, and clean up without hassle either
Conclusion:
Let us recap our project. Below are the tasks we accomplished throughout this project using the AWS SDK for Python (Boto3)
- Describing all EC2 instances and creating the file awssutils.py
- Retrieving EC2 instance details
- Finding an instance's AMI ID in case you need it
- Stopping the EC2 instance created previously
- Starting the EC2 instance created previously
- An alternative approach to fetching, starting, and stopping
- Stopping and starting the EC2 instance with that alternative approach
- Creating a backup image of an EC2 instance
- An alternative approach to copying an EC2 instance
- Tagging images of EC2 instances
- Creating an EC2 instance from a backup image
- Removing backup images
- Terminating an EC2 instance
- Local cron job on Linux: a script that backs up core EC2 instances for DR by calling the AWS API, checking instance tags to decide whether a backup snapshot is needed, and cleaning up expired snapshots
Based on this project's infrastructure, we'll reiterate the Lambda Function part.
Firstly, by using a cron expression in a CloudWatch rule, we can trigger Lambda at any scheduled time; in this project, we tested a fixed time each day. This trigger can be set up using either the AWS CLI or the AWS console. Secondly, the Lambda Functions, scripted with the AWS SDK for Python (Boto3), implement a number of jobs
- Back up EC2 AMI
- Update System
- Clean up EC2 AMI
- Clean up Snapshot