Unleashing the Power of AWS with Boto3: A Beginner’s Guide

Sarumathy P
Published in featurepreneur
Mar 5, 2023

Boto3 is a Python library that provides a simple, easy-to-use interface for developers to interact with Amazon Web Services (AWS) from Python code. It allows developers to programmatically access and manipulate AWS resources such as EC2 instances and S3 buckets.

Boto3 acts as a bridge between Python code and AWS services, making it easier for developers to integrate and automate their applications with AWS.

This article covers:

  • Connecting to AWS using the AWS Command Line Interface (CLI)
  • Installing Boto3
  • Basic examples and use cases of the Boto3 library

Here are the steps to connect your system to AWS using the AWS CLI:

  1. Install the AWS CLI on your system if it’s not already installed. You can download and install it from the AWS documentation: https://aws.amazon.com/cli/
  2. Configure the AWS CLI by running the following command and entering your AWS access key ID, secret access key, and default region when prompted:
aws configure

3. Verify that the CLI is properly configured by running the following command:

aws ec2 describe-instances

If your CLI is correctly configured, this command should return a list of all the EC2 instances in your AWS account.

4. Install the Boto3 library by running the following command:

pip install boto3

Note that because the AWS CLI is already configured with your access key and secret access key, you do not need to specify them again in your Python code.
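Optionally, you can confirm from Python that Boto3 sees the same credentials by asking the Security Token Service (STS) who you are. This is a minimal sketch of such a check; the only assumption is that aws configure has already been run as above.

import boto3

# STS returns details about the credentials Boto3 found on the system
sts = boto3.client('sts')
identity = sts.get_caller_identity()

# 'Account' and 'Arn' identify the AWS account and principal behind those credentials
print(identity['Account'], identity['Arn'])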

Examples:

  1. Printing the IDs and states of all the EC2 instances

import boto3

ec2 = boto3.resource('ec2')

for instance in ec2.instances.all():
    print(instance.id, instance.state)
  • import boto3: This line imports the boto3 library, which is the AWS SDK (a collection of software development tools and libraries provided by AWS) for Python.
  • ec2 = boto3.resource('ec2'): This line creates an EC2 resource object using the boto3.resource method. The ec2 object is used to interact with EC2 resources in the current AWS account.
  • for instance in ec2.instances.all(): This line starts a for loop that iterates over all EC2 instances in the current AWS account. The ec2.instances.all() method returns a collection of Instance objects representing all EC2 instances in the account.
  • print(instance.id, instance.state): This line prints the ID and state of each EC2 instance. The instance.id and instance.state attributes of each Instance object give the ID and state of the current instance in the loop.

This code runs without specifying any access keys because Boto3 uses the AWS credentials already stored on the system. If the system has valid AWS credentials with the necessary permissions to interact with EC2 resources, the code runs without any issues; otherwise, it raises an error.
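A small variation of the same example, assuming the same stored credentials: since instance.state is a dictionary such as {'Code': 16, 'Name': 'running'}, you can print the readable state name, and you can ask EC2 to return only running instances by filtering the collection.

import boto3

ec2 = boto3.resource('ec2')

# Ask EC2 to return only instances whose state is 'running'
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
)

for instance in running:
    # instance.state is a dict, so state['Name'] gives the readable state
    print(instance.id, instance.state['Name'])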

2. Getting the name of a specific EC2 instance

import boto3

ec2 = boto3.resource('ec2')

instance = ec2.Instance('your_instance_id')

for tag in instance.tags:
    if tag['Key'] == 'Name':
        name = tag['Value']
        break
else:
    name = 'Instance has no name'

print(name)

Replace 'your_instance_id' with the ID of the EC2 instance whose name you want to retrieve. This code retrieves the tags associated with the instance and looks for a tag with the key 'Name'. If it finds a matching tag, it uses that tag's value as the name; if it does not, it sets the name to 'Instance has no name'.
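Note that instance.tags is None for an instance that has no tags at all, which would make the loop above fail. A small helper, sketched below with an illustrative function name, guards against that case:

import boto3

ec2 = boto3.resource('ec2')

def get_instance_name(instance_id):
    instance = ec2.Instance(instance_id)
    # instance.tags is None when the instance has no tags at all
    for tag in instance.tags or []:
        if tag['Key'] == 'Name':
            return tag['Value']
    return 'Instance has no name'

print(get_instance_name('your_instance_id'))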

3. Counting the buckets in your S3 account

import boto3

s3 = boto3.client('s3')

bucket_list = s3.list_buckets()
count = 0
for bucket in bucket_list['Buckets']:
    # print(bucket)
    count += 1
print(count)

The list_buckets method returns a dictionary like this:

{
    "ResponseMetadata": {
        "RequestId": "....",
        "HostId": ".....",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
            "x-amz-id-2": "....",
            "x-amz-request-id": "...",
            "date": "Sun, 05 Mar 2023 07:29:17 GMT",
            "content-type": "application/xml",
            "transfer-encoding": "chunked",
            "server": "AmazonS3"
        },
        "RetryAttempts": 0
    },
    "Buckets": [
        {
            "Name": "<bucket1name>-loaction",
            "CreationDate": "<yourbucketcreationdate>"
        },
        {
            "Name": "<bucket2name>-loaction",
            "CreationDate": "<yourbucketcreationdate>"
        }
    ],
    "Owner": {
        "ID": "someid"
    }
}
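Since each entry under 'Buckets' already carries the bucket's name and creation date, a small variation of the example above prints them directly instead of only counting:

import boto3

s3 = boto3.client('s3')

response = s3.list_buckets()

# Each bucket entry has a 'Name' string and a 'CreationDate' datetime
for bucket in response['Buckets']:
    print(bucket['Name'], bucket['CreationDate'])

print('Total buckets:', len(response['Buckets']))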

4. Uploading a file to your S3 bucket

import boto3


s3 = boto3.client('s3')

s3.upload_file('/home/saru/Desktop/hello.txt', 'my-first-bucket', 'text-files-object')

The boto3.client('s3') line creates a client object for the S3 service.

The s3.upload_file() method is then called to upload a file to an S3 bucket. The method takes three arguments.

Syntax:

upload_file(<local-file-path>, <s3-bucket-name>, <s3-objectkey-name>)

Therefore, the code is uploading the file located at ‘/home/saru/Desktop/hello.txt’ to the S3 bucket named ‘my-first-bucket’ and assigning the S3 object key ‘text-files-object’ to the uploaded file.
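To verify the upload, you can fetch the object back with the mirror-image download_file() method, which takes the bucket name, the object key, and a local destination path. The local path below is just a placeholder:

import boto3

s3 = boto3.client('s3')

# download_file(<s3-bucket-name>, <s3-objectkey-name>, <local-file-path>)
s3.download_file('my-first-bucket', 'text-files-object', '/home/saru/Desktop/hello_copy.txt')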

5. Creating a CloudFormation stack

import boto3

cloudformation = boto3.client('cloudformation')

stack_name = 'my-ec2-stack-1'

template_body = '''
Resources:
  SaruEC2Instance3:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0ab0629dba5ae551d
      Tags:
        - Key: "Name"
          Value: "mythirdInstance"
'''

# Create the stack
response = cloudformation.create_stack(
    StackName=stack_name,
    TemplateBody=template_body
)

# Print the stack creation response
print(response)

When executed, this code creates a stack named “my-ec2-stack-1” in AWS CloudFormation containing a single EC2 instance tagged ‘mythirdInstance’.
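Note that create_stack() returns as soon as CloudFormation accepts the request, not when the instance is actually up. If you want the script to block until the stack reaches CREATE_COMPLETE, you can use a waiter; this is a minimal sketch, assuming the stack name used above:

import boto3

cloudformation = boto3.client('cloudformation')

# Block until the stack reaches CREATE_COMPLETE; raises an error if creation fails
waiter = cloudformation.get_waiter('stack_create_complete')
waiter.wait(StackName='my-ec2-stack-1')

print('Stack my-ec2-stack-1 created successfully')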

Thus, Boto3 allows you to automate tasks that would otherwise have to be done manually in the AWS Management Console, which saves time and reduces the chance of human error. But you do need to be careful while writing the code: if a naming convention is violated (e.g., a stack name must not contain spaces), the program fails to do its job. Make sure all naming conventions are followed correctly before you execute the code.
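One way to make such failures easier to diagnose is to catch botocore.exceptions.ClientError, which Boto3 raises for most service-side errors and which carries the error code and message returned by AWS. A rough sketch with a deliberately invalid stack name:

import boto3
from botocore.exceptions import ClientError

cloudformation = boto3.client('cloudformation')

try:
    # A stack name containing spaces is invalid, so AWS rejects the request
    cloudformation.create_stack(
        StackName='my invalid stack name',
        TemplateBody='Resources: {}'
    )
except ClientError as error:
    # The response carries the AWS error code and message explaining what went wrong
    print(error.response['Error']['Code'], error.response['Error']['Message'])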

Thank you. Hope it helps.
