Change File Type in an S3 Bucket Using a Python Lambda on AWS

Shashi Sinha
Dec 9, 2018

If you need to rename files in an S3 bucket using a Python Lambda function, here is the solution.

Log in to AWS with your free-tier account (a debit/credit card is required to sign up).

Create the user and group, and assign a role that is allowed to execute the Lambda function and to read and write the S3 bucket.
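As a rough guide, the execution role only needs to list the bucket and to get, put, and delete objects in it. The role name, policy name, and bucket below are placeholders, not values from this walkthrough; a minimal boto3 sketch that attaches such an inline policy might look like this:

import json
import boto3

iam = boto3.client('iam')

# Hypothetical names - replace with your own execution role and bucket
role_name = 'lambda-s3-rename-role'
bucket = 'my-test-s3-bucket-sspsinha'

policy = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Allow',
            'Action': ['s3:ListBucket'],
            'Resource': f'arn:aws:s3:::{bucket}'
        },
        {
            'Effect': 'Allow',
            'Action': ['s3:GetObject', 's3:PutObject', 's3:DeleteObject'],
            'Resource': f'arn:aws:s3:::{bucket}/*'
        }
    ]
}

# Attach the policy inline to the Lambda execution role
iam.put_role_policy(
    RoleName=role_name,
    PolicyName='s3-rename-access',
    PolicyDocument=json.dumps(policy)
)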

Go to S3 under AWS services and create an S3 bucket. Create a few folders inside the bucket you just created. Now your basic environment is ready.
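If you prefer to script this step instead of using the console, here is a small boto3 sketch. The bucket name, region, and folder names are only examples; in S3 a "folder" is simply a zero-byte object whose key ends with a slash.

import boto3

s3 = boto3.client('s3')
bucket = 'my-test-s3-bucket-sspsinha'  # bucket names must be globally unique

# Create the bucket (omit CreateBucketConfiguration only in us-east-1)
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}
)

# "Folders" in S3 are just zero-byte objects whose keys end with a slash
for folder in ['incoming/', 'processed/']:
    s3.put_object(Bucket=bucket, Key=folder)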

Now go to Services, choose Lambda, and add a new Lambda function. I have chosen the Python 3.7 runtime. Paste the following handler code:

import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    s3_resource = boto3.resource('s3')
    bucket = 'my-test-s3-bucket-sspsinha'
    prefix = ''
    suffix = 'csv'

    kwargs = {'Bucket': bucket}
    if isinstance(prefix, str):
        kwargs['Prefix'] = prefix

    renamed_keys = []
    resp = s3.list_objects_v2(**kwargs)
    contents = resp.get('Contents', [])
    for con in contents:
        if con['Key'].endswith(suffix):
            renamed_keys.append(con['Key'])
            copy_source = {
                'Bucket': bucket,
                'Key': con['Key']
            }
            # Copy the object to a new key with a .txt extension, then delete the original
            s3_resource.meta.client.copy(copy_source, bucket, con['Key'].split('.')[0] + '.txt')
            s3.delete_object(Bucket=bucket, Key=con['Key'])
    return renamed_keys

Then click Test at the top right of the Lambda console. The code above will rename every nested .csv file in the S3 bucket to a .txt file.
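One caveat: list_objects_v2 returns at most 1,000 keys per call, so a bucket with more objects than that would need pagination. This is not part of the original function, just a sketch of how the listing could be adapted with a paginator:

import boto3

s3 = boto3.client('s3')
bucket = 'my-test-s3-bucket-sspsinha'

paginator = s3.get_paginator('list_objects_v2')
csv_keys = []

# Walk every page of results instead of relying on a single list_objects_v2 call
for page in paginator.paginate(Bucket=bucket, Prefix=''):
    for obj in page.get('Contents', []):
        if obj['Key'].endswith('csv'):
            csv_keys.append(obj['Key'])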

After the rename function executes, it returns the list of keys that were processed, and the objects in the bucket now carry the .txt extension.

That's all. Please let us know if you face any difficulty. Take care to preserve the indentation of the Python code above.
