Writing a CSV to S3 from AWS Lambda

Karthik Subramanian
3 min read · Aug 30, 2022

In the last post I explained how to scrape a URL with Selenium and extract the number of search results Google returns for a query string. Let’s now see how to write those results to S3 as a CSV file.

Setting up the S3 bucket

Update the template.yaml file and add a new resource for the S3 bucket -
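
Below is a minimal sketch of what this resource could look like; the logical ID (ResultsBucket) and the specific rule values are assumptions, so adapt them to your stack:

ResultsBucket:
  Type: AWS::S3::Bucket
  Properties:
    # Expire objects automatically after one day (optional, see below)
    LifecycleConfiguration:
      Rules:
        - Id: ExpireCsvFilesAfterOneDay
          Status: Enabled
          ExpirationInDays: 1
    # Allow downloads from any origin via pre-signed URLs (see below)
    CorsConfiguration:
      CorsRules:
        - AllowedOrigins:
            - "*"
          AllowedMethods:
            - GET
          AllowedHeaders:
            - "*"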

When defining the bucket, we specify a few additional properties -

  • LifecycleConfiguration: This is optional, but I have set it so that my CSVs are deleted after a day, since I don’t want them lying around forever
  • CorsConfiguration: In my use case, I needed the objects in S3 to be available for download to anyone who has a pre-signed URL. Because of this requirement, I had to specify a CORS configuration that allows any origin. Modify this as per your needs

Let’s also define a global environment variable for the bucket name so that the Lambdas have it available to them -
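
As a sketch, assuming the variable is named BUCKET_NAME and the bucket resource is ResultsBucket, the Globals section could look like this:

Globals:
  Function:
    Environment:
      Variables:
        BUCKET_NAME: !Ref ResultsBucket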

We also need to ensure that the Process Lambda has permission to write to the S3 bucket. Add a new policy to the Lambda’s properties -
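
One way to do this is with a SAM policy template; the function’s logical ID (ProcessFunction) and the choice of S3CrudPolicy below are assumptions (a narrower write-only policy would also work):

ProcessFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ...existing properties...
    Policies:
      - S3CrudPolicy:
          BucketName: !Ref ResultsBucket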

Finally, update the Outputs -
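
For example, exposing the bucket name as a stack output could look like this (the output name and description are assumptions):

Outputs:
  ResultsBucketName:
    Description: Name of the S3 bucket that stores the generated CSV files
    Value: !Ref ResultsBucket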

Update the process.py file with the following code -
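
The core of the change is building the CSV in memory and uploading it with boto3. The sketch below illustrates the idea only; the function name, the CSV columns, and the object key format are placeholders, and the real handler wires this into the scraping and order-status logic from the earlier posts:

import csv
import io
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
# BUCKET_NAME is assumed to be the environment variable defined in the Globals section above
BUCKET_NAME = os.environ["BUCKET_NAME"]


def write_results_to_s3(results):
    """Write the scraped results to the S3 bucket as a CSV and return the object key."""
    # Build the CSV in memory (column names here are placeholders)
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["query", "result_count"])
    for row in results:
        writer.writerow([row["query"], row["result_count"]])

    # Timestamped key so repeated runs don't overwrite each other
    key = f"results-{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S')}.csv"
    s3.put_object(Bucket=BUCKET_NAME, Key=key, Body=buffer.getvalue().encode("utf-8"))
    return key

The key returned here is also what ends up being recorded in DynamoDB as the file_location.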

Note: The call that updates the order status to complete was also modified to include the CSV file name

Deploying & Testing

Unlike before, we are going to deploy our changes to AWS first so that the S3 bucket gets created, and then test our code.

sam build
sam deploy

You should see an output like this -

Validate the changes

To validate the changes, let’s make another POST call from Postman to the prod API Gateway -

Now log in to the AWS console and check the S3 bucket; you should see the CSV file created -

Looking at the DynamoDB table, we can see that the file_location was also updated with the CSV file name

Source Code

Here is the source code for the project built in this series.

Next: Part 6: Downloading a file from S3 using API Gateway & AWS Lambda
