Serverless Quick Tip #1: Don’t do Multipart Requests on AWS Lambda

Alexander Magnus Partsch
Published in TrustBob Blog · 2 min read · Nov 26, 2018

Although it is possible to write AWS Lambda functions that handle multipart requests (Java handlers even accept InputStreams, which reduces memory usage), you will eventually run into timeouts, because API Gateway limits each invocation to 30 seconds, and into memory shortages, either because your runtime does not support stream input types or because the JVM adds too much overhead on top of the file parts in your upload function.

The GitHub repository for this post.

Since you are already running on AWS and probably plan to integrate Amazon S3 into your solution at some point, you can make use of a handy feature: S3 pre-signed upload URLs.

Instead of implementing multipart request handling in every AWS Lambda function, you keep one temporary bucket with a lifecycle rule, so that old files get deleted regularly (I usually keep them for an hour), plus an HTTP endpoint that generates these pre-signed URLs for your clients.
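As a minimal sketch of that expiration rule (assuming the AWS SDK for Java v1 and a placeholder bucket name; in practice you would declare the same rule as a resource in serverless.yml), note that S3 lifecycle expiration is expressed in whole days, so one day is the shortest window the rule itself can enforce:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;

public class TempBucketLifecycle {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Delete every object one day after it was created. S3 lifecycle
        // expiration is expressed in whole days, so one day is the minimum.
        BucketLifecycleConfiguration.Rule expireRule =
                new BucketLifecycleConfiguration.Rule()
                        .withId("expire-temporary-uploads")
                        .withFilter(new LifecycleFilter()) // no predicate: applies to all objects
                        .withExpirationInDays(1)
                        .withStatus(BucketLifecycleConfiguration.ENABLED);

        // "my-temp-upload-bucket" is a placeholder for your bucket name.
        s3.setBucketLifecycleConfiguration("my-temp-upload-bucket",
                new BucketLifecycleConfiguration().withRules(expireRule));
    }
}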

The linked repository contains a microservice, written in Clojure, that deploys such an endpoint. You just need to supply the bucket name, the AWS region, and the desired expiration time for these URLs in seconds.
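The core of such an endpoint boils down to a single SDK call. Here is a minimal sketch in Java rather than Clojure, assuming the AWS SDK for Java v1, a placeholder bucket name and region, and a 15-minute expiration:

import java.util.Date;
import java.util.UUID;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUploadUrl {
    public static void main(String[] args) {
        // "eu-west-1" and "my-temp-upload-bucket" are placeholders.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("eu-west-1")
                .build();

        // Random key so concurrent uploads of the same file name never collide.
        String key = UUID.randomUUID() + ".txt";
        Date expires = new Date(System.currentTimeMillis() + 15 * 60 * 1000L);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-temp-upload-bucket", key)
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expires)
                        .withContentType("text/plain");

        System.out.println(s3.generatePresignedUrl(request));
    }
}

Because the Content-Type is signed into the URL here, the client has to send the same Content-Type header when it uploads the file.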

The function will respond with a JSON object:

{  "key": "82d22e41-25fe-47d1-971b-42a9364f1436.txt",   "fileName": "test.txt",  "expires": 1543151957,  "url": "https://..."}

The `url` field contains the PUT endpoint where you can upload your file:

curl -X PUT -H "Content-Type: text/plain" --data-binary @test.txt "$URL"

You can later use the `key` attribute in other AWS Lambda functions to refer to the temporary file.
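For example, a downstream function could read the uploaded file straight from the temporary bucket. Again a hedged sketch with the AWS SDK for Java v1 and placeholder names:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

public class ReadTemporaryUpload {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The key returned by the pre-signed URL endpoint, passed along
        // by the client in its follow-up request.
        String key = "82d22e41-25fe-47d1-971b-42a9364f1436.txt";

        // "my-temp-upload-bucket" is a placeholder for your bucket name.
        try (S3Object object = s3.getObject("my-temp-upload-bucket", key)) {
            // Fine for small files; stream the content for large ones.
            String content = IOUtils.toString(object.getObjectContent());
            System.out.println(content);
        }
    }
}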

Of course, before deploying this function, configure your authoriser in the serverless.yml.
