Pre-Signed AWS S3 URLs with Firebase Functions

Say you want to let the users of your app upload or download from an S3 bucket without having to create complex IAM roles and permissions on AWS S3.

Normally the solution is to use an endpoint in your backend to obtain the pre-signed URL and then send it back to the frontend for the frontend app to use:

Obtaining a pre-signed URL from S3 with a custom backend

We do this because the backend has permission to contact the S3 service and obtain the URL, while the frontend doesn’t. The backend marshals access to the resource for the frontend, so we avoid having to specify per-user rules ourselves.

But how do you achieve the same result if you don’t have your custom backend?

This can be the case if you’re only using Firebase to implement a data store for your frontend app.

Google Cloud Functions to the rescue

Google Cloud Functions (a.k.a. Firebase Functions) are an addition to the services provided by Firebase which allows you to write JavaScript functions that are executed in the cloud and called from your frontend (or backend) app.

They can be thought of as something similar to AWS Lambda functions: small-ish functions which can be executed arbitrarily and which require almost no backend setup on your side.

What we want to achieve is this:

Obtaining a pre-signed URL from S3 with Google Cloud Functions

Example setup

In this article I assume that:

  • You have already set up a frontend application
  • You have created a Firebase project
  • You have an AWS account and have created an S3 bucket with default permissions (you are the only owner and the only one with access)

If you haven’t, there are myriad articles on the internet explaining how to do so.

Next we need to integrate Google Cloud Functions into our project. This article explains well how to do it, so I’m not going to repeat it here.

CORS setup for S3

In order to send an HTTP request to the pre-signed URL from the frontend, we need to set up our S3 bucket with the correct CORS policy. If we fail to do so, our browser or app won’t be able to send the request.

Here is an example CORS policy which you can use:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

This policy allows every HTTP method on our bucket from any origin. You might want to create stricter rules, but that’s up to you.
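As an example of a stricter policy, you could pin the allowed origin to your app’s domain and allow only the methods you actually use. The domain below is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```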

Writing our Cloud Functions

If you have set up the above, you should now have a folder named functions in your project, with this structure:

functions/
  index.js
  node_modules/
  package-lock.json
  package.json

index.js is where we’re going to write our function, but you could just as easily write it in a separate JavaScript file.

In order to allow the backend to successfully call the AWS service endpoints, in our case S3, we need to create an IAM user for it and then obtain the corresponding API keys.

Again, I’m referring you to an external article which explains how to do so.

Essentially, make sure that your IAM user uses a policy which grants it full access to the S3 resource. You can use the AmazonS3FullAccess managed policy for ease.
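If you’d rather not grant full access, a least-privilege alternative is a custom policy scoped to just the bucket you use (the bucket name below is a placeholder). Note that s3:PutObjectAcl is included because the upload example later sets a public-read ACL:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}
```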

If you already have an IAM user, simply make sure that it has access to S3 resources.

Obtain access keys for your IAM user so that you can authenticate from the Cloud Functions. These are given to you as a file either when you first create the user, or when you manually generate them from the IAM management panel (in case you already have a user).

These look like this:

aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

We can now install the AWS SDK in our Cloud Functions project, so that we can make requests to S3 from the functions. Navigate inside the functions folder generated by the Firebase CLI tools and run:

npm add aws-sdk

Example upload function

Let’s write an example Cloud Function which allows the frontend app to upload a file to our S3 bucket.

Open index.js and delete the existing code. Then write our function:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const AWS = require('aws-sdk');

admin.initializeApp();

exports.getS3SignedUrlUpload = functions.https.onCall((data, context) => {
  // Don't commit real credentials to source control: in production,
  // load these from environment configuration instead of hard-coding them.
  AWS.config.update({
    accessKeyId: "YOUR_ACCESS_KEY_ID",
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
    region: "YOUR_AWS_REGION" // e.g. "eu-west-1"
  });
  const s3 = new AWS.S3();
  const s3Params = {
    Bucket: data.S3BucketName,
    Key: data.key,
    Expires: 600, // Expires in 10 minutes
    ContentType: data.contentType,
    ACL: 'public-read', // Could be something else
    ServerSideEncryption: 'AES256'
  };
  // With static credentials getSignedUrl can be called synchronously;
  // the resulting URL string is returned to the caller.
  return s3.getSignedUrl('putObject', s3Params);
});

Let’s dissect what’s happening here.

First we import the required Node.js modules: we need both the Firebase modules and the AWS SDK module so that we can communicate with the S3 endpoint.

exports.getS3SignedUrlUpload = functions.https.onCall((data, context) =>

specifies the name of the function and what type it is. You can define either HTTP Cloud Functions or “App Functions” (callable functions), which require less setup and are easier to use. In this case we’re using App Functions.

data is a JSON structure containing the following fields:

data = {
  S3BucketName: "my-bucket-name",
  key: "file-to-upload-name",
  contentType: "mime-content-type" // e.g. "application/zip"
}

which are used to configure the pre-signed URL. The object is constructed in the frontend app and sent along with the Google Cloud Function request. This way a form in the frontend can specify the name of the file to upload, which file to upload, and so on.
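Because data comes straight from the client, it’s worth validating it before use. Here is a minimal sketch of such a check; the function name and the specific rules are my own additions, not part of the article’s function:

```javascript
// Hypothetical input check for the callable function's `data` payload.
// Field names (S3BucketName, key, contentType) match the structure above;
// the validation rules are an assumption, not prescribed by the article.
function validateUploadRequest(data) {
  if (!data || typeof data !== 'object') {
    throw new Error('Missing request payload');
  }
  const { S3BucketName, key, contentType } = data;
  if (typeof S3BucketName !== 'string' || S3BucketName.length === 0) {
    throw new Error('S3BucketName must be a non-empty string');
  }
  // Reject empty keys and path traversal attempts
  if (typeof key !== 'string' || key.length === 0 || key.includes('..')) {
    throw new Error('key must be a non-empty string without ".."');
  }
  // Very loose MIME check: "type/subtype"
  if (typeof contentType !== 'string' || !/^[\w.+-]+\/[\w.+-]+$/.test(contentType)) {
    throw new Error('contentType must look like a MIME type');
  }
  return { S3BucketName, key, contentType };
}
```

The Cloud Function could call this at the top of its body and only build the pre-signed URL parameters from the validated result.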

  AWS.config.update({
    accessKeyId: "YOUR_ACCESS_KEY_ID",
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
    region: "YOUR_AWS_REGION" // e.g. "eu-west-1"
  });
  const s3 = new AWS.S3();

Here we configure the AWS SDK with the credentials we obtained for the IAM user, and then create an object which lets us interact with the S3 APIs.

  const s3Params = {
    Bucket: data.S3BucketName,
    Key: data.key,
    Expires: 600, // Expires in 10 minutes
    ContentType: data.contentType,
    ACL: 'public-read', // Could be something else
    ServerSideEncryption: 'AES256'
  };

Then we create an object with the parameters necessary to request a pre-signed URL from S3. Some of these are optional, but they help us be more precise about what type of file we’re uploading and which encryption level to use. This page explains the various available parameters.

Feel free to specify more parameters and to change the contents of the JSON data structure.

return s3.getSignedUrl('putObject', s3Params);

Finally we ask the SDK for a pre-signed URL and return it to the client. (With static credentials the SDK signs the URL locally, so no extra round trip to S3 is needed.)

Example Frontend usage

Let’s now use our pre-signed URL in the frontend to upload a file to S3:

...
// Retrieve the Cloud Function
// (here `functions` is the client-side Firebase Functions instance,
// i.e. firebase.functions())
const getSignedUrl = functions.httpsCallable('getS3SignedUrlUpload');
// Parameters expected by the Cloud Function
const s3GetUrlParams = {
  S3BucketName: 'my-bucket-name',
  key: 'my-file.zip',
  contentType: 'application/zip'
};
// Call the Cloud Function
getSignedUrl(s3GetUrlParams).then(result => {
  // result.data contains the S3 pre-signed URL
  const options = {
    headers: {
      'Content-Type': 'application/zip',
      'x-amz-acl': 'public-read',
      'x-amz-server-side-encryption': 'AES256'
    }
  };
  // Run an HTTP PUT request to the pre-signed URL
  return axios.put(result.data, this.state.zipFile, options)
    .then(result => {...});
});

In this example I am using ReactJS and Axios, but this flow could be used in any other setting:

  1. Call the Cloud Function
  2. Use the returned pre-signed URL

What if I want to download a file from an S3 bucket?

Most of the code would remain the same.

In the Cloud Function, instead of asking for a 'putObject' URL, you’d ask for a 'getObject' one:

return s3.getSignedUrl('getObject', s3Params);

and on the frontend side, you’d send an HTTP GET request rather than a PUT one.
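If you’d rather serve both cases from a single Cloud Function, a small helper could map the requested operation to the right S3 action and parameters. This is a sketch; the operation field and the helper are my own additions. Note that a getObject URL doesn’t need ContentType, ACL or encryption parameters, since those only matter for uploads:

```javascript
// Hypothetical helper: pick the S3 action and pre-signed URL parameters
// based on an `operation` field added to the callable's payload.
// For downloads, only Bucket/Key/Expires are needed.
function signedUrlRequest(data) {
  const base = {
    Bucket: data.S3BucketName,
    Key: data.key,
    Expires: 600 // URL validity in seconds
  };
  if (data.operation === 'download') {
    return { action: 'getObject', params: base };
  }
  // Default to upload, with the same parameters as before
  return {
    action: 'putObject',
    params: Object.assign({}, base, {
      ContentType: data.contentType,
      ACL: 'public-read',
      ServerSideEncryption: 'AES256'
    })
  };
}

// Inside the Cloud Function body this would be used as:
//   const { action, params } = signedUrlRequest(data);
//   return s3.getSignedUrl(action, params);
```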

Conclusion

We’ve seen how it’s possible to perform complex backend-related tasks without having to roll our own backend, just by using Google Cloud Functions. In many cases Cloud Functions are enough to run backend-related tasks: they free you from writing boilerplate backend code, while probably also being cheaper than keeping your own backend always up and running.


Follow me on Twitter: twitter.com/albtaiuti
