Using AWS Lambda to generate presigned URLs

Andrew Trigg
9 min read · Jan 11, 2019

Update: Since doing this, I wrote another walkthrough on how to upload files using Cognito instead of presigned URLs.

If you want to create a single-page application with the ability to upload files to S3 without revealing sensitive environment variables, you can do so by using a Lambda function to generate a presigned URL. This generated URL can be returned to your SPA, giving it the ability (within certain restrictions, such as a time limit and a fixed object key) to upload directly from the client. This is useful for a reason other than security: the client does all the work of uploading the file.
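To make the mechanism concrete: a presigned URL is just an ordinary S3 object URL with the signature and its constraints encoded as query parameters. The URL below is a made-up illustration (bucket, key, credential and signature are placeholders), but the parameter names are the real Signature Version 4 ones:

```javascript
// A presigned PUT URL is a normal S3 object URL plus query parameters
// that carry the signature and its constraints. Everything here is a
// placeholder except the X-Amz-* parameter names.
const presigned = new URL(
  'https://my-bucket.s3.amazonaws.com/my-location/cat.png' +
  '?X-Amz-Algorithm=AWS4-HMAC-SHA256' +
  '&X-Amz-Credential=AKIAEXAMPLE%2F20190111%2Fus-east-1%2Fs3%2Faws4_request' +
  '&X-Amz-Expires=900' +
  '&X-Amz-Signature=deadbeef'
)

// The expiry window travels in the URL itself, so the client can upload
// within that window without ever seeing our AWS credentials.
console.log(presigned.searchParams.get('X-Amz-Expires')) // prints: 900
```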

In this walkthrough we will set up an app with AWS AppSync and Amplify, and use Vue.js for our client SPA.

Ensure you have vue-cli installed (I’m using version 3.1.3) and amplify-cli (I’m using version 0.2.1-multienv.22).

Start a new vue project by typing the following and selecting your preset:

vue create presigned-url-lambda

Move into this newly created folder and initialise it as an Amplify project:

> cd presigned-url-lambda
> amplify init

Answer the questions at the prompt as appropriate, and once the initialisation process has finished, add an API with the following command:

amplify add api

At the prompt, select the GraphQL option for the API, then accept the default answers for the other questions (you may have to change the name of the API if the default uses disallowed characters).

Once you answer ‘yes’ to editing the schema, your text editor will open with the schema.graphql file. Change it to look like this:

type Modform @model {
  id: ID!
  files: [String]
  site: String
  page: String
  changes: String
}

type File @model {
  name: String
  url: String
  filetype: String
}

Once you have saved the schema, go back to the console window and press enter to continue.
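For reference, once code generation has run (it happens during amplify push, later in this walkthrough), the createFile mutation generated from this schema should look roughly like the following; the exact field selection may vary with your codegen settings:

```graphql
mutation CreateFile($input: CreateFileInput!) {
  createFile(input: $input) {
    name
    url
    filetype
  }
}
```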

Next we need to add our lambda function. In the console type:

amplify add function

Once again, you will be prompted to provide some details. We will just use the default values, and answer ‘yes’ to edit the local lambda function.

An index.js file will open, which you should edit to look like this:

const AWS = require('aws-sdk');
// if you are using an eu region, you will have to set the signature
// version to v4 by passing this into the S3 constructor -
// { signatureVersion: 'v4' }
const s3 = new AWS.S3();

exports.handler = function (event, context) {

  const bucket = process.env['s3_bucket']

  if (!bucket) {
    console.log('bucket not set:')
    context.done(new Error('S3 bucket not set'))
    return;
  }

  if (!event.input || !event.input.name) {
    console.log('key missing:')
    context.done(new Error('S3 object key missing'))
    return;
  }

  // the key is essentially the path to the file within the bucket
  const key = `my-location/${event.input.name}`

  const params = {
    Bucket: bucket,
    Key: key,
    ContentType: event.input.filetype
  };

  s3.getSignedUrl('putObject', params, (error, url) => {
    if (error) {
      console.log('error:', error)
      context.done(error)
    } else {
      context.done(null, {
        url: url,
        name: key,
        filetype: event.input.filetype
      });
    }
  })
}

Save that and return to your local console and press enter.

This function takes some information from the request (the MIME type of the file, and the filename), creates a key from the filename (essentially the path to the file in S3), and uses the S3.getSignedUrl() method to get a presigned URL. That URL is returned from the function.
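If you want to sanity-check the handler's contract without deploying, you can stub out the S3 client and the Lambda context. The sketch below mirrors the handler's logic in a parameterised form; the stub's URL format and the bucket name are made up for illustration:

```javascript
// Hypothetical local check of the handler contract. The stub makes no
// AWS call; its URL format is invented purely for this test.
const s3Stub = {
  getSignedUrl (operation, params, callback) {
    callback(null, `https://${params.Bucket}.s3.amazonaws.com/${params.Key}?X-Amz-Signature=stub`)
  }
}

// Same logic as the Lambda above, but with the S3 client passed in
// so the stub can be substituted.
function handler (event, context, s3) {
  const bucket = process.env['s3_bucket']
  if (!bucket) {
    context.done(new Error('S3 bucket not set'))
    return
  }
  const key = `my-location/${event.input.name}`
  const params = { Bucket: bucket, Key: key, ContentType: event.input.filetype }
  s3.getSignedUrl('putObject', params, (error, url) => {
    if (error) {
      context.done(error)
    } else {
      context.done(null, { url: url, name: key, filetype: event.input.filetype })
    }
  })
}

process.env['s3_bucket'] = 'my-demo-bucket'
handler(
  { input: { name: 'cat.png', filetype: 'image/png' } },
  { done: (err, result) => console.log(err || result) },
  s3Stub
)
```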

Now that we have set up our API and Lambda function on our local machine, we need to push these details to AWS:

amplify push

This will set up all the resources we defined in the cloud. Before the process begins, we are shown a table in the console indicating the pending operations: we should have a ‘Create’ operation pending for both the API and the function. Answer ‘yes’ to continue, and then give the default answer to the next few questions to generate code for our API.

When this process finishes you will be shown your GraphQL endpoint and API key. You will need to put these into your src/main.js file.

But before we do that, we need to install a few more packages:

npm install --save graphql-tag@2.10.0 vue-apollo@3.0.0-beta.27 aws-appsync@1.7.0 axios@0.18.0

Open the src/main.js file in your project and change it to the following code, placing your API key and GraphQL endpoint where indicated:

import Vue from 'vue'
import App from './App.vue'

// vue-apollo makes it easier for your vue app to interact with the
// apollo-client inside the aws-appsync package, which, in turn,
// coordinates the data exchanges between the front end store and
// the backend store and deals with caching etc.
import VueApollo from 'vue-apollo'
import AWSAppSyncClient from 'aws-appsync'

const config = {
  url: '<YOUR_GRAPHQL_ENDPOINT>',
  region: 'us-east-1',
  auth: {
    type: 'API_KEY',
    apiKey: '<YOUR_API_KEY>'
  }
}

// The default fetchPolicy is cache-first. This means that if data
// is returned from the cache, no network request will be sent. If
// a new item is in a list, this will not be realised. So here we
// change the policy so that network requests are always sent after
// data is returned from the cache.
const options = {
  defaultOptions: {
    watchQuery: {
      fetchPolicy: 'cache-and-network'
    }
  }
}

const client = new AWSAppSyncClient(config, options)

const appsyncProvider = new VueApollo({
  defaultClient: client
})

Vue.use(VueApollo)
Vue.config.productionTip = false

new Vue({
  render: h => h(App),
  apolloProvider: appsyncProvider
}).$mount('#app')

And then change the src/App.vue file to look like this:

<template>
  <div id="app">
    <demo-page />
  </div>
</template>

<script>
import DemoPage from '@/components/DemoPage'

export default {
  name: 'app',
  components: {
    DemoPage
  }
}
</script>

<style>
body {
  margin: 0;
  padding: 0;
  width: 100vw;
  height: 100vh;
}

#app {
  padding: 40px;
}
</style>

Create the following demo page at src/components/DemoPage.vue:

<template>
  <div class="container">
    <form class="form" @submit.prevent="handleSubmit">
      <h3>Modform</h3>
      <label>Site:</label>
      <input
        type="text"
        placeholder="Your site"
        v-model="model.site"
      />
      <label>Page:</label>
      <input
        type="text"
        placeholder="Your page"
        v-model="model.page"
      />
      <label>Changes:</label>
      <input
        type="text"
        placeholder="Your changes"
        v-model="model.changes"
      />
      <label>Files:</label>
      <input
        type="file"
        placeholder="Your files"
        @change="addFilenameToModel"
        multiple
      />
      <div v-if="images.length" class="image-container">
        <img v-for="img in images" :key="img" :src="img" alt="pic" />
      </div>
      <input type="submit" class="btn-submit" :disabled="uploading">
    </form>
  </div>
</template>

<script>
import gql from 'graphql-tag'
import axios from 'axios'
import { createModform, createFile } from '@/graphql/mutations'

const BASE_URL = 'https://presigned-demo-images.s3.amazonaws.com'

export default {
  name: 'DemoPage',
  data () {
    return {
      uploading: false,
      images: [],
      model: {
        files: []
      }
    }
  },
  methods: {
    addFilenameToModel ({ target }) {
      console.log('Loading...')
      this.uploading = true

      this.$apollo.mutate({
        mutation: gql(createFile),
        variables: {
          input: {
            name: target.files[0].name,
            filetype: target.files[0].type
            // idOfSomeForm: like user id
          }
        }
      })
        .then(async ({ data }) => {
          try {
            await axios.put(data.createFile.url, target.files[0], {
              headers: { 'Content-Type': target.files[0].type }
            })

            console.log('Loaded')
            this.uploading = false

            this.images.push(`${BASE_URL}/${data.createFile.name}`)
            this.model.files.push(`${BASE_URL}/${data.createFile.name}`)
          } catch (e) {
            // do we need to remove anything from the model here?
            console.log('Upload failed!', e)
          }
        })
    },
    handleSubmit () {
      this.$apollo.mutate({
        mutation: gql(createModform),
        variables: { input: this.model }
      })
        .then(() => {
          console.log('Form stored in database')
        })
    }
  }
}
</script>

<style lang="css" scoped>
.form {
  max-width: 500px;
  min-height: 310px;
  display: flex;
  flex-direction: column;
  justify-content: space-between;
  border: 1px solid lightgrey;
  padding: 40px;
  border-radius: 5px;
}

.form input {
  font-size: 1em;
  padding-left: 3px;
  margin-bottom: 10px;
}

.btn-submit {
  background-color: #7979de;
  border-radius: 5px;
  color: white;
  margin-top: 10px;
}

.btn-submit:disabled {
  opacity: .5;
}

.container {
  display: flex;
  justify-content: space-around;
}

.image-container {
  display: flex;
  margin: 5px;
  justify-content: center;
}

img {
  max-height: 60px;
  max-width: 100px;
  margin: 3px;
  border: solid grey 1px;
}
</style>

The front end is now set up and you should be able to run the app with npm run serve. When you do, you will see a form with some input fields. Open your developer tools and try to add a file using the file input.

You will be greeted with an error: TypeError: cannot read property 'protocol' of null

This is because, at the moment, our AppSync GraphQL API resolver is not set up properly for our use case. Go to the AWS AppSync section of your AWS console, open your API with the ‘Data Sources’ tab active, and click on the ‘Create data source’ button.

Fill in the input boxes as appropriate, but make sure you set the data source type to ‘AWS Lambda function’ and point it at the function we created with amplify. Choosing ‘New role’ will automatically grant the right permissions.

Then, still in the AWS AppSync section, go to the Schema tab, find the createFile mutation in the ‘Resolvers’ pane, and click on the resolver currently associated with it. Instead of passing the data to DynamoDB, we want it to go to our Lambda function, which will request a presigned URL for us and return it.

Select the data source we just set up, and set your mapping templates so that they forward the arguments and return the result. Make sure you press save!
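For reference, a minimal pass-through pair consistent with the handler above (which reads event.input) might look like this; check it against what your console generates before saving:

```
## Request mapping template: forward the GraphQL arguments to Lambda,
## so the handler receives { "input": { name, filetype, ... } }
{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": $util.toJson($context.arguments)
}

## Response mapping template: return the Lambda result as-is
$util.toJson($context.result)
```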

We need to actually create a bucket, so go to the S3 section of your AWS console, and click on the ‘Create bucket’ button.

Fill in the Bucket Name and Region input fields. Ensure the region is consistent with your Lambda function. Click next and continue through the remaining steps without making any changes until you reach the ‘Create Bucket’ button. Click it.

Now if we run our app again and try to add a file, we’ll get the following error: GraphQL error: S3 bucket not set

This is because in the function we set up, we referenced an environment variable that doesn’t yet exist. So let’s go to the Lambda section of our AWS console, on the ‘Functions’ tab. Find the function we created with amplify (yours will have a name based on your answers at the prompts), click on it and scroll down to the ‘Environment variables’ pane. Add a variable with the key s3_bucket (the name our handler reads) and the name of our new bucket as the value.
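If you prefer the command line, the same can be done with the AWS CLI; substitute your own function and bucket names for the placeholders:

```
aws lambda update-function-configuration \
  --function-name <YOUR_FUNCTION_NAME> \
  --environment "Variables={s3_bucket=<YOUR_BUCKET_NAME>}"
```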

Now when you try to run the app and attach a file, you will get a 403 (Forbidden) error and a CORS error. We need to adjust the permissions of our S3 bucket.

Go to the S3 section of the AWS console and click on the bucket we just created. Then click on the Permissions tab and the ‘CORS configuration’ button. You can find examples of CORS policies by following the documentation link there. We need to allow the PUT and GET HTTP methods, and to allow our origin (presumably http://localhost:8080 when starting your Vue app with npm run serve), so we will use the following configuration:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>http://localhost:8080</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

That will take care of the CORS error we encountered. Then we have to go into the ‘Public access settings’ for the bucket and allow new public bucket policies, which will let us create one.

Click on the ‘Bucket Policy’ button.

You can use the policy generator to create a policy specific to your requirements. We need to allow the GetObject and PutObject actions, and because we want the client app (and anybody using it) to perform these operations, we set the Principal to a wildcard.

{
  "Id": "Policy1547200240036",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1547200205482",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::presigned-demo-2/*",
      "Principal": "*"
    }
  ]
}

If you can’t save the policy, and an error is shown to the effect of ‘Action does not apply to any resource(s) in statement’, add a wildcard at the end of the resource name, because the policy needs to apply to all resources in the bucket. Make sure you click the save button.

Now you should be able to run the app, add images to the form and save the modform with an array of urls to the images. Hooray!

*If you find you can PUT files to S3 but you can’t GET them, make sure the BASE_URL in your DemoPage points to the correct S3 bucket.
