Creating an Image thumbnail generator using Vue, AWS, and Serverless (Part 3) — The Lambda

Ramsay Lanier
7 min read · Dec 7, 2017


In part 2, we built out our UI to handle uploading images to an S3 bucket. Now we'll need to create a Lambda function that runs whenever a file is uploaded to that bucket. The function will take the image, generate some thumbnail images from it, and place those thumbnails into a new S3 bucket. In order to do this, we need to:

  • Create a basic handler function for Lambda
  • Modify the serverless.yml file, adding the Lambda function and a bucket for our thumbnails
  • Test that the Lambda function is triggered when we upload a file via the client
  • Test the Lambda function locally using Docker
  • Modify the handler function with the actual code needed to create the thumbnails
  • Test that the thumbnails are created when a file is uploaded

Creating A Handler Function

Every Lambda function needs a handler. In this case, the handler is a JavaScript function that accepts event, context, and callback arguments. We'll create a basic handler function that simply logs the event.

Create a handlers directory inside the root directory, and inside the handlers directory, create a file called transform.js. That file should look like this:

That’s it. Now we just need to modify the serverless.yml file, and this handler function will be uploaded to AWS Lambda when we run sls deploy.

Modifying Serverless.yml

Here is the updated serverless.yml file:
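A sketch of what the updated file might contain; the service name and bucket names (my-uploads-bucket, my-thumbnails-bucket) are placeholders, so substitute your own:

```yaml
service: thumbnail-generator

provider:
  name: aws
  runtime: nodejs6.10
  region: us-east-1

functions:
  transform:
    handler: handlers/transform.transform
    events:
      - s3:
          # Serverless creates this bucket and wires up the trigger.
          bucket: my-uploads-bucket
          event: s3:ObjectCreated:*

resources:
  Resources:
    ThumbnailBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-thumbnails-bucket

package:
  exclude:
    - sharp/**
    - client/**
```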

In the functions section, we’ve added a function called transform and told Serverless that the handler is the function we created earlier. Notice the event that we’re creating: it will actually generate a bucket for us and trigger the Lambda function whenever an object is created inside that bucket. Notice further that we’ve replaced the S3Bucket resource we created in Part 2 of this tutorial with a thumbnail bucket.

Don’t forget to add the package exclusion at the bottom of the file. By default, Serverless will package up everything in the root directory.

Now we can run sls deploy from the root directory in our terminal and it will create a lambda function for us with the right code!

You can check to make sure this worked by going to the Lambda Console in AWS — you should see your new function listed there. If you click on it, you should see something similar to this:

Also, you should have four total buckets for this project — the serverless bucket, the application hosting bucket, the uploads bucket, and the thumbnails bucket.

Testing The Lambda Function

We can test that the Lambda function works by uploading a file via our web application. Once you’ve uploaded a file or two, go back to the Lambda Console and check out the function’s monitoring tab. You should see some invocations in the graph. Mine looks like this:

ignore the invocation errors — I made a boo-boo. You shouldn’t have any, hopefully!

Neat, right? Furthermore, you can actually see the logs in AWS CloudWatch. Remember, we’re logging the event so that should show up somewhere, right? Click the ‘view logs in CloudWatch’ link and it will take you to a screen that has a list of logs. Click the first one, and you should see something like this:

Check it out! The second item in the log shows the actual event coming from S3! Notice the last item in the Records object is s3 and contains another object. We’ll use that in the next part to get the uploaded file and transform it!

Testing Lambda Locally Using Docker

Before we get to modifying the handler, let’s test the function locally with a simulated S3 event. The environment in which our Lambda function runs is likely very different from our development environment. In order to test the function locally, but in an environment identical to Lambda’s, we’ll need to use Docker and a special Docker image.

First, we’ll need to install Docker. This varies depending on your OS, but should be relatively painless (both Mac and Windows have easy installers). After installing Docker, check that it is running by typing docker ps in your terminal.

Next, we’ll create a file called test.json in the handlers directory. This file will be used to simulate an event from putting an object into an S3 bucket. It should look like this:
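A sketch of the simulated event, trimmed to the fields our handler will care about; the bucket name and object key are placeholders, and the key should be changed to match a real file you've uploaded:

```json
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": {
          "name": "my-uploads-bucket"
        },
        "object": {
          "key": "my-test-image.jpg",
          "size": 1024
        }
      }
    }
  ]
}
```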

Note: it’s important that the object key in the event be the name of an actual file in your uploads bucket, because we want to test this on a real file.

Now, from the root directory you can type this monster of a command into your terminal:

docker run -v "$PWD":/var/task lambci/lambda:nodejs6.10 handlers/transform.transform "$(cat handlers/test.json)"

This will run your transform function in a Docker container that has been imaged to be exactly like the Lambda environment (thanks to docker-lambda). In order to make running this command easier, let’s create an npm script. Run npm init -y from your root directory. In the newly created package.json file, add the following in scripts after the default test script:

"test:transform": "docker run -v \"$PWD\":/var/task lambci/lambda:nodejs6.10 handlers/transform.transform \"$(cat handlers/test.json)\""

Now, we can just run npm run test:transform from our console.

Great, now we’re ready to get our hands dirty and create some thumbnails.

Modifying the Transform function

In order to resize our uploaded image, we’re going to use a very lovely library called sharp. There is a problem though — running npm install --save sharp downloads and links native libraries for our current platform, but our current platform is not the platform that our Lambda function will run on. That means we need to install the package in an environment similar to Lambda’s. Luckily, we already have a Docker image that we can do this on.

First, run npm install --save sharp from your root directory. This will create a node_modules directory that we will eventually overwrite and package with our lambda function. Now, we’ll create a directory called sharp and add a few files to it.

Note: full credit for the following files goes to adieuadieu, who created this awesome service that I basically just copied.

Dockerfile

We’ll add a file called Dockerfile, which tells Docker how to assemble the image we want. It’s a pretty simple file.
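A sketch of the Dockerfile, following the general shape of adieuadieu's sharp build setup; the exact path where build.sh is copied is an assumption:

```dockerfile
# Start from the Lambda-identical build image with Node 6.
FROM lambci/lambda:build-nodejs6.10

# Copy our build script into the image.
ADD build.sh /tmp/build.sh

# Run the build script when a container is started from this image.
CMD /tmp/build.sh
```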

We’re telling Docker to use the lambci/lambda image with the Node 6 build, and we’re telling the image to run a bash script called build when it’s created. Let’s create that script file.

build.sh

The build script will look at our package.json file and determine which version of Sharp we’re using. It will make a new temporary directory, install sharp, and create a tarball that will then get copied into a tarball directory. Lastly, it deletes the temporary directory.
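A sketch of that script, assuming the project is mounted at /var/task (the convention the lambci images use) and that sharp is pinned in package.json's dependencies:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Read the sharp version out of package.json.
SHARP_VERSION=$(node -p "require('/var/task/package.json').dependencies.sharp")

# Install sharp into a throwaway temporary directory,
# so the native bindings are compiled for this (Lambda-like) platform.
TMP_DIR=$(mktemp -d)
cd "$TMP_DIR"
npm install "sharp@$SHARP_VERSION"

# Tar up the resulting node_modules and copy it back into the project.
mkdir -p /var/task/tarball
tar -zcf "/var/task/tarball/sharp-$SHARP_VERSION.tgz" node_modules

# Delete the temporary directory.
cd / && rm -rf "$TMP_DIR"
```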

Basically, we’re going to open a docker image that is identical to the lambda environment, install sharp on the container, tar all the files up, and then copy it into our own project.

In order to make this easy, we’re going to write another npm script that will build the docker container and extract the tarball in one command. The new package.json file looks like this:
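A sketch of the resulting package.json; the script names, the sharp version, and the sharp-builder image tag are assumptions, so adjust them to taste:

```json
{
  "name": "thumbnail-generator",
  "version": "1.0.0",
  "dependencies": {
    "sharp": "^0.18.4"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "test:transform": "docker run -v \"$PWD\":/var/task lambci/lambda:nodejs6.10 handlers/transform.transform \"$(cat handlers/test.json)\"",
    "build:sharp": "docker build -t sharp-builder sharp && docker run -v \"$PWD\":/var/task sharp-builder && npm run extract:sharp",
    "extract:sharp": "tar -zxvf tarball/sharp-*.tgz"
  }
}
```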

The two new scripts handle building the container and extracting the tarball. Now, we can run npm run build:sharp, which will build our Docker container, run the build script, and then extract the resulting tarball.

Okay, now that the annoying part is out of the way we can get down to business.

The New Handler

We start by importing the necessary modules. Note that Lambda functions have access to the aws-sdk by default, so we don’t actually need to install it locally.

Next, we configure two S3 buckets — one for uploads and one for the thumbnails. This should look familiar, as we did pretty much the same thing in our Vue application in Part 2.

The actual handler function accepts the event, which contains our s3 object (the uploaded file’s metadata). First, we need to fetch the actual object, because the event doesn’t contain the body of the file, which is what we need. Note that we do some basic sanitization on the key, because AWS replaces each space in the file name with a + in the event object.

When we get the object back, we map over the array of transforms that we want to make. For each value, we use sharp to resize the image and then upload it to the thumbnails bucket. It’s worth noting that sharp is capable of doing way more than just resizing. It’s also worth noting that each thumbnail we create will be prefixed with the name of the upload. This will make it easy to get the thumbnails for a specific upload in Part 4.
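Putting those pieces together, the new transform.js might look like the sketch below. The bucket names, thumbnail widths, and suffix naming scheme are all placeholder assumptions, not the article's exact code:

```javascript
// handlers/transform.js
// aws-sdk is available in the Lambda runtime by default.
const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3();

// Placeholder bucket names: use the ones from your serverless.yml.
const UPLOAD_BUCKET = 'my-uploads-bucket';
const THUMBNAIL_BUCKET = 'my-thumbnails-bucket';

// Example thumbnail sizes (widths in pixels).
const TRANSFORMS = [
  { name: 'large', width: 600 },
  { name: 'medium', width: 300 },
  { name: 'small', width: 100 },
];

module.exports.transform = (event, context, callback) => {
  const record = event.Records[0];
  // S3 replaces spaces in the key with '+' in the event payload,
  // so undo that before looking the object up.
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

  // The event only carries metadata; fetch the actual file body.
  s3.getObject({ Bucket: UPLOAD_BUCKET, Key: key })
    .promise()
    .then(data =>
      Promise.all(
        TRANSFORMS.map(t =>
          sharp(data.Body)
            .resize(t.width)
            .toBuffer()
            .then(buffer =>
              // Prefix each thumbnail with the upload's name so we can
              // list a given upload's thumbnails later.
              s3
                .putObject({
                  Bucket: THUMBNAIL_BUCKET,
                  Key: `${key}-${t.name}.png`,
                  Body: buffer,
                })
                .promise()
            )
        )
      )
    )
    .then(results => callback(null, results))
    .catch(callback);
};
```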

We can now test the function locally. Run npm run test:transform and you should get something that looks like this:

See the 3 ETags? Now, go to your S3 console and check the thumbnails bucket — there should be three images in there!

Next Up

In the next and final part, we’ll update our Home.vue component so that our list of uploads refreshes when all the files are done uploading. Then, we’ll add a new route for each upload that will render an Upload.vue component which will show that upload’s thumbnails. Finally, we’ll wrap up and talk about some possible next steps!

Ramsay is a full-stack JavaScript developer for a data analytics company in Springfield, VA. You can follow him on Twitter or GitHub. He likes when you do that.
