Using AWS Rekognition and Lambda to Analyze Images on the Fly

Part 2: Lambda function to store S3 image info in DynamoDB after being passed through Rekognition.

Kim Leung
Oct 4, 2017 · 6 min read

This post is part of a larger series on building an even smarter Alexa skill using AWS services. This specific post is meant to be read as a tutorial. For a high-level look at why I did this project, read Part 1 of this series!

This is a technical, step-by-step walkthrough for setting up your own Lambda function. The Lambda pulls an image from S3 when it is uploaded to a bucket, passes the image through Rekognition, and stores the returned data in a DynamoDB table. The first step in this walkthrough will be creating an IAM role that allows your Lambda function to access all of the other AWS services needed to complete our project. The beauty of using only AWS services is that every action is already authorized through your account and IAM role, letting you prototype quickly.
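Before diving in, here is a rough sketch of the shape the final handler takes. This is illustrative only, with placeholder names and a made-up faceId scheme; the real implementation lives in the repo you will clone in a later step:

```javascript
// Illustrative sketch of the pipeline: S3 upload event -> Rekognition -> DynamoDB.
// Names like "your-table-name" are placeholders, not the repo's actual values.

// Pull the bucket and object key out of the S3 "Object Created" event.
// S3 URL-encodes keys, so "my photo.jpg" arrives as "my+photo.jpg".
function parseS3Event(event) {
  var record = event.Records[0];
  return {
    bucket: record.s3.bucket.name,
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))
  };
}

// The Lambda entry point (Node.js 6.10 callback style).
function handler(event, context, callback) {
  var AWS = require('aws-sdk'); // bundled with the Lambda runtime
  var rekognition = new AWS.Rekognition();
  var dynamo = new AWS.DynamoDB.DocumentClient();
  var img = parseS3Event(event);

  rekognition.detectFaces({
    Image: { S3Object: { Bucket: img.bucket, Name: img.key } },
    Attributes: ['ALL']
  }, function (err, data) {
    if (err) return callback(err);
    dynamo.put({
      TableName: 'your-table-name',
      Item: {
        faceId: Math.floor(Math.random() * 1e9), // partition key (Number)
        timestamp: Date.now(),                   // sort key (Number)
        filename: img.key,
        faceDetails: data.FaceDetails
      }
    }, callback);
  });
}
```

Each piece of this sketch maps to a section of the walkthrough below: the role, the bucket, the table, the code, and the trigger.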

The first thing you will need is an AWS account. If you do not have one, here is a helpful link to get you started. As a new user, you fall under the AWS Free Tier and will be charged nothing.

Once you have your account, you are ready to start connecting all the wonderful AWS services you now have access to.

First, create an IAM role. A role lets you control what your Lambda can and cannot do.

  1. Navigate to IAM console at console.aws.amazon.com/iam
  2. On left nav menu click “Roles”
  3. Click blue “Create role” button
  4. On Select Role type page, select “AWS Service” section and choose “Lambda”
  5. Click blue “Next: Permissions” button
  6. On Attach Permissions page, search for and select the checkboxes next to both AWSLambdaFullAccess and AmazonRekognitionFullAccess
  7. Click blue “Next: Review” button
  8. Give your role a name that you like and click “Create role”
Next, create an S3 bucket to hold your uploaded images:
  1. Navigate to the S3 console at s3.console.aws.amazon.com
  2. Click blue “Create bucket” button
  3. Give your bucket a unique name and click “Create”
Then create a DynamoDB table to store the Rekognition results:
  1. Navigate to your DynamoDB console at console.aws.amazon.com/dynamodb
  2. Click “Create table”
  3. Give your table a name that you like
  4. Add “faceId” as your partition key and change the type to “Number” in the dropdown to the right
  5. Click on “Add sort key” checkbox
  6. Add “timestamp” as your sort key and change the type to “Number” in the dropdown to the right
  7. Click on blue “Create” button on the bottom
  8. This will kick off the table creation process and will take about a minute to complete
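If you prefer code to console clicks, the same table can be created programmatically. This is a sketch of the equivalent createTable parameters; the table name is whatever you chose, and the throughput values here are arbitrary small defaults:

```javascript
// Equivalent key schema to the console steps above: faceId (Number) as the
// partition key and timestamp (Number) as the sort key.
var tableParams = {
  TableName: 'your-table-name', // use the name you picked
  KeySchema: [
    { AttributeName: 'faceId', KeyType: 'HASH' },    // partition key
    { AttributeName: 'timestamp', KeyType: 'RANGE' } // sort key
  ],
  AttributeDefinitions: [
    { AttributeName: 'faceId', AttributeType: 'N' },   // "N" = Number
    { AttributeName: 'timestamp', AttributeType: 'N' }
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};
// Passed to: new AWS.DynamoDB().createTable(tableParams, callback)
```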
Now grab the Lambda code:
  1. Clone the repository at https://github.com/KaleFive/Categorize
  2. Change directory (cd) into the src folder
  3. In the src folder, open your config.js file and add your table name
  4. Run zip -r ../../lambda_categorize.zip *
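The exact contents of config.js are defined by the repo, but based on the step above it only needs your table name; something along these lines (the property name here is a guess, so check the actual file):

```javascript
// Hypothetical shape of config.js; check the file in the repo for the
// actual property names it expects.
var config = {
  tableName: 'your-table-name' // the DynamoDB table you created above
};
// In the repo this would be exported with: module.exports = config;
```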
Next, create the Lambda function itself:
  1. Navigate to your Lambda console at console.aws.amazon.com/lambda
  2. Click orange “Create Function” button on top right
  3. Click orange “Author from scratch” button on top right
  4. On the “Configure triggers” page, click on the box with the dotted outline, and select “S3” from the dropdown menu
  5. In bucket dropdown, select the unique bucket that you created
  6. In event-type dropdown, select “Object Created (All)” and then click “Next”
  7. Under Basic Information section, add a Name and Description. Then in Runtime dropdown select “Node.js 6.10”
  8. Under the Lambda function code section, choose “Upload a .ZIP file” from the Code entry type dropdown
  9. Upload the “lambda_categorize.zip” file
  10. Under the Lambda function handler and role section, select “Choose an existing role” under Role and select the IAM role that you created earlier
  11. Click “Next” and finally the orange “Create function” button on the Review page
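With the trigger wired up, every upload invokes your function with an S3 notification event. Trimmed down to the interesting fields, it looks like this (bucket and file names are placeholders):

```javascript
// A minimal example of the event an S3 "Object Created (All)" trigger delivers.
var sampleEvent = {
  Records: [{
    eventSource: 'aws:s3',
    eventName: 'ObjectCreated:Put',
    s3: {
      bucket: { name: 'your-unique-bucket' },
      object: { key: 'Michael.jpg', size: 102400 }
    }
  }]
};
// The handler reads event.Records[0].s3.bucket.name and
// event.Records[0].s3.object.key to know which image to analyze.
```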

CloudWatch gives you a quick and easy way to see your log files. It is useful for debugging or just system monitoring.

  1. Navigate to your CloudWatch console at console.aws.amazon.com/cloudwatch
  2. Click on “Logs” in the left-hand nav
  3. Click on the log group for the Lambda function you created, “/aws/lambda/[function name]”
  4. Now when you run your lambda function, the logs will output to this page
Time to test it out:
  1. Grab any image from your computer, preferably one of someone’s face, to get the most out of Rekognition’s facial detection capabilities
  2. The filename of your image will be stored as well for accessibility later
  3. Head to your S3 console and click on the blue “Upload” button to add the image to your bucket
  4. Once the image is uploaded, you will see a new entry to your Cloudwatch logs
  5. After the Lambda function executes (it should only take a second), you can visit your DynamoDB console to see a new row added to your table, along with the information Rekognition gathered from the uploaded image

What you should see is something similar to the table below, except you will only have one row of data after uploading that first photo. The important thing to notice is that the row representing the photo you just uploaded also includes the information Rekognition returned: an age estimate of the person in the photo, possible emotions the individual is expressing, and even a numerical measure of how confident Rekognition is in its results.
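For reference, here is a trimmed, illustrative example of the kind of FaceDetails entry Rekognition returns for one face; the numbers are made up:

```javascript
// A cut-down Rekognition face detail: age bracket, candidate emotions,
// and per-field confidence scores.
var faceDetail = {
  AgeRange: { Low: 26, High: 43 },          // estimated age bracket
  Emotions: [
    { Type: 'HAPPY', Confidence: 97.2 },    // candidate emotions with scores
    { Type: 'CALM', Confidence: 1.8 }
  ],
  Smile: { Value: true, Confidence: 98.1 },
  Confidence: 99.9                          // how sure Rekognition is this is a face
};
```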

You may have noticed that you never had to set up Rekognition in this walkthrough. You gave your role the AmazonRekognitionFullAccess policy, and that was it: the code you pulled down from GitHub already instantiates a Rekognition client and makes those API calls.
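A minimal sketch of the call that code is making (assuming the aws-sdk v2 client that ships with the Node 6.10 runtime; bucket and file names are placeholders, and the helper name is mine, not the repo's):

```javascript
// Build the parameters detectFaces expects: point it at the S3 object and
// ask for the full attribute set (age, emotions, etc.), not just bounding boxes.
function buildDetectFacesParams(bucket, key) {
  return {
    Image: { S3Object: { Bucket: bucket, Name: key } },
    Attributes: ['ALL']
  };
}
// Inside the Lambda this is roughly:
//   var AWS = require('aws-sdk');
//   new AWS.Rekognition().detectFaces(buildDetectFacesParams(bucket, key), callback);
```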

You now have the “Categorize” step set up. When you upload a new image into your S3 bucket, Lambda automatically pulls that image, passes it through Rekognition, and stores the returned information in DynamoDB. It is best to give your file a specific name such as “Michael” or “John” so that the filename column in your table reflects the individual in the image. That way, when you build methods for the extraction or “Vision” step of the project, it is easy for your function to scan the table and find all the images of the same person.
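As a taste of what that extraction step could look like, here is a sketch of a DynamoDB scan that finds every row for a given person by filename (the helper and attribute names are assumptions, not the repo's actual code):

```javascript
// Scan the table for rows whose filename contains the person's name.
// "#fn" aliases the attribute name, which keeps the expression clear of
// DynamoDB's reserved words.
function buildScanParams(tableName, person) {
  return {
    TableName: tableName,
    FilterExpression: 'contains(#fn, :name)',
    ExpressionAttributeNames: { '#fn': 'filename' },
    ExpressionAttributeValues: { ':name': person }
  };
}
// e.g. new AWS.DynamoDB.DocumentClient()
//        .scan(buildScanParams('your-table-name', 'Michael'), callback)
```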

Continue to Part 3 for a walk-through on building the Vision step of the project.

If you liked this post, be sure to follow me on Github, Twitter and LinkedIn.

Here are a couple of links that I used to get going. Thanks to Noelle LaCharite, Brian Donohue and A Cloud Guru, as most of my learning came out of posts from them.
