Running AWS applications (including Lambdas) locally via LocalStack

Mayuresh Jakhotia
Ancestry Product & Technology
6 min read · Dec 22, 2021

Overview

Working in a world where all applications run in the cloud, it can get tricky to develop and test locally. If we wait to test until after the code is deployed to the cloud, every iteration adds to development and developer time, reducing productivity drastically. To mitigate that, my team at Ancestry has been using LocalStack (a cloud service emulator) for a while now. And in case you are curious, Sean Scofield has a great article on it here.

Now, developing AWS Lambda applications locally can get a little trickier due to the nature of the service. For the purposes of this article, I’ll assume some familiarity with Lambda concepts (specifically the zip file archive deployment package).

Example

To understand this, let’s work with an image steganography service that takes a text message as input, encrypts it with AES-256 encryption, and conceals the encrypted data within an image. Here are the architectural flows of our application.

A. Hide encrypted text inside an image

Step 1: The user uploads an image (that will be used to conceal the hidden message) to S3
Step 2: The lambda receives the image, hidden text, and secret key (for encryption). It encrypts the text and hides it within the image and then uploads it to S3

B. Retrieve the hidden text from the image

Code

Let’s look at what we need in order to get this working locally.

Dockerfile — This file is used to create a zip file archive deployment package for our local lambda functions.

  • lambci/lambda: These are sandboxed images that very closely mimic the AWS Lambda environment. If interested, the image layers can be seen here.
  • The next line makes sure we run as the root user (redundant for this base image, but harmless to leave in while developing against other base images).
  • The next 3 lines copy requirements.txt and install the dependencies listed in it.
  • Then we copy the source code that runs the lambdas.
  • Finally, we create a zip file out of it (lambda-layer.zip).
FROM lambci/lambda:build-python3.7
USER root
ENV REQPATH /root/requirements.txt
COPY ./requirements.txt /root/requirements.txt
RUN pip3 install -r ${REQPATH} --target=/opt/python/root/

COPY ./src/main.py /opt/python/root/main.py

RUN cd /opt/python/root && zip -r /lambda-layer.zip *

Makefile — This file helps to build and run our application (and also clean up later) using simple commands.

It contains the following targets:

  • make build :
    - Builds the image from the Dockerfile described above, creates a container from it, fetches the container id, and copies the lambda-layer.zip file (which contains the source code along with the dependencies at the root level) onto the local machine. Finally, it removes the container.
    - This zip file is required when creating our lambda functions.
  • make run : Starts and runs our application.
  • make clean : Deletes the lambda-layer.zip file. Stops and removes the running containers/networks.
.DEFAULT_GOAL := build

IMAGE_NAME = image-crypto-steganography
STACK_NAME = image-crypto-steganography

build:
	docker build --no-cache -t ${IMAGE_NAME} -f Dockerfile .
	$(eval id = $(shell docker create ${IMAGE_NAME} echo))
	docker cp $(id):/lambda-layer.zip resources/
	docker rm -v $(id)

run:
	docker-compose -p $(STACK_NAME) up

clean:
	rm -rf resources/lambda-layer.zip
	docker-compose -p $(STACK_NAME) down --volumes

docker-compose.yml — This file defines our localstack service with the configurations, volumes, and network needed.

The volumes key mounts the host directory to the path inside the container so that it has access to all the required files. Here are some of the variables specific to configuring our local AWS Lambdas:

  • LAMBDA_EXECUTOR - defines how to execute the lambda functions.
    a. local : run via a directory on the local machine
    b. docker: run each function invocation in a separate container
    c. docker-reuse: create and reuse the containers across invocations
  • LAMBDA_REMOTE_DOCKER - determines how lambda function definitions will be passed into the containers.
    a. true : definitions passed by copying the zip file (similar to how we usually do it on AWS and the way this article demonstrates)
    b. false : definitions passed by mounting a local volume
  • LAMBDA_DOCKER_NETWORK - docker network for the container running our lambda function. If incorrect, it can cause network issues.
version: "3.7"

networks:
  image_crypto_steganography:
    attachable: true
    name: image_crypto_steganography

services:
  localstack:
    image: "localstack/localstack:0.13.1"
    networks:
      - image_crypto_steganography
    ports:
      - "4566:4566"
    volumes:
      - ./resources/:/resources/
      - ./resources/localstack-setup.sh:/docker-entrypoint-initaws.d/localstack-setup.sh
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      SERVICES: "lambda, sqs, s3"
      DEFAULT_REGION: "us-east-1"
      LAMBDA_EXECUTOR: "docker"
      LAMBDA_REMOTE_DOCKER: "true"
      DOCKER_HOST: "unix:///var/run/docker.sock"
      LAMBDA_DOCKER_NETWORK: "image_crypto_steganography"
      AWS_ACCESS_KEY_ID: "fake_key_id"
      AWS_SECRET_ACCESS_KEY: "fake_secret_access_key"

resources/localstack-setup.sh — This file is used as a startup script for our localstack docker container to create and populate our local AWS resources.

  • This file uses awslocal (a thin wrapper around the AWS CLI that targets LocalStack) to interact with local AWS resources.
  • While creating lambda functions, the handler is the entry point into our lambdas. For example, main.conceal_image_with_secret_text refers to the conceal_image_with_secret_text function within the main.py file.
  • Run similar to AWS:
    - We create the local S3 bucket ‘images’ and copy a test image to it.
    - To run the application similar to AWS, we use the lambda-layer.zip file to create our local lambda functions, the SQS queue, and the SQS-to-Lambda event source mapping. These steps run only if the zip file exists.
    - Finally, we send a message to the SQS queue so that our Lambda processes it.
  • Run directly as Python code:
    - We create the local S3 bucket ‘images’ and copy a test image to it.
    - Since we want to run the Python code directly, we won’t need the lambda-layer.zip file, and the other steps are skipped based on the same file-existence check.
    - Note that before we run the code that corresponds to the handlers directly (main.py, described below), we need the dependencies and environment variables set up locally (e.g. to hit the LocalStack AWS endpoint). Please refer to this README to learn more.
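For the direct-Python path, the environment setup might look like the following. This is a minimal sketch, not taken from the project README: the variable values simply mirror the docker-compose file above, and how the application code actually points its SDK calls at LocalStack (e.g. an explicit endpoint_url) is up to main.py.

```python
import os

# LocalStack exposes all services on a single edge port (see the ports
# mapping in docker-compose.yml above).
LOCALSTACK_ENDPOINT = "http://localhost:4566"

def localstack_env():
    """Dummy credentials and region matching the docker-compose settings."""
    return {
        "AWS_ACCESS_KEY_ID": "fake_key_id",
        "AWS_SECRET_ACCESS_KEY": "fake_secret_access_key",
        "AWS_DEFAULT_REGION": "us-east-1",
    }

# Export them so any AWS SDK client created afterwards picks them up.
os.environ.update(localstack_env())
```

Any credentials work here since LocalStack does not validate them; the point is only that the SDK finds *some* credentials and region.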
#!/usr/bin/env bash

echo "Creating required S3 bucket"
awslocal s3api create-bucket --bucket images

echo "Copying input test image to s3"
awslocal s3 cp /resources/test_image.png s3://images/test_image.png

ZIP_FILE=/resources/lambda-layer.zip
if [ -f "$ZIP_FILE" ]; then
  echo "$ZIP_FILE exists."

  echo "Creating the lambda function to conceal image with encrypted text"
  awslocal lambda create-function --function-name conceal_image_with_secret_text \
    --zip-file fileb:///resources/lambda-layer.zip \
    --handler main.conceal_image_with_secret_text \
    --environment Variables="{$(cat < /resources/.env | xargs | sed 's/ /,/g')}" \
    --runtime python3.7 \
    --role whatever

  echo "Creating the lambda function to retrieve decrypted text from a concealed image"
  awslocal lambda create-function --function-name get_secret_text_from_concealed_image \
    --zip-file fileb:///resources/lambda-layer.zip \
    --handler main.get_secret_text_from_concealed_image \
    --environment Variables="{$(cat < /resources/.env | xargs | sed 's/ /,/g')}" \
    --runtime python3.7 \
    --role whatever

  echo "Creating required SQS queue"
  awslocal sqs create-queue --queue-name image_steganography_queue

  echo "Binding Lambda to SQS queue"
  awslocal lambda create-event-source-mapping --function-name conceal_image_with_secret_text --batch-size 1 --event-source-arn arn:aws:sqs:us-east-1:000000000000:image_steganography_queue

  echo "Triggering the steganography lambda by sending a message to SQS"
  awslocal sqs send-message --queue-url http://localhost:4566/000000000000/image_steganography_queue --message-body '{"image_path":"s3://images/test_image.png", "secret_text": "This is a secret text", "secret_password_key": "my_pwd"}'

else
  echo "$ZIP_FILE does not exist, i.e. the code will be run directly via Python."
fi
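One detail worth unpacking in the create-function calls above: the `--environment Variables="{...}"` value is built by the shell one-liner `cat .env | xargs | sed 's/ /,/g'`, which joins the KEY=VAL lines of the .env file with commas. A rough Python equivalent (a hypothetical helper, not part of the project) makes the intent clearer:

```python
def env_file_to_lambda_variables(lines):
    """Turn KEY=VAL lines into the {K1=V1,K2=V2} form the CLI expects."""
    # Skip blank lines and comments, then comma-join inside braces.
    pairs = [ln.strip() for ln in lines
             if ln.strip() and not ln.lstrip().startswith("#")]
    return "{" + ",".join(pairs) + "}"

print(env_file_to_lambda_variables(["BUCKET=images", "REGION=us-east-1"]))
# prints {BUCKET=images,REGION=us-east-1}
```

Note the shell version is cruder than this sketch: xargs also collapses quoting, so values containing spaces or commas would need a different approach in either language.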

main.py — This is the code that contains the functionality and the handlers for our lambda functions.

  • conceal_image_with_secret_text : Creates and uploads a concealed image with the AES-256 encrypted text to S3.
  • get_secret_text_from_concealed_image : Retrieves decrypted text from a concealed image.
  • To speed up development and testing, this code can also be run directly via Python (rather than as Lambda functions). Just uncomment the __main__ block at the end of the file. Detailed steps can be found here.
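To make the handler/event relationship concrete, here is a simplified sketch of the shape of such a handler. This is not the actual main.py (which also performs the AES-256 encryption and the S3 download/upload); it only shows how an SQS-triggered Lambda receives the message we sent via `sqs send-message` above:

```python
import json

def conceal_image_with_secret_text(event, context):
    """Sketch of the SQS-triggered handler; crypto/S3 work is omitted."""
    results = []
    # The SQS event source mapping delivers messages under event["Records"];
    # each record's "body" is the raw JSON string from the queue message.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        results.append({
            "image_path": payload["image_path"],
            "secret_length": len(payload["secret_text"]),
        })
    return results

# Exercising it with a fake SQS event, the way a local unit test might:
fake_event = {"Records": [{"body": json.dumps({
    "image_path": "s3://images/test_image.png",
    "secret_text": "This is a secret text",
    "secret_password_key": "my_pwd",
})}]}
print(conceal_image_with_secret_text(fake_event, None))
```

Because the handler takes a plain dict, it can be driven with hand-built events like this without any AWS (or LocalStack) round trip at all, which is exactly what makes the direct-Python mode fast for iterating.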

Conclusion

In case anything felt ambiguous, please head over to this GitHub project and follow the README. It takes just a couple of minutes to get running, and it is much easier to understand and fun to play with (without connecting to remote servers).

Thank you!

If you’re interested in joining Ancestry, we’re hiring! Feel free to check out our careers page for more info.
