Building up on Lambda Layers

Sabith Venkitachalapathy
3 min read · Mar 29, 2020


TL;DR

This article captures a method I followed to create Lambda Layers that I needed for a weekend project. It is by no means the only method available, but it offers some pointers on how you can automate layer creation without impacting your local setup.

Lambda Layers?

Lambda Layers, in simple terms, are a way to centrally manage code and data that is shared across multiple functions.
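For illustration, once a layer version is published, you attach it to a function by its version ARN. A minimal sketch using the AWS CLI (the function name and ARN here are hypothetical):

#attach a published layer version to a function (names are illustrative)
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1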

Refer to the launch presentation at re:Invent 2018 for more. Yan Cui has captured details on when and where you should ideally use Lambda Layers.

A curated list of the most useful Lambda Layers is captured here by Maciej Winnicki.

What’s with this automation?

But what if you want to build layers on your own and run into complications like packaging dependent native libraries along with the layer (the Oracle client required by cx_Oracle, for example)? While trying to get the method described by Duncan Dickinson for building cx_Oracle-based Lambda functions to work, I modified the code to deploy Lambda Layers based on a few parameters defined in a script.

Below is the main script. It launches a Docker container configured to run a Python environment (pipenv), install the user-specified library, and package it into an archive file; the archive is then uploaded to S3 and published as a Lambda Layer.

Note: S3 can be avoided by publishing from the local zip file, but going through S3 is advisable if your layer is big enough. Reference
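If you do go the local route, a smaller layer can be published straight from the archive with the same CLI call used at the end of build.sh, swapping the S3 content for a local file (variable names as in build.sh below):

#publish directly from the local zip, skipping the S3 upload
aws lambda publish-layer-version --layer-name $LAMBDA_LAYER_NAME \
  --zip-file fileb://dist/$ZIP_FILE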

build.sh

#!/bin/bash -xe
BASE_DIR=dist
ZIP_FILE=<Your Zip files name goes here>
PYTHON_LIB=<Your Python Lib Goes here>
S3_BUCKET=<Your S3 bucket goes here>
REGION=<Your AWS Region goes here>
LAMBDA_LAYER_NAME=<Your Lambda Layer goes here>
S3_KEY=<Your S3 folder structure goes here>
cd layerPackager
#try --no-cache if there are half-built layers
docker build -t lambda-layer-deploy .
cd -
mkdir -p dist
rm -rf dist/*
#run any other dependency prep steps; uncomment if needed
#./prepDeps.sh
docker run --name lambda-layer-deploy --env DKR_ZIP_FILE=$ZIP_FILE --env DKR_PYTHON_LIB=$PYTHON_LIB lambda-layer-deploy
docker cp lambda-layer-deploy:/tmp/$ZIP_FILE dist/$ZIP_FILE
docker rm lambda-layer-deploy
cd dist
unzip $ZIP_FILE
rm $ZIP_FILE
zip --symlinks -r9 $ZIP_FILE *
cd ..
#Copy the file over to S3
aws s3 cp $BASE_DIR/$ZIP_FILE s3://$S3_BUCKET/$S3_KEY
#https://docs.aws.amazon.com/cli/latest/reference/lambda/publish-layer-version.html
#publish the layer using the file uploaded to S3
aws lambda publish-layer-version --layer-name $LAMBDA_LAYER_NAME --content S3Bucket=$S3_BUCKET,S3Key=$S3_KEY$ZIP_FILE
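As an example, the placeholders at the top of build.sh could be filled in like this for a cx_Oracle layer (all values are hypothetical; note that S3_KEY should end with a trailing slash, since the publish step concatenates S3_KEY and ZIP_FILE):

ZIP_FILE=cx_oracle-layer.zip
PYTHON_LIB=cx_Oracle
S3_BUCKET=my-layer-artifacts
REGION=us-east-1
LAMBDA_LAYER_NAME=cx-oracle
S3_KEY=layers/cx_oracle/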

The main script can optionally launch a dependency-prep script that packages the native dependencies of the Python library (the Oracle Instant Client for cx_Oracle, for example). You can leave the line commented out if there isn't a need for one. The script below shows how to package the Oracle Instant Client Lite libraries for the cx_Oracle build (a topic of its own).

prepDeps.sh

#!/bin/bash -xe
cd dist
#https://stackoverflow.com/questions/46937833/connecting-to-oracle-rds
#-j ignores the folder structure inside the zip files
unzip -d lib -j ../zips/instantclient-basiclite-linux.x64-19.6.0.0.0dbru.zip
unzip -d lib -j ../zips/instantclient-sdk-linux.x64-19.6.0.0.0dbru.zip
cp /lib64/libaio.so.1.0.1 lib/
cd lib
ln -s libaio.so.1.0.1 libaio.so.1
ln -s libaio.so.1.0.1 libaio.so
cd ..
ln -s ./lib/libaio.so.1.0.1 ./libaio.so.1.0.1
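When the layer is attached to a function, its contents are extracted under /opt, so the lib/ directory above lands at /opt/lib, which is already on the Lambda runtime's LD_LIBRARY_PATH. A quick way to confirm the shared objects and their symlinks actually made it into the final archive (paths assume the dist/ layout produced by build.sh):

#symlinks should show up as entries thanks to zip --symlinks
unzip -l dist/$ZIP_FILE | grep libaio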

The main part of this setup is the Docker container and the script that is launched inside it.

Dockerfile

FROM amazonlinux
RUN yum -y install gcc findutils unzip zip python3

RUN pip3 install pipenv
WORKDIR /tmp
COPY prepare-libs.sh prepare-libs.sh
RUN chmod +x prepare-libs.sh
ENTRYPOINT ./prepare-libs.sh
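If a build misbehaves, one option is to override the entrypoint and inspect the image interactively (a debugging aid, not part of the build flow):

#open a shell in the image instead of running prepare-libs.sh
docker run --rm -it --entrypoint /bin/bash lambda-layer-deploy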

The prepare-libs.sh script is launched every time you run the Docker container from the main script, and the library and zip file names are passed in dynamically via environment variables (note the --env DKR_ZIP_FILE=$ZIP_FILE --env DKR_PYTHON_LIB=$PYTHON_LIB flags in the main script).

prepare-libs.sh

#!/bin/bash -xe
cd /tmp
pipenv --python 3
echo "Installing $DKR_PYTHON_LIB"
pipenv install $DKR_PYTHON_LIB
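#note: the site-packages path below must match the Python version of the target Lambda runtime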
PY_DIR='build/python/lib/python3.7/site-packages'
mkdir -p $PY_DIR #Create temporary build directory
pipenv lock -r > requirements.txt #Generate requirements file
pip3 install -r requirements.txt --no-deps -t $PY_DIR #Install packages into the target directory
cd build
echo "Zipping to file $DKR_ZIP_FILE"
zip -r /tmp/$DKR_ZIP_FILE . #Zip files
cd ..
rm -r build #Remove temporary directory

The rough structure of the setup is as below (directory layout inferred from the scripts):
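.
├── build.sh
├── prepDeps.sh
├── dist/             #build output, recreated on each run
├── zips/             #Oracle Instant Client archives used by prepDeps.sh
└── layerPackager/
    ├── Dockerfile
    └── prepare-libs.sh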
