Automate Python library deployment to an AWS Lambda layer from a Pipfile with Terraform

Melon
Craftsmen — Software Maestros
Aug 22, 2020

AWS Lambda, part of Amazon Web Services (AWS), is a serverless computing service that works as FaaS (Function as a Service). A FaaS lets users develop and manage application services without having to think about the underlying infrastructure.

Terraform is an Infrastructure as Code (IaC) tool developed by HashiCorp to manage resources of cloud services like AWS, Google Cloud, Azure, etc. It is open source and written in Go.

Zipping code and uploading it to AWS Lambda on every deployment is always a hassle. The trickier part is packaging library code, e.g. Python libraries. At Craftsmen we manage a lot of Lambdas for various development purposes, so an automated way to upload Lambda function code and libraries during deployment is a pressing need.

Our approach is to upload function code at the function level and libraries in a Lambda layer, for two reasons:
1. Python libraries can be shared between Lambdas.
2. The console code editor can only display up to 3 MB of code, so moving libraries into a layer keeps the function package small enough to view in the console editor.
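
For context, Lambda expects a Python layer zip to keep its packages under a top-level python/ directory so the runtime can find them on sys.path. The build script later in this post produces exactly that layout; unpacked, the layer looks roughly like this:

layer.zip
└── python
    ├── requests
    └── ... (other installed packages)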

Setting up the project

Let’s start by setting up the project skeleton. We are going to use pipenv because it is more developer-friendly for maintaining dev and release packages separately. First, install pipenv (https://pipenv.pypa.io), then install Terraform (https://www.terraform.io).

# Create a project directory
mkdir lambda-with-terraform
cd lambda-with-terraform
# Create lambda code directory
mkdir lambda
# Create Terraform directory
mkdir terraform
# Add handler.py file in lambda directory
touch lambda/handler.py
# Add Pipfile in project root directory
touch Pipfile
# So our skeleton will look like this
tree
├── lambda
│   └── handler.py
├── Pipfile
└── terraform

Add Python libraries

We will use a single runtime library, requests, in [packages], and pytest and pipenv in [dev-packages]. We are also going to use Python 3.8. Let’s add them all to the Pipfile.

# Pipfile
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
pytest = "==5.3.5"
pipenv = "==2020.8.13"
[packages]
requests = "==2.23.0"
[requires]
python_version = "3.8"

Initialize the pipenv virtual environment with

pipenv install

This will create the Pipfile.lock file, which pins the exact version of every Python package.

Add function code

Let’s start with a simple Lambda function that just fetches a web page and logs it.

# handler.py
import requests
import logging

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

def lambda_handler(event, context):
    response = requests.get("https://example.com")
    LOGGER.info(response.text)
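
Before wiring up Terraform, you can smoke-test the handler locally from the pipenv shell (run it from the lambda directory so the import resolves). A minimal sketch; local_test.py is just an illustrative helper, not part of the project:

# local_test.py
from handler import lambda_handler

# The handler ignores both arguments, so empty placeholders
# are enough for a local smoke test.
lambda_handler(event={}, context=None)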

Add Terraform code

First, we will add some files for Terraform

# Create file for lambda terraform codes
touch terraform/lambda.tf
# Create file for lambda layer terraform codes
touch terraform/lambda_layer.tf
# Create python file to generate a pip requirements file from the Pipfile
touch terraform/requirements_creator.py
# Add shell script to generate pip requirements and make a zip file of lambda libraries
touch terraform/build_layer.sh
chmod a+x terraform/build_layer.sh

In requirements_creator.py, we add a Python argument parser that takes the pip requirements filename, e.g. requirements.txt

# terraform/requirements_creator.py
import argparse

from pipenv.project import Project
from pipenv.utils import convert_deps_to_pip


def _make_requirements_file(file_path):
    pipfile = Project(chdir=False).parsed_pipfile
    requirements = convert_deps_to_pip(pipfile['packages'], r=False)
    with open(file_path, 'w') as req_file:
        req_file.write('\n'.join(requirements))


def run():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--file_path',
        '-file_path',
        type=str,
        default='requirements.txt'
    )
    args = parser.parse_args()
    _make_requirements_file(args.file_path)


if __name__ == "__main__":
    run()
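
Run inside the pipenv shell as python terraform/requirements_creator.py --file_path requirements.txt, this produces a requirements file containing only the pinned runtime packages from [packages]:

requests==2.23.0

(At the time of writing, pipenv can also emit requirements itself via pipenv lock -r, but that output includes the package index header, while this helper keeps the file limited to the [packages] entries.)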

Now let’s add a shell script that generates the zip file for the Lambda layer with the Python libraries

#!/usr/bin/env bash
# terraform/build_layer.sh
DESTINATION_DIR=${DESTINATION_DIR:-$PWD}
MODULE_DIR=${MODULE_DIR:-$PWD}
ZIPFILE_NAME=${ZIPFILE_NAME:-layer}
echo "Module dir $MODULE_DIR"
echo "Destination dir $DESTINATION_DIR"

TARGET_DIR=$DESTINATION_DIR/$ZIPFILE_NAME
echo "Target dir $TARGET_DIR"
mkdir -p "$TARGET_DIR"
REQUIREMENTS_FILE_PATH=$MODULE_DIR/requirements.txt
# Generate requirements.txt from the Pipfile, then install the
# packages into the python/ directory that Lambda layers expect
python3 "$MODULE_DIR"/requirements_creator.py --file_path "$REQUIREMENTS_FILE_PATH"
pip install -r "$REQUIREMENTS_FILE_PATH" -t "$TARGET_DIR"/python
# Zip the layer, excluding package metadata and bytecode caches
(cd "$TARGET_DIR" && zip -r "$DESTINATION_DIR"/"$ZIPFILE_NAME".zip ./* -x "*.dist-info*" -x "*__pycache__*" -x "*.egg-info*")
rm "$REQUIREMENTS_FILE_PATH"
rm -r "$TARGET_DIR"
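
Once the script has run (Terraform will place the artifact in terraform/lambda_zip, as configured below), a quick illustrative Python snippet can confirm that everything in the zip sits under the python/ prefix Lambda requires:

# check_layer.py: verify the layer zip layout (illustrative helper)
import zipfile

with zipfile.ZipFile("terraform/lambda_zip/layer.zip") as zf:
    # Every entry should sit under the top-level python/ directory
    top_level = {name.split("/")[0] for name in zf.namelist()}
    print(top_level)  # expected: {'python'}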

Now add the Lambda layer Terraform code

// terraform/lambda_layer.tf
locals {
  // Directory for all lambda code zips and the layer zip file
  lambda_artifact_dir       = "${path.module}/lambda_zip"
  lambda_layer_zipfile_name = "layer"
  python_version            = "python${data.external.python_version.result.version}"
}

// Grab the python version from the Pipfile. Defaults to 3.8 if not
// mentioned in the Pipfile
data "external" "python_version" {
  program = [
    "python3",
    "-c",
    "from pipenv.project import Project as P; import json; _p = P(chdir=False); print(json.dumps({'version': _p.required_python_version or '3.8'}))"
  ]
}
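
Terraform's external data source runs the program and reads a single JSON object of string values from its stdout. Unrolled for readability, the inline one-liner is equivalent to:

# Expanded version of the inline program above, for readability only
import json

from pipenv.project import Project

project = Project(chdir=False)
# Terraform's external data source expects one JSON object on stdout;
# fall back to 3.8 when the Pipfile does not pin a python_version
print(json.dumps({"version": project.required_python_version or "3.8"}))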

// Generate zipfile for lambda layer
resource "null_resource" "build_lambda_layer" {
  provisioner "local-exec" {
    when    = create
    command = "./${path.module}/build_layer.sh"

    environment = {
      DESTINATION_DIR = abspath(local.lambda_artifact_dir)
      MODULE_DIR      = abspath(path.module)
      ZIPFILE_NAME    = local.lambda_layer_zipfile_name
    }
  }

  triggers = {
    // Trigger only when something changes in the Pipfile
    run_on_pipfile_change = filemd5("${abspath(path.module)}/../Pipfile")
  }
}

resource "aws_lambda_layer_version" "lambda_layer" {
filename = "${local.lambda_artifact_dir}/${local.lambda_layer_zipfile_name}.zip"
layer_name = "lambda_layer"
compatible_runtimes = [local.python_version]
// It will run after lambda layer zipfile build
depends_on = [null_resource.build_lambda_layer]

lifecycle {
create_before_destroy = true
}
}

Finally, we are going to add the Lambda function Terraform code

// terraform/lambda.tf

// Zip lambda function codes
data "archive_file" "lambda_zip_file" {
  output_path = "${local.lambda_artifact_dir}/lambda.zip"
  source_dir  = "${path.module}/../lambda"
  excludes    = ["__pycache__", "*.pyc"]
  type        = "zip"
}

data "aws_iam_policy_document" "lambda_assume_role" {
version = "2012-10-17"

statement {
sid = "LambdaAssumeRole"
effect = "Allow"
actions = [
"sts:AssumeRole"
]
principals {
identifiers = [
"lambda.amazonaws.com"
]
type = "Service"
}
}
}
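
For reference, Terraform renders this policy document to a JSON trust policy roughly equivalent to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LambdaAssumeRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      }
    }
  ]
}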

// Lambda IAM role
resource "aws_iam_role" "lambda_role" {
  name               = "test-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json

  lifecycle {
    create_before_destroy = true
  }
}

// Lambda function terraform code
resource "aws_lambda_function" "lambda_function" {
  function_name    = "test-lambda-function"
  filename         = data.archive_file.lambda_zip_file.output_path
  source_code_hash = data.archive_file.lambda_zip_file.output_base64sha256
  handler          = "handler.lambda_handler"
  role             = aws_iam_role.lambda_role.arn
  runtime          = local.python_version
  layers           = [aws_lambda_layer_version.lambda_layer.arn]

  lifecycle {
    create_before_destroy = true
  }
}

Time to test

To test that everything works, we have to add a Terraform provider. In our case the provider is AWS. Let’s add a provider.tf file in the terraform directory. One note before deploying: the role above only allows Lambda to assume it, so to actually see the handler’s log output in CloudWatch you would also attach a logging policy such as the AWS-managed AWSLambdaBasicExecutionRole.

# terraform/provider.tf
provider "aws" {
  region  = "eu-west-1"
  profile = "aws-profile-name-from-aws-config-file-at-your-machine"
}

The project now looks like this

.
├── lambda
│   └── handler.py
├── Pipfile
├── Pipfile.lock
└── terraform
    ├── build_layer.sh
    ├── lambda_layer.tf
    ├── lambda.tf
    ├── provider.tf
    └── requirements_creator.py

Let’s build the infrastructure 😹

# Activate pipenv virtual environment
pipenv shell
# Go to terraform directory
cd terraform
# Initialize terraform
terraform init
# Preview the infrastructure changes to deploy
terraform plan
# And deploy with
terraform apply
# If you want to destroy all
terraform destroy
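
After terraform apply succeeds, you can invoke the function to verify that the layer works. A minimal sketch using boto3, assuming your default AWS credentials resolve to the same account Terraform deployed to (invoke_test.py is just an illustrative name):

# invoke_test.py: smoke-test the deployed function
import boto3

client = boto3.client("lambda", region_name="eu-west-1")
response = client.invoke(FunctionName="test-lambda-function")
# 200 means the invocation succeeded; the fetched page contents
# end up in the function's CloudWatch logs
print(response["StatusCode"])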
