Serverless — AWS Lambda Layers and Python Dependencies

Dorian Machado
Dec 27, 2019

Working with Lambda functions is an amazing experience, as you noticed in my previous articles. Now it’s time to level up our AWS skills by implementing AWS Lambda Layers to handle shared libraries and dependencies between functions.


Before we start, let’s define what an AWS Lambda Layer is.

It’s a ZIP archive that contains libraries, files, or other dependencies, stored and managed centrally so they can be shared across different functions in the same AWS account or even across multiple AWS accounts. 🎉

Using AWS Lambda Layers decreases the size of your Lambda package because you won’t have to upload all the dependencies on every single code change. You just create an additional layer with the required packages or shared libraries and that’s it 🎉 🎉 🎉

If we picture the above graphically, it would look something like this:

Not using AWS Lambda Layers

After applying AWS Lambda Layers you will be able to share libraries across functions, and the picture changes to look like this:

Using AWS Lambda Layers. Amazing !!!!!!

Enough theory, it’s time to get our hands dirty 🛠 and start to break things 🤓


To demonstrate the utility of AWS Lambda Layers, let’s enhance my previous post Serverless — AWS Lambda Python Dependencies, leveling up the way the Python Lambda functions consume and share libraries with each other.

Remember that all the source code we use in this article is available on GitHub.

If you remember the previous post, we figured out how to use jsonpath_rw as a dependency for some Python functions, but the library was part of the Lambda package, which means the function package grows in size.

This time we will consume the jsonpath_rw library as a Layer from the Python Lambda functions 📚 Magic !!! ✨


OK folks, let’s get down to code 💻

For this lab, and following best practices, we will handle our code and the layers as independent Serverless projects:

  • my-app (will contain the python functions source code)
  • my-layers (will handle the lambda layers)

So let’s play a little bit with the project named my-app

Create the Serverless project

sls create --template aws-python3 --name my-app --path my-app
cd my-app

It’s simple Python code I took from the previous post that imports and uses the jsonpath_rw library, and the project structure looks like this:

my-app/
├── serverless.yml
└── src
    └── handler.py

Where handler.py contains the following code:

import json
from jsonpath_rw import jsonpath, parse


def hello(event, context):
    body = {
        "message": "Python Function executed successfully!"
    }
    jsonpath.auto_id_field = 'id'
    data = [match.value for match in parse('foo[*].id').find({'foo': [{'id': 'bizzle'}, {'baz': 3}]})]
    response = {
        "statusCode": 200,
        "body": json.dumps(body),
        "data": json.dumps(data)
    }
    return response
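
Because the handler is plain Python, you can sanity-check it locally before deploying. Here is a minimal sketch (the file name local_test.py is just an assumption for illustration, and it assumes jsonpath_rw is installed in your local environment, e.g. via pip install jsonpath-rw) that you would run from the my-app folder:

# local_test.py — quick local check of the handler (not part of the deployed package)
from src.handler import hello

# Lambda passes an event and a context; this handler uses neither,
# so empty placeholders are enough.
response = hello(event={}, context=None)

print(response["statusCode"])  # expected: 200
print(response["body"])        # the JSON-encoded message
print(response["data"])        # the values extracted with jsonpath_rw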

And the serverless.yml looks like this, declaring the function called “hello”:

service: my-app

provider:
  name: aws
  runtime: python3.7

functions:
  hello:
    handler: src/handler.hello

Time to deploy 🚀

sls deploy

and test our function ⚡️

sls invoke -f hello

Don’t worry, you will get an error similar to this one:

There we can see that it was not possible to import the jsonpath_rw module. Believe it or not, this is good for us at the moment 😃 don’t worry, we’ll enhance this Lambda function later.


Now we will create the Serverless project to manage the Lambda layers (yes, we will finally get our hands dirty with the layers 🎉)

sls create --template aws-python3 --name my-layers --path my-layers
cd my-layers

The project will have the following structure:

my-layers
├── layers
│   └── tools
│       └── aws_requirements.txt
└── serverless.yml

The layers folder will contain our layers (duhhh 😅). In this case we named the layer folder tools (just to give it a friendly name), and aws_requirements.txt will hold our Python library dependencies, in this case just jsonpath_rw.
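
For reference, aws_requirements.txt is just a regular pip requirements file; for this lab it only needs a single line:

jsonpath_rw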

The serverless.yml file looks like this one:

service: my-layers

provider:
  name: aws
  runtime: python3.7

layers:
  python-app-dependencies:
    path: layers/tools
    compatibleRuntimes:
      - python3.7
    description: "Dependencies jsonpath_rw for python functions"

As you can see we have a new section called layers (where we declare the layers, duhhh 😅). We can declare one or more layers; in this case the name of the layer is “python-app-dependencies” and we indicate that the layer’s content is located at “layers/tools”.

Also, with “compatibleRuntimes” we can indicate which runtimes can use this layer, in this case python3.7, and the description attribute is useful because it lets us specify a short description of what the layer contains.

Before deploying we must execute the following command inside the folder “layers/tools”:

cd layers/tools
pip install -t python/lib/python3.7/site-packages -r aws_requirements.txt

The command above downloads the dependencies into the path “python/lib/python3.7/site-packages”, which is the default path for Python-related layers.
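
After running the pip install, the layer folder should look roughly like this (the exact set of packages depends on the transitive dependencies pip resolves for jsonpath_rw):

layers/tools/
├── aws_requirements.txt
└── python
    └── lib
        └── python3.7
            └── site-packages
                ├── jsonpath_rw/
                └── ...   (transitive dependencies pulled in by pip)

At runtime Lambda extracts the layer under /opt, and for Python runtimes /opt/python/lib/python3.7/site-packages is already on the import path, which is why the function will be able to simply import jsonpath_rw.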

Time to deploy 🚀

sls deploy

The output of the deploy should look like this one, where in the purple square we can see the ARN of the newly created Lambda layer:

If we check the Lambda Layers section in the web console we should see something like this:

Amazing, we deployed our very first Lambda Layer to AWS 🎉


Now it’s time to put everything together and configure our Lambda function “hello” to use our beautiful, just-uploaded Layer.

⚠️ But wait ⚠️ . . . How can we consume the layer from our function if the app and the layer live in separate CloudFormation stacks?
Answer: using the ARN of the layer.

But how can we use the ARN of the layer without hardcoding it in the code, without using environment variables, and without using Parameter Store?
Answer: using the CloudFormation stack Outputs.

As always in my posts, I have the magic solution for our Serverless problems 🤓 in this case we can reference the CloudFormation Outputs of the layers stack from inside our serverless.yml file.

To tell the Lambda function “hello” from the project “my-app” to use the Lambda layer from the project “my-layers”, this is the configuration.

💡 NOTE: no change at all is needed in the Python code of the function “hello”

service: my-app

provider:
  name: aws
  runtime: python3.7

functions:
  hello:
    handler: src/handler.hello
    layers:
      - "${cf:my-layers-dev.PythonDashappDashdependenciesLambdaLayerQualifiedArn}"

The output key “PythonDashappDashdependenciesLambdaLayerQualifiedArn” was obtained from the CloudFormation Outputs of the stack “my-layers-dev”: it is derived from the layer name python-app-dependencies (with each dash spelled out as “Dash”) plus the LambdaLayerQualifiedArn suffix, as shown:
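
If you prefer the terminal over the web console, a quick way to list those outputs is the AWS CLI (a sketch assuming your credentials and region are already configured, and that the stack was deployed to the default dev stage, hence the name my-layers-dev):

# List the CloudFormation outputs of the layers stack;
# look for the key ending in "LambdaLayerQualifiedArn"
aws cloudformation describe-stacks \
  --stack-name my-layers-dev \
  --query "Stacks[0].Outputs"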

Time to deploy 🚀

sls deploy

If you check the Lambda function in the AWS web console, it should look like this one, indicating that 1 layer is attached to it 🎉

Now if we test our function ⚡️

sls invoke -f hello

Amazing !!!!!!!! 🎉 🎈 🎊 Our Python function worked successfully as expected, importing the jsonpath_rw library without problems.

We are awesome, guys, we are Serverless experts 🤓

Conclusions

Using layers with Lambda allows us to keep our Python code intact, which means we can use the same code in our local development environment and in our AWS environment.

In addition to adding libraries such as jsonpath_rw, we can also add .py files (common libraries) to a layer to be imported from the Python code of the Lambda functions.
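
For example, here is a hypothetical sketch (the module name my_helpers.py is not part of this lab’s code): drop the file into layers/tools/python/ inside the my-layers project, redeploy the layer, and the function can import it directly because /opt/python is on the Lambda Python path.

# layers/tools/python/my_helpers.py  (hypothetical shared module shipped in the layer)
def greet(name):
    return "Hello, " + name + "!"

# src/handler.py in my-app (with the layer attached) could then simply do:
# from my_helpers import greet
# greet("Serverless")  # -> "Hello, Serverless!"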

Dorian Machado

Written by

Serverless Evangelist / Entrepreneur
