Lambda Layers in LocalStack (the free version)

Ciaran Sweet
UK Hydrographic Office
4 min read · Apr 2, 2020
Photo by Kelli McClintock on Unsplash

In my previous blog, I explained how to use LocalStack to test your Lambda functions. While that’s great, you’re limited to the Python packages that the Lambda runtime provides, which can stop you from making cool 😎 stuff. Here, I’ll try to give you a way to use Lambda Layers to increase the cool stuff you can make while still being able to use LocalStack to test it!

Lambda Layers

I’m going to assume that if you’re reading this, you’ve got a pretty good idea of what Lambda Layers are, but if not, here’s a TL;DR:

  • A layer is a zip archive of libraries, custom runtimes, or other dependencies for your Lambda functions
  • You can attach up to five layers to a Lambda function
  • The zipped package you upload is capped at 50MB, but your function and its layers together get 250MB of unzipped space to play with
  • Layers = More space to have fun

LocalStack and Lambda Layers

Everyone who knows me knows that I ❤️ LocalStack. One pain point I have with it though is that the free version does not support Lambda Layers, whereas LocalStack Pro does. Currently we’re using the free version as we ❤️ Open Source and we also don’t like spending when we don’t need to!

After a week of experimenting and having a play, we found a way around this. While it doesn’t perfectly mirror the normal Lambda Layer setup, you can still test your Lambdas with their dependencies.

What’s different?

Normally, a Lambda Layer is a zip containing one or more directories but not your handler code (in the real world, the handler is deployed separately). Unzipped, it might look like:

$ ls unzipped-layer/
python/ # Dir containing all your pip packages
bin/ # Your packages may require binaries
share/ # Ditto the above
lib/ # Ditto also

In our version, we create an uber Lambda which contains all of those directory contents alongside the handler code! We can then deploy this as a Lambda, not a Lambda Layer. This works because the free version of LocalStack doesn’t enforce the upload size limits. Our uber Lambda might look like:

$ ls unzipped-uber-layer/
handler.py
package-1/
package-2/
binary1
binary2
...

Real world example

In our work we require geospatial libraries such as rasterio and shapely. Like most geospatial things, these require GDAL (which is a whole other blog post in itself 🌍). But for this blog, I’ll run you through how we bundle all our required dependencies into an uber Lambda for our LocalStack tests.

We’ve made the code for this blog available, which you can take a look at yourself, but I’ll explain what’s happening in each part now.

build.sh

Inside the lambda/ directory, the brunt of the work is done by build.sh.

There are three main steps inside:

1. Build Layer and extract zip of dependencies:

This block runs the RemotePixel Docker image remotepixel/amazonlinux:gdal3.0-py3.7-cogeo, which contains our required packages (rasterio, shapely, etc.) installed for Python 3.7 and built against GDAL 3.0.

We then run create-layer.sh, which is also provided by RemotePixel. This strips out files that we don’t need in a Layer and zips up the dependencies into a proper Lambda Layer. We then copy that .zip out locally for our own use.
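Roughly, that looks something like the sketch below. The mount path, the location of create-layer.sh, and the name of the zip it writes are all assumptions here, so check build.sh in the repo for the real values.

$ docker run --rm \
    -v "$(pwd)":/local \
    remotepixel/amazonlinux:gdal3.0-py3.7-cogeo \
    bash /local/create-layer.sh   # assumed to write layer.zip into the mounted dir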

2. Unzip Layer and move all items up one directory:

This step does what it says on the tin: we move everything up one directory so that we start to flatten the structure of the Layer into our uber Lambda.
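In shell terms, the flattening is roughly the following (the directory and zip names are assumptions):

$ unzip -q layer.zip -d unzipped-layer/
$ mkdir -p uber-lambda/
$ cp -r unzipped-layer/python/. uber-lambda/   # pip packages straight to the top level
$ cp -r unzipped-layer/bin/. uber-lambda/      # ditto for bin/, lib/ and share/ if present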

3. Copy in Lambda code and zip up new uber Lambda:

This step adds our handler code (and any other files your Lambda might need) to the uber Lambda and zips it up, ready for deployment.
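Something like this sketch (the handler path and the zip name are assumptions):

$ cp lambda/handler.py uber-lambda/
$ (cd uber-lambda/ && zip -qr ../uber-lambda.zip .)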

Easy, right? It’s a simple enough work-around, but it’s not immediately obvious!

test-lambda.sh

This script just runs the above and then runs our test in test_lambda.py.

test_lambda.py

As you can see, our test is very simple. We just invoke the Lambda and ensure that ‘Printed all versions’ is returned. If we get this back, we know that our uber Lambda has worked, as lambda.py just imports all of the packages and prints their versions.
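The real assertion lives in test_lambda.py, but the same check from the shell would look roughly like this (the function name is an assumption, and older LocalStack releases expose Lambda on port 4574 rather than the 4566 edge port):

$ aws --endpoint-url=http://localhost:4566 lambda invoke \
    --function-name uber-lambda output.json
$ grep -q "Printed all versions" output.json && echo "uber Lambda works!"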

Deploying the Lambda is no different from how you normally would. You’re ‘fooling’ LocalStack into thinking it’s just the handler code, when really you’re sneaking in a big ol’ Trojan horse full of packages and cool 😎 stuff.
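For illustration, creating the function against LocalStack looks the same as for any other Lambda; the function name, runtime, handler and dummy role ARN below are assumptions:

$ aws --endpoint-url=http://localhost:4566 lambda create-function \
    --function-name uber-lambda \
    --runtime python3.7 \
    --handler handler.handler \
    --role arn:aws:iam::000000000000:role/lambda-role \
    --zip-file fileb://uber-lambda.zip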

Final thoughts

Whilst we recognise this isn’t the perfect solution, it does work 🤷‍♀️. To accommodate it in a multi-environment setup (such as DEV/TEST/PROD), we have different build.sh scripts that produce a .zip appropriate to each environment. For PROD, we wouldn’t create an uber Lambda; we’d zip up just our handler code, and our Terraform scripts would deploy the Layer properly.

Hopefully this has helped! As always you can find me on Twitter @Ciaran_Evans. Feel free to ping me a message if you’re working on geospatial ‘stuff’ in Lambda, I’d love to hear from you!
