How to create an AWS Lambda Python 3.6 deployment package using Docker

Peter [REDACTED]
3 min read · Oct 12, 2017

Recently, I used Docker to create an AWS Lambda deployment package in Python 3.6 that uses pandas, numpy, and sqlalchemy. I thought I'd quickly share the process for others who want to extend this workflow to other packages and projects.

As mentioned in the title, this method requires Docker, so you'll need to install it first. Don't worry if you've never used Docker before: it's easy to get started with, and you don't need to know much to get a lot accomplished. Here is a quick tutorial I use to remember simple commands.

Once you have Docker installed, fire up a terminal and type in the following command:

docker run -it dacut/amazon-linux-python-3.6

This will first look for the image locally on your computer; when it doesn't find it, Docker will pull it from Docker Hub and start the container in your shell. The image we are pulling is an Amazon Linux instance with Python 3.6 installed, which is exactly what we need to build our deployment package, since AWS Lambda functions using Python 3.6 are executed in an Amazon Linux environment.

When the download is finished, you will notice your prompt change, and you will be in the root directory of the Amazon Linux environment. If you run ls, you will see that you are no longer in your local filesystem. And if you are on a Mac like me, you can run uname to verify that you are in a Linux environment.
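For example, inside the container:

```shell
uname
# prints: Linux
```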

Next, create a new directory and cd into it (I'll call my directory "lambda" for this example). Then run the following inside the newly created directory:

pip3 install <PACKAGE_NAME> -t ./

This will install your package in the current directory. If you have multiple packages, you can just separate the package names with spaces and run it all on the same line. Once all your packages are installed, in the same directory, run the following:

# you can name the .zip file whatever you prefer
zip -r lambda.zip *

This zips all your packages into a single archive. The next step is to copy this zip file from the Docker container onto your local machine. The docker cp command will accomplish this, but you will first need the current container's ID. In another terminal window, run:

docker ps -a

The first column shows you the container ID for a given image. Knowing this, you can then simply run the following to get back only that ID:

docker ps -a | grep "dacut" | awk '{print $1}'

This first lists the containers; grep then filters for the one whose image name contains "dacut", which is unique to the image we pulled; and awk returns only the first column, which is the container ID we want. Given all this, we can finally run the following command to copy the zip file to our local machine:

docker cp $(docker ps -a | grep "dacut" | awk '{print $1}'):/lambda/lambda.zip <PATH_TO_YOUR_LOCAL_DIRECTORY_OF_CHOICE>

In other words, the format is:

docker cp <CONTAINER_ID>:<DOCKER_PATH_TO_ZIP_FILE> <LOCAL_PATH>
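If you want to sanity-check the grep and awk steps without Docker running, you can feed the same pipeline some fake docker ps output (the container ID below is made up):

```shell
# simulate `docker ps -a` output and extract the first column of the matching row
printf 'CONTAINER ID   IMAGE\nabc123def456   dacut/amazon-linux-python-3.6\n' \
  | grep "dacut" | awk '{print $1}'
# prints: abc123def456
```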

Now you have the zip file in a directory on your local machine. cd to that directory and add your lambda_function.py to the archive by running:

zip -ur lambda.zip <PATH_TO_LAMBDA_FUNCTION_FILE>
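In case you don't have one handy, the lambda_function.py you add can be as small as this placeholder (the handler logic here is purely illustrative; your real function would import and use the packages bundled in the zip, such as pandas or sqlalchemy):

```python
# lambda_function.py -- minimal placeholder handler; the handler name must
# match what you configure for the function in the AWS Lambda console
import json

def lambda_handler(event, context):
    # echo the event back so you can confirm the deployment works
    return {"statusCode": 200, "body": json.dumps(event)}
```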

And done! You now have a Python 3.6 deployment package that is ready to be deployed to AWS!

Happy coding!
