Serverless Simplified: Integrating Docker Containers into AWS Lambda via serverless.yml

The SaaS Enthusiast
11 min read · Feb 19, 2024


[Image: Docker containers floating among clouds, representing containerized applications deployed in a cloud environment.]

This workflow starts with developing and testing Docker containers for AWS Lambda locally. Once the container works on your machine, it is pushed to Amazon ECR, marking the transition from local development to cloud readiness. A few changes to the serverless.yml file then let you deploy that container image as an AWS Lambda function, using Lambda's native support for container images. The result is a clean integration between Docker and AWS services that keeps the scalability and cost-efficiency of serverless deployment.

How to Choose: AWS Lambda vs. Docker Containers

Choose AWS Lambda When:

  • Short-lived Operations: Lambda functions are designed for short-term, event-driven operations. If your tasks can be executed within Lambda’s execution time limits (up to 15 minutes), Lambda might be a good fit.
  • Scalability and Simplicity: AWS Lambda automatically scales your application by running code in response to each trigger. This model is great for workloads that vary in size and for developers who prefer not to manage the underlying infrastructure.
  • Cost-Effectiveness: For applications with variable or unpredictable workloads, Lambda can be more cost-effective, as you only pay for the compute time you consume.
  • Serverless Architecture: If you’re building a serverless application, Lambda integrates well with other AWS serverless services, providing a seamless development experience.

Transition to Docker Containers When:

  • Longer Execution Time: If your tasks exceed Lambda’s execution time limits or require more prolonged processing, using containers might be more suitable. Docker can run long-lived applications without the same constraints as Lambda.
  • Complex Dependencies: If your application has complex dependencies or requires a specific execution environment that’s difficult to replicate in Lambda, Docker containers allow for greater control over the environment your application runs in.
  • Consistent Environment: Docker ensures your application runs in the same environment during development, testing, and production, reducing the “it works on my machine” problem.
  • Custom Runtime Requirements: When your application needs a specific runtime not natively supported by Lambda, or you need more control over the runtime environment, Docker containers provide the flexibility to customize as needed.

Good Indicators for Transitioning from Lambda to Docker:

  • Hitting Limits: If you’re consistently hitting Lambda’s execution time, memory, or concurrent execution limits, it might be time to consider Docker.
  • Increased Complexity: As your application grows in complexity, the simplicity of Lambda might become restrictive. Transitioning to Docker can provide the flexibility needed for complex applications.
  • Performance Requirements: If your application requires more CPU or memory than Lambda can provide, or if you need to optimize for performance in ways that Lambda does not allow, Docker containers can offer better resources and customization.

AWS offers other container services like ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service), which can manage your Docker containers at scale, providing a middle ground between the simplicity of Lambda and the flexibility of managing your own Docker environments.

Ultimately, the decision should be based on a careful consideration of your application’s specific needs, performance requirements, and the trade-offs between operational simplicity and control over the environment. Transitioning from Lambda to Docker is a significant step that involves more infrastructure management but offers greater flexibility and capability for complex applications.

Let’s get started

Dockerfile

This Dockerfile is designed for deploying a Node.js Lambda function using a Docker container. It specifies how the Docker image for your Lambda function should be built, layer by layer. Here’s a step-by-step explanation:

FROM --platform=linux/amd64 public.ecr.aws/lambda/nodejs:18

# Set the working directory to the Lambda task root
WORKDIR ${LAMBDA_TASK_ROOT}

# Copy the package.json and package-lock.json (if available) for your Lambda function
COPY functions/pdf/package*.json ./

# Install any remaining dependencies from your package.json
RUN npm install --production

# Copy the rest of your application's code to the container image
# Copy your function code and any other necessary directories into the Docker image
COPY functions ${LAMBDA_TASK_ROOT}/functions
COPY libs ${LAMBDA_TASK_ROOT}/libs


# Set the CMD to your handler
#CMD ["functions/pdf/group-report-cards.handler"]
CMD ["functions/pdf/hello.handler"]

Base Image Specification:

FROM --platform=linux/amd64 public.ecr.aws/lambda/nodejs:18

This line specifies the base image from which your Docker image starts. It uses the official AWS Lambda image for Node.js 18, so the environment is pre-configured for running Lambda functions. The --platform=linux/amd64 option forces the image to be built for the x86_64 (AMD64) architecture, which matters when building on an ARM machine (such as an Apple Silicon Mac), since Lambda runs x86_64 images unless the function is explicitly configured for arm64.

Setting the Working Directory:

WORKDIR ${LAMBDA_TASK_ROOT}

Sets the working directory inside the Docker container to ${LAMBDA_TASK_ROOT}, an environment variable defined in AWS Lambda base images that points to the directory where the Lambda function code should reside.

Copying package.json and package-lock.json:

COPY functions/pdf/package*.json ./

This command copies package.json (and package-lock.json if available) from your project's functions/pdf directory to the current working directory in the Docker image. These files define the project dependencies.

Installing Dependencies:

RUN npm install --production

Executes npm install --production, installing the dependencies listed in package.json without including the packages listed in devDependencies. This keeps the Docker image size smaller by excluding unnecessary packages.

Copying Application Code:

COPY functions ${LAMBDA_TASK_ROOT}/functions
COPY libs ${LAMBDA_TASK_ROOT}/libs

These commands copy the rest of your application code into the Docker image. It copies the functions directory (which contains your Lambda function code) and the libs directory (which might contain shared libraries or utilities) to their respective locations within the ${LAMBDA_TASK_ROOT} directory in the image.

Setting the Command (CMD):

CMD ["functions/pdf/hello.handler"]

Specifies the default command that should be executed when the Docker container starts. In this case, it’s indicating that the Lambda function handler named hello.handler within the functions/pdf directory should be invoked. The handler is the entry point of your Lambda function, typically formatted as fileName.methodName.

This Dockerfile is tailored for AWS Lambda deployment using Docker, ensuring that the environment and file structure are optimized for the Lambda execution model. It demonstrates a straightforward way to containerize a Node.js Lambda function, making deployment and scaling easier to manage within the AWS ecosystem.

Run it locally

The following commands are used to build and run a Docker container based on the Dockerfile for a Node.js AWS Lambda function, and then to invoke the function locally using curl. Let's break down each command and its purpose in detail:

docker build --platform linux/amd64 -t pdf-hello .
docker run --platform linux/amd64 -v ~/.aws:/root/.aws -p 9000:8080 pdf-hello
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

1. Building the Docker Image

docker build --platform linux/amd64 -t pdf-hello .
  • docker build: This is the Docker CLI command used to build a Docker image from a Dockerfile.
  • --platform linux/amd64: Specifies the target platform for the build. This option ensures that the Docker image is built for the AMD64 architecture, which is important for compatibility, especially if you're using a machine with a different architecture (e.g., M1/M2 chips on Mac).
  • -t pdf-hello: The -t flag tags the resulting Docker image with the name pdf-hello. This tag is used to identify the image when you want to run a container based on it.
  • .: The last part of the command specifies the build context (in this case, the current directory). Docker uses the files and directories in the specified context to build the image. The Dockerfile in the current directory is used as the recipe for the build.

2. Running the Docker Container

docker run --platform linux/amd64 -v ~/.aws:/root/.aws -p 9000:8080 pdf-hello
  • docker run: This command creates and starts a Docker container based on the specified image.
  • --platform linux/amd64: Similar to the build command, it specifies the platform for the container. This ensures consistency in the execution environment, especially when running on machines with different CPU architectures.
  • -v ~/.aws:/root/.aws: This option mounts the ~/.aws directory from your local machine to /root/.aws inside the Docker container. This is particularly useful for AWS Lambda functions that require AWS credentials for accessing other AWS services. The container can use your local AWS configuration and credentials.
  • -p 9000:8080: Maps port 9000 on your local machine to port 8080 inside the Docker container. AWS Lambda Docker images use port 8080 for the runtime interface emulator, which allows you to invoke your function locally.
  • pdf-hello: Specifies the name of the image to run, which you tagged when building it.

3. Invoking the Lambda Function Locally

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
  • curl: A command-line tool used for making HTTP requests.
  • -XPOST: Specifies that the HTTP request is a POST method, which is required to invoke the Lambda function.
  • "http://localhost:9000/2015-03-31/functions/function/invocations": This URL is the endpoint exposed by the AWS Lambda runtime interface emulator running inside the Docker container. It follows the AWS Lambda Invoke API format. The emulator listens on port 8080 inside the container, which the -p flag mapped to local port 9000, allowing you to invoke the function.
  • -d '{}': Sends an empty JSON object as the data payload of the POST request. This mimics the event object that Lambda functions receive when invoked. Depending on your function's logic, you might need to replace the empty JSON object with a more appropriate payload.

Together, these commands provide a workflow for locally building, running, and testing a Dockerized AWS Lambda function. This approach is useful for local development and debugging before deploying the function to AWS Lambda.

Building Changes Locally Automatically

There are several ways to automate the process of building and making changes to your Lambda code available locally. This can significantly streamline the development workflow, especially when frequently updating your code. Here are a few approaches you can consider:

1. Docker Compose with Build Context

Using Docker Compose, you can set up a configuration that rebuilds your Docker image and restarts your container whenever you make changes to your Lambda code. Docker Compose allows you to define and run multi-container Docker applications. With the right setup, you can automate the build process based on file changes.

  • Create a docker-compose.yml file: Define your service, including the build context and any volumes for live code updates. Here's an example configuration:
version: '3.8'
services:
  lambda:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./functions:/var/task/functions
      - ~/.aws:/root/.aws
    ports:
      - "9000:8080"
  • Use Docker Compose to Build and Run: Instead of using docker build and docker run separately, use docker-compose up --build. This command builds the image if it doesn't exist or rebuilds it if the Dockerfile has changed, and then starts the container.

For automatically reloading changes without rebuilding the image, mounting volumes is key, as shown above. However, note that for changes in dependencies (i.e., package.json), you'll need to rebuild the image.

2. Using a Watcher Tool

You can use a file watcher tool that monitors changes in your source code and automatically triggers a rebuild and restart of your Docker container. Tools like nodemon, watchexec, or even a simple bash script can be used for this purpose.

  • Example with nodemon: Install nodemon globally with npm if you haven't already:
npm install -g nodemon

Then, you can run nodemon with a custom script to rebuild and restart your Docker container upon any file change. For example:

nodemon --exec "docker-compose up --build" -e js,json

This command tells nodemon to watch for changes in .js and .json files and execute docker-compose up --build whenever a change is detected, rebuilding and restarting your Docker container.
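To avoid typing the nodemon command each time, you could wrap it in an npm script (a hypothetical package.json fragment):

```json
{
  "scripts": {
    "dev": "nodemon --exec \"docker-compose up --build\" -e js,json"
  }
}
```

After that, npm run dev starts the watch-rebuild-restart loop with a single command.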

Deploying the Image

After you’ve tested your Dockerized Lambda function locally and are satisfied with its functionality, the next step is to deploy it to a production or staging environment on AWS. The steps below cover uploading your Docker image to Amazon Elastic Container Registry (ECR) and, optionally, running the same image on AWS Fargate for workloads that outgrow Lambda.

Step 1: Create an Amazon ECR Repository

  1. Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
  2. Create a new repository by clicking the “Create repository” button.
  3. Name your repository and apply any necessary configurations. Then, click “Create repository”.

Step 2: Authenticate Docker to Your ECR Repository

Before you can push or pull images, you need to authenticate your Docker client to your Amazon ECR registry.

  • Run the aws ecr get-login-password command to retrieve an authentication token and authenticate your Docker client to your registry. Use the docker login command to authenticate:
aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-region.amazonaws.com

Replace your-region with your AWS region, and your-account-id with your AWS account ID.

Step 3: Tag Your Docker Image

Before you can push your Docker image to ECR, you need to tag it with the repository URI you created in Step 1.

docker tag pdf-hello:latest your-account-id.dkr.ecr.your-region.amazonaws.com/your-ecr-repository:latest

Replace pdf-hello:latest with the name of your local Docker image, and update the repository URI with your information.

Step 4: Push Your Docker Image to ECR

Now, push your Docker image to the ECR repository:

docker push your-account-id.dkr.ecr.your-region.amazonaws.com/your-ecr-repository:latest

Step 5: Deploy to AWS Fargate

After pushing your Docker image to ECR, you’re ready to deploy it on AWS Fargate.

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. Create a new task definition:
  • Choose “Fargate” as the launch type.
  • Configure the task and container definitions. In the container definitions, specify the image URI from ECR where you pushed your Docker image.
  3. Create a new ECS cluster if you don’t already have one.
  4. Launch a new service within your ECS cluster:
  • Choose the task definition you created.
  • Configure the service to use the Fargate launch type.
  • Follow the prompts to configure networking and other service parameters.
  5. Review and create your service. AWS Fargate will start running your container based on the configurations you’ve specified.
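For reference, a minimal Fargate task definition for this image might look like the following JSON. All names, the role ARN, and the CPU/memory sizing are placeholders you would adjust:

```json
{
  "family": "pdf-hello-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::your-account-id:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "pdf-hello",
      "image": "your-account-id.dkr.ecr.your-region.amazonaws.com/your-ecr-repository:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

Instead of clicking through the console, you can register it with aws ecs register-task-definition --cli-input-json file://task-definition.json.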

Referencing Image in Serverless.yml

To deploy a Docker container as an AWS Lambda function using the Serverless Framework, you’ll need to modify your serverless.yml configuration to reference the Docker image stored in Amazon ECR (Elastic Container Registry). Below is an example of how you might set up your serverless.yml file to achieve this. This setup assumes you've already pushed your Docker image to an ECR repository.

service: pdf-lambda-service

provider:
  name: aws

functions:
  pdfHello:
    image:
      uri: <account-id>.dkr.ecr.<region>.amazonaws.com/your-ecr-repository:latest
    events:
      - http:
          path: pdf/generate
          method: post
          cors: true

This configuration file sets up a Serverless AWS Lambda function that is triggered by HTTP POST requests to /pdf/generate and executes a Docker container hosted in ECR. It leverages AWS Lambda's support for container images, allowing you to run complex or custom runtime applications as serverless functions.

Empower Your Tech Journey:

Explore a wealth of knowledge designed to elevate your tech projects and understanding. From safeguarding your applications to mastering serverless architecture, discover articles that resonate with your ambition.

New Projects or Consultancy

For new project collaborations or bespoke consultancy services, reach out directly and let’s transform your ideas into reality. Ready to take your project to the next level?

  • Protecting Routes
  • Advanced Serverless Techniques
  • Mastering Serverless Series
