Using Lambda containers with the Serverless Framework

Eric Cabrel Tiogo
Published in OVRSEA · Sep 6, 2021

At Ovrsea, we run on a microservices architecture backed by a serverless stack so that we can focus on what brings value to our customers. For our team, being able to ship fast and frequently is paramount; this is the benefit we get from serverless. During our journey, we have often faced limitations related to how it works. In December 2020, AWS released Docker container image support for Lambda, lifting many of those constraints when working with Lambda functions.

Previously, deploying a Lambda function meant zipping your source code, uploading it to an S3 bucket, and deploying it with a CLI tool. With the new container support, you can build a Docker image containing your source code along with the libraries and tools (Puppeteer, ML libraries, an SSH client, etc.) needed to run your function. Once done, you push the image to a Docker registry and finally deploy it with a CLI tool.

This tutorial shows how to create a Lambda container image with Docker, test it in a local environment, and deploy it to production.

Some benefits of Lambda containers

  • No server infrastructure to manage, so you can focus on your code.
  • Low cost, since you only pay for what you use.
  • Scales automatically with the traffic load.
  • High availability, since a Lambda function can be deployed in multiple regions.
  • Supports many programming languages, which makes it flexible.
  • Easy to deploy: you describe your architecture, and the serverless tools do the rest.
  • Favors fast iteration and frequent deployments.

Prerequisites

Before getting our hands dirty writing code, make sure your development environment has the following tools installed and configured:

  • An AWS account (a free tier is enough)
  • AWS CLI installed with credentials configured
  • Docker installed
  • Node.js 14+

Create the Serverless project

The Serverless Framework is a tool that makes it easier to create, test, and deploy serverless projects. It supports the major cloud providers such as AWS, GCP, and Azure. The first step is to install it on our computer.
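For example, you can install it globally with npm:

# Install the Serverless Framework CLI globally
npm install -g serverless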

Now create a Node.js project with the command below:
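One way to generate a project with the layout shown below is the framework's aws-nodejs-docker template (lambda-hello is just an example name):

# Scaffold a Node.js + Docker Lambda project and move into it
serverless create --template aws-nodejs-docker --path lambda-hello
cd lambda-hello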

The project generated has the following structure:

├── Dockerfile
├── README.md
├── app.js
└── serverless.yml

Create the Docker image

The Docker image can be created from one of the base images AWS provides, which already implement the required runtimes. You can also create a custom image, but it must implement the Lambda Runtime API to work. We will choose the second option so that we can see how to implement the Lambda Runtime API.

Open the Dockerfile and replace the content with the following:
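The exact content can vary; here is a minimal sketch that matches the explanation below (the node:14-buster-slim tag, the /function directory, and the app.handler name are illustrative choices):

# Path of the project inside the container, reused in several places
ARG FUNCTION_DIR="/function"

FROM node:14-buster-slim

# Re-declare the argument so it is visible after FROM
ARG FUNCTION_DIR

# Build tools required to compile the native parts of aws-lambda-ric
RUN apt-get update && \
    apt-get install -y g++ make cmake unzip libcurl4-openssl-dev autoconf libtool && \
    rm -rf /var/lib/apt/lists/*

# Create the project directory, add package.json, and install the dependencies
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package*.json ./
RUN npm install
COPY app.js ./

# Command executed when the container is launched: the Runtime Interface Client
# from node_modules, wrapping the handler defined in app.js
ENTRYPOINT ["/function/node_modules/.bin/aws-lambda-ric"]
CMD ["app.handler"]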

Let’s explain what happens above:

  • First, we store the project's path inside the container in a variable (ARG), since it is used in several places.
  • We use the Node.js buster-slim Docker image as the base image.
  • We then install the build tools required by the Runtime Interface Client, create the project directory, add package.json, and install the dependencies.
  • Finally, the ENTRYPOINT and CMD define the command to execute when the container is launched.

Implement the Lambda Runtime API

In the ENTRYPOINT instruction of the Dockerfile, we call an executable named aws-lambda-ric installed in node_modules. This is the Lambda Runtime Interface Client, a lightweight interface that allows your runtime to receive requests from, and send responses to, the Lambda service over HTTP. Since we did not build our Docker image from an AWS Lambda public base image, we must add this package through npm to make our image Lambda compatible.
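The package only needs to be added to package.json; the npm install step in the Dockerfile then installs it inside the image:

# Add the Runtime Interface Client as a dependency of the project
npm install aws-lambda-ric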

Build the Docker image. Note that you must be in the project root directory:

docker build -t lambda-hello:v1 .

Test it locally

Our Docker image is now ready, but we want to test it locally before deploying the Lambda function to production. With Serverless, we usually run sls invoke local -f <function_name>, but that will not work here because we don't have the Lambda execution context. Fortunately, AWS provides a Runtime Interface Emulator (RIE) which, as its name indicates, emulates the Lambda execution context locally. Let's install it in the user's home directory:
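A sketch of that installation, following the download URL AWS documents for the emulator (the ~/.aws-lambda-rie directory is simply a convention):

# Download the Runtime Interface Emulator into ~/.aws-lambda-rie and make it executable
mkdir -p ~/.aws-lambda-rie
curl -Lo ~/.aws-lambda-rie/aws-lambda-rie \
  https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie
chmod +x ~/.aws-lambda-rie/aws-lambda-rie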

Time for the test!

  • Open a first terminal and launch a container with the RIE (see the sketch just after the curl command below).
  • Open a second terminal and run the command below to trigger the Lambda execution:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
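For the first terminal, the container can be launched with the emulator mounted as the entrypoint, along these lines (paths and the app.handler name follow the Dockerfile sketch above):

# Run the image with the RIE as entrypoint; Lambda's port 8080 is exposed locally on 9000
docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
  --entrypoint /aws-lambda/aws-lambda-rie \
  lambda-hello:v1 \
  /function/node_modules/.bin/aws-lambda-ric app.handler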
Testing the Lambda container locally

In the second terminal, we get the response returned by the Lambda function. The handler code lives in the file app.js:
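A minimal handler is enough for this test; the message returned below is only an example:

'use strict';

// Handler referenced as "app.handler" in the Dockerfile CMD
module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello from a Lambda container!',
      input: event,
    }),
  };
};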

Deploy

Now that our code works as expected in the local environment, we will deploy it to production in three steps:

  • Push the Docker image into a container registry. We will use AWS Elastic Container Registry (ECR).

Log into your AWS account, create an ECR repository called lambda-hello, and then execute the commands below to push the image.
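A sketch of those commands, where <account_id> is your AWS account ID and eu-west-1 is an example region:

# Authenticate Docker against your private ECR registry
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin <account_id>.dkr.ecr.eu-west-1.amazonaws.com

# Tag the local image with the repository URI, then push it
docker tag lambda-hello:v1 <account_id>.dkr.ecr.eu-west-1.amazonaws.com/lambda-hello:v1
docker push <account_id>.dkr.ecr.eu-west-1.amazonaws.com/lambda-hello:v1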

  • Update serverless.yml to reference the Docker image we just pushed to ECR. We will also add a cron job that triggers this Lambda function every Saturday at 2:30 AM (a sketch of this configuration follows the deploy command below).
  • Deploy our Lambda function using the Serverless Framework:
sls deploy
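For the second step, the serverless.yml can look roughly like this; the service name, region, and image URI are placeholders, and recent versions of the framework expect the image to be referenced by its digest:

service: lambda-hello

provider:
  name: aws
  region: eu-west-1

functions:
  hello:
    # Image pushed to ECR above, referenced by its digest
    image: <account_id>.dkr.ecr.eu-west-1.amazonaws.com/lambda-hello@sha256:<digest>
    events:
      # Trigger the function every Saturday at 2:30 AM (UTC)
      - schedule: cron(30 2 ? * SAT *)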

Wait for the Serverless CLI to complete the deployment, then go to the Lambda section of the AWS console. If everything was done correctly, you will see an output similar to the one below:

Testing the Lambda function in AWS Console

Conclusion

Throughout this tutorial, we saw how to take advantage of AWS Lambda's Docker container support to build applications with more advanced capabilities.

Find the source code of this project on the GitHub repository.
