AWS Lambda Notes.

Working with Lambda? What to know about the AWS Lambda Serverless service.

Wibo van der Sluis
My Serverless Notebook
4 min read · Jun 14, 2021

Amazon Web Services offers a vast, ever-expanding suite of Serverless services. The leading compute service for Serverless is AWS Lambda. First announced in 2014, it is widely adopted, running all kinds of workloads for hundreds of thousands of AWS customers.

I remember that when Lambda first came out, I used it primarily for Ops automation: running clean-up scripts, backing custom CloudFormation resources, and enabling event-driven operations. For a long while, especially in the financial sector where I have been active recently, it was deemed too risky for production workloads.

That has changed. In my experience, Lambda does a fine job running production loads; I have been running a good number of them myself since 2020 without any issues.

So let’s start with a quick introduction. Lambda lets you run your code as stateless functions as a service (FaaS), supporting microservice architectures with deployment and execution at the function level, without managing servers, event integrations, or runtimes. In addition to having no servers to manage, AWS lists the following benefits: continuous scaling, cost optimization (billing is per millisecond these days), and consistent performance at any scale.

Currently, AWS Lambda natively supports Java (11), Go (1.x), C# and PowerShell (.NET Core 3.1), Node.js (14), Python (3.9), and Ruby (2.7). Up-to-date information on runtime versions is available here. If you have other requirements, there is also the Runtime API for building custom runtimes.
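To make the runtime model concrete, here is a minimal sketch of a Python function as Lambda expects it: the runtime invokes a handler with an event payload and a context object, and serializes whatever the handler returns. The handler and payload shown are illustrative, not from the article.

```python
import json

def handler(event, context):
    """Minimal Lambda handler: the Python runtime calls
    handler(event, context) and serializes the return value."""
    # event: the JSON payload from the trigger (shape depends on the source)
    # context: runtime metadata (request ID, remaining time, etc.)
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

You would point the function's handler setting at `<module>.handler` when deploying.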

What do I need to keep in mind when dealing with Lambdas?

Concurrent executions: 1,000 by default; can be raised to hundreds of thousands on request.
Things to keep in mind: this limit is per region, per account. Monitor it with an alert at 80% of the limit (adjust to your needs), for two reasons. One, you won't get caught with your pants down when your application starts failing because the limit wasn't raised to keep up with growth. Two, if a Lambda spins out of control (for example, through accidental self-invocation), it can claim all available concurrency, get in the way of other services, and possibly leave a nasty surprise on the AWS invoice at the end of the month.

Be aware that if you use provisioned or reserved concurrency, that amount is subtracted from the total concurrency available to the other functions in that region.

→ Useful links:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/
https://itnext.io/the-everything-guide-to-lambda-throttling-reserved-concurrency-and-execution-limits-d64f144129e5
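The regional limits above can be checked programmatically via the Lambda `GetAccountSettings` API. The sketch below (using boto3) fetches the total and unreserved concurrency and computes the 80% alert threshold mentioned earlier; the helper names are my own, not an AWS API.

```python
def fetch_account_limits(lambda_client):
    """Return (total, unreserved) regional concurrency via GetAccountSettings."""
    limits = lambda_client.get_account_settings()["AccountLimit"]
    return (limits["ConcurrentExecutions"],
            limits["UnreservedConcurrentExecutions"])

def alarm_threshold(limit, pct=80):
    """Concurrency level at which your monitoring alarm should fire."""
    return limit * pct // 100

# Usage (requires AWS credentials for the target region):
#   import boto3
#   total, unreserved = fetch_account_limits(boto3.client("lambda"))
#   print(total, unreserved, alarm_threshold(unreserved))
```

In practice you would wire the threshold into a CloudWatch alarm on the `ConcurrentExecutions` metric rather than polling it yourself.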

Storage for uploads (function .zip files and layers): 75 GB by default; can be increased to multiple terabytes.
Every function version and layer that you upload consumes space. Besides making sure not to hit the limit, there is also the cost factor, and proper housekeeping keeps the number of deployed versions under control. The best advice is to set up an automated cleaning process. The following post talks about this in more detail:

https://aws.plainenglish.io/clean-up-old-aws-lambdas-the-easy-way-87a55330f93
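A minimal sketch of such a cleanup, assuming you simply keep the newest N numbered versions per function. The selection logic is pure Python; the boto3 calls (`list_versions_by_function`, `delete_function`) are shown as usage, with a placeholder function name.

```python
def versions_to_delete(versions, keep=3):
    """Given version strings from ListVersionsByFunction, return the ones
    older than the newest `keep`. '$LATEST' is never deleted; numbered
    versions are sorted newest-first by their integer value."""
    numeric = sorted((v for v in versions if v != "$LATEST"),
                     key=int, reverse=True)
    return numeric[keep:]

# Usage (requires AWS credentials; "my-function" is a placeholder):
#   import boto3
#   lam = boto3.client("lambda")
#   pages = lam.get_paginator("list_versions_by_function") \
#              .paginate(FunctionName="my-function")
#   versions = [v["Version"] for page in pages for v in page["Versions"]]
#   for old in versions_to_delete(versions):
#       lam.delete_function(FunctionName="my-function", Qualifier=old)
```

A real cleanup should additionally skip any version still referenced by an alias; this sketch does not check that.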

ENIs per VPC: 250 by default; can be raised on request.
When running your Lambdas in a VPC, you have to think about proper sizing for the subnets they run in. Even though AWS has made ENI usage more efficient (by sharing ENIs across concurrent executions of the same function), the number of unique Lambda functions you expect to deploy dictates the sizing. If you deploy 300 unique functions and all are invoked simultaneously, that creates 300 unique ENIs, and your subnet size needs to allow for that. I also like to take full advantage of the number of AZs in the region. In my case, most resources are in eu-west-1, which has 3 AZs, so my Lambda subnets need to account for that maximum of 300 ENIs in each AZ. You can arrange this, for example, by adding a nice large additional CIDR block, dedicated to running Lambdas, to your VPC.
One thing to keep in mind is that this limit is shared with other AWS services, for example EC2 and ELBs.
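The subnet math above can be sketched as a small helper: find the smallest subnet prefix whose address count covers the expected ENIs plus the 5 addresses AWS reserves in every subnet. For the 300-ENI example, that works out to a /23 (512 addresses) per AZ.

```python
import math

def smallest_prefix(required_ips, aws_reserved=5):
    """Smallest IPv4 subnet prefix length whose total address count
    covers required_ips plus the 5 addresses AWS reserves per subnet."""
    total = required_ips + aws_reserved
    host_bits = math.ceil(math.log2(total))
    return 32 - host_bits

# 300 ENIs per AZ -> 305 addresses needed -> /23 (512 addresses)
```

With 3 AZs you would carve three such subnets out of the dedicated CIDR block.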

Overview of significant numbers:
Memory: maximum 10 GB per function (which gives access to up to 6 vCPUs), configurable in 1 MB increments
Timeout: 900 seconds (15 minutes)
Layers: 5 per function
Burst concurrency: 3000 (us-west-2, us-east-1, eu-west-1) | 1000 (ap-northeast-1, eu-central-1, us-east-2) | 500 (all other regions)
Invocation payload: 6 MB synchronous / 256 KB asynchronous
Deployment package: 50 MB zipped / 250 MB unzipped / 3 MB via the console editor
Container image: 10 GB
/tmp storage: 512 MB
File descriptors: 1024
Execution processes/threads: 1024

Lambda is designed to integrate with more and more other services. This allows functions to be triggered, for example, in response to incoming HTTP requests, an event like S3 PutObject, messages consumed from a queue, or simply on a schedule using EventBridge.
Here’s a list of services that integrate with Lambda: Lambda event sources
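As an illustration of such an integration, here is a sketch of a handler for an S3 PutObject trigger. The event shape follows the documented S3 notification format; note that object keys arrive URL-encoded, so they need decoding before use.

```python
import urllib.parse

def handle_s3_event(event, context):
    """Sketch of a handler for S3 object-created notifications:
    collect (bucket, key) pairs from each record in the event."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 delivers keys URL-encoded ('+' for spaces, %xx escapes)
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))
    return results
```

A real handler would go on to fetch or process each object; this sketch only shows how the trigger's payload is consumed.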

Where to go next to learn more about Lambda? Here's a curated list of resources that are an excellent place to start.

→ Videos to watch:
Introduction to AWS Lambda & Serverless Applications
A serverless journey: AWS Lambda under the hood
Deep dive into AWS Lambda security: Function isolation
Understanding AWS Lambda streaming events
AWS Lambda networking best practices

→ Further reading:
AWS Lambda documentation
AWS news blogs on Lambda

Learn more about the other services in AWS’s Serverless stack.
