Understanding in depth how AWS Lambda layers work

arturoth
7 min read · Oct 31, 2023


This story is about understanding how Lambda layers work in AWS, because I have repeatedly found both knowledge and gaps on developer teams that are important to highlight.

First, the concept

A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files.

This image is from https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html

In short, layers enrich the function by adding a component that extends its capabilities. Does this remind you of something? Of course: sidecar containers.
So, the concept of sidecars is the following:

In 2015, sidecars were described in a blog post about composite containers as additional containers that “extend and enhance the ‘main’ container”. Sidecar containers have become a common Kubernetes deployment pattern and are often used for network proxies or as part of a logging system.

This image is from https://spacelift.io/blog/kubernetes-sidecar-container

Second, comparison of the concepts

In practice, sidecars and layers work differently: sidecars serve more functions, and indeed a poorly implemented sidecar can degrade the experience of serving the main container. A layer that overloads the function, however, can quickly hit the limits of the AWS Lambda service, which is important to consider (see AWS Lambda quotas).

So, the two models have their distances and proximities, but it is important to note that a malfunction in either one causes disruption.

Now, when we talk about AWS layers for Lambda, it is normally to complement information, improve the execution experience, or simply reference libraries. For example, in Python we have AWS Lambda Powertools, a suite of utilities that enhances development for AWS Lambda in Python and adds very interesting third-party capabilities.
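Before referencing a library from a layer, it has to be packaged the way Lambda expects: for Python runtimes, dependencies must sit under a `python/` prefix inside the layer zip so that they land in `/opt/python`, which is on the import path. Below is a minimal packaging sketch using only the standard library; the `mylib` module is a made-up stand-in for whatever dependency you would ship.

```python
import tempfile
import zipfile
from pathlib import Path

def build_layer_zip(package_dir: str, zip_path: str) -> list:
    """Package a directory of Python dependencies as a Lambda layer zip.

    Lambda extracts layer archives under /opt; for Python runtimes,
    /opt/python is on the import path, so every file is prefixed with
    python/ inside the archive.
    """
    names = []
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in sorted(Path(package_dir).rglob("*")):
            if file.is_file():
                # Prefix with python/ so the file lands in /opt/python
                arcname = f"python/{file.relative_to(package_dir)}"
                zf.write(file, arcname)
                names.append(arcname)
    return names

# Example: stage a fake one-module dependency and package it
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp, "src")
    (src / "mylib").mkdir(parents=True)
    (src / "mylib" / "__init__.py").write_text("VERSION = '1.0'\n")
    entries = build_layer_zip(str(src), str(Path(tmp, "layer.zip")))
    print(entries)  # ['python/mylib/__init__.py']
```

The resulting zip is what you would pass to `aws lambda publish-layer-version`.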

So, what can go wrong?

Third, landing the concept of layers

According to the article “Working with AWS Lambda and Lambda Layers in AWS SAM”, a simple image-based model of the concept would be as follows.

This image is from https://aws.amazon.com/blogs/compute/working-with-aws-lambda-and-lambda-layers-in-aws-sam/

This directly implies that layers are always complementary (the function plus additional layers): every layer needs well-written code, you must understand what each one contributes, and a good execution is ultimately a consequence of good development on both sides.
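That complementary model has a concrete mechanic: Lambda extracts each layer archive into `/opt` in the order the layers are listed, and when two layers ship the same path, the file from the later layer wins. A small stand-alone sketch of that merge behavior (the layer contents here are invented for illustration):

```python
import tempfile
from pathlib import Path

def merge_layers(opt_dir, layers):
    """Simulate Lambda extracting layer archives into /opt.

    Layers are extracted in order; when two layers ship the same
    relative path, the file from the later layer overwrites it.
    """
    for layer in layers:                       # extraction order matters
        for rel_path, content in layer.items():
            target = opt_dir / rel_path
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(content)         # later layers overwrite
    return {str(p.relative_to(opt_dir)): p.read_text()
            for p in opt_dir.rglob("*") if p.is_file()}

# Two hypothetical layers shipping an overlapping file
layer_a = {"python/util.py": "VERSION = 'layer-a'"}
layer_b = {"python/util.py": "VERSION = 'layer-b'", "bin/tool": "#!/bin/sh"}

with tempfile.TemporaryDirectory() as tmp:
    merged = merge_layers(Path(tmp), [layer_a, layer_b])
    print(merged["python/util.py"])  # layer_b's copy wins
```

This is why the order of layer ARNs on a function is not cosmetic: a later layer can silently shadow a file from an earlier one.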

Four, how do AWS layers work?

The model is simple: each layer uses the available runtime and initializes its capabilities, then the invocation process runs the invocation as such. When it concludes, the extension is shut down and there is a final window before the invocation fully ends.
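The key consequence of that lifecycle is that everything at module scope (where a layer's libraries are imported) runs once per execution environment, during the init phase, while the handler body runs on every invocation. A minimal sketch, with `time.sleep` standing in for a heavy layer import:

```python
import time

# Module scope runs once per execution environment (the "init" phase).
# Layers and their imports load here, which is why a heavy layer
# inflates cold-start latency but not warm invocations.
_init_started = time.perf_counter()
time.sleep(0.05)                       # stand-in for layer/SDK imports
INIT_MS = (time.perf_counter() - _init_started) * 1000

def handler(event, context=None):
    """Per-invocation work only; the init cost is not paid again."""
    start = time.perf_counter()
    result = {"greeting": f"Hello {event.get('name', 'World')}"}
    result["invoke_ms"] = round((time.perf_counter() - start) * 1000, 2)
    result["init_ms"] = round(INIT_MS, 2)  # same on every warm invocation
    return result

first = handler({"name": "cold"})
second = handler({"name": "warm"})
print(first["init_ms"] == second["init_ms"])  # True: init ran once
```

On real Lambda, that init time is what shows up as "Init Duration" in the invocation report, and it only appears on cold starts.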

How two layers start and use time during processing.

Five, so, what happens if my code is very fast at invocation time but a turtle during “cold starts”?

Here we need to land a very important concept, the “cold start”; the following block briefly explains how it works.

A cold start occurs when an AWS Lambda function is invoked after not being used for an extended period of time resulting in increased invocation latency. One of the interesting observations was that functions are no longer recycled after 5 minutes of inactivity — which makes cold starts far less punishing.

Therefore everything indicated in the previous point matters: good development, code construction, references, and libraries that avoid redundancy are all very important to deliver a performant cold start, and to keep a function from falling into lethargy when it tries to start.

Six, now, how do I prove which layer is the “guilty” one?

Now, to find the guilty layer you need to understand how the projects are built, as well as the code, libraries, and CI stages that precede the final deployment. If you do not understand that, you end up in the typical “the fault is yours” dilemma.

Classic Spider-Verse meme

Seven, hands on and debugging

In this case, I will use the New Relic integration (“Layers for New Relic”) as a layer to send telemetry about a Lambda and its invocations, where the objective is to determine the actual start and how long the cold start takes.

So, we have at our disposal:

  • A deployment of a service with the “nodejs14.x” and “python3.9” AWS Lambda runtimes, in which we will deploy a simple “Hello World”.
  • A layer associated with the New Relic model, in this case for region “us-east-1”: arn:aws:lambda:us-east-1:451483290750:layer:NewRelicNodeJS14X:118 and arn:aws:lambda:us-east-1:451483290750:layer:NewRelicPython39:51.

First of all, how do I know if my Lambda is in a cold start?

You can check the number of times your function is invoked using the ProvisionedConcurrencySpilloverInvocations CloudWatch metric; a non-zero value indicates that all provisioned concurrency is in use and some invocations ran with a cold start. Also check your invocation frequency (requests per second). In New Relic you can get the data from the AwsLambdaInvocation and AwsLambdaInvocationError event tables via the aws.lambda.coldStart attribute.
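Once you export those invocation events, computing a cold-start rate is a one-liner over the records. A minimal sketch, assuming the records carry a boolean `aws.lambda.coldStart` attribute as New Relic attaches it (the sample data below is invented for illustration):

```python
def cold_start_ratio(invocations):
    """Fraction of invocation records flagged as cold starts.

    Assumes records exported with an 'aws.lambda.coldStart' boolean,
    as on New Relic's AwsLambdaInvocation events; records without the
    attribute count as warm.
    """
    if not invocations:
        return 0.0
    cold = sum(1 for inv in invocations if inv.get("aws.lambda.coldStart"))
    return cold / len(invocations)

# Hypothetical sample: one cold start out of four invocations
sample = [
    {"duration.ms": 542.46, "aws.lambda.coldStart": True},
    {"duration.ms": 96.1},
    {"duration.ms": 95.8},
    {"duration.ms": 97.2},
]
print(cold_start_ratio(sample))  # 0.25
```

A rising ratio under steady traffic is a hint that execution environments are being recycled faster than your invocation frequency can keep them warm.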

Case 1: understanding a “cold start” invocation with the layer

  • In the Node.js case, the real duration excluding the cold start is near 96 ms.
The start of processing can begin after the 448 ms cold-start duration.
In this case, execution takes 542.46 ms with a 448.34 ms cold start, for an actual execution of about 96 ms.
Comparative table when a cold-start event executes with the New Relic layer.
  • In the Python case, the real duration excluding the cold start is near 170 ms.
The start of processing can begin after a roughly 922 ms cold-start duration.
In this case, execution takes 923 ms with a 755.91 ms cold start, for an actual execution of about 170 ms.
Comparative table when a cold-start event executes with a light New Relic layer.

Case 2: understanding an invocation executed before a “cold start” event

  • In the Node.js case without the layer, a classic start takes 51.76 ms to respond.
In the Node.js case, when the layer is removed the duration is 51.76 ms.
  • In the Python example, the difference without the layer is that a classic start takes 1.45 ms to respond.
When the layer is removed, the duration is 1.45 ms.
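Putting the two cases side by side, the per-invocation overhead attributable to the layer can be estimated by subtracting the warm durations measured with and without it. A quick calculation using the approximate figures reported above:

```python
# Approximate warm-invocation durations reported above (milliseconds)
measurements = {
    "nodejs": {"with_layer": 96.0, "without_layer": 51.76},
    "python": {"with_layer": 170.0, "without_layer": 1.45},
}

for runtime, ms in measurements.items():
    # Difference between warm runs with and without the New Relic layer
    overhead = ms["with_layer"] - ms["without_layer"]
    print(f"{runtime}: layer adds ~{overhead:.2f} ms per warm invocation")
```

So even outside of cold starts, the telemetry layer costs roughly 44 ms per Node.js invocation and roughly 169 ms per Python invocation in these measurements, which is the kind of number you should weigh against the value the layer provides.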

Conclusion of the cases

The New Relic layer is definitely a factor, but both in a cold start and in a start with warming already active it does not exceed 700 ms. A cold start will always be a cold start; you can improve performance, but you should never forget it, nor everything that adds up afterwards (libraries, building the application itself, referencing, among others). There is real responsibility in a correct construction of the implementation, remembering that it will later be an invocation that can initialize either warm or cold. You also cannot set aside Lambda@Edge: if you are serving capabilities on a CloudFront distribution, it will give you an additional boost and take you out of a pure warming model.

Conclusion

To go in depth on implementing a layer, first consider that your layer should contribute meaningfully: it should be an integration that adds value and does not overload your implementations. Add to that the low-weight factor and its relationship to the runtime that will be used, and you already have three factors. If you then add Lambda's own limits, remember that these apply regardless of your implementation: Lambdas have their use cases, and in other cases you should simply reconsider whether a serverless function is the way to go.

Don’t forget that the readiness and execution time of a cold-start invocation also depend on the compute that AWS makes available, and if you find yourself running through the layer, consider all of the factors above.

In short, consider the layer you will use and also take your implementation into account: for a Lambda function to fulfill its task, weigh all the factors before implementing, because the wonderful world of serverless gives to us at the expense of these important details.

References

Does coding language, memory or package size affect cold starts of AWS Lambda?

Does AWS Lambda keep your idle function around before a cold start?

Best practices for working with AWS Lambda functions
