Warming AWS Lambdas based on Navigation Context

Ram Rajagopalan
Published in dtlpub · 3 min read · Sep 27, 2019

A slightly different approach to the Cold Start problem

Photo by Joseph Pearson on Unsplash

There are numerous articles out there about cold starts and how to overcome them or reduce the time they take. This article assumes that the reader knows what a cold start is in the AWS Lambda world. The intention of this article is to consolidate some of the common approaches to reducing cold starts and to show how to take advantage of navigational context to warm Lambdas.

As we develop more applications on serverless platforms, and as more services continue to emerge, we have adapted and learnt ways to keep our Lambdas warm based on application needs.

As this technology continuously evolves, so do the ways to address cold starts. Currently, the following approaches are common:

1. Using Layers and reducing the package size:

Dependencies add to the package size, resulting in increased cold start time. Moving some of those large dependencies into Lambda Layers reduces the time it takes for the main Lambda to bootstrap an instance.
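Attaching a pre-published Layer to a function can be done with boto3. A minimal sketch, assuming the Layer has already been published and passing the client in explicitly (the function name and ARN below are illustrative):

```python
# Attach a pre-published Lambda Layer to a function so the heavy
# dependencies live outside the main deployment package.
def attach_layer(lambda_client, function_name, layer_arn):
    """Add layer_arn to the function's layer list (idempotent)."""
    config = lambda_client.get_function_configuration(FunctionName=function_name)
    layers = [layer["Arn"] for layer in config.get("Layers", [])]
    if layer_arn not in layers:
        layers.append(layer_arn)
        lambda_client.update_function_configuration(
            FunctionName=function_name,
            Layers=layers,
        )
    return layers
```

In practice you would call this with `boto3.client("lambda")`; taking the client as a parameter just keeps the sketch easy to test.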

2. Increasing the memory of the instance:

Sometimes there can be a slight increase in speed when you increase the memory of the instance (depending on a few other factors), though the gain is not always significant, and it can turn out to be expensive.

3. Lambda within VPC or Outside VPC:

Lambdas inside VPCs are sometimes slower because of the creation of Elastic Network Interfaces (ENIs). With the new Lambda execution environments (AWS is rolling out updates), ENIs will be shared across invocations, which will significantly reduce startup times.

4. Cron Jobs:

Finally, to keep Lambdas warm, people have written their own cron jobs using CloudWatch scheduled events to invoke the Lambda every 12 minutes or so.
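The handler side of this pattern is a short-circuit at the top of the function: detect the scheduled ping and return before any business logic runs. A minimal sketch (CloudWatch scheduled events arrive with `"source": "aws.events"`; the response shapes are illustrative):

```python
def handler(event, context):
    # CloudWatch scheduled events carry source "aws.events".
    # Bail out early so the warm-up ping costs almost nothing.
    if event.get("source") == "aws.events":
        return {"warmed": True}

    # ... normal request handling continues below ...
    return {"statusCode": 200, "body": "real work"}
```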

Contextual Warming of Lambda

Keeping a bunch of Lambdas warm for the entire day might turn out to be expensive, especially for an organisation with multiple serverless applications. So I decided to venture into the option of warming them on demand, based on the context of the user.

Our application did not need the cron-job warmers for most of the Lambdas, the reason being that we actually knew which Lambda would be called, and when.

Let's take the diagrammatic example below:

Contextual Warmers

This is a schematic representation of a simple frontend application whose Lambda APIs are built to cater to users from various geographic locations.

The only Lambda that is always kept warm is the Authorisation Lambda, for obvious reasons. But once we know the context of the logged-in user, we invoke the specific Lambdas to warm them, so that users experience uninterrupted service.

As seen in the illustration above, Lambdas A and B are warmed only after we know that a user from the North has logged in. The same applies to Lambdas C and D. We also have scenarios where common Lambdas (in this example, Lambda E) are used by both sets of users.
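The warming step itself can be a handful of asynchronous invocations fired right after login. A sketch of the idea, assuming the region-to-function mapping and function names below (all illustrative), and a warm-up payload the target handlers short-circuit on:

```python
import json

# Illustrative mapping from the user's region to the Lambdas
# that user is about to need.
REGION_FUNCTIONS = {
    "north": ["lambda-a", "lambda-b", "lambda-e"],
    "south": ["lambda-c", "lambda-d", "lambda-e"],
}

# Target handlers are written to return early on this payload.
WARMUP_EVENT = {"source": "warmup"}

def warm_for_region(lambda_client, region):
    """Fire-and-forget invoke every Lambda the region's users will hit."""
    warmed = []
    for name in REGION_FUNCTIONS.get(region, []):
        lambda_client.invoke(
            FunctionName=name,
            InvocationType="Event",  # async: don't wait for the result
            Payload=json.dumps(WARMUP_EVENT),
        )
        warmed.append(name)
    return warmed
```

`InvocationType="Event"` keeps the login path fast: the Authorisation Lambda queues the warm-up invocations and returns without waiting on them.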

Please note: if you have the Lambdas within a VPC, you need a NAT Gateway so that the warming Lambda can make external calls.

There must be lots of different strategies out there that people use. Please comment with some of your adaptations.

Sr. Development Consultant at Digital Transformation