Lambda Cold Starts: A Language Comparison


Going serverless is more lucrative than ever, but the caveats of serverless keep growing. Avoiding cold starts is an important part of delivering a snappy user experience, and choosing the right language can help you achieve this!
Cold Start?
A “cold” start in the serverless world is the first time in some set period that a request asks for your code to be executed. Because you are in the serverless world and not paying for server time, your functions, or “lambda code”, are only executed on a per-request basis.
But not quite: your lambda functions are actually deployed into a container for you and have a certain time to live. While this container is “alive” your code doesn’t have to reinitialize and responses are orders of magnitude faster; the function is considered “hot”. This is serverless: AWS handles the abstraction of managing servers, containers & scaling. Typically your code has 30–45 minutes to live!
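You can observe this container reuse yourself. Here is a minimal sketch of a Python handler (the log wording and variable names are my own) that reports whether the current invocation paid the cold-start cost:

```python
import time

# Module-level code runs once per container (the "cold" start).
# On subsequent "hot" invocations the container is reused, so this
# timestamp and counter survive between requests.
CONTAINER_STARTED_AT = time.time()
invocation_count = 0

def lambda_handler(event, context):
    global invocation_count
    invocation_count += 1

    # First invocation in this container means we just paid the cold-start cost.
    cold = invocation_count == 1
    print(f"cold={cold} container_age={time.time() - CONTAINER_STARTED_AT:.1f}s")

    return {"statusCode": 200, "body": "hello world"}
```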
Below is a request trace of the exact same lambda function run twice.


You can see in the trace that it takes roughly 650ms before AWS::Lambda hands off to my function code, AWS::Lambda::Function, and on top of that an initialization period is required. This is what we call a cold start!


The “cold” function is over 50x slower than the “hot” function. What’s more, these are test functions that simply return “hello world”; functions that have dependencies or actually perform a useful calculation could fare even worse.
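If you want to capture traces like these for your own functions, X-Ray tracing can be switched on per function. A minimal sketch using boto3 (the function name here is a placeholder):

```python
import boto3

# Enable active X-Ray tracing so invocations produce traces like the
# ones shown above. "hello-world" is an illustrative function name.
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="hello-world",
    TracingConfig={"Mode": "Active"},
)
```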
Methodology
I decided to test all latest languages, Nodejs8, c#.net2, Java8, Go1.x & Python3. I created each respective lambda function from from the console, leaving the default âhello worldâ log. I did this using 128mb, 1024mb & 3008mb of memory respectively. I then created three step functions that triggered all of the jobs for each memory group and configured a cloudwatch schedule to trigger it every hour. I came back to AWS X-Ray at a minimum of 7 hour intervals, recording the averages it had observed over the last 6 hours.
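If you wanted to script one of those trigger passes directly rather than via Step Functions, it would look roughly like this sketch (the function naming scheme is illustrative):

```python
import boto3

lambda_client = boto3.client("lambda")

# One deployment per language per memory size, named for illustration.
LANGUAGES = ["node8", "dotnet2", "java8", "go1x", "python3"]
MEMORY_SIZES = [128, 1024, 3008]

for mem in MEMORY_SIZES:
    for lang in LANGUAGES:
        # Fire-and-forget invoke; X-Ray records the timings for us.
        lambda_client.invoke(
            FunctionName=f"coldstart-{lang}-{mem}mb",
            InvocationType="Event",
        )
```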
Results
The memory split is important here: as the memory allocated to a lambda function increases, so does its allotted CPU time, linearly.
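You can verify that scaling yourself with a CPU-bound handler like this sketch (the workload is arbitrary), deployed unchanged at each memory size:

```python
import time

def lambda_handler(event, context):
    # A fixed CPU-bound workload: deploy this same code at 128 MB,
    # 1024 MB and 3008 MB and compare the reported durations to see
    # how CPU scales with the memory setting.
    start = time.time()
    total = 0
    for i in range(10_000_000):
        total += i * i
    return {"duration_ms": round((time.time() - start) * 1000), "total": total}
```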


It’s pretty evident that Python is the power player here, beating most languages in every category even with just 128 MB of memory.
The move to 1024 MB of memory saw Go, Node.js & Java decrease by 25%, whilst Python and .NET saw a drop of 30%. Going from the 128 MB baseline to 3008 MB saw Python, Go, .NET & Java decrease by around 42–45%, whilst Node.js only saw 37%.


I was surprised to see Go’s cold start times similar to those of Node.js. My best guess is that the language’s maturity on the platform is coming into play: Go support was only released in January 2018, so you could speculatively expect to see further improvements.


A word on pricing
Lambda is priced entirely around GB-seconds, so it’s a balance of minimizing run time and resources.
Running a lambda function roughly 70 times per minute (100,000 times a day, or 3 million requests per month) for an average of 0.5 seconds each would equate to:

Take this with a grain of salt, as depending on your workload, increasing your memory (and thus available CPU) can significantly decrease running time. As Jim Conning eloquently put it, “Faster is cheaper”!
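To make those numbers concrete, here is a back-of-the-envelope calculation at the published pay-per-use rates ($0.20 per million requests and $0.0000166667 per GB-second), ignoring the free tier:

```python
# Rough monthly Lambda cost for the scenario above: 3M requests at
# an average of 0.5s each, across the three memory sizes tested.
REQUESTS_PER_MONTH = 3_000_000
AVG_DURATION_S = 0.5
PRICE_PER_GB_S = 0.0000166667
PRICE_PER_MILLION_REQS = 0.20

for memory_mb in (128, 1024, 3008):
    gb_seconds = REQUESTS_PER_MONTH * AVG_DURATION_S * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_S
    request_cost = REQUESTS_PER_MONTH / 1_000_000 * PRICE_PER_MILLION_REQS
    print(f"{memory_mb} MB: ${compute_cost + request_cost:.2f}/month")
```

This prints roughly $3.73, $25.60 and $74.04 per month, which is why halving your run time by doubling memory can be a net win.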
Real World
Here’s a real world example of why cold starts are a key consideration when building with lambda functions. Below is the demo from my previous blog post, where I built a serverless app using a Node.js function configured with 512 MB.


The cold request takes 1.8s whilst the hot request takes 281ms, a more than 6x increase. Imagine having your services highly decomposed over lambda functions; waiting 2–3s for responses is not fun. Luckily Yan Cui has a great article on warming lambda functions.
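The usual warming trick is a scheduled CloudWatch Events rule that pings the function every few minutes with a marker payload, which the handler short-circuits. A rough sketch (not Yan’s exact implementation; the “warmer” key is my own convention):

```python
def lambda_handler(event, context):
    # A scheduled CloudWatch Events rule invokes us every few minutes
    # with {"warmer": true}, keeping the container alive. Real requests
    # don't carry the marker and fall through to the normal logic.
    if event.get("warmer"):
        return {"warmed": True}

    # ... normal request handling ...
    return {"statusCode": 200, "body": "hello world"}
```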


Thanks for having a read of my article, and if you liked it please be sure to clap!