On The Serverless Cold Start Problem

Krish
Published in StackSense
Jun 11, 2019

One of the biggest complaints against Functions as a Service (FaaS) offerings like AWS Lambda, Google Cloud Functions, Azure Functions, IBM Cloud Functions, etc. is the cold start problem. A cold start is the delay between the invocation of a function and the start of its execution. Behind the scenes, FaaS platforms use containers to encapsulate and execute functions. When a user invokes a function, the platform keeps the container running for a certain period after the execution completes (warm), and if another request arrives before the container shuts down, it is served immediately. A cold start is the time it takes to bring up a new container instance when no warm container is available to serve the request.
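To make the warm/cold distinction concrete, here is a minimal sketch of an AWS Lambda-style Python handler (the return shape and variable names are illustrative, not from any provider's documentation). Module-level code runs once per container, so it executes only on a cold start; warm invocations reuse the container and skip it.

```python
import time

# Module-level code runs once per container, i.e. only on a cold start.
# Warm invocations reuse the container and skip this initialization.
CONTAINER_STARTED_AT = time.time()
invocation_count = 0

def handler(event, context):
    # Standard AWS Lambda Python handler signature.
    global invocation_count
    invocation_count += 1
    return {
        # True only for the first invocation this container serves.
        "cold_start": invocation_count == 1,
        "container_age_seconds": time.time() - CONTAINER_STARTED_AT,
    }
```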

Most FaaS providers have cold starts of 1–3 seconds, which can have a dramatic impact on latency-sensitive applications. Cold start latency varies by cloud provider and by programming language. Though it is almost a year old, this benchmark study shows the cold start latency of various FaaS offerings.
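A rough way to observe this yourself is to time two back-to-back invocations after the function has sat idle: the first call typically pays the cold start penalty, while the second hits a warm container. The sketch below assumes a hypothetical function named `cold-start-probe` deployed in your own AWS account; note the timing includes network round-trip, not just container startup.

```python
import time
import boto3

# Hypothetical function name; replace with a function deployed in your account.
FUNCTION_NAME = "cold-start-probe"
client = boto3.client("lambda")

def timed_invoke():
    start = time.monotonic()
    client.invoke(FunctionName=FUNCTION_NAME, Payload=b"{}")
    return time.monotonic() - start

# After a long idle period, the first call usually includes a cold start;
# the immediate second call usually lands on the now-warm container.
print(f"first call:  {timed_invoke():.2f}s")
print(f"second call: {timed_invoke():.2f}s")
```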

In the 2018 Serverless Community Survey, developers cited cold start latency as their third biggest concern. Rishidot Research has been talking to various enterprise customers about their serverless adoption plans, and we hear a lot about cold starts. At Rishidot Research, we feel the concerns regarding cold starts are overblown, and we explain our rationale in this blog post.

The cold start problem is overblown for several reasons. First and foremost, users should understand that while FaaS is maturing fast, it is not suitable for every workload. It meets the needs of event-driven functions, but for most other workloads containers are a better fit. It is also important for users to understand that the low cost of the service comes from the fact that FaaS providers need not run infrastructure in anticipation of use and can shut down unused resources in a fine-grained way. These resource efficiencies translate into cost savings for customers, and users accept this tradeoff when they pick FaaS as their application platform. IBM is using stem cell containers to cut down on cold starts, and platforms like OpenFaaS give users control over how they want to use resources. In fact, users could sidestep the cold start problem by embracing serverless container platforms like Spotinst Ocean, which offers considerable savings by taking advantage of Spot Instances. Most cloud providers will eventually find a way to solve this problem; in the meantime, the concerns regarding cold starts are overblown.
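For workloads that must stay on FaaS today, another common workaround is a scheduled "warmer" ping that keeps a container alive. The sketch below assumes a CloudWatch Events rule configured to invoke the function every few minutes with a payload like `{"warmer": true}`; the payload key is illustrative, not a platform convention.

```python
def handler(event, context):
    # Short-circuit scheduled warm-up pings (assumed payload: {"warmer": true})
    # so they keep the container alive without doing any real work.
    if isinstance(event, dict) and event.get("warmer"):
        return {"warmed": True}

    # ... normal request handling goes here ...
    return {"status": "ok"}
```

Note that keeping a single container warm this way does not prevent cold starts under concurrent load, which is part of why we view scheduled warmers as a stopgap rather than a fix.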

We strongly recommend that users consider the full continuum of services, from containers to serverless containers to services like Google Cloud Run to FaaS. Taking a binary Kubernetes-vs-FaaS approach is shortsighted and will not help your organization use its resources optimally.
