Holy Grail of Solutions: Solving .NET Lambda Cold Start Part II

Sarjeel Yusuf
7 min read · Apr 18, 2019


The .NET framework is undoubtedly a powerful tool for building applications. Incorporating it on serverless platforms unlocks a myriad of architectural and cost-related benefits. However, the serverless ambitions of .NET developers are often dashed by unendurable cold start durations. This is mainly attributed to how .NET assemblies are JIT-compiled, which, among other issues, puts a lot of stress on serverless containers as they start up. Nevertheless, there are solutions to this overbearing problem, and that is the purpose of this piece: to highlight the solutions any .NET developer can implement to drastically reduce cold start durations and, ideally, even eliminate them completely.

There are several steps and practices that you can adopt to ensure the lowest cold start durations possible. These steps vary depending on the stage of the Lambda lifecycle at which they can be performed. Hence it is important to understand how a Lambda function evolves throughout its lifecycle, and which practices can be adopted at each stage. The lifecycle of Lambda development can be broken down into the following three phases:

Authoring code — The actual programming of your Lambda function, including defining which .NET assemblies will be used and how they will be used.

Uploading and creating the Lambda function — Packaging and deploying your code to AWS Lambda and deciding where execution begins. This stage also involves setting compute parameters such as memory and timeout values.

Monitoring and troubleshooting — The stage where the Lambda is in production and generating relevant metrics regarding performance.

Cold starts can be diminished throughout all three phases of the Lambda lifecycle. These practices and solutions are listed below.

Reducing Set-Up Variables

When the serverless container is being prepared, it sets up the static variables and associated components of a statically typed language such as C#. Hence one of the easiest and quickest practices to implement is to ensure there are no unnecessary static variables or classes. This solution belongs to the first phase of the Lambda lifecycle and should definitely be practiced throughout the development of the .NET Lambda.
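As a minimal, hypothetical sketch of the practice, the handler below defers the construction of a heavy SDK client with Lazy&lt;T&gt; rather than initializing it eagerly in a static field; the class and client names are illustrative:

```csharp
using System;
using Amazon.DynamoDBv2;

public class Function
{
    // An eagerly initialized static would run while the container is being
    // prepared, inflating the cold start:
    // private static readonly AmazonDynamoDBClient Client = new AmazonDynamoDBClient();

    // Lazy<T> defers construction until an invocation actually needs it.
    private static readonly Lazy<AmazonDynamoDBClient> Client =
        new Lazy<AmazonDynamoDBClient>(() => new AmazonDynamoDBClient());

    public string FunctionHandler(string input)
    {
        // The client is only constructed here, on first use.
        var client = Client.Value;
        return $"Requests would go through {client.GetType().Name}";
    }
}
```

Note that this shifts the construction cost from container set-up to the first invocation that touches the client, which is still a win whenever some invocations never need it.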

The impact this solution has on cold start durations is dependent on the initial function size and the complexity of your .NET project. Do not expect to see significant gains in performance simply by removing a handful of static variables. Unfortunately, the impact of avoiding a single static variable is largely insignificant, and the scale at which the practice must be applied for notable reductions definitely transcends the scope of the average Lambda. Nevertheless, this is the simplest solution and should unequivocally be adopted.

Use Dotnet CLI to Package and Upload

This is a primary solution to be practiced in the second phase of the Lambda lifecycle. One of the powerful features of developing .NET on AWS is the Dotnet CLI. Using the CLI to upload your function to AWS Lambda ensures that the deployment package is optimized: unused assemblies and other components are left out, keeping the package as small as it should be.

There are countless other benefits to the Dotnet CLI, such as templating and working outside of Visual Studio, but it also deserves a mention when tackling cold starts. It allows quick and easy development, and exploring its usage is highly recommended.
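As a rough sketch of the flow, the commands below use the Amazon.Lambda.Tools global tool, which provides the dotnet lambda commands; the function and package names are placeholders:

```sh
# One-time install of the global tool that adds the "dotnet lambda" commands.
dotnet tool install -g Amazon.Lambda.Tools

# Package the project into a trimmed deployment zip.
dotnet lambda package --configuration Release --output-package my-function.zip

# Or package and deploy in a single step.
dotnet lambda deploy-function MyDotnetFunction
```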

Memory and CPU Configurations

When uploading your code to AWS Lambda you can also adjust compute settings, such as how much memory to dedicate to the Lambda. The amount of memory dedicated to a Lambda also determines the proportional share of CPU cycles it receives, and this is especially important for .NET. Setting up the .NET environment, with its static initialization and heavier memory footprint, requires greater CPU power. Ensuring that the .NET function gets adequate CPU power directly translates to lower cold start durations. This is exactly what is observed in tests on .NET functions.

Effects of Tuning Memory on .NET Cold Start

Mikhail Shilkov also ran similar tests in August 2018, comparing the effect of memory allocation on various languages across serverless platforms such as AWS Lambda and GCP. One of his tests allowed him to infer that setting up Node.js functions requires far less CPU than setting up .NET functions: he observed that allocating more memory had a much more significant impact on the performance of .NET functions than on Node.js functions. In fact, his results indicate that memory allocation has almost no effect on cold start performance for Node.js functions.

However, increasing memory allocation may also lead to quicker container tear-downs. Allocating more resources to a serverless function makes its container more susceptible to being closed upon inactivity. Balancing the duration of a single cold start against the overall probability of experiencing cold starts is a matter of fine-tuning your Lambda environment, hence the need to constantly monitor the performance of your Lambda function and adjust accordingly. Alex Casalboni, a senior tech evangelist at AWS, stressed the importance of tuning Lambda functions back in 2017 in an informative article that is still relevant today. Casalboni laid out detailed steps on tuning your Lambda functions, and these steps can still be applied today.
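Adjusting the allocation itself is straightforward once you have settled on a value. As an illustrative example with the AWS CLI, where the function name and memory size are placeholders:

```sh
# Raise the memory (and with it the CPU share) allocated to the function.
aws lambda update-function-configuration \
    --function-name MyDotnetFunction \
    --memory-size 1024
```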

Usage of Lambda Layers

Considering that the main problem for .NET Lambda functions is setting up assemblies and jitting them into machine-specific code, avoiding the import of these assemblies at start-up would greatly benefit the performance of your serverless application. A Lambda Layer is a zipped archive that can contain anything from required libraries all the way to custom runtimes. Packaged as a Lambda Layer, these dependencies are uploaded to the Lambda console only once, separately from the serverless function, and do not need to be set up when the serverless container for the Lambda is being created.

In the case of .NET, a layer can comprise all the assemblies that are referenced in the .csproj file. Moreover, these layers can easily be made using the Dotnet tools that AWS provides, and your function would then need to set up few or no assemblies during a cold start. Creating and deploying Layers for .NET functions is very easy thanks to the Dotnet CLI support, and the effect on cold start durations is dramatic.
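As a hypothetical sketch using the same Amazon.Lambda.Tools CLI, the project's NuGet dependencies can be published as a runtime package store layer and referenced at deployment; the bucket, layer, and function names, as well as the layer ARN, are placeholders:

```sh
# Publish the project's dependencies as a runtime package store layer.
dotnet lambda publish-layer MyDotnetLayer \
    --layer-type runtime-package-store \
    --s3-bucket my-deployment-bucket

# Deploy the function against the published layer so those assemblies are no
# longer bundled into the function package itself.
dotnet lambda deploy-function MyDotnetFunction \
    --function-layers arn:aws:lambda:us-east-1:123456789012:layer:MyDotnetLayer:1
```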

Effect of Lambda Layers on .NET Cold Starts

Deploying a .NET function that simply writes to and retrieves data from DynamoDB uses assemblies from the AWS SDK. Pushing those assemblies to Lambda Layers cuts cold start durations by more than half, as seen in the results illustrated above. The results reinforce how much stress .NET assemblies place on starting serverless containers, and also demonstrate the valiant efforts of AWS engineers in perfecting the serverless experience. This is by far the most convenient and beneficial solution that can be employed.

Keeping the Container Warm

This method aims at reducing the frequency of cold starts and, in an ideal world, eliminating them completely. The solution is implemented in the final phase of the Lambda lifecycle and involves periodically sending invocations to the serverless function to simulate perpetual activity.

Deciding the frequency at which to send these warming invocations requires monitoring and fine-tuning. You do not want to bombard your serverless function with too many empty invocations, as that leads to unnecessary costs under the pay-as-you-go model of AWS Lambda. On the other hand, you also do not want a frequency so low that it fails to reduce the recurrence of cold starts. It is difficult to know how long a container is allowed to sit idle before it is closed. Hence implementing the solution means monitoring the regularity of cold starts, increasing the frequency of warming invocations if the rate of cold starts increases, and vice versa.

You can keep your Lambda functions warm by setting up triggers for periodic invocations using the AWS console. By making use of CloudWatch Events you can create rules that trigger your .NET AWS Lambda at a specific rate. The rate you decide on depends on the expected frequency of cold start occurrences, so you would have to monitor your .NET Lambdas periodically.
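On the function side, a common pattern is to have the handler recognize these warming invocations and return immediately. The sketch below assumes the scheduled rule is configured with a constant payload such as {"warmup": true}; the field name is purely a convention of this example, not anything AWS-specific:

```csharp
using Amazon.Lambda.Core;
using Newtonsoft.Json.Linq;

// Registers the Newtonsoft-based Lambda serializer
// (from the Amazon.Lambda.Serialization.Json package).
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

public class Function
{
    public string FunctionHandler(JObject input, ILambdaContext context)
    {
        // Warming invocations carry the agreed-upon marker field;
        // real traffic never does.
        if (input != null && input["warmup"] != null)
        {
            context.Logger.LogLine("Warming invocation received, skipping real work.");
            return "warmed";
        }

        // ... actual business logic goes here ...
        return "handled";
    }
}
```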

Apart from manually setting up triggers and monitoring your Lambda function to conduct the necessary fine-tuning, you could use a monitoring tool. Unfortunately, there are very few monitoring tools out there that can achieve this. In fact, at the time of writing this article, only Thundra.io can effectively monitor .NET Lambdas and keep them warm. With the Thundra monitoring tool, you can easily configure your Lambda warmup and also see whether any cold starts do occur. With the ideal tuning, you could possibly avoid cold starts altogether, and hence not worry about the dreaded .NET latencies. Moreover, you can achieve all of this without the overhead and clutter of CloudWatch Events, thanks to Thundra's environment variable configurations.

Conclusion

Yes, .NET does suffer from longer cold start durations as compared to its counterpart runtimes. However, just like the unique properties of .NET that result in these cold starts, there are unique solutions specific to the runtime that you can implement to overcome the problem. There are other practices that you can employ, such as avoiding VPCs when not needed, but they are not as effective and viable as the solutions specifically targeted for .NET Lambdas. Finally, by understanding the problems of the .NET runtime on AWS, and devising solutions accordingly, all .NET developers can rephrase the famous Disney saying “the cold never bothered me anyway” to “the cold never bothered me anymore”.
