Building .NET Core AWS Lambda with ReadyToRun on Windows

Dima Starodubtsev
5 min read · Aug 19, 2021


Photo by Braden Collum on Unsplash

Recently I was asked to improve the performance of one Lambda function. It was a long journey for me because the function had been underloved for many years and had many issues. I decided to start with the two biggest problems: a big deployment package and a huge cold start.

The Lambda function was implemented in C# on .NET Core 3.1. Many of us know about a great feature from Microsoft called ReadyToRun compilation, and AWS has also announced support for it. So, for me, it was an obvious choice, and I started digging into it.

To my great surprise, I quite soon discovered that on Windows I can’t build a project for Linux with ReadyToRun enabled. Let me explain why this problem exists and how you can overcome it.

When a .NET app starts, the just-in-time (JIT) compiler has to transform intermediate language (IL) into native code. This process definitely adds time to the Lambda’s first execution (later I will tell you how much time I cut from the app start in my case).

ReadyToRun compilation moves this conversion from run time to build time. As a result, your assemblies may grow, because .NET keeps both the IL and the native code inside the DLL. But the work done by the JIT compiler is the bigger evil here.

It is easy to enable the ReadyToRun flag. For example, if you use the CLI it may look like this:

dotnet publish -c Release -r win-x64 -p:PublishReadyToRun=true

The PublishReadyToRun toggle tells the .NET compiler to convert IL to native code during the build. But you may notice that our command has two more parameters. -c Release is well known: it specifies the Release configuration. But what about the parameter in the middle?

The -r flag takes a runtime identifier (RID) and specifies the runtime environment for your project. Without this flag, .NET doesn’t know how to convert IL to native code, because native code looks different on Windows, Linux, and any other OS. Even the CPU architecture matters. So you have to tell .NET which hosting environment you will use for your application.
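
For reference, here are a few common RIDs and where they fit. This is just an illustrative sketch; the full RID catalog is much longer.

# A few common runtime identifiers (not an exhaustive list):
#   win-x64        - 64-bit Windows
#   linux-x64      - 64-bit Linux distros built on glibc (Ubuntu, Debian, Amazon Linux, ...)
#   linux-musl-x64 - 64-bit Linux distros built on musl (Alpine)
#   osx-x64        - 64-bit macOS
dotnet publish -c Release -r <RID> -p:PublishReadyToRun=true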

After some brief googling, I found the RID for the Linux OS used for Lambda hosting, and my publish command turned into

dotnet publish -c Release -r linux-x64 -p:PublishReadyToRun=true

At that moment, I thought the task was pretty easy. But the .NET compiler disagreed with me and wrote me a message:

error NETSDK1095: Optimizing assemblies for performance is not supported for the selected target platform or architecture. Please verify you are using a supported runtime identifier, or set the PublishReadyToRun property to false.

As it turned out, the .NET SDK on Windows can’t compile ReadyToRun code for the Linux platform. The platform where you compile your project has to match the hosting platform. The full list of restrictions is here. Taking into account that our CI pipeline uses Windows machines and AWS Lambda uses Linux, I had to find a Linux environment for building the project. And Docker helped me with this.
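
By the way, if you want to check what your own build machine identifies as, dotnet --info prints the RID of the current environment:

# Print SDK and runtime environment information, including the machine's RID
dotnet --info
# On a Windows build agent the "RID:" line shows something like win10-x64,
# which is why ReadyToRun output for linux-x64 can't be produced there.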

First of all, I installed Docker. It is an easy task, so I won’t dwell on it.

The solution I chose consists of two files: Docker-build and build.sh. The first file describes the Docker image and the second one actually builds your project.

Just add the Docker-build file to the root folder of your repository:

# Specify base image
FROM mcr.microsoft.com/dotnet/sdk:3.1-alpine
# Install required OS packages
RUN ["apk", "add", "--no-cache", "bash", "zip"]
# Copy all data from the root folder of the repository
# to the "build" folder of the image
COPY . ./build
# Make "build" folder working directory
WORKDIR ./build
# Give some permissions to "build.sh" folder
RUN ["chmod", "+x", "./build.sh"]
#Specify "build.sh" as entry point for the image
ENTRYPOINT ./build.sh

The first line is the most interesting one here. It tells Docker which .NET SDK image should be used as the base image. In fact, this line specifies which Linux version you want to use to build your project. I settled on a lightweight Linux distro called Alpine, but here you can find the full list of images for different operating systems.
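
If Alpine doesn’t suit you, a Debian-based SDK image (for example mcr.microsoft.com/dotnet/sdk:3.1) should work as well; just keep the image and the RID in sync, and install zip with apt-get instead of apk. A sketch of the change to the publish command under that assumption:

# With a Debian-based (glibc) SDK image the matching RID is linux-x64 instead of linux-musl-x64:
dotnet publish "PATH_TO_PROJ_FILE" -c Release -r linux-x64 --self-contained false /p:PublishReadyToRun=true || exit 1
# Remember to adjust the publish path too: .../bin/Release/netcoreapp3.1/linux-x64/publish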

The second step is to add a build.sh file to the root directory of your repository. It should look like this:

#!/bin/bash
# Build the project
dotnet publish "PATH_TO_PROJ_FILE" -c Release -r linux-musl-x64 --self-contained false /p:VersionPrefix="1.0.0" /p:PublishReadyToRun=true || exit 1
# Create a new directory where we will put the deployment package
mkdir -p /package
# Go to the bin folder with the built project
cd /build/PATH_TO_PROJECT/bin/Release/netcoreapp3.1/linux-musl-x64/publish || exit 1
# Archive the deployment package and store it in the specified folder
zip -r /package/deployment-archive.zip . || exit 1
# Create the output directory in the mounted volume
mkdir -p /volume/package
# Copy the deployment package to the folder in the mounted volume
cd /package
cp -a deployment-archive.zip /volume/package || exit 1

The file above actually does 3 important things:

  • It builds the project with the PublishReadyToRun toggle, using the linux-musl-x64 RID. The -r parameter depends on the base image we chose in Docker-build: the musl-based Alpine image requires a linux-musl-* RID.
  • After that, it archives the built project into the deployment-archive.zip file. It is important to note that the path where the DLLs lie contains the RID used for the publish command.
  • And finally, the deployment archive is copied to the mounted volume. In effect, we just move the deployment archive from the container to a real machine (the CI agent or your workstation).

Also, I would like to draw your attention to the --self-contained flag. It has to be set to false for AWS Lambda. The flag tells the compiler that the application will run on a machine with the .NET Core runtime already installed. If the flag is true, the compiler adds the whole runtime to the deployment package, and your package grows to more than 200 MB. That is necessary in some cases, such as IoT, but not here.
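
If you are curious about the difference, you can publish both variants and compare the output sizes. A rough sketch (run it inside the build container or on any Linux box; the paths are placeholders):

# Framework-dependent: relies on the .NET Core 3.1 runtime that Lambda already provides
dotnet publish "PATH_TO_PROJ_FILE" -c Release -r linux-musl-x64 --self-contained false /p:PublishReadyToRun=true -o /tmp/publish-fdd
# Self-contained: bundles the whole runtime with the app
dotnet publish "PATH_TO_PROJ_FILE" -c Release -r linux-musl-x64 --self-contained true /p:PublishReadyToRun=true -o /tmp/publish-scd
# Compare the sizes of the two outputs
du -sh /tmp/publish-fdd /tmp/publish-scd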

Finally, we need to run all this stuff:

docker build --file ./Docker-build . --tag yourimage
docker run --rm --volume FOLDER_ON_YOUR_PC:/volume yourimage

After you execute these two commands, FOLDER_ON_YOUR_PC will contain package/deployment-archive.zip, which can be used directly by the Serverless Framework.
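
For a concrete end-to-end run (with hypothetical names: the image is tagged lambda-r2r-build and the package lands in an artifacts folder; the commands assume a bash-compatible shell such as Git Bash or WSL):

# Build the image from the repository root
docker build --file ./Docker-build . --tag lambda-r2r-build
# Create the output folder and run the build; the container writes into the mounted volume
mkdir -p ./artifacts
docker run --rm --volume "$(pwd)/artifacts":/volume lambda-r2r-build
# The deployment package is now at ./artifacts/package/deployment-archive.zip
unzip -l ./artifacts/package/deployment-archive.zip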

And that’s it!

In an ideal world, you would just take a Linux machine, install the .NET SDK, and build your project with the correct RID. But if you don’t have that option, Docker and 20 lines of code will help you solve the problem.

And finally, was it worth doing all this? In my case, definitely yes. The ReadyToRun build reduced cold starts by up to 1.5 seconds. In the AWS world, where you pay for every 100 ms, that is a lot. This trick is worth trying, as it might improve performance and reduce spending for your application.
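
If you want to measure the effect on your own function, the REPORT lines in its CloudWatch log group contain an Init Duration value for cold starts. A small sketch with the AWS CLI (the function name is a placeholder):

# Pull recent REPORT lines; cold starts include an "Init Duration" value
aws logs filter-log-events \
  --log-group-name "/aws/lambda/YOUR_FUNCTION_NAME" \
  --filter-pattern "REPORT" \
  --query "events[].message" \
  --output text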
