Benchmarking AWS Lambda runtimes in 2019 (part I)

Have you ever wondered whether your AWS Lambda could be faster if you used a different runtime?

Tai Nguyen Bui
Jul 4, 2019 · 7 min read

AWS Lambda allows us to execute code in the cloud without provisioning anything. In the past few years, it has become increasingly well-known thanks to the rise of serverless applications.

In addition to the runtimes already available in AWS Lambda, AWS announced Custom Runtimes at re:Invent 2018. They released open-source custom runtimes for C++ and Rust, while some AWS partners also released custom runtimes for Elixir, Erlang, PHP, and COBOL. This announcement unlocked a huge opportunity for developers who want to run code in the cloud in a specific programming language.

AWS Lambda, allowed runtimes

My colleague Nick Tchayka took that opportunity to create a Haskell runtime, just a few hours after the official announcement of custom runtimes in Las Vegas. It is now in its second version, providing great performance improvements over its initial version.

After reading two great articles on this topic, we wanted to see how things had evolved in AWS Lambda, so we performed similar tests, while adding a few more runtimes to the benchmark. If you’re interested in a good read, here are those two articles:

Benchmarking process

First of all, we created Hello World functions, with 1024 MB of memory allocated, for the following runtimes:

Java, Haskell, Go, Python 3.6, Ruby 2.5, Rust, Node.js 8.10, C# (.NET 2.1) and F# (.NET 2.1)
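As an illustration, the Python version of such a function is just a single handler returning the response shape that an API Gateway proxy integration expects (a sketch; the exact body we used may have differed):

```python
import json

# Minimal "Hello World" handler for the Python 3.6 runtime.
# API Gateway proxy integrations expect a dict with statusCode,
# headers, and a JSON-encoded string body.
def handler(event, context):
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello World"}),
    }
```

The other runtimes follow the same idea: a handler that takes the event and returns a response object.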

API Gateway, csharp-hello endpoint

We then created an API in AWS API Gateway to gather our endpoints. A resource, such as haskell-hello, and a GET method were assigned to each of our functions. This resulted in endpoints to make requests like:

GET https://<api-gw-stuff>

At this point, we could have tested all these functions manually. However, that wouldn't have been easy or practical for a benchmark. Hence, we decided to follow previous comparison approaches and perform load testing with Serverless Artillery (built on Artillery), taking advantage of the "infinite scaling" of AWS Lambda to simulate concurrent requests.
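The full script lives in our repository, but an Artillery test definition is just a small YAML file along these lines (the target URL, phase values, and resource name here are illustrative, not our exact configuration):

```yaml
config:
  target: "https://abc123.execute-api.us-east-1.amazonaws.com/dev"  # illustrative endpoint
  phases:
    - duration: 900     # seconds of sustained load
      arrivalRate: 30   # new virtual users started per second
scenarios:
  - flow:
      - get:
          url: "/python-hello"
```

Serverless Artillery then deploys this load generator as a Lambda function itself, which is what lets it scale out the request sources.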

Serverless Artillery Script

AWS CloudWatch helped us collect and plot the metrics for our benchmark. Despite the limitations of the default metrics available, custom widgets allowed us to perform more complex queries based on logs.
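For example, a CloudWatch Logs Insights query over a function's log group can summarise durations straight from the REPORT lines Lambda writes (an illustrative query, not necessarily the exact one behind our widgets):

```
filter @type = "REPORT"
| stats avg(@duration), max(@duration) by bin(5m)
```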

Moreover, we placed screenshots and instructions in a public GitHub repository we created, so anyone can reproduce these tests.

Hello World — benchmark result

For this benchmark, we executed 30 requests per second (rps) coming from different unique sources for a period of 900 seconds, which resulted in 27,000 requests to each of our functions.

The first time a Lambda function is invoked, it takes longer because of "provisioning" time, which can vary dramatically depending on the runtime; this is known as a "cold start." The slowest runtime to warm up is Java, followed by F# and C#. On the other end of the spectrum, Python's cold start is more than 400x faster than Java's and 1.7x faster than that of Node.js, its direct rival.
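Cold starts show up in the logs as an `Init Duration` field on the `REPORT` line that Lambda writes after each invocation, so they can be pulled out with a small script (a sketch, assuming the logs have already been exported as text):

```python
import re

# Lambda appends a REPORT line to every invocation's log output; cold
# starts additionally carry an "Init Duration" field with the
# provisioning time in milliseconds.
REPORT_RE = re.compile(r"Init Duration: ([\d.]+) ms")

def cold_start_ms(report_line):
    """Return the init duration in ms, or None for a warm invocation."""
    match = REPORT_RE.search(report_line)
    return float(match.group(1)) if match else None

warm = "REPORT RequestId: abc Duration: 2.04 ms Billed Duration: 100 ms"
cold = warm + " Init Duration: 152.63 ms"
```

Running `cold_start_ms` over all REPORT lines and keeping the non-`None` values gives exactly the cold-start series plotted below.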

AWS Lambda cold-start graph
AWS Lambda functions package size

In previous comparisons we've seen that larger package sizes result in longer cold starts. However, there are two exceptions: Go and Ruby.

On one hand, Go, with a fairly big package of 4.7 MB, has the third-best cold start in this benchmark. In contrast, Ruby, the lightest package at only 240 bytes, has a cold start 5x slower than Go's.

To get a better grasp of Lambda runtime performance, and to follow previous comparisons, it is more appropriate to take cold starts out of the equation. Below, we can see how the maximum duration fluctuates over a period of 15 minutes.

AWS Lambda, maximum duration without cold-starts
Runtime maximum execution time graph (15 minutes)

We continue to see great performance from Python, but we also see stable execution durations for all of the runtimes. Additionally, C# and F# perform very well once the cold-start has passed.

When assigning memory to a function, we are also assigning a proportional share of CPU, which can affect execution duration and, consequently, billing.
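Because billed duration was rounded up to 100 ms increments in 2019 and priced per GB-second, a quick back-of-the-envelope calculation shows how memory and duration interact (the per-GB-second price below is the 2019 us-east-1 list price and may change; the request fee is excluded):

```python
import math

PRICE_PER_GB_SECOND = 0.0000166667  # us-east-1 list price, 2019

def invocation_cost(duration_ms, memory_mb):
    """Estimate the compute cost of a single invocation in USD."""
    billed_ms = math.ceil(duration_ms / 100) * 100  # 100 ms rounding (2019)
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# Halving memory halves the cost per billed unit -- unless the smaller
# CPU share slows the function into the next 100 ms increment.
```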

The table below shows memory usage for the different runtimes.

Runtime memory usage

The table shows that our Hello World example could run with just 128 MB of allocated memory. However, as mentioned above, lowering the memory would also change the outcome of our benchmark, since CPU is allocated proportionally.

We did not record any errors during the 27k requests to each function, so for the 243k total requests we made, AWS Lambda and AWS API Gateway performed really well.

All these numbers sound great, but we wanted to see the actual duration from the moment a request arrives at AWS (us-east-1, in our case) until the response leaves. For that reason, we extracted a few more metrics from API Gateway: latency and integration latency. The time for the traffic to travel between the client and the region also needs to be added; one way to estimate it is to ping the region you are targeting, since that round-trip time roughly covers the request and response legs.

API Gateway latency (27k requests)

From the results in the table above, we can see that there is around a 3 ms gap between the overall latency and the integration latency. This can be understood as the time it takes for a response from AWS Lambda to be transformed and made ready to return to the requester.

Moreover, taking the Java benchmark as an example, the maximum latency is 2400 ms, far greater than the 825 ms of the worst cold start. This gap suggests extra time in which AWS could be preparing the VM for a Lambda with the Java runtime before it is ready for its first invocation.
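These pieces add up simply: what the client observes is the API Gateway latency (which already includes the Lambda integration time) plus the network round trip. A tiny helper makes the estimate explicit (the example numbers are illustrative, not measurements from our benchmark):

```python
def end_to_end_ms(gateway_latency_ms, ping_rtt_ms):
    """Rough client-observed request time in ms.

    gateway_latency_ms: the API Gateway "latency" metric, which already
    includes Lambda execution and integration overhead.
    ping_rtt_ms: a ping round trip to the target region, covering the
    request and response network legs.
    """
    return gateway_latency_ms + ping_rtt_ms

# e.g. 8 ms gateway latency + 90 ms ping from Europe to us-east-1
```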

Comparison against previous benchmarks

  • Go has improved a lot: while the average execution durations of Go and Java were previously almost the same, Go is now almost 3x faster
  • C# and F# continue to have the best average execution duration
  • Node.js and Python runtimes continue to improve at a good pace and remain among the best performers
  • Package size is not correlated with cold-start time for some runtimes


When testing Lambda functions, we should take the full request duration into consideration. It is great to see really low execution durations in Lambda, but the latency added by AWS API Gateway and by AWS Lambda VM provisioning should be accounted for as well. Additionally, some languages may have really high cold starts but perform great once warm.

We do not have a particular preference about which language to use when developing functions. However, Node.js, Go, and Python are clearly among the best-performing runtimes and should be considered when latency and cold starts are key to the solution. It is also worth noting that more execution time means greater cost 💵.

A Hello World example gives us a hint of the performance of the different runtimes for really simple applications. However, it hardly represents a real-world application, as previous comparisons have stated. In part II of this blog, which we will be releasing soon, we will look at performance when reading from and writing to DynamoDB, a task that can affect the total duration of an execution for a variety of reasons.

All in all, it is great to see all the hard work that is being put into AWS Lambda so that runtimes can continue to improve.

If you are interested in this topic, we have just released Part II of this AWS Lambda benchmark, click here to continue reading :D

Thanks to Carlos Domínguez Padrón for working on this with me and to the rest of The Agile Monkeys for proofreading and giving feedback prior to publishing this post. Also, big thanks to the authors of previous comparisons that inspired us to write this post.

Feedback from everyone else interested in the topic is welcome, I am open to discussion :D

Stay tuned for part II, where some interesting stuff will be revealed 🚀

If you have questions, or would like to work with us, please contact us

The Agile Monkeys’ Journey

We write about what we learn and what we think.