Exploring serverless technology by benchmarking AWS Lambda

Luke Demi
The Coinbase Blog
Mar 28, 2019

At Coinbase, we’re excited for the potential of serverless — if implemented correctly it can provide strong security guarantees and “infinite” scalability at a dramatically lower cost. As we’ve explored the potential for serverless, we’ve found ourselves curious about the real world performance of certain use cases. Some of the questions we’ve begun to ask include:

  • How dramatic is the VPC cold start penalty, really? This has a big impact on which database technology we choose, since AWS DynamoDB and AWS Aurora Serverless expose “public” APIs and can be reached without placing the Lambda in a VPC. We’d heard that an ENI Cold Start could take up to 10 seconds. Is that really true? And how frequently does it happen?
  • How does the size of a Lambda’s deployment package affect cold start times? If a smaller package can reduce cold start times, it may make sense to divide Lambdas into smaller packages. Otherwise, it might make more sense to leverage “monolith” Lambdas.
  • We’d read that Python can actually exhibit faster cold start times than Golang in the context of Lambda execution. As a Ruby/Golang shop, we’re curious to see how the performance of our runtimes stacks up.

Terminology

If you read the above bullet points without skipping a beat, feel free to skip on to the next section. Otherwise, the vocabulary below should provide a refresher on some of the terms used throughout the post.

  • AWS Lambda — Fully managed compute as a service. AWS Lambda stores and runs code (“functions”) on demand. Functions are run inside of sandboxed containers on host machines managed by AWS and created automatically to respond to changing demand.
  • Warm Start — Between function executions, containers are “paused” on the host machine. A warm start is any execution of an AWS Lambda function where an idle container already exists on a host machine and is executed from this paused state.
  • Cold Start — When a function is executed but no idle container exists, AWS Lambda starts a new container to execute the function. Since the host machine needs to load the function into memory (presumably from S3), cold start executions exhibit longer execution times than warm starts.
  • ENI — An Elastic Network Interface represents a virtual network card and is required for a Lambda to communicate with resources inside an AWS VPC (Virtual Private Cloud), such as internal load balancers or databases like RDS or Elasticache.
  • ENI Cold Start — In order to communicate inside a VPC, an ENI matching the security group of the Lambda must exist on the host machine when a function is initialized. If one does not already exist on the host machine, it must be created before the function can execute. ENIs can be reused between Lambdas that share the same security group, but cannot be shared across security groups, even within the same VPC. AWS plans to fix these issues sometime in 2019.
  • Box Plot — A method to visually represent numeric data by quartile. In this post outliers are shown as points outside of the box.

The Setup

We began poking around on the internet to find the answers to our questions and found several great papers and blog posts. However, some questions didn’t feel directly answered, or at least not in the context of the specific technologies we were using. Besides, even if we trust the results of those tests, why not verify? So we decided to find our own answers to these questions.

We wrote a small testing harness and a series of simple Lambdas to perform these tests. Really, we only needed the framework to perform a series of cold and warm invocations of a given Lambda package. We detect whether a Lambda is invoked cold or warm with a global variable “cold” that starts as true and is flipped to false after the first execution: the first invocation in a container returns “cold = true”, and every subsequent invocation returns “cold = false”. We can force cold starts by simply re-uploading a function’s payload, which causes AWS to discard the function’s existing containers.
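Here is a minimal sketch of this trick in Go (one of the runtimes we tested), assuming the standard aws-lambda-go library; the handler and response shape are illustrative rather than our actual harness code:

```go
package main

import (
	"context"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
)

// cold is initialized once per container, so it is true only for the
// first invocation after a cold start.
var cold = true

type response struct {
	Cold    bool   `json:"cold"`
	TraceID string `json:"trace_id"`
}

func handler(ctx context.Context) (response, error) {
	wasCold := cold
	cold = false // every later invocation in this container reports warm

	// Lambda exposes the root X-Ray trace ID through an environment
	// variable when active tracing is enabled.
	return response{Cold: wasCold, TraceID: os.Getenv("_X_AMZN_TRACE_ID")}, nil
}

func main() {
	lambda.Start(handler)
}
```

Forcing a cold start is then just a matter of re-uploading the package, for example with "aws lambda update-function-code --function-name <name> --zip-file fileb://function.zip".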

We used three different sources to measure invocation time: billing duration, observed duration, and AWS X-Ray trace statistics. Since billing duration does not include cold start time, and X-Ray trace statistics do not include ENI creation time, we use observed time for most of our tests.

Database/VPC Performance

Our first test was designed to measure the performance of our most common databases from inside of Lambda. Since most of the databases we leverage live inside an AWS VPC, this test would also inherently measure the performance of Lambdas created inside a VPC (mainly the additional time needed to initialize an ENI on the host where the Lambda runs).

We tested six databases: Aurora Serverless, Aurora MySQL, DynamoDB, Elasticache Redis, Elasticache Memcached, and MongoDB (Atlas). All except Aurora Serverless and DynamoDB required us to create the Lambda inside a VPC.

The results from the cold start test surprised us. We had expected ENI creation to contribute more frequently to the cold start time of Lambdas created inside a VPC. Instead, cold start times seemed consistent across the board, aside from significantly more outliers on VPC Lambdas.

It became clear from these results that not every VPC cold start requires ENI creation; rather, AWS reuses existing ENIs across Lambda executions. So while Lambdas in a VPC were technically more liable to experience an ENI Cold Start, the number of ENI Cold Starts experienced depended on how many ENIs already existed for the invoking Lambda’s security group.

We wanted to understand the impact of an ENI Cold Start on Lambda invocation time more reliably, so we ran the test again and forced ENI creation by recreating the VPC Lambdas in a temporary new security group before each invocation (sketched below). These tests more clearly highlight the heavy penalty of an ENI Cold Start: at minimum 7.5 seconds, and frequently more than 20 seconds!

Usain Bolt can run 100m in less time than it takes to ENI Cold Start a Lambda
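For reference, here is roughly how such a forced ENI Cold Start can be set up with the aws-sdk-go API; the function name, VPC ID, and subnet ID below are placeholders, not our real infrastructure:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	sess := session.Must(session.NewSession())
	ec2Client := ec2.New(sess)
	lambdaClient := lambda.New(sess)

	// Create a throwaway security group; a fresh group guarantees that
	// no existing ENI can be reused. The VPC ID is a placeholder.
	sg, err := ec2Client.CreateSecurityGroup(&ec2.CreateSecurityGroupInput{
		GroupName:   aws.String("eni-cold-start-test"),
		Description: aws.String("temporary group to defeat ENI reuse"),
		VpcId:       aws.String("vpc-0123456789abcdef0"),
	})
	if err != nil {
		panic(err)
	}

	// Re-point the function at the new group; its next cold start must
	// now create a brand new ENI. Function and subnet are placeholders.
	_, err = lambdaClient.UpdateFunctionConfiguration(&lambda.UpdateFunctionConfigurationInput{
		FunctionName: aws.String("vpc-benchmark"),
		VpcConfig: &lambda.VpcConfig{
			SecurityGroupIds: []*string{sg.GroupId},
			SubnetIds:        []*string{aws.String("subnet-0123456789abcdef0")},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("function moved to security group", *sg.GroupId)
}
```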

These tests remind us to be careful when placing VPC Lambdas in hot or customer-facing paths. Some potential strategies we are looking at to mitigate the impact of ENI Cold Starts are letting related Lambdas share security groups (and therefore ENIs) and placing all VPC Lambdas on a 5–10 minute timer to ensure ENIs are created ahead of execution, as sketched below.
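As a sketch of the timer idea, a CloudWatch Events rule can invoke the function every 5 minutes; again this uses aws-sdk-go, and the function name and ARN are placeholders:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchevents"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	sess := session.Must(session.NewSession())
	events := cloudwatchevents.New(sess)
	lambdaClient := lambda.New(sess)

	// Fire every 5 minutes so the function (and its ENI) stays warm.
	rule, err := events.PutRule(&cloudwatchevents.PutRuleInput{
		Name:               aws.String("warm-vpc-lambda"),
		ScheduleExpression: aws.String("rate(5 minutes)"),
	})
	if err != nil {
		panic(err)
	}

	// Allow CloudWatch Events to invoke the function...
	_, err = lambdaClient.AddPermission(&lambda.AddPermissionInput{
		FunctionName: aws.String("vpc-benchmark"), // placeholder
		StatementId:  aws.String("allow-warming-rule"),
		Action:       aws.String("lambda:InvokeFunction"),
		Principal:    aws.String("events.amazonaws.com"),
		SourceArn:    rule.RuleArn,
	})
	if err != nil {
		panic(err)
	}

	// ...and point the rule at it.
	_, err = events.PutTargets(&cloudwatchevents.PutTargetsInput{
		Rule: aws.String("warm-vpc-lambda"),
		Targets: []*cloudwatchevents.Target{{
			Id:  aws.String("warm-target"),
			Arn: aws.String("arn:aws:lambda:us-east-1:123456789012:function:vpc-benchmark"), // placeholder
		}},
	})
	if err != nil {
		panic(err)
	}
}
```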

Package Sizes

Our second test was designed to understand how package size affects cold start performance across the range of AWS Lambda memory sizes. We’d read that the amount of compute provided to a given AWS Lambda function is based on the function’s provisioned memory.

This test was no different from the previous one, except that this time we included large, randomly generated files in the zip we uploaded to Lambda, along the lines of the sketch below.
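A hypothetical helper for generating that padding; random bytes are effectively incompressible, so zipping doesn’t shrink them and the package grows by roughly the full amount:

```go
// pad.go: writes random data to inflate a Lambda deployment package.
package main

import (
	"crypto/rand"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Create("padding.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Copy 200 MB of cryptographically random bytes into the file,
	// which is then added to the zip alongside the function code.
	if _, err := io.CopyN(f, rand.Reader, 200<<20); err != nil {
		log.Fatal(err)
	}
}
```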

The results for this test were clear: big package sizes equal big cold start penalties. It follows that Lambda pulls a function’s package down to the invoking host on cold starts, but what’s less clear is why larger packages carry such a massive penalty. The simple math is that a 10s cold start for a 249MB package works out to a download speed of about 200 Mbps (249 MB × 8 bits/byte ÷ 10 s), quite a bit below the 25 Gbps that an r5.metal or similar host could provide. This suggests that AWS throttles cold start download bandwidth on a per-function basis. The lack of a performance boost on larger memory Lambdas implies that this throttle does not depend on Lambda memory size.

Runtime

Our final test was designed to understand the cold and warm start performance of the various AWS Lambda runtimes. We chose to compare Ruby and Golang (along with Python as a control) since they’re the primary languages we leverage internally. This test executes a very simple script that returns the “cold” global variable and the root X-Ray trace ID, much like the sketch in The Setup section above.

The results of the test indicate that while Golang comes out on top for both cold and warm start performance, there is not a dramatic difference in execution time between the three languages. These results let us feel comfortable allowing engineers to write Lambda functions in whichever language they feel most comfortable with.

Summary

In summary, some of our major takeaways include:

  • Any ENI Cold Start in a hot or customer-facing path will result in what we consider an unacceptable spike in latency. ENI Cold Starts can be mitigated by allowing related Lambda functions to share security groups (at least until AWS solves these issues, expected sometime in 2019).
  • Lambda package size does matter significantly for cold start executions. Users should be careful to avoid hot paths with packages in the 100MB+ range.
  • Provisioned Lambda memory size matters less than anticipated — at the lower end of the scale (128MB) we observed heightened response times, but the impact of memory size on Lambdas larger than 512MB was negligible.
  • The difference between compiled and interpreted languages (Golang vs Ruby) turned out to be far less dramatic than we had anticipated. As a result we can feel comfortable allowing developers to write functions in whichever language they feel most comfortable.

We’re excited to run these same tests in the future to see how AWS Lambda performance is changing over time!

If you’re interested in helping us build a modern, scalable platform for the future of crypto markets, we’re hiring in San Francisco!

