AWS Lambda JavaScript Low Latency Runtime (LLRT) Benchmark — Part 1

Oleksandr Hanhaliuk
5 min read · Feb 24, 2024

The AWS team introduced a new JavaScript runtime called Low Latency Runtime, or LLRT (GitHub: https://github.com/awslabs/llrt), based on the QuickJS engine.

AWS claims it is up to 10x faster to start and up to 2x cheaper in overall cost.

Let's do some benchmarks and see if this is true.

Preparing for benchmark

In this article, we will benchmark two completely different types of operations:

  1. Calculating a Fibonacci number
  2. Downloading data from a URL

Writing lambda code

let isLambdaWarm = false

export async function handler(event) {
  console.log('Starting function...:', { isLambdaWarm })
  if (!isLambdaWarm) {
    isLambdaWarm = true
  }

  const eventType = event.type

  switch (eventType) {
    case 'fibonacci':
      return benchmarkFibonacci(event)
    default:
      return benchmarkUrlFetch(event)
  }
}

function benchmarkFibonacci(event) {
  const n = parseInt(event.number, 10) // Get the number from the event object

  console.log('Fibonacci number:', n)

  // Naive recursive computation of the nth Fibonacci number
  const fib = (n) => {
    if (n < 2) return n
    return fib(n - 1) + fib(n - 2)
  }

  const startTime = Date.now()
  const result = fib(n)
  const endTime = Date.now()

  const executionTime = endTime - startTime // Calculate the execution time

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: `Fibonacci sequence result for ${n}: ${result}`,
      executionTime: `${executionTime}ms`,
    }),
  }
}

async function benchmarkUrlFetch(event) {
  const url = event.url || 'https://jsonplaceholder.typicode.com/posts/1'

  console.log('starting fetch url', { url })

  // Start timing before initiating the fetch
  const startTime = Date.now()

  try {
    // Perform the fetch operation
    const response = await fetch(url)

    // Await the JSON parsing so parsing time is included in the measurement
    const data = await response.json()

    // Stop timing after the fetch operation completes
    const endTime = Date.now()
    const fetchTime = endTime - startTime

    return {
      statusCode: 200,
      body: JSON.stringify({
        message: 'Data fetched successfully',
        fetchTime: `${fetchTime}ms`,
        data, // Return the fetched data (or omit this if not needed)
      }),
    }
  } catch (error) {
    // Handle any errors that occur during the fetch operation
    return {
      statusCode: 500,
      body: JSON.stringify({
        message: 'Error fetching data',
        error: error.message,
      }),
    }
  }
}

This code accepts Fibonacci and URL-fetch events. The reason for choosing these two operations will become clear later in this article.
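For reference, the two event shapes the handler expects look like this (the values are just sample inputs; any `type` other than `'fibonacci'` falls through to the URL fetch):

```javascript
// Sample invocation payloads for the two benchmark paths
const fibonacciEvent = { type: 'fibonacci', number: '30' }
const fetchEvent = { type: 'fetch', url: 'https://jsonplaceholder.typicode.com/posts/1' }

console.log(JSON.stringify(fibonacciEvent))
console.log(JSON.stringify(fetchEvent))
```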

Creating lambdas

We will create two Lambdas — the first with the Node.js 20 runtime and the second with the custom AWS LLRT runtime. We will use the smallest Lambda memory size, 128 MB, to make execution latency more pronounced.

  1. Create lambda with Nodejs 20 runtime

2. Upload code

3. Create lambda with custom LLRT runtime.

For this step, I’ve chosen to use a layer containing the LLRT runtime (see https://github.com/awslabs/llrt).

Create the lambda with a layer, choosing Amazon Linux 2 as the runtime.

In the Lambda code section, scroll down to Layers and select your layer.

Running tests

We will run two types of tests for each function several times, also comparing cold and warm lambda execution.

Fibonacci sequence

This operation has exponential complexity, roughly O(2^n). Therefore, beyond n = 35, the execution time increases dramatically.
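To see why, note how fast the naive recursion's call count grows with n. We can compute it with a linear recurrence instead of actually running the slow version (a quick sketch: calls(n) = calls(n - 1) + calls(n - 2) + 1, with calls(0) = calls(1) = 1):

```javascript
// Count how many calls fib(n) makes, without running the exponential recursion
function callCount(n) {
  const c = [1, 1]
  for (let i = 2; i <= n; i++) c[i] = c[i - 1] + c[i - 2] + 1
  return c[n]
}

console.log(callCount(20)) // 21891 calls
console.log(callCount(35)) // 29860703 calls, over 1000x more than for n = 20
```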

This result might surprise you, and you might think AWS is trying to cheat, but it is actually expected and well explained by the AWS team:

There are many cases where LLRT shows notable performance drawbacks compared with JIT-powered runtimes, such as large data processing, Monte Carlo simulations or performing tasks with hundreds of thousands or millions of iterations. LLRT is most effective when applied to smaller Serverless functions dedicated to tasks such as data transformation, real time processing, AWS service integrations, authorization, validation etc. It is designed to complement existing components rather than serve as a comprehensive replacement for everything. Notably, given its supported APIs are based on Node.js specification, transitioning back to alternative solutions requires minimal code adjustments.

Now, let's compare two runtimes with a simple operation of downloading data from the link.

Fetch data from the URL

We will run 10 tests with URL requests from 1 to 55 and compare code execution time (do not confuse it with Lambda execution time).
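Outside Lambda, the timing logic itself can be dry-run locally by stubbing `fetch`, so no network access is needed (the stub below is my own sketch, not part of the deployed benchmark):

```javascript
// Replace the global fetch with a fake that returns a canned JSON payload
globalThis.fetch = async () => ({ json: async () => ({ id: 1 }) })

// Same timing pattern as the Lambda handler: measure fetch plus JSON parsing
async function timeFetch(url) {
  const startTime = Date.now()
  const response = await fetch(url)
  const data = await response.json()
  return { fetchTime: Date.now() - startTime, data }
}

timeFetch('https://jsonplaceholder.typicode.com/posts/1').then(({ fetchTime, data }) => {
  console.log(`fetched in ${fetchTime}ms`, data)
})
```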

As we can see now, LLRT behaves much better for this type of operation.

Now let's compare lambda init duration, which determines cold start time.

Init duration, ms

Init duration is much better with the AWS Low Latency Runtime (LLRT). As you might know, init duration is free on most runtimes. However, it adds latency to your application and can hurt performance when your Lambda contains a lot of code. Imagine a microservice architecture with hundreds of Lambdas in a flow. Using LLRT might reduce total init duration by 4-5x and, therefore, reduce the total latency of your application.
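As a rough back-of-the-envelope illustration (the init durations below are placeholder assumptions for the sake of arithmetic, not measurements from this benchmark):

```javascript
// Estimated cold-start overhead for a chain of sequentially invoked Lambdas.
// Both init durations are hypothetical placeholders, for illustration only.
const chainLength = 10 // lambdas invoked one after another in a flow
const nodeInitMs = 200 // assumed Node.js 20 init duration
const llrtInitMs = 50  // assumed LLRT init duration (4x lower)

const nodeTotalMs = chainLength * nodeInitMs
const llrtTotalMs = chainLength * llrtInitMs

console.log(`Node.js: ${nodeTotalMs}ms, LLRT: ${llrtTotalMs}ms, saved: ${nodeTotalMs - llrtTotalMs}ms`)
```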

Summary

Let's summarize this benchmark with the pros and cons of LLRT

Pros

  • much quicker code execution on small serverless IO tasks
  • quicker cold lambda start
  • lower costs if you don’t do large data processing or millions of iterations

Cons

  • slower on large data processing or heavy operations with millions of iterations
  • deploying and using this runtime in Lambda isn’t straightforward. With CDK and the Node.js runtime, you can use the convenient NodejsFunction construct: it lets you write code and point to your Lambda entry file without packaging Docker images or zip files yourself; CDK takes care of bundling node_modules and all imports and required files for the Lambda.

Currently, LLRT is in beta, and I hope that by the time it reaches a stable release, we will see further improvements and a dedicated Lambda runtime.

In Part 2, we will compare these runtimes when running DynamoDB and S3 bucket operations.
