Rust and Serverless, with a focus on Cloudflare Workers

I just wanted to prototype a simple REST API to see how this would work using serverless and Rust. Below are my findings.

TL;DR There is a lot of promise, but the overall state of Rust on serverless is pretty immature. This is likely to change in the next 12 months.

Source code for this article:

The Background

I enjoy coding in Rust, but most of my server-side code has been Node.js, using either Express.js or AWS’s serverless stack. Node.js is great for server-side development: it’s fast enough, it’s quick to get started with, and everything supports it.

However, as your project scales in size and good development practices take second place to project delivery, Node.js can become unwieldy. Mistakes are easily made when you modify one part of the program and forget to update a property somewhere else. Some of these mistakes can be caught with good tooling and testing; however, some still get through. In my experience, many of these issues could be prevented by a stricter language, which is exactly what Rust provides. Hence I’ve been interested in Rust from a server perspective for a while, both for coding efficiency and run-time performance.

To test this, I prototyped a simple REST API on Cloudflare’s platform, since they had promising Rust support.

The Big Serverless Vendors

Below is the list of serverless vendors I reviewed. Note that Cloudflare’s serverless offering is more like “serverless light”, since they do not provide a full cloud stack like the others.

Take the data above with a grain of salt. Sources: AWS and Cloudflare.

The table above shows the pricing for various serverless vendors at 1,000,000 requests per month. The last row shows the price of a small server. In a nutshell, serverless is cheap, and Cloudflare comes out the cheapest.

Read more about each vendor’s Rust offerings below.

Cloudflare’s offering

Cloudflare claims that 10% of the Internet’s requests go through its network. Cloudflare’s serverless offering is called Workers; it has been around since 2018 (I believe) and has evolved since then. One of the purposes of Workers is to give Cloudflare’s customers finer real-time control over Cloudflare’s infrastructure.

When you deploy a Worker, it will be deployed to and run on all 200+ data centers globally. Other serverless vendors call this edge serverless; AWS calls it Lambda@Edge. See below. This means that most users will be very close to your website and will see very quick response times.

Cloudflare’s Workers will be deployed to 200+ locations globally

A more recent development has been Workers KV, a key-value store for your data. You could use this as a very basic database. Keys can be up to 512 bytes, while values can be up to 10 MB.
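As an illustration only, these documented limits could be enforced client-side before calling the store. The helper below is my own sketch, not part of any Cloudflare SDK; the names are hypothetical.

```rust
// Sketch: validate a key-value pair against the documented Workers KV
// limits (keys up to 512 bytes, values up to 10 MB) before sending it.
const MAX_KEY_BYTES: usize = 512;
const MAX_VALUE_BYTES: usize = 10 * 1024 * 1024;

fn check_kv_limits(key: &str, value: &[u8]) -> Result<(), String> {
    // str::len() returns the length in bytes, which is what the limit counts.
    if key.len() > MAX_KEY_BYTES {
        return Err(format!("key is {} bytes, limit is {}", key.len(), MAX_KEY_BYTES));
    }
    if value.len() > MAX_VALUE_BYTES {
        return Err(format!("value is {} bytes, limit is {}", value.len(), MAX_VALUE_BYTES));
    }
    Ok(())
}
```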

They claim fast cold-start times, up to 50 times faster than competitors, although I never tested this.

Both Workers and Workers KV have various limits in place. These kinds of limits are common among serverless vendors. A couple that stick out for me are:

  • The Worker must complete within 30 seconds; this is typical.
  • The Worker must use less than 50ms of CPU time; this appears to be unique to Cloudflare.

How do Workers work?

The Rust worker runs as WASM inside a JavaScript sandbox. The sandbox appears to be similar to Node.js, but it behaves differently: there was no process variable, and I could not load Node.js native modules like “perf_hooks”.

Cloudflare provides a tool called wrangler, which does the following:

  • Creates a project with a worker.js (the entry point for the worker) file and a Rust “hello world” example.
  • Compiles your Rust code to WASM.
  • Deploys your code to Cloudflare.

Once your code is deployed, it will be executed within the JavaScript sandbox. The worker.js script loads the WASM module. If you want to pass parameters or environment variables to the WASM function, you will need to do this in the worker.js file.
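As a sketch of what the Rust side can look like: the function below is my own illustration with hypothetical names. In the real project it would carry a #[wasm_bindgen] attribute so worker.js can import and call it, with the request details and any configuration passed in as plain arguments (since there is no process.env to read inside the sandbox).

```rust
// Hypothetical sketch of a worker entry point. In a real project this
// function would be annotated with #[wasm_bindgen] so worker.js can call
// it; parameters such as the request path and method must be passed in
// explicitly from worker.js.
pub fn handle_request(path: &str, method: &str) -> String {
    match (method, path) {
        ("GET", "/") => "Hello, wasm-worker!".to_string(),
        ("GET", "/health") => "ok".to_string(),
        _ => format!("404: no route for {} {}", method, path),
    }
}
```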

Creating a Cloudflare Worker in Rust

UPDATE 2020–03–06: For Cloudflare Workers you can access the Workers KV API directly using wasm_bindgen. This improves performance significantly. A full example can be found here: This section is out-of-date.

It’s pretty easy to get started with Workers and Rust, but going further can be a hurdle. Below are the steps to get you started.

You will first need to create a Workers account, create a member, and get the API key (we’ll use that later).

Install wrangler, a tool written in Rust for deploying your code. Wrangler is a wrapper around cargo; it will build and deploy your code.

cargo install wrangler

Then let’s create a demo application. By default it creates a Node.js project, but you can use -t rust to create a Rust project.

wrangler generate demo --type=rust
cd demo

Now configure your project; this will ask for your API key.

wrangler config

Then we need to set the account id. Edit your wrangler.toml file and add the account_id, which you can find in your Cloudflare account dashboard.
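For reference, the resulting wrangler.toml looks roughly like the sketch below; the values are placeholders, and your generated file may contain additional fields.

```toml
# Sketch of wrangler.toml after adding the account id (values are placeholders).
name = "demo"
type = "rust"
account_id = "your-account-id-here"
```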

Optional: Build the project

wrangler build

Optional: You can preview the function without deploying it. This is useful for debugging purposes.

wrangler preview

Finally you can deploy your project:

wrangler publish

It will give you a link to where it’s deployed, at http://{{your-function-name}}.{{your-account}}. The default project will output “Hello, wasm-worker!”.

Working with Workers KV

This is where things go sour, kind of. Cloudflare has a Rust project that gives access to the Cloudflare API, called cloudflare-rs. The following quote from the project says enough:

“This library is a Work in Progress!”

My intention was to use cloudflare-rs in the Worker, but this is not possible, since cloudflare-rs cannot be compiled to WASM.

This means I had to write my own library to access Workers KV via the REST API. Michael Snoyman’s blog post on Workers helped a lot in getting this going with WASM. He has documented his story of getting sortasecret running on Workers: the good, the bad and the ugly.
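The REST calls themselves are straightforward; the fiddly part was making them from WASM. As an illustration of the shape of the API, the helper below is my own sketch (only the URL format comes from Cloudflare's API documentation): reading a value is an authenticated GET to this URL, and writing is a PUT.

```rust
// Sketch: building the Workers KV REST endpoint for a single key.
// This function is illustrative and not part of cloudflare-rs.
fn kv_value_url(account_id: &str, namespace_id: &str, key: &str) -> String {
    format!(
        "https://api.cloudflare.com/client/v4/accounts/{}/storage/kv/namespaces/{}/values/{}",
        account_id, namespace_id, key
    )
}
```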


Performance

I did not do a thorough investigation of performance, but below are my findings.

I’m based in Auckland, New Zealand, and Sydney, Australia is the nearest place most large serverless vendors have a data center. Sydney is around 2,100 km away, with a ping time of around 25 ms. But the nearest Cloudflare Worker runs in Auckland, where the ping time is typically 1.7 to 2.4 ms.

I measured the run-time of loading the WASM module and executing it using JavaScript’s console.time(); this always returned 0 (zero) ms. This means that the run-time performance of the WASM module itself was fast.

See the “UPDATE” in the previous section; these timings are incorrect. The surprise was that a GET or PUT on the KV store typically took around 250 ms, which is gargantuan; I would expect 1 ms or much less. This may be due to the physical location of the KV data store, although they claim that “Cloudflare Workers KV achieves this low latency by caching replicas of the keys and values in Cloudflare’s global cloud network”. This was disappointing.

There is a network peering issue between Spark (one of New Zealand’s largest ISPs) and Cloudflare. Read here and the last paragraph here. This may contribute to the timing too.

Rust on serverless offerings

I only explored Cloudflare’s serverless offering in depth. However, there are many vendors in the field; some provide Rust support and some don’t.

Cloudflare Workers

Cloudflare Workers has some limitations when using Rust. One limitation is that your code compiles to WASM and is wrapped in JavaScript. However, I see this space changing a lot in the next year.

AWS Lambda

AWS provides a good Rust offering with a dedicated Rust Lambda run-time that compiles to a binary. However, they do not provide a Rust SDK; I see this changing in the next year too. Rusoto is a third-party project that provides an AWS SDK for Rust.

Azure Functions

There is no official support. This blog post explains how you can shoehorn Rust into compiling and running as an Azure Function, while this project provides an Azure Functions SDK.

IBM OpenWhisk

Apache OpenWhisk is an interesting open-source serverless platform that IBM hosts. It allows you to write functions in Rust. However, I did not investigate whether this gives you access to the wider IBM cloud offering.


Conclusion

Using Rust for serverless is still immature, but it’s evolving. I would recommend against it for now; the offerings in a year will surely have developed into something more enticing. AWS and Cloudflare offer the most comprehensive solutions for Rust, and both use Rust in many of their internal and open-source tools. Both offerings allow developers to code their functions in Rust, but both are missing a complete Rust SDK for accessing their cloud API from the serverless function.

I’m an entrepreneur and enjoy inventing things with my own hands.