Serverless Rust: Revisited
A refreshing new story for running Rust on AWS Lambda
Earlier this year I started a series of posts on writing serverless Rust applications with the tools readily at hand. With AWS Lambda not officially supporting Rust, the story for how to best run Rust without thinking about servers was still being written. In an interesting twist of fate, it now does (albeit not specifically): Lambda now supports a new "provided" runtime, which opens up the opportunity for any language to "bring its own runtime" in a structured and documented way. The fine folks at AWS did just that with an officially supported reference runtime… implemented in Rust 🦀! This post pivots toward that new fork in the road and represents the new, shared direction I'll be putting my efforts toward.
Reduce, reuse, recycle
A lot of the ideas and tools I previously posted about still apply, but they have since been ported to the new officially supported AWS runtime library. This changes a few semantics, but others remain the same.
The new Rust runtime is a library: you include it in your application's dependencies. What it provides is essentially an event loop that is wired into the AWS provided runtime. It's surprisingly simple, but it is needed today regardless.
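To make the event-loop idea concrete, here's a minimal sketch. The handler below is plain std-only Rust so it can stand on its own; the comments show how it would be wired up with the `lambda_runtime` crate's `lambda!` macro. The event and output types here are hypothetical, and the crate's exact signatures (`Context`, `HandlerError`) may differ between versions.

```rust
// Hypothetical response type; with the runtime crate this would
// derive serde's Serialize so it can be returned to Lambda as JSON.
#[derive(Debug, PartialEq)]
struct Greeting {
    message: String,
}

// Handler logic: take a deserialized event, return a response or an error.
// With the lambda_runtime crate the signature would look roughly like:
//   fn handler(e: CustomEvent, _ctx: Context) -> Result<Greeting, HandlerError>
fn handler(name: &str) -> Result<Greeting, String> {
    if name.is_empty() {
        return Err("no name provided".into());
    }
    Ok(Greeting {
        message: format!("Hello, {}!", name),
    })
}

fn main() {
    // In a real deployment, main would instead register the handler with
    // the runtime's event loop, e.g. lambda!(handler); — that's the part
    // the library provides on top of the "provided" runtime.
    println!("{:?}", handler("Rust"));
}
```

The key shift is that your crate now owns `main`: the library's macro starts the event loop and hands each incoming event to your function.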
Previously, the use of the AWS Lambda Python (3.6) runtime let you deploy your (Rust) functions independently of the runtime, so your deployment artifacts were smaller. You also built your application as a library artifact, not a binary, meaning there was no main entrypoint. The new Rust runtime expects a main entrypoint and costs a bit more in compiled artifact size, as you're essentially bundling the runtime with your application.
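In Cargo.toml terms, the shift looks roughly like this (the crate name and versions below are illustrative, not prescriptive):

```toml
[package]
name = "rust-app" # illustrative name
version = "0.1.0"

# before: a library crate loaded by the Python 3.6 shim
# [lib]
# crate-type = ["cdylib"]

# now: a binary crate with a main entrypoint,
# bundling the runtime library into the artifact
[dependencies]
lambda_runtime = "0.1" # version illustrative
```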
With those tradeoffs came some other benefits. The Rust cpython crate that lives at the bottom of the previously used dependency chain is no longer maintained. This would have eventually become a blocker for some useful Rust 2018 edition features, which we'll all soon come to expect. There is another Python-to-Rust-and-back-again alternative, but adopting it would take a heavy investment to bring it up the dependency chain toward your application. The new Rust runtime is open source, and the maintainers have been great at making the open source experience smooth. I've already transitioned much of lando into the new Rust runtime's lambda-http module. If you're interested in the serverless space, do get involved!
Short story: there's now momentum to make the Rust story for Lambda sustainably solid and well supported, lowering any previous bars for organizational adoption.
What stayed the same
The serverless-rust plugin still allows you to deploy your Rust functions as you would in any other language via serverless, i.e.
npx serverless deploy. The good news is that the tooling got even better! Besides being updated to use the new Lambda "provided" runtime for your Rust applications, you can now declare your serverless application's runtime as "rust" instead of "python3.6".
This allows you to deploy heterogeneous functions in serverless applications without conflicts. In practice, this enables smoother transition to and experimentation with Rust for organizations already using the serverless framework. It also makes it clear to those jumping into a serverless codebase what they are looking at; the "python3.6" runtime sometimes caused confusion. You can also now invoke Rust functions directly from serverless, as you would with any other Lambda-supported language:
npx serverless invoke -f rust-app for easier debugging.
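A serverless.yml for such a function might look like the following sketch (the service and function names are placeholders; consult the serverless-rust plugin's documentation for the exact handler conventions):

```yaml
service: rust-app # hypothetical service name

provider:
  name: aws
  runtime: rust # declared instead of python3.6

plugins:
  - serverless-rust

functions:
  rust-app:
    # for serverless-rust, the handler names the Cargo binary to build
    handler: rust-app
```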
In theory, you should be able to just build your Rust binary as is and things should "just work". In practice that's not the case, because keeping parity between linking libraries locally and mapping that to the target deployment runtime is not a straightforward process, at least not today. That's okay, because the lambda-rust docker image adds the Rust toolchain to the lambdaci provided docker image, a faithful representation of the runtime you will be deploying into. How does it represent your deployment target?
By tarring the full filesystem in Lambda, uploading that to S3, and then piping it into Docker to create a new image from scratch.
What does that mean for you? You won't need to spend time reconfiguring your application with custom feature flags and Rust toolchains in an attempt to manually reproduce your target runtime's expectations; just create Cargo projects as you would for anything else. In truth, you could eventually get a customized local environment to work, but it's honestly not a sustainable process if you intend to develop your application in an organizational setting where your coworkers will have to repeat it. I've been there. Configuring local dependencies is not a very productive use of your or your organization's time or money :) You can instead reinvest that time in your actual application's functionality rather than how to build it, echoing the serverless philosophy of focusing on your application's functionality and not how to deploy it.
Short story: it's now even easier to build and integrate your Rust functions with the serverless tooling your organization is likely already using and knows well, further lowering any previous bars for organizational adoption.
This was just a brief recap of what happened and where I've been redirecting my attention. I believe serverless applications are the next major shift in application architectures, and I am excited to see the Rust story evolve (quickly). In an upcoming post, I'll walk through the experience of setting up your own serverless Rust application step by step. Stay tuned 📺