Serverless HTTP

This is the second part in a series of posts on writing serverless applications with Rust. You may be interested in reading the previous post first.

1️⃣ serverless rust: the intro

As unorthodox as the title of this post sounds, shipping HTTP applications without servers is quickly becoming a viable, and in some cases preferred, architecture for building HTTP applications exposed to the internet. In this post, I’ll walk through some of my own exploration of why I think that’s the case and how I’m applying that to Rust.

Let’s quickly rehash some key points from the last post. Running (HTTP) applications yourself today comes with a number of costs, some measured in 💲 and some measured as an operational catering service bill 🚐 ( i.e. your time spent not focusing on your actual application ) ⌚💰. Assembly, deployment, routing, DNS, TLS, configuration management, monitoring, observability, and elastic scalability are all requirements of delivering your HTTP application to the internet reliably and securely in today’s market. In an ideal world you could focus more on your application and less on these operational components. That, in a nutshell, is the promise of serverless, and why many providers offer these operational aspects in managed packages for rental 🚚📦, much like a city bike or an Uber ride.

Stateless countrymen

You might be thinking to yourself: does HTTP even make sense in a serverless world? You may have heard that the serverless rodeo show is all about integration through events, but HTTP is traditionally served on a persistent server platter. Surprise! HTTP is actually a stateless event protocol. It just happens to be a synchronous one. It’s also one that happens to be privileged enough to have ubiquitous clients!

The server you typically bundle with your application is responsible for managing network connections and parsing the HTTP protocol before handing off the message to your application to perform some domain specific behavior. These servers can add noticeable deployment overhead when you package them up and reship them with every domain logic change! You can tune servers yourself, but they can also be outsourced, as they don’t really differentiate you from your customers’ perspective. The reality is that servers are an uninteresting implementation detail to your application’s customers.

The HTTP protocol just declares a few basic structures, requests and responses, which you might consider self-describing envelopes for content and intent. In practice it’s almost always useful for an application to have access to some form of envelope in order to understand the intent and contents of an event, produce a sensible response, and annotate messages with metadata. In that sense, HTTP is often ideal for generalized serverless events. With other types of triggers you may end up needing to invent your own envelope structure to communicate content and intent 📬. With HTTP triggers, you get those for free, so why hide them?

Putting http back in http

(rust crowbar’s logo)

In the past year I’ve spent a lot of time writing serverless applications in Rust with the crowbar crate, and I’ve had great success. I’ve enjoyed the paradigm shift compared to my previous experience running (Rust) applications in containers on kubernetes, an orchestration tool for applications that run in docker containers. The serverless approach ticked all of the checkboxes for enabling more operational ownership for teams over their services, but without all of the ceremony and mystique that tends to come along with kubernetes. The focus with serverless tends to be more on your application, and less on its orchestration. Don’t look for a silver bullet between these lines. There are trade-offs with serverless HTTP applications you may want to consider, which I’ll touch on a bit later.

One aspect I’ve found myself re-implementing in serverless HTTP applications is the set of structures that map to API Gateway events and responses. Lambda events tend to be opaque blobs of JSON. Surprisingly, there is no official formal documentation for what fields to expect aside from a handful of sample events AWS provides. For a statically typed language this was definitely a sour patch to wade in. I started factoring some stronger types out into something reusable, then spotted an opportunity. The Rust community previously came together to produce a crate that generalized HTTP types over requests and responses, absent many framework specific bells and whistles, so that application frameworks could stop reinventing very fundamental primitives and start sharing functionality. This became the http crate. Its presence begged some questions: What if I could adapt API Gateway types to Rust native http types? Would there be value in leveraging the community’s existing efforts? What would that look like? The result is a crate called lando.

(a belated introduction)

Lando targets AWS Lambda as a deployment target much like the crowbar crate. However, Lando is specialized for use with API Gateway applications that wish to leverage the Rust ecosystem’s existing http crate as a core interface.

The results have been very pleasant. The first release of lando earlier this year was an MVP to explore its usefulness, or lack thereof, in the context of HTTP lambdas deployed at {WORKPLACE}. The recent release reflects the experience I’ve had in practice maintaining a number of these lando applications.

Here is a sample lando application using the most recent release.

gateway!(|_, _| Ok("hello lambda"));

If you’ve worked with crowbar before, this will look very similar to its lambda! macro. The difference is in the types. This closure is invoked in response to API Gateway HTTP events, but with stronger types. The inputs here are an http::Request type and a lando::LambdaContext. What’s recently changed with lando is that the lando::Result type’s Ok variant value can now be anything that implements the IntoResponse trait, a generic trait for coercing values into http response types. This affords applications a number of ergonomic improvements over previous releases. In practice most of my lambda functions return values that just build a default http::Response with custom body content. This interface still allows for explicit responses ( i.e. `Response::builder()` ) but enables other common usages at no extra cost.
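To make the coercion idea concrete, here is a minimal sketch of an IntoResponse-style trait using simplified stand-in types. The `Response` struct, `respond` function, and both impls are illustrative assumptions, not lando’s actual definitions (lando’s trait targets real http::Response values).

```rust
// Simplified stand-in for an http response; lando's real trait
// produces http::Response<Body> values instead.
#[derive(Debug, PartialEq)]
struct Response {
    status: u16,
    body: String,
}

// A generic trait for coercing values into response types.
trait IntoResponse {
    fn into_response(self) -> Response;
}

// A plain &str coerces to a default 200 response with a custom body.
impl IntoResponse for &str {
    fn into_response(self) -> Response {
        Response { status: 200, body: self.to_string() }
    }
}

// An explicitly built Response passes through unchanged, preserving
// the builder-style path.
impl IntoResponse for Response {
    fn into_response(self) -> Response {
        self
    }
}

// A handler's Ok value can then be anything implementing the trait.
fn respond(value: impl IntoResponse) -> Response {
    value.into_response()
}
```

The ergonomic win is that simple handlers can return bare strings while complex ones still build full responses, both flowing through the same interface.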

Below is an example of yielding an application/json body of {"hello":"world"} using serde_json’s json! macro.

use serde_json::json;

gateway!(|_, _| Ok(json!({ "hello": "world" })));

One odd property lando inherited from crowbar was that the default lib name was assumed to be “lambda” and not your crate name. There is also an implicit name exported by the default closure interface called “handler”. This had some non-intuitive implications if you ever wanted to change the defaults. Attempts to make this more intuitive are included in the latest release of lando. Thanks to some work on the mashup ( and now paste ) crate, your crate name is the new default build artifact name, as one might expect! This gets updated automatically when you change your crate’s name in your Cargo.toml file, no code changes required.

Lando’s commitment is to always work on the stable Rust channel. Lando applications should not break without intention. Rust’s recent stabilization of procedural macros enabled adding a simple attribute to your vanilla Rust functions to accomplish what the declarative macro did, but with more intuitive properties.

#[lando]
fn hello(
    _: Request,
    _: LambdaContext
) -> Result<impl IntoResponse> {
    Ok("hello lambda")
}

The exported function name is what you’d expect: the name of the function! Note: there is an unresolved implementation detail here in that a crate can only export one lando function with this proc macro, but it’s a solvable problem.

Welp. That’s it. A Rust HTTP application with no server dependencies required. 🎉

To dispel the villagers' fear of magic, there is very little going on here. Lando leverages crowbar, which provides a means of exporting a dynamically linked native linux binary that the Lambda Python runtime can load as a C extension module on startup 🐍. This allows your function to be easily deployable in one of AWS’s lowest cold start overhead runtimes. It also means your function’s performance can be faster and more consistent than typical Python code by virtue of running a precompiled native binary. This binary can be invoked with very low overhead at native speed and with a very low memory and CPU footprint, thanks to Rust’s almost invisible runtime.

The sizing tailor

It’s healthy to be suspicious of convenience, as it can sometimes come at a cost. Serverless applications require a different mindset than traditional applications. With traditional applications your dependencies don’t tend to have a lasting effect on the operational aspects of your running code, as your application is long lived and responsible for many varied operations which can potentially benefit from those dependencies. In serverless, your application scope is now just a function, and your runtime’s efficiency can be bottlenecked by your function’s dependencies. As a reminder, a lambda’s lifecycle 1) starts by downloading your code, 2) unpacks it from a zip archive, and 3) loads it into a runtime container. The dependencies you bring on board can have a negative effect on any or all three of these steps. The less there is to download and the less there is to unpack, the faster lambda can start running your function. Lando adds a dependency to your function. The flip side of that cost is the use of familiar types and correctness guaranteed by design: your inputs and outputs will always be well formed. The added assumption here is that for most network applications the http crate is already going to be present, and its narrow focus will keep it tiny.

Extensionally well done

Lando’s API is not meant to be limited by its use of the http crate. Lando does not hide API Gateway enrichments. This is made possible by a really great feature of Rust as well as a really great and under-hyped feature of the http crate.

In Rust, data and behavior are intentionally separated in definition. You can, however, call them together onto the dance floor inside an impl block 💃. This lends itself to the extension pattern in Rust. The recipe for this pattern goes something like this: you define some domain specific interface extension as a trait and then provide an implementation of that trait for some target data types. You’ll find examples of this in the futures crate with FutureExt and StreamExt.
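The recipe fits in a few lines. `Shout` below is a made-up toy trait, not from any crate, but the shape is exactly what FutureExt, StreamExt, and lando’s own RequestExt follow: a trait declaring new methods, then an impl for a type you don’t own.

```rust
// The extension pattern: bolt new behavior onto an existing type
// via a trait. `Shout` is a hypothetical example for illustration.
trait Shout {
    fn shout(&self) -> String;
}

// Implement the extension for `str`, a type defined outside our code.
impl Shout for str {
    fn shout(&self) -> String {
        self.to_uppercase() + "!"
    }
}
```

Anyone who brings the trait into scope with `use` gets the new method on the existing type; without the `use`, the type is untouched.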

The http crate has a really useful storage API hidden in plain view of its otherwise uninteresting interfaces for transporting additional request information in a typesafe way, called Extensions. It’s a storage facility like a HashMap where the key is a type and the value is an instance of that type. A tradeoff here is that you can only store one value for a given type. You can, however, compose values with a container type that stores multiple fields of the same type. This is really great because it means applications and frameworks that use the http crate can enrich it as desired without reinventing the wheel for what’s common. I expect to see this feature used in more domain specific utility crates like lando in the future.
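To see why "the key is a type" yields a typesafe API, here is a minimal sketch of an Extensions-style map built on std’s Any machinery. This is an assumption-laden toy, not http’s actual implementation, but it captures the one-value-per-type tradeoff described above.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// A typed storage facility: the key is the value's type, so at most
// one value of a given type can be stored at a time.
#[derive(Default)]
struct Extensions {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl Extensions {
    // Inserting a second value of the same type replaces the first.
    fn insert<T: 'static>(&mut self, value: T) {
        self.map.insert(TypeId::of::<T>(), Box::new(value));
    }

    // Lookup is by type parameter; the downcast can't fail for a
    // value stored under its own TypeId, hence the typesafe feel.
    fn get<T: 'static>(&self) -> Option<&T> {
        self.map
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref::<T>())
    }
}
```

A framework like lando can stash its own newtype (say, a path-parameters struct) without colliding with anything an application stores.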

In the case of lando I’m of course interested in extending http::Request with information particular to API Gateway, as well as some convenience and productivity helpers.

In particular, API Gateway provides pre-parsed query and path parameters and context specific information intended to be consumed by an application. Those types are stored as typed keys using http’s extension API, then ergonomically exposed with an extension trait. You can see that in action here.

use lando::RequestExt;

// print the value of path param /resource/{id}
// for path /resource/123
println!("path id => {:?}", request.path_parameters().get("id"));

// print the value of query string param ?foo=bar
println!("query foo => {:?}", request.query_string_parameters().get("foo"));

Oftentimes requests are sent with structured payloads, using either form-encoded or application/json request bodies. Lando’s RequestExt offers a helper method that will correctly deserialize a type-safe value depending on the negotiated content type. Because there are two cases to handle, no body and mismatched fields, the result is represented as a Result<Option<T>> type.

use lando::RequestExt;

#[derive(Deserialize, Debug, Default)]
struct Args {
    x: u32,
    y: u32
}

// payload() yields Result<Option<Args>>; treat a deserialize
// failure the same as an absent body here
let args: Option<Args> = request.payload().unwrap_or_default();

Many API Gateway interfaces are exposed as String keys and values. In earlier versions of lando these were represented as a HashMap<String, String>. In practice this felt less than ergonomic, as the majority of cases are read-only borrows of the inputs. lando::StrMap replaced these. It is like a HashMap, but specialized for the more common usage in lando.
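The gist of that specialization can be sketched as a thin wrapper whose lookups hand back borrowed &str values rather than owned Strings. The `from_pairs` constructor is hypothetical, and lando’s actual StrMap differs in implementation detail; this only illustrates the borrow-friendly shape.

```rust
use std::collections::HashMap;

// A read-only wrapper over owned strings, specialized for
// borrow-friendly lookups.
struct StrMap(HashMap<String, String>);

impl StrMap {
    // hypothetical constructor for illustration
    fn from_pairs(pairs: &[(&str, &str)]) -> StrMap {
        StrMap(
            pairs
                .iter()
                .map(|(k, v)| (k.to_string(), v.to_string()))
                .collect(),
        )
    }

    // Lookups borrow: callers read values without cloning Strings.
    fn get(&self, key: &str) -> Option<&str> {
        self.0.get(key).map(|v| v.as_str())
    }
}
```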

Bodies in API Gateway come in 3 types: none, text, and binary. You can learn more about them here. The http crate is non-prescriptive about how you represent response bodies. Lando expresses this as an enum called lando::Body, with From impls making it easy to coerce a variety of types into lando::Body instances.
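A sketch of that shape, assuming simplified variants and a couple of representative From impls (lando::Body’s real definition covers more conversions than shown here):

```rust
// The three API Gateway body shapes as an enum.
#[derive(Debug, PartialEq)]
enum Body {
    Empty,
    Text(String),
    Binary(Vec<u8>),
}

// Text-like values coerce to Body::Text; an empty string maps
// to Body::Empty in this sketch.
impl From<&str> for Body {
    fn from(s: &str) -> Body {
        if s.is_empty() {
            Body::Empty
        } else {
            Body::Text(s.to_string())
        }
    }
}

// Raw bytes coerce to Body::Binary.
impl From<Vec<u8>> for Body {
    fn from(bytes: Vec<u8>) -> Body {
        Body::Binary(bytes)
    }
}
```

The From impls are what let a handler return a plain string or byte vector and still yield a well-formed gateway body.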

Servings of tradeoffs

This post is not intentional serverless propaganda. To be fair, there are still very good reasons for writing and bundling servers with your HTTP application and running on something like fargate. It really depends on your needs. I’m sorry, you’re going to have to use your cranial facilities after all!

Ever turned your car on early in the morning in midwinter before leaving for work? Why did you do that? Likely so that when you actually did leave, the ride would be much smoother and more comfortable. In the serverless world there exists a similar phenomenon. It’s called the cold start problem. As an ephemeral runtime, AWS can spin up a new instance of your application at any time, and that can affect your application’s latency outside of your control. There are many reasons to use Rust other than it having no garbage collector pauses, but if you are using Rust for an HTTP application specifically because GC is too slow, you may want to think twice about pairing up with serverless, because a lambda cold start is likely just as bad as, if not worse than, a GC pause ❄⛄ The good news is that providers are innovating on the cold start issue due to competition, demand, and advancements in technology.

Web technologies like http2, server sent events, and websockets all have fingers that don’t yet fit serverless gloves. These are all cases that currently have a better fit traditional servers.

Concurrency at the language level can sometimes be at odds with the concurrency provided by the platform as a distributor of workloads.

The function level of scalability in AWS lambda requires a shift in thinking about concurrent workloads. In traditional web servers, language level concurrency primitives have many benefits: a server is responsible for handling many unrelated and independent operations, and blocking on a remote resource while new connections are knocking at the door can be a bottlenecking performance problem. That’s typically why async interfaces in those servers are ubiquitous. In lambda’s scaling model, each instance of a function is already independent ( thread safe as a result ) and is intended for a single specific task. A container instance will never handle more than one task at a time. This relates to lambda’s mode of scaling: when lambda receives a new request before your current work finishes, it spawns a new Lambda instance on your behalf. In that sense, concurrency is provided by the platform.

What you pay for is memory size, which provides proportional CPU; think of this as renting CPU time, not a number of CPUs. Keep this in mind when considering lambda for CPU bound operations. The shift in thinking with AWS Lambda moves toward scaling out your workload by spawning new lambdas. This is an area that deserves more research, but it’s been talked about before. I do not advocate for the clickbait in that post, but it’s an interesting perspective to consider given the nature of the lambda world we live in.

In summary, the current wisdom seems to be: when possible, go serverless for its many benefits, then fall back on traditional persistent servers for everything else that doesn’t fit the serverless model. Get used to the idea of your old best practices being the new anti-patterns, as we have different models of operation in a serverless world. Translating full web applications to single functions will leave you disappointed. Learn to think differently in this new solution space. There is definitely room for Rust in that solution space, especially with the community’s shared http crate.

I’m planning to put a small mdbook together to help illustrate examples in a bit more depth. I’ll post a link to that here when it’s available.

In the next post in this series I’ll go through a soup-to-nuts walkthrough of bootstrapping lando applications with continuous integration and deployment. ⚡




Doug Tangren
Meetuper, rusting at sea, partial to animal shaped clouds and short blocks of code ✍