Serverless Rust

A brief introduction to Serverless applications in Rust

Doug Tangren
13 min readOct 30, 2018

This is going to be the first post ( and hopefully not the last ) in a series of posts about writing ( and thinking about ) serverless applications in Rust. Stay tuned… 📺.

Earlier this year Rustlang cartographers sketched out an ambitious 2018 roadmap. One of the initiatives I’m most excited about is the focus on an improved foundation for specific application domains. Having all the game changing language features you can shake a stick at means little in practice if the ecosystem doesn’t have good stories for common areas where you can apply them. The burden of proof for a language’s true value can be lifted with success stories and principled solutions to common problem domains. In this post I want to share a glimpse into the future of Rust applications that live on the internet, but first, I’d like to do a thought exercise… 🧘

Asking questions

Thinking rationally ( a recommended approach to thinking ) should always start with asking questions.

When you brush your teeth in the morning, do you walk up to a sink with running water then walk away leaving the water running when you are finished 💧?

When you drive your kids to their soccer game, do you walk up to a running car, drop them off, drive home, pull into your driveway, shut the door, and leave the car running until next week’s game 🚗?

The answers here should be no because that would be a ridiculous waste of resources and money.

Let’s try some less ridiculous questions.

When you decide you want to finally go to market with your artisanal bichon frise cotton candy sticks™, are you going to build a store from scratch? Nope, you’ll probably just use Etsy.

When you move into the city from the suburbs and need to get around, are you going to continue to pay for an expensive parking spot where your car will live while you don’t use it the majority of the time? Nope, you’ll probably just grab an Uber.

When you live in Manhattan and you need a lighter mode of transportation, are you going to buy a bike that will occupy what little space you don’t have in your tiny New York apartment? Nope, you’ll probably just grab a Citi Bike.

This list could go on, but my point is that though these all seem like ridiculous and silly thought exercises, this is actually how we traditionally approach making applications that run on the internet. Of all the practical problems we could be solving with technology, we still seem stuck on making an interesting exercise out of wrapping the useful bits of our solutions in software that accepts connections from the internet on some arbitrary network port, wrapping that in a main method, and running that process for as long as we’ll let it ( or as long as it can before it falls over ), even when it’s not in use. We treat this as incidental complexity when it really might just be accidental.

Like leaving water running waiting for the next person that brushes their teeth or leaving the car running waiting for the next soccer game, we leave our servers running waiting for the next socket connection. This is often a waste of resources and money, especially for businesses.

Coming up for air in the cloud

We once ran internet applications on hardware we owned and operated. Eventually there was a paradigm shift when we realized we could virtualize this hardware and have someone else own and operate it for us at some competitive cost. We called this the cloud ☁️. There’s this infamous quote in our industry that goes,

“There is no cloud. It’s just someone else’s computer”

It even sold some t-shirts. It was stated to remind us there’s nothing magical about the cloud. We just allowed ourselves to narrow our focus a little closer to what’s differentiating about our applications by no longer repeating ourselves with what’s not.

In recent years, we’ve learned similar lessons with the popularity of containerization in the way we package applications for deployment. In many cases we’ve solved for some undifferentiating concerns but are still spinning cycles on reinventing others. This is where serverless comes into the picture. Recently I heard a variation of the previous quote that I can only assume will become its own meme.

“There is no serverless, it’s just someone else’s container”

The cost of reassembling servers and containers is diminishing in value now that they are commodities, much the same as hardware virtualization became a commodity and replaced the value in building hosts ourselves. Note that serverless, the paradigm, is not just about replacing servers but also about replacing the act of running services, which is also undifferentiating. A productive business focuses on utilizing resources efficiently. So why build what you can buy? Yes, in serverless, servers are involved. They just aren’t your concern anymore.

Clocking in on hype cycles

The example questions above were focused on human activities. When building software for humans, be mindful that most humans don’t tend to be very concerned with technology for technology’s sake. It’s a detail, a detail they don’t care to think about until it doesn’t work as expected. They generally assume software engineers are a rationally thinking bunch.

Peeling away Rust’s outward features, at its core it promotes the idea of safety without making sacrifices. For example, you normally put on oven gloves to prevent your hands from getting burnt when pulling something hot out of the oven, but you do so at the expense of losing almost all the dexterity of your fingers. Rust, on the other hand, allows you to safely pull the turkey out of the burning oven while making finger strings at the same time, all without burning your hands. That has tangible benefits ( for users ) in systems where large classes of errors that cause system defects and failures proliferate in the wild. In Rust, many of these are just no longer possible. This is a big deal. Your users couldn’t care less about what you have running on your servers, but they do care that it works and works reliably. This is where Rust shines, and it does so without sacrificing performance or programmer productivity*

* productivity is a primary focus of the Rust core team this year

As a language, it’s still working its way through its hype cycle, which is timely, as serverless is working its way through its hype cycle as well. We have strong indications that serverless is going to play a major role in the future of internet applications, but we are still “figuring it out”, so to speak. The same is true with Rust. In many regards, what differentiates Rust from the pack could represent the future of programming and the languages that come after, but we are still in the “figuring it out” phase with how to apply it. Mind you, betting on a language is more than betting on its syntax and semantics. It’s also the ecosystem and the processes for ecosystem evolution. Like Rust, serverless is on its own Oregon Trail journey through the same hype cycle. The technology trigger for serverless was commodity containerization, where the technology trigger for Rust is multifaceted: we’re in an era where software security and robustness play a huge role in our lives and privacy, and where shrinking devices need shrinking runtimes.

Showing up with working code

I’m fond of showing as well as telling but first I want to illustrate a point.

fn main() {
    Server::run(
        application,
        7878 // spells RUST on a phone dial
    )
}

Though fictitious, the example above is in essence what you’ll see with most web application frameworks today, likely right in a project’s readme.

When writing a traditional internet application, you are typically responsible for two things. Inside a main method you write and configure a server that listens on a port, accepts incoming connections, and dispatches control to your application to handle each event. Ports are typically arbitrary as long as they don’t conflict with other things listening for network connections on a host, but it’s still something developers think about with traditional internet applications. Now that your application has a main entrypoint, it’s up to you to figure out how to run it ( and keep it running ).

Keeping it running typically means one of two things: scaling up ( handling a lot in one process ) or scaling out ( spreading the work across processes ). Fortune typically favors scaling out, as it’s more robust against transient failure: parts of your system can fail and other parts will account for it. When hosting internet applications on servers, you have to figure out how much to scale up or out yourself. This often leads to over- or under-estimating what you end up provisioning. You may eventually find some compromising point of cost vs waste. Some platforms provide a form of “auto scaling” to manage this dynamically with a set of rules whose configuration is also your responsibility to get right ;)

In order to make sure your application is still running, you will likely need some way to answer that question without sitting and pinging the server by hand. It will likely need some form of metrics that indicate its “health”. Mind you, this has nothing to do with the health of your application, only the health of the server listening on ports hosting your service. You will also likely want to know what’s going on inside your application if you do detect health problems. You can write logs to a file, or may open a backdoor to “ssh” in and do some scooby doo detective work. Oh, but there may be many server processes! Scooby doo, where are you!?

This can be very time consuming. You will want some way for users to address your application, so of course you’ll need to set up dns records. Oh, and users will want to trust you are who you say you are and that the information they give you is secure, so you’re also going to need to set up tls certificates. There are many, many things you have to think about and will continue to be distracted by when running servers yourself. Most of these tasks have nothing to do with your actual application, but they are tasks you tend to repeat and are responsible for in every new internet application you create. These properties of servers tend to be purely operational. If you are into this kind of thing, there are many options to keep you entertained in the tech industry.
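To make the fictitious example concrete, here’s a minimal sketch ( standard library only; the names, the port, and the stand-in client are illustrative, not from any particular framework ) of just how little of a traditional server is actually *your* application:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// The tiny slice that is actually *your* application.
fn respond(_request: &str) -> String {
    "HTTP/1.1 200 OK\r\ncontent-length: 2\r\n\r\nok".to_string()
}

// Everything below is the undifferentiated plumbing you own:
// binding a port, the accept loop, and keeping this process
// alive, scaled, healthy, and observable.
fn handle(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).unwrap_or(0);
    let request = String::from_utf8_lossy(&buf[..n]).into_owned();
    let _ = stream.write_all(respond(&request).as_bytes());
}

fn main() {
    // port 0 asks the OS for any free port; a real server would pick
    // an arbitrary fixed one ( 7878 spells RUST on a phone dial )
    let listener = TcpListener::bind("127.0.0.1:0").expect("failed to bind port");
    let addr = listener.local_addr().unwrap();

    // stand-in client so this sketch terminates; in production the
    // accept below would loop for as long as the process stays up
    let client = thread::spawn(move || {
        let mut stream = TcpStream::connect(addr).unwrap();
        stream.write_all(b"GET / HTTP/1.1\r\n\r\n").unwrap();
        let mut response = String::new();
        stream.read_to_string(&mut response).unwrap();
        response
    });

    let (stream, _) = listener.accept().expect("accept failed");
    handle(stream);

    println!("{}", client.join().unwrap());
}
```

Nothing about the accept loop, the port, or the process lifecycle is specific to this application, yet in the traditional model you carry all of it into every new project.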

For those who would rather focus on applications, the serverless space has a growing number of options and they do not exclude Rust.

I’ve been spending some spare time on making the Rust story for serverless more or less seamless, or at least as seamless as it can be for now. To make that happen, it behooves us to focus in on a platform. There are a number of platforms that offer a “functions as a service” product where you essentially “bring a function”, literally a function unit of code, and they run it at a very small cost on a per-use scale. They typically bring with their offering a variety of ways to invoke or trigger your function. I’m going to be focusing mostly on AWS Lambda, but note that there are others in the same space. That competition is good ( for you ) because it means prices are only going to come down and quality is only going to be forced up. I’m choosing AWS because it is the most mature offering ( read: stable ) and fits all of my personal use cases for applications well, as it has good integrations with AWS services I’m already using.

AWS Lambda being a mature offering, you get to reap the benefit of the ecosystem that has already built up around it. Most recently AWS has been trying to catch up with that existing ecosystem with its own provided tools like sam, but tools like this are in many ways still quite behind the ecosystem tooling that’s had the chance to mature and grow, like the serverless framework. The AWS Lambda model for functions is that AWS provides runtimes for target languages and you provide the code for those languages. Unfortunately Rust is not yet officially supported as a Lambda runtime, but could easily one day be 🤞.

Despite Rust being a high level language, it’s very capable ( with intention ) of being embedded anywhere you might consider running a low level C program. When I stepped onto the Rust serverless scene there were already a few Rust projects taking great advantage of this capability. One such project that caught my eye was crowbar, a crate that makes it possible to expose a Rust function as a cpython initializer for use within a lambda python runtime 🐍. What makes crowbar particularly interesting to me is that there have been many independent attempts to study the behavior of these runtimes in practice. In almost all cases Python has the lowest overhead, though, python being an interpreted language, it doesn’t end up being the fastest after other runtimes have had a few warm up cycles.

https://medium.com/@nathan.malishev/lambda-cold-starts-language-comparison-%EF%B8%8F-a4f4b5f16a62

The reason the lambda Python runtime’s overhead is so low likely has to do with optimizations AWS Lambda has been able to make, since it’s the supported runtime that’s been productionized within their product the longest. It being a heavily productionized runtime also makes it an ideal candidate to partner with Rust. AWS Lambda in particular gives you only a few knobs to turn to tune your runtime, in particular memory allocation. The memory you allocate affects the proportional amount of CPU made available to your application. Applications that can make the most efficient use of CPU while retaining a small size are optimal for Lambda’s performance constraints. It turns out that Rust is very good at being CPU efficient under memory constraints. Improving a host runtime’s performance and safety is actually a target market for Rust; AWS lambda runtimes just happen to be one of many. For these reasons, crowbar is ideal today while we wait for more official AWS Lambda Rustlang support.
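For a flavor of what crowbar code looks like, here’s roughly the shape of a handler ( adapted from crowbar’s readme; treat the exact macro and crate versions as a sketch to be checked against its current docs rather than gospel ):

```rust
// crowbar exposes your Rust function to the lambda python runtime
// as a cpython extension module; the lambda! macro wires up the
// handler the runtime will invoke.
#[macro_use(lambda)]
extern crate crowbar;
#[macro_use]
extern crate cpython;

lambda!(|event, context| {
    println!("hello cloudwatch logs from {}", context.function_name());
    // echo the triggering event back as the function's result
    Ok(event)
});
```

Note there is no main method and no server here: the Python runtime owns the process, and your function is just a library it loads.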

Okay, so we all know how to write Rust. So what’s the hold up? With Rust, you compile code for the target runtime it will run on. In the past, it was awkward to figure out how to do this correctly for the AWS Lambda Python runtime. Today that is no longer the case. In fact, that’s where I’ve been spending most of my attention. As mentioned above, the serverless world has a strong and mature ecosystem, so it behooves us to work well with it rather than reinventing undifferentiated wheels ( notice a theme here? ). Two key factors make working with AWS Lambda and Rust easy today, aside from the crowbar crate: Docker and the Serverless framework.

There is a really amazing project centered around creating a reproducible CI environment for lambda runtimes; not surprisingly, it’s called lambda-ci. What this project generously makes publicly available is a set of docker images that faithfully reproduce the AWS Lambda runtimes these functions target, including Python 3.6! What’s left is the small work of integrating the Rust toolchain. The rustup tool makes this embarrassingly simple. I’ve packaged this as a public docker image which you can then use to build a lambda-ready deployable binary inside your cargo project’s `target/lambda` directory.

A crowbar application could then be easily built to run on Lambda with the following docker command.

$ docker run --rm \
-v ${PWD}:/code \
-v ${HOME}/.cargo/registry:/root/.cargo/registry \
-v ${HOME}/.cargo/git:/root/.cargo/git \
-e CARGO_FLAGS="--features python3-sys" \
softprops/lambda-rust
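For that build to produce something the Python runtime can load, your crate needs to compile as a dynamic library. A Cargo.toml along these lines ( the lib name, version numbers, and feature wiring are illustrative; check crowbar’s docs for the current ones ) does the trick:

```toml
[package]
name = "my-lambda"
version = "0.1.0"

[dependencies]
crowbar = "0.2"
cpython = "0.1"

[features]
# matches the CARGO_FLAGS passed to the docker command above,
# selecting python3 bindings for the python3.6 runtime
python3-sys = ["crowbar/python3-sys"]

[lib]
# referenced from the lambda handler setting as liblambda.handler
name = "liblambda"
crate-type = ["cdylib"]
```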

However, I believe we can do better.

No, that’s not an https://en.wikipedia.org/wiki/Arashikage clan tattoo.

The Serverless framework is not a framework for applications in the traditional way we think about application frameworks. Don’t let that worry you. In fact it has zero impact on the code you write. Instead, it’s a framework for workflows. It simplifies many of the tasks required to build and deploy serverless applications, based on knowledge and practices extracted from many years of experience with multiple function as a service providers, including AWS. Its role is not to change the way you write your application. Instead, its role is to facilitate pathways for more productive workflows. Where it’s got the biggest leg up on its competition is an ecosystem full of workflow plugins. Realizing that it would never be able to read a crystal ball and solve all your workflow needs, it’s designed to be easily extensible so that you can enable it to do what you need. That’s essentially what I did with serverless-rust, a serverless framework plugin that facilitates seamless serverless workflows for Rustlang AWS Lambda applications. For a point of reference, this is how you typically deploy a serverless framework application for a supported AWS language into a native AWS runtime. This has become a familiar workflow to many.

$ serverless deploy

For comparison, this is how you deploy a Rust serverless application

$ serverless deploy

Tada! There is no difference. Why was that important to me? For many engineers and organizations already using the serverless framework ( and there are many ), when introducing a new technology, being able to reuse knowledge and familiarity with existing tools is key to productivity and often adoption. If you know how to deploy a serverless framework application, you already know how to deploy a Rustlang serverless application. That’s very powerful.
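Under the hood, that sameness comes from a plain serverless.yml; something along these lines ( the service name, handler, and event wiring are illustrative, not prescriptive ) is all that marks the project as a Rust one:

```yaml
service: my-new-app

provider:
  name: aws
  # the function executes inside the lambda python runtime,
  # which loads the crowbar-built cpython module
  runtime: python3.6

plugins:
  # compiles and packages the Rust library during `serverless deploy`
  - serverless-rust

functions:
  hello:
    # module.function exported by the crowbar cdylib
    handler: liblambda.handler
```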

Another productive aspect of the serverless framework is its ability to quickly bootstrap new applications with templates. That too is a point of extension. I’ve done just that for crowbar applications with this serverless template. Below is what it takes to get an application off the ground and into production in one step.

$ serverless install \
--url https://github.com/softprops/serverless-crowbar \
--name my-new-app \
&& cd my-new-app \
&& AWS_PROFILE=prod make dependencies deploy

To me this is one of the key game changers for the future of internet applications. Rather than focusing time and energy assembling ( and reassembling as hot new frameworks pop up ) servers that wrap your application, you instead leverage a platform that takes away those concerns so you can just focus on writing code without thinking about servers. The result is internet applications that can be born in production. That’s kind of a game changer for how fast organizations could be moving, and empowering for engineers that have been far removed from any operational ownership over the code they write.

In the next post, I’m going to shift focus to another project I’ve been working on that facilitates a productive AWS Lambda workflow for AWS API Gateway applications with stronger and more familiar types, by extending crowbar with Rust’s very own generalized http crate. HTTP applications with no main methods and no servers. Imagine that! Check back soon.
