Web service with Rust, Rocket and Diesel

A hands-on walk-through on how to create a REST API with Rust, Rocket and Diesel

Marco Amann
Digital Frontiers — Das Blog
12 min read · Feb 14, 2020


Rocket, ready to launch. Photo by Felipe Simo on Unsplash

What and why

If you are comfortable writing web-services in Java, why should you try writing a web-service in Rust?

Microservices enable us to quickly scale our application according to consumer needs, which requires reasonable start-up times to achieve elasticity. In a serverless scenario, this aspect is even more important. But is it worth learning a new language and several frameworks just for a few milliseconds? Obviously not. But Rust has more in store than brisk startup times and raw performance. Over the course of this article, we will build a web service and along the way look at several aspects that make Rust an intriguing alternative for your next web service.

Creating the API

Before detailing how to use Rocket, let's have a quick overview of the available options for writing HTTP endpoints in Rust; you can refer to Are We Web Yet? (AWWY) for more frameworks and libraries.

Ecosystem overview

There are several low-level frameworks like hyper, or even the raw TcpListener from the standard library if you need to build something truly minimal. But since I do not care that much about the last few nanoseconds of performance or the size of dependencies, I opted for a higher-level framework.

There is gotham, which claims to focus on stability and hence works with stable Rust, but coding with it leads to somewhat verbose code, especially if you are used to the nice annotation syntax common in frameworks like Flask or Spring. Further, there is iron, which I skipped for this article since it lacks beginner-friendly documentation. A promising framework is actix, which is based on an actor model and is insanely fast, but it was in the midst of a hefty refactoring at the time I had to decide on a framework, so I chose to postpone its evaluation. The framework that gave me the best first impression, and was therefore evaluated further, is rocket.

Starting the Rocket

The Rocket framework describes itself as follows:

Rocket is a web framework for Rust that makes it simple to write fast, secure web applications without sacrificing flexibility, usability, or type safety.

That sounds pretty nice, so how do you actually use it? First of all, since Rocket relies heavily on macros, you need a nightly toolchain to compile Rocket. If you have rustup installed, this is achieved with a simple rustup toolchain install nightly.

In our example, we will make use of automatic JSON handling, so we need to add that in addition to the default rocket dependencies in our Cargo.toml file:

[dependencies.rocket]
version = "0.4.2"
features = ["private-cookies"]

[dependencies.rocket_contrib]
version = "0.4.2"
default-features = false
features = ["json", "diesel_postgres_pool", "serve"]

With this, we can directly start coding our first endpoint. With Rocket, you add a macro to a function you want to have called by the framework. Let's create a simple health endpoint that returns ok if you send it a GET request:
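A minimal version of such an endpoint, assuming Rocket 0.4 on a nightly toolchain (the route path and function name are illustrative), might look like this:

```rust
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate rocket;

// The #[get] macro registers metadata so routes![] can pick the handler up.
#[get("/health")]
fn health() -> &'static str {
    "ok"
}

fn main() {
    // mount() takes a base path and the routes generated by the macros.
    rocket::ignite().mount("/", routes![health]).launch();
}
```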

The #[get(...)] annotation is a macro that enables Rocket to invoke the health method. In the main function, in the .mount() call, we make use of another macro, routes!, which complements the above get macro (you will see later what this does exactly).

If you want, you can already run that code with cargo run (optionally add the --release flag to get a binary with all the optimizations). The nice part about this is that we do not have any external dependencies in our resulting executable (except for glibc, but that should be installed anyway), so we do not have to care about matching Python versions, updating Ruby environments or selecting the correct JRE vendor when we deploy our code. This means that you can simply copy the executable to your server and run it there. But let's get back to the code for now.

Since an endpoint without any parameters is quite boring, let’s expand the example with a request object. (I will omit adding the routes since this is always the same here).

We can simply add strings:

#[get("/hello/<name>")]
fn hello(name: String) -> String {
    format!("hello {}", name)
}

and more complex objects:
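One possible shape of such a handler, assuming a Stay struct with illustrative fields and the json feature of rocket_contrib:

```rust
// Sketch: the field names and the response body are illustrative.
use rocket_contrib::json::Json;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Stay {
    guest: String,
    nights: i32,
}

// The request body is deserialized from JSON into a Stay for us.
#[post("/stay", data = "<stay>")]
fn stay(stay: Json<Stay>) -> String {
    format!("{} stays for {} nights", stay.guest, stay.nights)
}
```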

In the above example, the Stay type used in the function is directly parsed by Rocket and injected into the function call. How does this work? There is surprisingly little magic involved here: every type that can be magically generated from a request has to implement one of Rocket's conveniently named conversion traits, FromParam for path segments and FromData for request bodies. The basic, built-in types like String or i32 already do this, so you are done in that case. Since we wrapped our Stay type in a Json type, it is actually Json that has to implement FromData, not our Stay type. Since Json comes from the rocket_contrib crate and was designed to do exactly this, we are pretty much set. The only thing to note here is that types used with Json must derive the serde traits (#[derive(Serialize, Deserialize)] above the struct declaration).

But request guards, backed by the FromRequest trait, can do much more than parameter parsing. Let's have a closer look at a nice use case: authentication.

fn super_secure_function(id: i32, user: AuthUser) -> ... {

The user and id parameters of our function are so-called request guards: parameters that are injected by Rocket and "guard" the function. If either of them fails to be created successfully, the function is not called and an ErrorCatcher is invoked instead. This is an easy way to validate basic input data, and it can also be used to implement authentication. Let's do this for the AuthUser struct: (Ignore the lifetimes ('a, 'r) for now.)

This is the (shortened) implementation of FromRequest for the AuthUser. It reads the username from a private cookie (encrypted and MAC-ed with a secret key) called user_id (L4-6) and either maps it to an AuthUser via its constructor (L8) or returns an error message (L7).
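A sketch consistent with that description, assuming Rocket 0.4's private-cookie API (the user_id cookie name is taken from the text above, everything else is illustrative):

```rust
use rocket::http::Status;
use rocket::request::{self, FromRequest, Request};
use rocket::Outcome;

struct AuthUser {
    name: String,
}

impl AuthUser {
    fn new(name: String) -> AuthUser {
        AuthUser { name }
    }
}

impl<'a, 'r> FromRequest<'a, 'r> for AuthUser {
    type Error = &'static str;

    fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
        // get_private() transparently decrypts and verifies the cookie.
        match request.cookies().get_private("user_id") {
            Some(cookie) => Outcome::Success(AuthUser::new(cookie.value().to_string())),
            None => Outcome::Failure((Status::Unauthorized, "invalid user cookie")),
        }
    }
}
```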

Thanks to the type safety of Rust, we do not need to care about malformed input in the parameters: either they are correctly formed (but not necessarily valid) or the code is not executed. If a parameter is optional, we need to use an Option and handle it accordingly. This means we will never encounter a null String in the function itself.
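To illustrate the point about optional parameters in plain Rust (no Rocket involved; names are illustrative):

```rust
// Absence is encoded in the type: the compiler forces us to handle
// both cases, so a "null String" cannot slip through.
fn greeting(name: Option<String>) -> String {
    match name {
        // The value, if present, is guaranteed to be a well-formed String.
        Some(n) => format!("hello {}", n),
        // Absence has to be handled explicitly; there is no null to forget.
        None => String::from("hello stranger"),
    }
}
```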

But request guards can do even more, let’s add a database connection for saving our Stay structs somewhere:

#[post("/stay", data = "<new_stay>")]
fn stay_new(new_stay: Json<NewStay>, con: DbCon) -> ... {

So if we post something to /stay, the request body is interpreted as a NewStay struct and we can save it with our DbCon.

In the next section, we will have a look at diesel, our ORM, how Rocket injects the database connection into our handler-method and how we can use it.
(If you wonder why I have used NewStay instead of Stay like above, read on to find out why we need this here and why I think this is unfortunate.)

Connecting the Database

A rusty barrel full of diesel? (also resembling the shape of a DB in a flowchart?) Photo by Aleks Dorohovich on Unsplash

Since we want to save our Stay structs in a database, we need to connect our application to one. Postgres was already installed on my machine, so I chose it for the backend. Rust provides several libraries to achieve our goal, but since we are already using Rocket, it was tempting to pick something that already has nice integrations for it: prepare to meet Diesel.

Diesel basics

Diesel is an ORM for Rust that works well with postgres and integrates nicely into Rocket, even with a ready-made connection pool provided by r2d2.

The diesel workflow is as follows:

  1. Initialize the schema: diesel setup
  2. Generate a migration: diesel migration generate migration_name_here
  3. Write some SQL or do some tricks (see below)
  4. Apply the migration: diesel migration run

The first command simply generates the necessary folder structure, as does the second one. The last command analyzes the state of the database and decides which migrations need to be applied, based on the directory structure and the files contained therein.

Now to the interesting part: Writing SQL…

Why does diesel require us to write SQL? Actually, it does not require us to do so, it enables us: by using an external tool, we can generate the craziest SQL structures based on a custom DSL or fancy diagrams that then compile to SQL. In addition, we can tweak that SQL to our needs, e.g. by adding custom indices and whatnot. In my opinion, this is a pretty clever approach, since there are already lots of SQL-generating utilities out there and this further facilitates integrating the Rust application with your existing infrastructure.

On the other hand, using SQL is quite a restriction, since many details expressible in our Rust code cannot be mapped to SQL.

Diesel generates the following schema-macro from the above SQL:
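The generated macro looks roughly like this, assuming an illustrative drive table with a single destination column:

```rust
// Roughly what diesel writes into schema.rs, given an up.sql along
// the lines of:
//   CREATE TABLE drive (id SERIAL PRIMARY KEY, destination VARCHAR NOT NULL);
// (table and column names are illustrative)
table! {
    drive (id) {
        id -> Int4,
        destination -> Varchar,
    }
}
```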

This is a macro that creates a lot of code, in this case, the drive table corresponds to a module with 946 lines, so be glad you can use diesel and don’t have to write it yourself.

Using diesel is quite simple, consider the following two functions, inserting and selecting from the drive table:
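A sketch of what these two functions could look like, assuming the generated drive schema module from above and illustrative Drive/NewDrive structs:

```rust
use diesel::pg::PgConnection;
use diesel::prelude::*;

// Maps rows of the drive table; field order must match the schema.
#[derive(Queryable)]
struct Drive {
    id: i32,
    destination: String,
}

// Insertable variant without the database-assigned id.
#[derive(Insertable)]
#[table_name = "drive"]
struct NewDrive {
    destination: String,
}

fn create_drive(new_drive: &NewDrive, con: &PgConnection) -> QueryResult<Drive> {
    diesel::insert_into(drive::table)
        .values(new_drive)
        .get_result(con) // returns the freshly inserted row, id included
}

fn get_drive(id: i32, con: &PgConnection) -> QueryResult<Drive> {
    drive::table.find(id).first(con)
}
```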

Apart from the clumsy &** syntax, it feels quite natural to use the diesel APIs if you have used other low-level ORMs. But where do we get the DbCon object from? Luckily Rocket supports this, and by adding the following to our setup code:
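A sketch of that setup code, assuming the diesel_postgres_pool feature from our Cargo.toml and a matching database entry in Rocket.toml (the name postgres_db is illustrative):

```rust
#[macro_use]
extern crate rocket_contrib;

use rocket_contrib::databases::diesel;

// Ties the DbCon request guard to the "postgres_db" entry in Rocket.toml
// and backs it with an r2d2 connection pool.
#[database("postgres_db")]
pub struct DbCon(diesel::PgConnection);
```

The pool then has to be attached as a fairing in the launch code, e.g. rocket::ignite().attach(DbCon::fairing()).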


and the following to the launch code, we can have the connection object directly injected into our handler functions by Rocket:

fn stay_get_by_id(con: DbCon, id: i32) -> ...{

Now, if one is going to build the application in a layered manner, decoupling the view representation from the persistence by using a service layer to encapsulate the business logic, we have a bit of a dilemma.
Since the DbCon is created at the beginning of the request handling, we need to pass it down to the service layer and from there to the persistence layer. This is not only inconvenient but also requires the view layer to be concerned with the exact type of the database connection. This can be solved by hiding the specific types in some construction code of the service in a from_request method, but this requires a lot of boilerplate code or a macro to generate said code.
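One possible shape of such a from_request construction, sketched with an illustrative StayService that wraps the DbCon guard:

```rust
use rocket::request::{self, FromRequest, Request};

// Sketch: the service itself becomes a request guard, so handlers only
// ever see StayService and the connection type stays a private detail.
pub struct StayService {
    con: DbCon,
}

impl<'a, 'r> FromRequest<'a, 'r> for StayService {
    type Error = ();

    fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
        // Delegate to the connection guard and wrap the successful result.
        DbCon::from_request(request).map(|con| StayService { con })
    }
}
```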

In the next section, we will take a look into the internals of Rocket to see how we could solve this problem.


Photo by Leonel Fernandez on Unsplash

Under the hood

What does Rocket do under the hood? This is an important question to ask if we want to be able to adapt or extend its behavior, so let's look into this.

If you don’t want to get your hands dirty with generated code, you can skip to the next section.

Consider the following code sample:
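Judging by the names in the expansion walkthrough below, the sample is essentially the hello route from earlier; a sketch:

```rust
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate rocket;

#[get("/hello/<name>")]
fn hello(name: String) -> String {
    format!("hello {}", name)
}

fn main() {
    rocket::ignite().mount("/", routes![hello]).launch();
}
```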

That tiny thing is pretty innocuous, but macro expansion creates the following behemoth: (Don't get scared away: you can view it with cargo expand, but while developing applications with existing Rocket features, you will most likely never encounter such a hideous abomination of code.)

Let’s briefly walk through that code to understand what rocket does here:

  • The finished main function (L70) is not that different from the original: apart from the wrapping of the route into a vector of routes, the simple hello function was swapped out for static_rocket_route_info_for_hello (L74). We can find that struct directly above (L62), containing our metadata like the path or the request method, as well as a handler rocket_route_fn_hello (L66). That is a function (L22) that has been wrapped around our own function (L10). Apart from the expansion of the string-formatting macro, our function is exactly the same, except for the missing macro on top.
  • Let's dig into the wrapper function rocket_route_fn_hello (L22). It gets passed a request, from which it tries to parse a struct of the desired type (in our case String). It then either calls an error handler by forwarding the request or, if everything went well, invokes the original hello function, whose return value it uses to generate a response usable by Rocket.

Except for the uri macro (L59), covered below, there is no magic going on here, and we can pretty easily adjust Rocket-generated code to our liking; I will discuss this in a different post.

Note that the route macros actually create a custom rocket_uri_macro_hello! macro to be used by user code. You can, for example, write something like this: uri!("stay", id = 32, slug = "whoever") to get a redirect URI for passing back to the client. The choice of generating macros for the user code by internal macros is an interesting one, to say the least.


Limitations

Although Rocket makes developing web services with its supported features easy, it is quite hard to implement some patterns you are used to from other frameworks.

  • It is hard to keep state based on the connection itself. Whether this is a good pattern at all can be discussed, but nonetheless it is common in some software and hard to replicate in our setup without some wrapper code around the internal state of Rocket.
  • Without a bit of trickery, you would need to pass database-related connection structs all the way through your service layer down to the persistence. Although a wrapper around a constructor of the service struct, in combination with the from_request implementation of Rocket, can hide this from the user, it does not solve the underlying problem that the view layer has to know details about the database.
  • Diesel only supports struct fields with primitive types. This means that you cannot have a User struct persisted in a database that contains an address field of its own struct type; this scenario has to be solved by referring to the address via its key.
  • If a service wants to create a new record in the database, say an ApiUser, the id of the ApiUser has to be filled. But if the database determines that id, e.g. with a postgres serial, the service cannot know the id beforehand. So what does it fill in that field? You could argue that -1 would be a valid choice to signal that the field has an unknown value, but that breaks the semantics of the id field in my opinion. Another option would be wrapping the field in an Option, allowing diesel to fill in the value if known. That approach also violates my assumptions about an id field: how can there be records without their primary key? The "official" solution is using a NewApiUser struct that lacks the unknown fields and is used for the sole purpose of being inserted into the database. I see this as a bit of a hack, but it works, and a procedural macro could easily be defined to generate those "New…" structs, without the critical fields, from annotated ones.
  • Diesel is further limited to a supported set of features expressible in SQL for its code generation. This includes composite keys and such but other things like more complex joins are missing.
  • Rocket does not support using functions implemented on structs or traits as request handlers. This can be solved with a declarative macro (e.g. see here) but is nonetheless an unfortunate hack. This also stands in the way of trait objects being annotated with something like #[CrudHandler], allowing automatic generation of boilerplate code.
  • Build times: a clean rebuild from scratch (cargo clean && cargo build --release) takes about 4 minutes, so you need to buffer intermediate build artifacts in your CI. A simple build like you would use in development takes about 7 seconds, so that should not slow down your development workflow.
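To illustrate the "New…" pattern from the ApiUser bullet above, a sketch assuming a diesel table api_users with a serial id (names are illustrative):

```rust
#[macro_use]
extern crate diesel;

use diesel::prelude::*;

table! {
    api_users (id) {
        id -> Int4,
        name -> Varchar,
    }
}

// Read side: the full row, including the database-assigned id.
#[derive(Queryable)]
pub struct ApiUser {
    pub id: i32,
    pub name: String,
}

// Write side: no id field, the database fills it in on insert.
#[derive(Insertable)]
#[table_name = "api_users"]
pub struct NewApiUser {
    pub name: String,
}
```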


Conclusion

After writing a small sample application, Rocket feels more like a fast version of Flask (or a hint of Django) than like a Spring Web alternative. This is partially intentional, since it takes a much more lightweight approach.

The strict ownership system of Rust gives you strong safety guarantees, and its syntax allows you to write concise code that is easy to reason about and not as verbose as other languages tend to be. This comes at the cost of some more or less hacky workarounds to enable certain functionality.

All in all, I would argue that Rocket and Diesel are a valid combination for developing certain types of services, especially if they can be kept small enough that you do not need the features provided by Spring and can focus on performance, startup times, safety or maintainability.

Another aspect where the discussed tech stack might be of interest is integration with low-level or system components, like a container runtime or micro-VMs. In such places, the dependency-less nature of a single executable really shines.

Thanks for reading! If you have any questions, suggestions or critique regarding the topic, feel free to respond or contact me. You might be interested in the other posts published in the Digital Frontiers blog, announced on our Twitter account.