Lessons learned on writing web applications completely in Rust
This blog post is an update to the preceding article “A web application completely written in Rust” and summarizes the project’s progress over the last months. Before continuing, please consider reading the initial blog post first.
The initial idea when starting the project four months ago was to evaluate the frontend and backend capabilities of Rust, a programming language mainly designed for systems programming. Iteratively working out a good software architecture is one of the major goals and leads to continuous change within the project.
I’m now happy to say that the current state of the project provides a good starting point for users who want to get into full-stack development with Rust.
Lots of things have changed under the hood in the last three months, and new features (like HTTP redirects) have been added to the application. I will focus here on the major software-architectural parts. Let’s start with the first big change, the project structure: I decided to use cargo workspaces for frontend and backend separation instead of a single crate. This opens up the possibility of a clearly separated internal and external API between the backend, frontend and core. It also means that the dedicated Cargo.toml manifests now separate the runtime and development dependencies more cleanly.
The core protocol
Another part which has changed is the removal of the Cap’n Proto protocol for backend-to-frontend (and vice versa) communication. The application never used the Remote Procedure Call (RPC) capabilities of Cap’n Proto, because this part does not build for WebAssembly right now. This means Cap’n Proto was mainly used for data structure generation, which did not fit nicely into the overall software architecture.
The main advantage of the removal is that source code generation is no longer necessary, which drastically reduces the complexity of the application. The protocol is now expressed directly in Rust structures, which means that we don’t have any code generation to overcome:
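As a minimal sketch, the shared session type can be a plain Rust struct (the field name below is an assumption; the real project additionally derives serde traits so the type can be CBOR de-/serialised):

```rust
/// Sketch of the shared Session structure (illustrative, not the
/// project's exact definition). The actual type also derives serde's
/// Serialize/Deserialize for the CBOR wire format.
#[derive(Clone, Debug, PartialEq)]
pub struct Session {
    /// The authentication token carried by every authenticated request
    pub token: String,
}
```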
The Session, including the authentication token, is now generically usable by the frontend, backend and the database driver, which is a huge improvement over the previous de-/serialisation approach. This means we do not need further abstractions on top of the database models: they can be used directly in backend and frontend. Hooray! 👏
Let’s have a look at the request and response messages:
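A hedged sketch of how such messages could look as plain Rust types (all names here are illustrative assumptions; the real types additionally derive serde traits for CBOR):

```rust
/// Illustrative session type shared by requests and responses
#[derive(Clone, Debug, PartialEq)]
pub struct Session {
    pub token: String,
}

/// Request: login with plain credentials
pub struct LoginCredentialsRequest {
    pub username: String,
    pub password: String,
}

/// Request: renew an existing session
pub struct LoginSessionRequest(pub Session);

/// Response: a successful login returns a (new) session
pub struct LoginResponse(pub Session);

/// Response: an empty struct, since there is simply nothing left to
/// transport after a successful logout
pub struct LogoutResponse;
```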
Nothing big has changed here, but as already mentioned we are able to use the Session in both requests and responses. Another fun fact is that the Logout response is represented by an empty struct type, because there is simply nothing to be done after a successful logout from the web application.
Another new feature of the protocol is that de-/serialisation is now done using CBOR, while the data is transmitted via REST (within the body of HTTP messages) instead of over a WebSocket connection. You might think that the additional overhead would have performance implications, but there were two major reasons to go for the more classic REST approach:
- Load balancing and resource sharing of the different API paths is easier (especially with actix-web) when they terminate in dedicated handler functions instead of a single WebSocket.
- The application is now ready to provide a third-party API interface to other users.

The API is defined within the core part, which is used by both the backend and the frontend:
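As a rough sketch, such a shared API definition could simply be a set of path constants in the core crate (the concrete constant names and paths below are assumptions, not the project's actual values):

```rust
// Hypothetical shared API paths. Backend handlers and frontend fetch
// calls both refer to these constants instead of hard-coded strings,
// so the two sides can never drift apart.
pub const API_URL_LOGIN_CREDENTIALS: &str = "login/credentials";
pub const API_URL_LOGIN_SESSION: &str = "login/session";
pub const API_URL_LOGOUT: &str = "logout";
```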
The change of the major protocol interface entails larger adaptations within the backend of the application. As an example, the “login with credentials” (username and password) handler function now looks like this:
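The real handler is an actix-web endpoint that unpacks CBOR and talks to the database via Diesel; as a framework-free sketch of its core logic (every name below is an illustrative assumption):

```rust
/// Illustrative session type returned on successful login
#[derive(Debug, PartialEq)]
pub struct Session {
    pub token: String,
}

/// Maps to an HTTP 401 (Unauthorized) response in the real handler
#[derive(Debug, PartialEq)]
pub enum LoginError {
    Unauthorized,
}

/// Sketch of the credential login flow: sanity-check the input first,
/// then create a session (the real code generates a proper token and
/// persists it in the database via the Diesel ORM).
pub fn login_credentials(username: &str, password: &str) -> Result<Session, LoginError> {
    // Empty username or password is rejected up front
    if username.is_empty() || password.is_empty() {
        return Err(LoginError::Unauthorized);
    }

    // Stand-in for real token generation and database insertion
    Ok(Session {
        token: format!("token-{}", username),
    })
}
```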
At first, we unpack the CBOR content from the HTTP request. Afterwards, a small sanity check returns an HTTP 401 (Unauthorized) error if the username or password is empty. This is another benefit over the previous WebSocket-based approach, since we now have a clearer error reporting interface via HTTP status codes. Furthermore, it is now possible to use the more concise future-based programming style the Rust futures crate offers. After the sanity checks, we create a new session token and put it directly into the database via the Diesel object-relational mapping (ORM). In general, the request handling is now better separated and much clearer than in the initial approach. Another big “hooray”! 👏
The first major change within the frontend is that the router now has its own repository and can be used by other yew-based applications too. Routing to dedicated components of the frontend is now really easy and can be done as in this (pseudo) example:
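The routing idea can be sketched without the framework: an enum of route targets that the root component matches on to pick what to render (the variant and component names below are assumptions; the real code plugs this into yew's view logic):

```rust
/// Illustrative route targets of the frontend
#[derive(Debug, PartialEq)]
pub enum RouterTarget {
    Loading,
    Login,
    Content,
    Error,
}

/// Resolve a route target to the component that should be displayed;
/// in the actual application this match arm returns yew components
/// instead of plain strings.
pub fn resolve(target: &RouterTarget) -> &'static str {
    match target {
        RouterTarget::Loading => "LoadingComponent",
        RouterTarget::Login => "LoginComponent",
        RouterTarget::Content => "ContentComponent",
        RouterTarget::Error => "ErrorComponent",
    }
}
```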
A fully working example is included within the RootComponent of the webapp.rs application.
Since we removed the WebSocket, the backend communication can now be done using the standard fetch API, for which yew already provides an interface. I decided to create a macro which allows easier usage, like this:
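A hypothetical shape of such a convenience macro, with a stand-in for the actual HTTP round trip (the macro name matches the idea in the text, but its exact signature in the project is an assumption):

```rust
/// Stand-in for the HTTP round trip that yew's fetch service performs
/// in the real application (purely illustrative)
fn send(path: &str, body: &str) -> String {
    format!("{}:{}", path, body)
}

/// Sketch of a fetch convenience macro: it takes the request, the API
/// path and a callback that receives the response. The real macro
/// would CBOR-serialise the request and deserialise the response.
macro_rules! fetch {
    ($request:expr, $path:expr, $callback:expr) => {{
        let response = send($path, $request);
        $callback(response)
    }};
}
```

The point of the macro is only to hide the repetitive request/response plumbing at every call site; the shared protocol types from the core crate do the rest.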
Again, the core data structures can be reused here for convenience, which is pretty awesome and makes backend interaction within the frontend seamless!
I’d like to mention that yew is still in a development phase and not yet ready for real production use. I think yew deserves more attention from the Rust community, so please, developers, contribute to that great project! In general, Rust is on a pretty good track for replacing web application stacks in the future, with the major construction areas being testing and WebAssembly.
So that’s the update for now. Feel free to directly get in touch with me, ask me questions or provide any feedback in one of the published communities.
Thank you very much for reading and keep on Rusting! ❤