Moving to TCPStream, Bye Tokio!

Preethi Kumar
Published in Adventures in Rust · Apr 13, 2017 · 4 min read

Hey people, the last time we wrote about our little web server project, we had built a simple static file server.

And you know, we were absolute newbies in Rust and were trying to get a hold of the ecosystem and the tooling while putting this together. Based on some quick research we found this promising networking library, Tokio. And we thought we could use it as a base for our project.

As Tokio’s website puts it, Tokio is —

A platform for writing fast networking code with Rust.

It sure sounded like something we’d want to use for our learning project. We dug through the guides, which were basically delivered as a few example apps. The first example they introduced us to, writing an echo server with Tokio, really showed how Tokio would let us cleanly implement a network stack, all the way from handling requests and responses down to the protocol being spoken.

We were very excited and managed to hack together a simple static file server. But then, at one point, our productivity started dropping. We hit the brakes, thought about our decision to go with Tokio, and finally decided to drop the idea and go with something else. Before we tell you about the replacement, here are our pain points. Hopefully we can learn a lesson or two from this.

  1. Hard time putting the abstractions to use — Since we were very new to Rust, we had a very hard time understanding the example code, reading through the docs, and figuring out how to make Tokio’s abstractions work for us and where their boundaries lay. We basically ended up spending more time learning about these abstractions (Codec, Protocol, Service, etc.) than building useful things on top of them.
  2. Navigating the samples and projects that use Tokio — Since we were absolutely new to Rust, navigating even a simple piece of code was a difficult endeavour. And when we tried to learn from some projects on GitHub that used Tokio to build things, we found ourselves jumping back and forth between the code and the docs a little more often than we’d like, and even though we eventually came to like the docs, they were not very intuitive in the beginning. Tokio required us to understand a little too much (io, core, proto, service, futures, etc.) and it all became overwhelming for us, fledgling Rust developers :)

In short, even after building the simple static file server we referred to in our first post, we still did not have a clear understanding of how things were wired together, from socket creation all the way to how the response was sent out. And this was a big problem for us, since the whole point of this project is to help us learn the very basics of building a server.

I’m sure if we come back to this post a few months down the line, we’re going to be bashing our amateur-selves for dropping Tokio for these reasons ;)

Enter Rust’s own std::net module

With Tokio gone, we looked toward Rust’s std::net module. After going through the docs and a few examples, we felt this was what we should have used in the first place.

Just take a look at this code —
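
(What follows is a minimal sketch rather than our exact code: a TcpListener bound to a local port, answering every connection with a canned Hello World response. The address and the response bytes are placeholders.)

```rust
use std::io::Write;
use std::net::TcpListener;

fn main() {
    // Bind a plain TCP listener and answer every connection with a fixed response.
    let listener = TcpListener::bind("127.0.0.1:8080").expect("could not bind to port 8080");
    for stream in listener.incoming() {
        let mut stream = stream.expect("failed to accept connection");
        stream
            .write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 11\r\n\r\nHello World")
            .expect("failed to write response");
    }
}
```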

It took us a fraction of the time we’d spent with Tokio to understand this and put these modules to use.

Here’s what we did next —

  1. We set up a basic server that simply responds with Hello World. Not very useful, admittedly, but always a good start.
  2. Planned out the initial features that we want our web server to provide:
    1. Static File Serving
    2. CGI Scripting Support
    3. Reverse Proxying
  3. Integrated an HTTP parser crate (there’s a rough sketch of what this can look like after this list).
  4. Set up a very basic, temporary, hard-coded router.
  5. Completed the static file server. (We’re not able to serve PDFs properly for some reason.)
  6. Added concurrency by spawning a new thread for each incoming request (also sketched below).
  7. Added 500 error pages for when there are errors ❤ ❤
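
To make points 4 to 7 a bit more concrete, here’s a rough sketch of a thread-per-connection server with a hard-coded router, static file serving and a 500 fallback. Treat it as an illustration under assumptions rather than our actual main.rs: the port, the public/ directory, the buffer size and the error handling are all placeholders.

```rust
use std::fs;
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn handle(mut stream: TcpStream) {
    // Read (up to) the first 4 KiB of the request; enough for a simple GET.
    let mut buf = [0u8; 4096];
    if stream.read(&mut buf).is_err() {
        return;
    }
    let request = String::from_utf8_lossy(&buf);

    // Hard-coded "router": only the path in the request line matters.
    let path = request
        .lines()
        .next()
        .and_then(|line| line.split_whitespace().nth(1))
        .unwrap_or("/");

    let response = match path {
        "/" => ok_response(b"Hello World".to_vec(), "text/plain"),
        // Everything else is treated as a static file under ./public.
        p => match fs::read(format!("public{}", p)) {
            Ok(bytes) => ok_response(bytes, "application/octet-stream"),
            // Any failure falls back to a 500 page here;
            // a real server would send 404 for missing files.
            Err(_) => error_response(),
        },
    };
    let _ = stream.write_all(&response);
}

fn ok_response(body: Vec<u8>, content_type: &str) -> Vec<u8> {
    let mut resp = format!(
        "HTTP/1.1 200 OK\r\nContent-Type: {}\r\nContent-Length: {}\r\n\r\n",
        content_type,
        body.len()
    )
    .into_bytes();
    resp.extend(body);
    resp
}

fn error_response() -> Vec<u8> {
    let body = b"<h1>500 Internal Server Error</h1>";
    let mut resp = format!(
        "HTTP/1.1 500 Internal Server Error\r\nContent-Type: text/html\r\nContent-Length: {}\r\n\r\n",
        body.len()
    )
    .into_bytes();
    resp.extend_from_slice(body);
    resp
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:8080").expect("could not bind to port 8080");
    for stream in listener.incoming() {
        if let Ok(stream) = stream {
            // One thread per connection: simple, and plenty for a learning project.
            thread::spawn(move || handle(stream));
        }
    }
}
```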
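
As for the parser integration in point 3, we won’t reproduce it exactly here; instead, here’s a sketch of what wiring in a parser crate can look like, using httparse as a stand-in (the crate choice, the header limit and the hard-coded request bytes are illustrative, not necessarily what our server uses):

```rust
// Illustrative only: parsing a raw request buffer with httparse.
// Cargo.toml would need something like: httparse = "1"

fn parse_request(buf: &[u8]) {
    // httparse parses into caller-provided storage, so give it room for 16 headers.
    let mut headers = [httparse::EMPTY_HEADER; 16];
    let mut req = httparse::Request::new(&mut headers);

    match req.parse(buf) {
        // The request head is complete; `body_start` is where the body begins.
        Ok(httparse::Status::Complete(body_start)) => {
            println!(
                "{} {} (headers end at byte {})",
                req.method.unwrap_or("?"),
                req.path.unwrap_or("?"),
                body_start
            );
        }
        // Not enough bytes yet; a real server would keep reading from the socket.
        Ok(httparse::Status::Partial) => println!("partial request, need more data"),
        Err(e) => println!("malformed request: {:?}", e),
    }
}

fn main() {
    parse_request(b"GET /index.html HTTP/1.1\r\nHost: localhost\r\n\r\n");
}
```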

Even if we could have made this progress just with Tokio, I don’t think we would have felt very confident about our code. Using std::net gave us just the bare minimum, letting us build things on top ourselves.

We ran a quick micro-benchmark and got a throughput of almost 10,000 requests per second, which is not that surprising :D Anyway, that’s a useless number at this point.

So yeah, that’s where we are at the moment. Our next step is to enable CGI Scripting. I am looking forward to that very much as it lets us do some really cool stuff.

Currently, all our code lives in the main.rs file, and hopefully, as we make more progress, we can break it down and modularise it better.

In case you want to see how our project is turning out, here’s a link to the GitHub repo.
