Comparing Node.js and Rust HTTP framework response times

Feb 7, 2017 · 3 min read

Last weekend I started learning Rust. I really enjoy it so far, because for me it is something completely new, coming from scripting and object-oriented languages.
Recently a new version of Rocket, an HTTP framework for Rust, was released: version 0.2.0. Reading about the release and looking at the release notes, I got an idea.

How about comparing Node.js HTTP frameworks with the Rocket framework?
I selected NodeJs because I really like it and I already use it for lots of my other stuff.
For the Node.js side, I decided to use express, koa and restify. I selected those because I have already used all of them.
The test is simple. All frameworks just return the string “Hello world”, what else?
You can find the code on my GitHub account.

Test environment

For testing, I used my virtual dev machine.
Logical Cores: 4
Memory: 8 GB DDR3 @ 1600 MHz
Processor: Intel i7 870 @ 2.90GHz
Operating System: Ubuntu 16.04
Node.js Version: 7.4.0
Rust Version: 1.17.0-nightly (needed for Rocket)

Test description

As a benchmark tool, I used wrk. The testing command was “wrk -t 12 -c 400 -d 1m http://localhost:&lt;port&gt;”. This command is mostly copied; only the duration was changed from 30 seconds to one minute.
For every server, I ran this command five times, wrote down the values in an Excel table, and calculated the average for comparison.
The NodeJs servers were all started using “node index.js”, the Rust server by using “cargo run”.
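The averaging step itself is trivial; as a sketch of what the Excel sheet does (the five readings below are made-up placeholders, not the measured values):

```javascript
// Average the requests/sec readings from five wrk runs.
function average(values) {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Placeholder readings; the real numbers live in the repo's
// "wrk results" folder.
const runs = [9800, 10150, 9920, 10010, 9870];
console.log(average(runs)); // → 9950
```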

The first test

Test one

Well, this was unexpected. Rocket and koa are in a close match on requests per second, and in transfer per second express even beats Rocket. The transfer values are in MB.
I really thought that Rust would be much faster than Node.js at handling this, so I did some research to find the “problem”.

The second test

Although I didn't think that running the benchmark on the same machine as the servers would have an impact on the test, I started my laptop and ran the benchmark from there. The results were nearly the same as before.

So, back to the drawing board.

The third and last test

After some more thinking and coffee, I remembered that you can run cargo with the flag “--release”. This compiles the project and all of its dependencies with optimizations enabled.
In the beginning I didn't think about that, simply because this is such a small project.
After more coffee and checking Twitter and GitHub, cargo was done compiling. I repeated all Rocket tests and got the results I expected.

Test three

With the optimized version, we can see that Rocket is much faster than before. Looking at the raw numbers, it is roughly 50 times faster: instead of ~36k requests in the whole testing time, we are now talking about ~1.8 million requests.


I never thought that optimizing such a small program would have such an impact on the performance. But here we have a performance boost of roughly 50 times the old result.
If you want to look at the real values, you can go to the repo and check out the folder “wrk results”. I took a picture of each test I did; only the tests from the other machine aren't there, because they didn't make a big difference.

Thanks for reading :)


