Fastify Cluster vs Rust: QR-generator API benchmark

Mayank Choubey
Tech Tonic
3 min read · Sep 10, 2023


Introduction

This article continues our real-world benchmarking series, this time focusing on a QR-generator API. Throughout the series, we go beyond the standard “Hello World” scenarios and evaluate various technologies in practical, real-world contexts, measuring their latencies and resource consumption to provide you with valuable insights.

Previously, we have explored the following real-world scenarios:
1. JWT Verification and MySQL Queries
2. Static File Server
3. Multipart Form Uploads with Files

Our benchmarking endeavors cover a diverse array of popular technologies, including Node.js, Spring Boot, Go, Rust, NestJS, and many more. For the complete list of articles in this series, please refer to our dedicated compilation article here.

In this use case, the QR-generator API, we focus on a head-to-head comparison between Fastify running in cluster mode and Rust with the Actix Web framework. Both technologies are well established and need no formal introduction. Without further ado, let’s dive into the details of our testing setup.

Test setup

Environment

All tests were conducted on a MacBook Pro M1 equipped with 16GB of RAM. The testing tool used was a customized version of Bombardier, which supports the inclusion of random URLs within the request body. The software versions utilized for these tests were as follows:

  • Node.js v20.6.1
  • Rust 1.72.0

Code

The QR-generator application was designed to accept a JSON request body that includes a mandatory parameter known as “urlToEmbed.” This application’s primary function is to generate a QR code for the specified URL and subsequently deliver the QR code in PNG format within the HTTP response. For additional complexity, the application runs over HTTPS.

Fastify (Cluster 8)
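The original Fastify source is not included here, so below is a minimal sketch of what such a server could look like: 8 clustered workers, HTTPS, and a POST handler that reads `urlToEmbed` and returns a PNG. It assumes the `fastify` and `qrcode` npm packages; the port, route path (`/qr`), and certificate file names are illustrative, not taken from the article.

```javascript
// Sketch of a clustered Fastify QR-generator server (8 workers).
// Assumes the `fastify` and `qrcode` npm packages are installed;
// port, route, and certificate paths are illustrative.
const cluster = require('node:cluster');
const fs = require('node:fs');

if (cluster.isPrimary) {
  // Fork one worker per core (8 on the M1 test machine)
  for (let i = 0; i < 8; i++) cluster.fork();
} else {
  const fastify = require('fastify')({
    https: {
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem'),
    },
  });
  const QRCode = require('qrcode');

  fastify.post('/qr', async (request, reply) => {
    const { urlToEmbed } = request.body || {};
    if (!urlToEmbed) {
      return reply.code(400).send({ error: 'urlToEmbed is required' });
    }
    // Render the QR code for the given URL as a PNG buffer
    const png = await QRCode.toBuffer(urlToEmbed, { type: 'png' });
    return reply.type('image/png').send(png);
  });

  fastify.listen({ port: 3000 }, (err) => {
    if (err) throw err;
  });
}
```

A quick manual check is `curl -k` with a JSON body containing `urlToEmbed`, saving the response to a `.png` file (`-k` because the benchmark setup typically uses a self-signed certificate).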

Rust
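The Rust source is likewise not included, so here is a hedged sketch of an equivalent Actix Web server. It assumes the `actix-web` (with the `openssl` feature), `openssl`, `serde`, and `qrcode-generator` crates; the route path, QR image size, and certificate paths are illustrative.

```rust
// Sketch of a Rust QR-generator server using Actix Web over HTTPS.
// Assumes actix-web (openssl feature), openssl, serde, and
// qrcode-generator crates; route and cert paths are illustrative.
use actix_web::{post, web, App, HttpResponse, HttpServer};
use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
use qrcode_generator::QrCodeEcc;
use serde::Deserialize;

#[derive(Deserialize)]
struct QrRequest {
    #[serde(rename = "urlToEmbed")]
    url_to_embed: String,
}

#[post("/qr")]
async fn generate_qr(body: web::Json<QrRequest>) -> HttpResponse {
    // Render the QR code for the given URL as a 256x256 PNG
    match qrcode_generator::to_png_to_vec(&body.url_to_embed, QrCodeEcc::Low, 256) {
        Ok(png) => HttpResponse::Ok().content_type("image/png").body(png),
        Err(_) => HttpResponse::InternalServerError().finish(),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let mut builder = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
    builder
        .set_private_key_file("key.pem", SslFiletype::PEM)
        .unwrap();
    builder.set_certificate_chain_file("cert.pem").unwrap();

    HttpServer::new(|| App::new().service(generate_qr))
        .bind_openssl("0.0.0.0:3000", builder)?
        .run()
        .await
}
```

By default, Actix Web spawns one worker thread per core, which makes it a natural counterpart to the 8-worker Fastify cluster.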

Results

To evaluate performance comprehensively, we ran a series of tests of 100,000 requests each, across a spectrum of concurrent connections: 10, 50, and 100. Given the resource-intensive nature of QR generation, we deliberately kept the request volume slightly lower than in other scenarios.

Now, let’s delve into the outcomes, which we’ve succinctly presented in a tabular format for your convenience:

A scorecard is also generated from the results using the following formula: for each measurement, take the winning margin. If the winning margin is:

  • < 5%, no points are given
  • between 5% and 20%, 1 point is given to the winner
  • between 20% and 50%, 2 points are given to the winner
  • > 50%, 3 points are given to the winner
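The scoring rule above can be sketched as a small helper function (the function name is mine, and how exact cutoff values such as 5% or 50% are assigned is my assumption, since the bullet ranges overlap at the boundaries):

```javascript
// Points awarded to the winner of a single measurement,
// based on the winning margin expressed as a percentage.
function pointsForMargin(marginPercent) {
  if (marginPercent < 5) return 0;   // too close to call: no points
  if (marginPercent <= 20) return 1; // modest win
  if (marginPercent <= 50) return 2; // clear win
  return 3;                          // decisive win (> 50%)
}
```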

The scorecard is:

Thanks for reading!

For a comprehensive list of real-world benchmarking, you can visit the main article here.
