Is FastAPI fast enough?

They say it's on par with NodeJS and Go. I was surprised after benchmarking it myself. What do you think?

Yash Patel
3 min read · Apr 16, 2022

All the code in this article is available on my Github repo.

To compare benchmark results against FastAPI, I've used the Gin HTTP framework from Go and Express from NodeJS.

In my journey through enterprise software development, I came across numerous articles claiming Python to be a shining star of web app development. Among the frameworks those articles talk about most are FastAPI for Python and Express for JavaScript, and the hype is evident whenever they are compared to other popular frameworks.

After using FastAPI to develop APIs and scaling it for IoT applications, I watched it struggle to handle more than 10,000 requests/sec without adding virtual machine instances. So I decided to benchmark this highly decorated framework against other options (Gin and Express). The results came as a surprise to me, given the performance claims made in FastAPI's documentation.

Results of the Test

There are some important metrics that I want to point out here.

  1. Average time per request: the time between a request reaching the API server and the response leaving it (lower is better).
  2. Requests per second: the number of requests served per second (higher is better).
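As a quick sketch of how these two metrics relate, here is how they can be computed from a list of per-request latencies. The numbers are made up for illustration (they are not the benchmark's actual output), and the requests-per-second figure assumes sequential requests with no concurrency:

```python
# Hypothetical per-request latencies in milliseconds (NOT real benchmark data)
latencies_ms = [4.0, 5.0, 6.0, 5.0]

# Average time per request: mean latency (lower is better)
avg_time_ms = sum(latencies_ms) / len(latencies_ms)

# Requests per second: requests completed per second of wall time
# (higher is better). Assumes the requests ran back to back, concurrency 1.
total_seconds = sum(latencies_ms) / 1000
requests_per_second = len(latencies_ms) / total_seconds

print(avg_time_ms, requests_per_second)  # 5.0 200.0
```

Real load testers run many requests concurrently, so requests per second is usually much higher than 1 / (average latency), but the two metrics still move together.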

From simple math, we can see that Express and Gin are 2.6x and 14.8x faster than FastAPI, respectively, in terms of average time per request. Express and Gin can also serve 4.7x and 12x more requests than FastAPI in the same time period. How can these results be "on par" with NodeJS and Go?

These results pushed me to rethink my go-to framework of choice, which previously was FastAPI.

What does this benchmarking test do?

To rule out any language bias in the load-testing tool itself, I'm not using Python's Locust, Go's Vegeta, or JavaScript's Loadtest. Instead, I opted for Rust's Drill.

Since we’re testing different types of languages (Compiled / Interpreted), I will not perform any compute-intensive or I/O bound task. The goal here is to test how many requests can these frameworks handle and what are the response timings.

I have created one endpoint in each framework that returns a 200 OK response with the following JSON.

{
  "status": 200,
  "message": "Success"
}

Want to run these benchmarks yourself?

You need to clone this repository: https://github.com/ypatel-93/benchmark-fastapi_gin_express

Since I'm a Unix-spoiled brat who has never touched a Windows system, this tutorial will run smoothly if you have Linux or macOS. Also, make sure you have Python 3.6+, Go 1.13+, Node.js 16+, npm 7+, and Rust/Cargo 1.59+ installed.

There is a shell script in the repo named test_setup.sh; it will do the necessary setup for you. You need to make it executable before running it:

sudo chmod +x test_setup.sh

Note: You will need to press ENTER for all questions asked during the NodeJS setup.

In case installation fails using the setup script or you want to do it yourself, please follow the steps from the official installation guides for FastAPI, Gin, and Express.

Running Apps and Testing

There is a drill_test.sh script in the repo. Make it executable:

sudo chmod +x drill_test.sh

I have configured it so that the FastAPI, Gin, and Express apps run on ports 8001, 8002, and 8003 respectively.

Your terminal should print test logs as the apps are tested.

Happy Testing 😬

Thanks for reading!
