Updated Benchmarks for the Top Server-Side Swift Frameworks vs. Node.js

It was August of 2016 when I first published Benchmarks for the Top Server-Side Swift Frameworks vs. Node.js. In October of 2016, I moved to a barebones Ubuntu setup in Linux (Ubuntu) Benchmarks for Server-Side Swift vs. Node.js. Since then I have received a lot of great feedback, as well as some recent requests for more, and I agree that it is time for an update. I have gone back to macOS, reformatted my base 2012 Mac mini, and updated all of the blogs to their newest versions, including the complete overhaul of Vapor that is Vapor 2.


This study focuses on benchmarks for Server-Side Swift vs. Node.js on macOS Sierra, and contains updates for Perfect 2, Vapor 2, Kitura 1, Zewo 0, and Node.js 8. Since the last study, Swift has had quite a few updates, though nothing is ready for Swift 4 just yet, so I am testing on Swift 3.1.

Organization of this Post

This document is laid out in the following manner:

Results Summary
Changes from Last Time
What was Benchmarked
Source Code
Why Node.js/Express?
Hosting & Environment
Blog Benchmark Results
JSON Benchmark Results
Get Involved
Get in Touch

Results Summary

The following is a quick summary of the results; more detail is included later in this post.



Changes from Last Time

The biggest change from the last benchmarks is that I have switched back to macOS, just like the first time. Vapor has undergone a very deep and detailed rewrite for Vapor 2.0, and I have also upgraded my laptop to the late 2016 rMBP with the Touch Bar. Since I never had any issues generating enough load to test with, this should have a very minimal impact on the results.

What was Benchmarked

Two benchmarks were performed on the same design of software in each framework. The first is a representation of a blog page, and the second is JSON data. They were designed to be as close to each other as possible, while still natively coding in the unique syntax and style of each framework. These are meant to represent real-life scenarios of what you might use a server-side implementation for. They are complicated enough to be more than just printing “Hello, World!” to the screen, but simple enough to be effective.

Source Code

As always, this is all done with public, open-source code. If you would like to read through the code to see how it was put together, check out the improvements that have been made since my last article, or repeat the testing in your own environment configuration, you can find the full source here:



There are a few things to clarify and note:


For Vapor, the production environment is set by passing the flag to the executable, i.e.

.build/release/Run --env=production

Why Node.js/Express?

I decided to put Swift up against the Express framework in Node.js. It shows just how impressive Swift can be when compared to a widely used language and framework, plus they all have a very similar style and syntax.


All of the blogs were taken from the last repository and updated to the current versions of each framework.

Hosting & Environment

To minimize any differences in the environment, I took the same 2012 Mac mini that I used for the macOS benchmarks last time, formatted it, and gave it a clean install of macOS. After that, I downloaded and installed the release version of Swift 3.1 (via Xcode). From there I cloned the repos, cleanly built each of the blogs in release mode, and added any missing dependencies they required (as they yelled at me, of course). I also installed Node.js v8.1.2 (the current stable release at the time of testing). I never ran more than one framework at a time, and each was stopped and restarted between tests. The test server specs are:

For development, I use a 15" 2016 rMBP with Touch Bar. This is my latest real-life development machine. I used wrk to get the benchmarks, and I did this over a Thunderbolt 3 to Thunderbolt 2 adapter. Wi-Fi and all other networking was turned off on both machines, and testing was done over the Thunderbolt bridge to minimize any impact that network and bandwidth limitations could have. No network hardware was used outside the Thunderbolt 3 to Thunderbolt 2 adapter and Apple's Thunderbolt cable. It is also more reliable to serve the blogs on one machine and to use a separate, more powerful machine to generate the load, ensuring you are capable of overpowering the server. This also gives you a consistent testing environment, so I can say that each blog was run on the same hardware and in the same conditions. For the curious, the specs of my machine are:


For benchmarking, I used a ten-minute test with four threads, each carrying 20 connections. Four seconds is not a test. Ten minutes is a reasonable timeframe to get plenty of data, and 20 connections on four threads is a good-sized load for the blogs without breaking anything. You can achieve the same with:

wrk -d 10m -t 4 -c 20 http://ip-address:8X8X/blog

where ‘ip-address’ is the address of the machine you are connecting to, 8X8X is the port used, and /blog or /json selects the version you want to benchmark.

Blog Benchmark Results

The first benchmark is the /blog route in each, which is a page that returns 5 random images and fake blog posts for each request.
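As a hypothetical sketch of what each /blog handler generates, here is the shape of the data in plain JavaScript: five fake posts, each paired with a randomly chosen image. The function name, field names, and post text are all my own illustrative assumptions; the real projects render a full HTML page in each framework's own templating idiom.

```javascript
// Hypothetical sketch of the /blog payload described above: five fake
// posts, each with an image drawn at random from a pool. Names and text
// are illustrative, not taken from the benchmark repos.
function makeBlogPosts(imagePool) {
  const posts = [];
  for (let i = 0; i < 5; i++) {
    posts.push({
      title: `Fake Post ${i + 1}`,
      body: 'Lorem ipsum dolor sit amet...',
      image: imagePool[Math.floor(Math.random() * imagePool.length)],
    });
  }
  return posts;
}
```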

What was Run

wrk -d 10m -t 4 -c 20 http://ip-address:(PORT)/blog

was run for each blog from my laptop to the test Mac Mini server over a Thunderbolt bridge.

How it was Run

Each framework was run in release mode and was stopped and restarted before each test. Only one framework was running at any given time on the server. All activity was kept to a minimum on both machines during testing to keep the environment as clean and consistent as possible.


JSON Benchmark Results

The second benchmark is the /json route in each framework, which is a page that returns a JSON dictionary of ten random numbers.
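A minimal sketch of that payload in plain JavaScript might look like the following; the key names and number range are assumptions on my part, and each framework builds the equivalent dictionary in its own idiom.

```javascript
// Hypothetical sketch of the /json payload described above: a dictionary
// of ten random numbers. Keys and value range are illustrative assumptions.
function makeJsonPayload() {
  const payload = {};
  for (let i = 1; i <= 10; i++) {
    payload[String(i)] = Math.floor(Math.random() * 1000);
  }
  return payload;
}
```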

What was Run

wrk -d 10m -t 4 -c 20 http://ip-address:(PORT)/json

was run for each JSON project, again on the same Thunderbolt bridge setup.

How it was Run

As with the other tests, each framework was run in release mode where possible and was stopped and restarted before each test. Only one framework was running at any given time on the server. All activity was kept to a minimum on both machines during testing to keep the environment as similar as possible.



This time around, there is a much wider distribution of placements, which I take to mean that we have some fierce competition here. Overall, I see speed as less and less of a deciding factor, especially as everyone seems to be converging at the low end of what Swift is capable of. In my recent projects, I tend to gravitate toward whatever has the feature set I need for the project, not solely what is fastest. I always recommend trying each one out and going with whatever works for you and your project.

Get Involved

If you are interested in Server-Side Swift, now is a wonderful time to get involved! More and more features go live every day, development in Swift is a breath of fresh air, and it is deliciously fast, no matter what framework you choose. You can learn more about each framework and get involved here:

Perfect: Website | Github | Slack
Vapor: Website | Github | Slack
Kitura: Website | Github | Slack
Zewo: Website | Github | Slack

Get in Touch

If you want to connect, you can reach out to me @rymcol on Twitter or LinkedIn.

Disclosures: I'm on the GitHub teams for Vapor & Perfect because I contribute to them. I am not an employee of either, nor do my opinions reflect theirs. I've done my absolute best to remain completely impartial, as I develop on all four of the featured platforms and am heavily involved in the Server-Side Swift community. All of the code for this study is publicly available; please feel free to check it out or repeat some of the tests in your own environment!

