Updated Benchmarks for the Top Server-Side Swift Frameworks vs. Node.js

It was August of 2016 when I first published Benchmarks for the Top Server-Side Swift Frameworks vs. Node.js. In October of 2016, I followed up with Linux (Ubuntu) Benchmarks for Server-Side Swift vs. Node.js, run on a barebones Ubuntu install. Since then I have received a lot of great feedback, as well as some recent requests for more, and I agree that it is time for an update. I have gone back to macOS, reformatted my 2012 base Mac mini, and updated all of the blog projects to their newest versions, including the complete overhaul of Vapor that is Vapor 2.

Focus

This study focuses on getting benchmarks for Server-Side Swift vs. Node.js on macOS Sierra, and contains updates for Perfect 2, Vapor 2, Kitura 1, Zewo 0, and Node.js 8. Since the last study, Swift has had quite a few updates, though none of the frameworks is ready for Swift 4 just yet, so I am testing on Swift 3.1.

Organization of this Post

This document is laid out in the following manner:

  • Results Summary
  • Methodology
  • Detailed Results
  • Insights & Final Notes

Results Summary

The following is a quick summary of the results; more detail is included later in this post.


Methodology

Changes from Last Time

The biggest change from the last benchmarks is that I have switched back to macOS, just like the first time. Vapor has undergone a very deep and detailed rewrite for Vapor 2.0, and I have also upgraded my laptop to the late 2016 rMBP with the Touch Bar. Since I never had any issues generating enough load to test with, the laptop change should have a very minimal impact on the results.

What was Benchmarked

Two benchmarks were performed on the same design of software in each framework. The first is a representation of a blog page, and the second is JSON data. They were designed to be as close to each other as possible, while still natively coding in the unique syntax and style of each framework. These are meant to represent real-life scenarios of what you might use a server-side implementation for. They are complicated enough to be more than just printing “Hello, World!” to the screen, but simple enough to be effective.

Source Code

As always, this is all done with public, open-source code. If you would like to read through the code to see how it was put together, check out the improvements that have been made since my last article, or repeat the testing in your own environment configuration, you can find the full source here:

Notes

There are a few things to clarify and note:

  • Each framework is now treated as a release (not a beta), which is partly why I did not test on Swift 4 yet.
  • Zewo is single-threaded and is meant to be run in a parallel configuration, meaning there is one process running for each logical processor on the machine. Likewise, Node.js is meant to be run as multiple instances using its cluster module for maximum performance, and this was implemented. Perfect, Vapor, and Kitura are multithreaded, and they manage their own threads.
  • All four Swift frameworks were compiled with the Swift 3.1 release toolchain in release mode. Node.js, being interpreted, does not compile.
  • Node.js was included at its current stable release version (v8.1.2).
  • If you strip out the fluff (feature set) of any given framework, and use just the bones, you will likely get faster results. I did not do any of that this time, as I wanted to develop in each framework as it is presented and how the majority of developers that encounter it will use it.
  • Vapor is no longer pure Swift as of Vapor 2, and includes at least chttp and LibreSSL / OpenSSL.
  • Vapor has a special syntax for running releases. If you simply execute the binary, you will get some extra console logging that is meant to help with the development and debugging process, and that has a little overhead. To run Vapor in release mode, pass the --env=production flag:
.build/release/Run --env=production

Why Node.js/Express?

I decided to put Swift up against the Express framework in Node.js. It shows just how impressive Swift can be when compared to a widely used language and framework, plus they all have a very similar style and syntax.

Development

All of the blogs were taken from the last repository and updated to the current versions of each framework.

Hosting & Environment

To minimize any differences in the environment, I took the same 2012 Mac mini that I used for macOS benchmarks last time, formatted it, and gave it a clean install of macOS. After that, I downloaded and installed the release version of Swift 3.1 (Xcode). From there I cloned the repos, and cleanly built each of the blogs in release mode and added any missing dependencies that they required (as they yelled at me, of course). I also installed Node.js v8.1.2 (the current stable release at the time of testing). I never ran more than one at a time, and each was stopped and restarted in between tests. The test server specs are:

Benchmarking

For benchmarking, I used a ten-minute test with four threads, each carrying 20 connections. Four seconds is not a test. Ten minutes is a reasonable timeframe to get plenty of data, and running 20 connections on four threads is a good-sized load for the blogs without breaking anything. You can achieve the same with:

wrk -d 10m -t 4 -c 20 http://ip-address:8X8X/blog

Blog Benchmark Results

The first benchmark is the /blog route in each, which is a page that returns 5 random images and fake blog posts for each request.

What was Run

wrk -d 10m -t 4 -c 20 http://ip-address:(PORT)/blog

How it was Run

Each framework was run in release mode, and was stopped and restarted before each test. Only one framework was running at any given time on the server. All other activity was kept to a minimum on both machines during testing to keep the environment as clean and consistent as possible.

Results

JSON Benchmark Results

The second benchmark is the /json route in each framework, which is a page that returns a JSON dictionary of ten random numbers.

What was Run

wrk -d 10m -t 4 -c 20 http://ip-address:(PORT)/json

How it was Run

As with the other tests, each framework was run in release mode where possible, and was stopped and restarted before each test. Only one framework was running at any given time on the server. All other activity was kept to a minimum on both machines during testing to keep the environment as similar as possible.

Results

Insights

This time around, there is a much wider distribution of placements, which I take to mean that we have some fierce competition here. Overall I see speed as less and less of a factor, especially as everyone seems to be converging at the lowest end of what Swift is capable of. In my recent projects, I tend to gravitate towards whatever has the feature set I need for the project, not solely what is fastest. I always recommend trying each one out and going with whatever works for you and your project.

Get Involved

If you are interested in Server-Side Swift, now is a wonderful time to get involved! More and more features go live every day, development in Swift is a breath of fresh air, and it is deliciously fast, no matter what framework you choose. You can learn more about each framework and get involved here:

Get in Touch

If you want to connect, you can reach out to me @rymcol on Twitter or LinkedIn.

Startup Addict. Technologist. Obsessed with great adventures, in business and in life.