I got a LOT of feedback from my original Server-Side Swift Benchmarks. One of the main areas that feedback focused on was getting benchmarks on Linux, so here you go!
This study focuses on getting benchmarks for Server-Side Swift vs Node.js on Linux, but also contains updates for Perfect 2.0, Vapor 1.0, Kitura 1.0, Zewo on Swift 3 (HTTPServer 0.14), and Node.js v4.5.0. Since the last study, many improvements have been made by all the major frameworks, as well as various syntax changes.
Organization of this Post
This document is laid out in the following manner:
- This quick intro
- Results Summary
- Detailed Results
- Insights & Final Notes
The following is a quick summary of the results; more detail is included later in this post.
Changes from Last Time
In the last study, I included data for build times, memory usage, and thread counts, as well as the benchmarks themselves. This time around, I’m only including benchmark data to keep things simple and make this follow-up shorter to read.
What was Benchmarked
Benchmarks were performed on two versions of the same software for each framework: the first is a representation of a blog page, and the second serves JSON data. They were designed to be as close to each other as possible, while still coding in the unique syntax and style of each framework. These are meant to represent real-life scenarios of what you might use a server-side implementation for. They are complicated enough to be more than just printing “Hello, World!” to the screen, but simple enough to be effective.
Again, this is all done with public source code. If you would like to read through the code to see how it was put together, check out the improvements that have been made since my last article, or repeat the testing in your own environment configuration, you can find the full source here:
There are a few things to clarify and note:
- Both Kitura and Zewo have issues building if there are any spaces in their absolute file paths. Xcode also has issues building with spaces in absolute file paths in any framework.
- Perfect, Vapor, & Kitura are now on a release version (2.0, 1.0, and 1.0 respectively). Zewo is still on a pre-release (0.14)
- Kitura was updated on October 13th, 2016 with a patch that fixed its JSON implementation on Linux.
- Zewo is single-threaded and was run in a parallel configuration, meaning there is one process running for each logical processor on the machine. Likewise, Node is meant to be run as multiple instances using cluster for maximum performance, and this was implemented. Perfect, Vapor, and Kitura are multi-threaded and manage their own threads. Only one instance of each was running during the benchmark.
- All four Swift frameworks were compiled with the Swift 3.0 release toolchain in release mode, downloaded and installed directly from swift.org. Node does not compile, and does not differentiate between debug and release builds.
- Node.js was included as v4.5.0 because it is the default apt installation on Ubuntu 14.04.5, and, like many others, that is where I’m used to getting my installs from. At this moment I am planning to do a follow up for Swift 3.1 on Ubuntu 16 once support is official, as well as Node 6.7, but that will not be updated in this post. Hit the follow button at the top for updates!
- Vapor was updated on October 13th with fixes that properly remove its middleware bloat (and improve usability a bit), to make it more comparable as originally intended.
- Vapor is a pure Swift stack, meaning they have their own HTTP parsing library written in Swift, and they aren’t using a lower-level API (like the Node HTTP parsing lib that almost everyone else is using). This carries a performance tax, but they feel strongly about being pure Swift and asked me to mention it here.
- Vapor has a special syntax for running releases. If you simply execute the binary, you’re going to get some extra console logging that is meant to help with the development and debugging process. That has a little overhead. To run Vapor in release mode, you need to pass the `--env=production` flag to the executable.
I decided to put Swift up against the Express framework in Node.js. It shows just how impressive Swift can be when compared to a widely used language and framework, plus they have a very similar style and syntax.
Since I already had the code from the last study, the majority of development this time was getting everything updated for the current versions of all the frameworks. Shout out to Paulo Faria & David Jones for noticing my new repo and taking the time to send pull requests that made my life just a little bit easier. Paulo helped to optimize the syntax for Zewo and convert it to 0.14. David helped to get cluster mode up and running on Node, and copied the same header and footer handling that I used everywhere else into Kitura. It is awesome to see a great community building around Server-Side Swift!
Hosting & Environment
To minimize any differences in the environment, I took the same 2012 Mac Mini that I used for the macOS benchmarks, formatted it, and gave it a clean install of Ubuntu 14.04.5 Server. Ubuntu is running natively. It is not dual booting, nor is it virtualized. After that, I downloaded and installed the release version of Swift 3.0 for Ubuntu 14 and added it to my $PATH. From there I cloned the repos, and cleanly built each of the blogs in release mode. I also installed Node.js 4.5.0. I never ran more than one at a time, and each was stopped and restarted in between tests. The test server specs (though no longer running on El Capitan) are still:
For development, I use a 2015 rMBP. This is my real-life development machine and it made the most sense. I used wrk to get the benchmarks, and I did this over a gigabit ethernet connection. Wifi and all other networking was turned off on my machine, and Ubuntu was set to use a static IP so that I could create a direct connection to minimize the bandwidth limitations. No network hardware was used outside a Cat6 cable and Apple’s Thunderbolt adapter to give my laptop a gigabit ethernet port. It’s also more reliable to serve the blogs on one machine, and to use a separate, more powerful machine to generate the load, ensuring you are capable of overpowering the server. This also gives you a consistent testing environment, so I can say that each blog was run on the same hardware and in the same conditions. For the curious, the specs of my machine are:
For benchmarking, I used a ten-minute test with four threads, each carrying 20 connections. A four-second run is not a real test. Ten minutes is a reasonable timeframe to get plenty of data, and running 20 connections on four threads is a good-sized load for the blogs without breaking anything. You can achieve the same with:
wrk -d 10m -t 4 -c 20 http://10.0.1.11:8X8X/blog
where 8X8X is the port used, and /blog or /json selects the version to benchmark.
Blog Benchmark Results
The first benchmark is the /blog route in each, which is a page that returns 5 random images and fake blog posts for each request.
What was Run
wrk -d 10m -t 4 -c 20 http://10.0.1.11:(PORT)/blog
was run for each blog from my rMBP over a single gigabit ethernet cable.
How it was Run
Each framework was run in release mode where possible, and was stopped and restarted before each test. Only one framework was running at any given time on the server. All other activity was kept to a minimum on both machines during the testing to keep the environment as consistent as possible.
JSON Benchmark Results
The second benchmark is the /json route in the stripped down JSON-Only versions of each framework, which is a page that returns a JSON dictionary of ten random numbers.
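For a sense of scale, the Node side of that payload boils down to something like the sketch below. This is not the exact benchmark code; the key names and number range are placeholders for illustration:

```javascript
// Build a JSON dictionary of ten random numbers,
// the same shape of work the /json route does on every request.
function makeJSONPayload() {
  const payload = {};
  for (let i = 1; i <= 10; i++) {
    // Keys like "number1"..."number10" are placeholders, not the real key names.
    payload['number' + i] = Math.floor(Math.random() * 1000);
  }
  return JSON.stringify(payload);
}

console.log(makeJSONPayload());
```

Because the work per request is so small, this benchmark mostly measures each framework’s HTTP and JSON serialization overhead rather than application logic.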
What was Run
wrk -d 10m -t 4 -c 20 http://10.0.1.11:(PORT)/json
was run for each JSON project, again on the same single gigabit ethernet.
How it was Run
As with the other tests, each framework was run in release mode where possible, and was stopped and restarted before each test. Only one framework was running at any given time on the server. All other activity was kept to a minimum on both machines during the testing to keep the environment as consistent as possible.
Node is being pushed all the way down in the rankings, and this time it even had cluster mode behind it. There have been some MAJOR improvements in the last few weeks to Server-Side Swift, which was already pretty darn quick. Now that we have the release version of Swift 3.0 and release versions of most of the frameworks, some of which are very feature rich, I say it’s high time we stop thinking of Server-Side Swift as beta and start treating it as a production-ready platform. I am already using it in my own client projects as a primary solution for their server-side needs. There are certain to be small areas that need improvement for some projects, but from what I have done so far, it has been more than adequate.
If you are interested in Server-Side Swift, now is the time to get involved! There is still a lot of work to be done on the frameworks, their documentation, and getting some really cool applications and tutorials out there as examples, open or closed source. You can learn more about each framework and get involved here:
Get in Touch
If you want to connect, you can reach out to me @rymcol on Twitter.
Disclosures: I’m on the GitHub teams for Vapor & Perfect because I contribute to them. I am not an employee of either, nor do my opinions reflect theirs. PerfectlySoft Inc. agreed to fund this study for me, to promote the power of Swift over current alternatives. I’ve done my absolute best to remain completely impartial, as I develop on all four platforms, and I’m heavily involved in the Server-Side Swift community. All the code is publicly available for this study, so please feel free to check it out or repeat some of the tests in your own environment!