Battle of the VPNs: Which one is fastest? (speed test)
There are a ton of speed tests out there for VPNs like NordVPN or Surfshark. These are personal VPN services that allow you to browse the web securely. But what about VPNs that an IT professional might use? What if you’re building your own VPN?
Let’s say, for instance, that you need to create a secure P2P network for multi-cloud, IoT, remote access, or edge computing. Speed is important here, too.
Today, we’re taking some battle-hardened virtual networking tools and putting them through the wringer to see which one lets you build the fastest secure networks for distributed computing.
As a disclaimer, I’m the CEO of Gravitl, which created Netmaker. We wanted to see how Netmaker stacks up against its peers, so we ran these tests.
If this makes you skeptical of the results, we’re linking the entire data set at the bottom, and we encourage you to try out these tools for yourself. We’ve attempted to be unbiased, but results will always depend on test conditions, and we can’t simulate everything.
How did we run the tests?
All tests were run on Ubuntu 20.04. We plan to run more tests in the future on mixed operating systems including Windows, because OS does affect speed. However, we decided to start with Linux, since that’s the most commonly encountered in these scenarios.
All measurements were taken with iperf3, which measures average megabits per second (Mbps). This gives us a sense of network bandwidth and throughput. Each iperf3 test measures speed over 10 intervals and was repeated multiple times. Across all tests, about 2,450 measurements were taken.
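To make the methodology concrete, here is a minimal sketch of how iperf3 results like these can be aggregated. The `average_mbps` helper is our own illustration, not part of iperf3; the JSON keys match iperf3's `--json` output format, and the sample data is fabricated purely to show the shape.

```python
# Typical iperf3 invocations for a test like ours:
#   server:  iperf3 -s
#   client:  iperf3 -c <server-ip> -t 10 --json > result.json
import json

def average_mbps(iperf_json: str) -> float:
    """Average throughput in Mbps across all measured intervals."""
    data = json.loads(iperf_json)
    rates = [i["sum"]["bits_per_second"] for i in data["intervals"]]
    return sum(rates) / len(rates) / 1_000_000  # bits/s -> Mbps

# Fabricated sample in the shape of real iperf3 --json output:
sample = json.dumps({
    "intervals": [
        {"sum": {"bits_per_second": 940_000_000}},
        {"sum": {"bits_per_second": 960_000_000}},
    ]
})
print(round(average_mbps(sample), 1))  # 950.0
```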
We used the most basic installation of each VPN possible with minimal configuration. In some tests, we adjusted MTU, since this is a significant factor.
Finally, we ran these tests across 3 environments: inter-cloud, inter-VPC (AWS), and intra-VPC (AWS). The tests were run on instances of various sizes and network performance.
Test #1 — Speed test across clouds
As an initial scenario, we ran a speed test between medium-sized machines on DigitalOcean and GCP, stationed on the east coast and west coast. Each machine had 2 CPUs and 4GB of RAM. We installed all VPNs on each machine and measured the bandwidth/throughput with iperf3.
The results here were pretty clear: WireGuard and Netmaker vastly outperformed the others, achieving almost the same speed as a direct internet connection without VPN.
All other options trailed significantly, averaging 5x to 10x slower. Of these options, Tailscale and ZeroTier were the fastest, almost doubling the speed of Nebula and Tinc. OpenVPN fell behind everyone else by a significant margin.
Test #2 — Speed tests across AWS VPCs
In the next set of tests, we set up machines within two different AWS regions (Germany and Virginia). We used multiple instance sizes (t2.micro and t3.large) to see how resource availability affected speed.
We also repeated the tests with a standard MTU of 1280 across VPNs, since MTU has a significant impact on speed, and each VPN had a different default MTU. This gave us a set of 4 tests between the VPCs.
The results were similar to our cross-cloud test. Netmaker and WireGuard led the pack by a huge margin, followed by Tailscale and ZeroTier, with Tinc, Nebula, and OpenVPN falling in last.
There were, however, a couple of interesting findings.
#1: With the wrong MTU, WireGuard is slow
MTU makes a huge difference for WireGuard. With wg-quick’s default MTU, WireGuard performed abysmally. However, adjusting this value made it the fastest option of all. See the difference between tests with and without adjustment in the chart.
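For reference, the MTU override in wg-quick is a single line in the interface config. This is a generic sketch, not our exact test config: the interface name `wg0`, the addresses, and the endpoint are placeholders, and 1280 is the standardized value we used in some tests.

```ini
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.10.10.1/24
PrivateKey = <private-key>
MTU = 1280          # overrides wg-quick's default; tune for your path MTU

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.10.10.2/32
Endpoint = peer.example.com:51820
```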
#2: WireGuard/Netmaker went faster than the public connection
Somehow, Netmaker and WireGuard (with MTU adjustments) performed much better than even the default gateway! We thought this might be a fluke, but it was repeatable across tests. Our best guess is that WireGuard is able to circumvent some bandwidth-limiting mechanism on AWS. However, we need to do more research to verify this.
Two final notes:
- Instance size did not impact speed significantly in these tests.
- Apart from WireGuard, MTU adjustments did not have a big impact on the other VPNs.
Test #3 — Speed tests within an AWS VPC
As a final test, we used machines within the same VPC, allowing them to connect over a local (much faster) connection. Often, if you’re building a P2P or mesh VPN network, some of the machines will be on the same LAN. It is important that these connections are able to take advantage of the local connection speed.
The first chart shows the results on large machines, which had a 5Gbps link. The second chart shows the results on small machines, which had about a 1Gbps link.
The test on small VMs, intra-VPC, appeared to be the great VPN equalizer. While Netmaker and WireGuard still performed better than the rest, after MTU adjustments, it was a much more modest advantage, with a 20% to 100% increase in speed, as opposed to a 5x to 10x improvement.
Notably, Tailscale performed poorly in these tests, usually taking last place. We are guessing this is because it was unable to detect the local connection accurately or consistently, and fell back to a public connection instead.
Another significant note is that in these tests, the default MTU was a huge advantage for WireGuard. By default, WireGuard used jumbo frames and saturated the entire connection, going nearly 5Gbps on the big machines and 1Gbps on the small machines.
While it was outside of our test conditions (so not shown in the chart), when we raised Netmaker’s MTU, it also saturated the link and achieved similar speeds, which again makes sense, since it just uses WireGuard under the hood.
With all of that in mind, here are our rankings for the fastest VPNs.
#1 and #2: WireGuard / Netmaker
In theory, Netmaker and WireGuard should be identical, because Netmaker just configures WireGuard under the hood.
The reality was much different depending on one key factor: Default MTU.
When we did not adjust for default MTU, WireGuard crushed everyone else in the intra-VPC test, including Netmaker.
Likewise, in the inter-VPC test, Netmaker crushed everyone else, including WireGuard!
When using the same MTU between WireGuard and Netmaker, there was very little difference. However, WireGuard still had a small but consistent speed advantage.
This leads us to rank WireGuard as #1 and Netmaker as #2.
#3 and #4: Tailscale / ZeroTier (tie)
ZeroTier was relatively fast in the cross-VPC speed test and delivered solid results in the intra-VPC test as well.
Tailscale performed consistently better than ZeroTier across clouds, but it was very slow in the intra-VPC tests.
If we were only accounting for connections over the public internet, Tailscale would come in at a clear #3, but when factoring in local connections, it’s a bit of a wash. This leads us to rank Tailscale and ZeroTier at a tie for #3 and #4.
Side note: Some readers might know that Tailscale uses WireGuard, and would have expected it to go much faster. However, Tailscale relies on “userspace” WireGuard, which is much slower than the kernel version used by WireGuard and Netmaker in these tests.
#5: Nebula
Nebula is based on the Noise protocol, which is quite fast, so we hypothesized that it would take the #3 slot after WireGuard and Netmaker.
However, for whatever reason, Nebula was relatively slow in our cross-cloud and inter-VPC tests. On the plus side, its intra-VPC speed held up quite well. This leads us to rank Nebula at #5.
We used the default settings for Nebula (besides adjusting MTU in some scenarios). It is possible that some advanced configuration might yield better results. This set of tests was intentionally run with minimal configuration on each VPN, so we leave this as a consideration for future tests.
#6 and #7: Tinc / OpenVPN
Tinc and OpenVPN trailed the pack, which should be expected since they rely on much older protocols. However, they did hold up surprisingly well in certain scenarios: They got about the same results as Tailscale in the intra-VPC test, and about the same results as Nebula in the inter-VPC test. Still, this was not enough to keep them out of last place.
Tinc did perform consistently better than OpenVPN in all tests, leading us to rank Tinc at #6 and OpenVPN at #7.
The big takeaway here is that WireGuard (and, by extension, Netmaker) is significantly faster than everything else.
However, speed is not the only factor.
We did not rank usability here, but Tailscale and ZeroTier come with very user-friendly SaaS installations and a GUI, which makes setup a breeze.
Meanwhile, Tinc, Nebula, OpenVPN, and WireGuard all require much more involved installations using the command line, which might scare off more casual users.
Netmaker has a GUI and a user-friendly installation process like Tailscale or ZeroTier, but with the advantages of WireGuard’s speed. However, Netmaker is also self-hosted, which requires an additional server, and some users may not be comfortable managing their own VPN server.
A final consideration is enterprise-readiness. Tinc and OpenVPN have been around for a long time, which is why they are more trusted in the professional space.
We expect this to change over time as people get used to WireGuard, but for now, if you need to know it’s safe and corporate-ready, Tinc and OpenVPN are still often the way to go, despite their slower speeds.
While running these tests, we got some ideas for factors to consider in future tests:
- Network size: We did not test at scale. A network of 100, 1,000, or 10,000 machines might affect the results.
- Operating System: We only tested Ubuntu 20.04. VPN implementations behave differently on different OSes, and in particular, Windows needs to be tested.
- VPN tuning: Each VPN has many configuration options that can speed up or slow down the network. If we dive deep into Nebula or OpenVPN configuration, they might run faster.
- Measurements: There are additional types of measurements we can take besides using iperf3.
We will put out more speed tests in the future with these considerations in mind. Do you have suggestions for future tests, or have you run speed tests of your own? What did you find?
Let us know in the comments.
Finally, as promised, here’s a link to the Google Sheets containing the test data.