QUIC and HTTPS Load Balancer

Colt McAnlis
Nov 9, 2018 · 4 min read

With the boom of mobile devices across the world, we’ve been seeing a lot of areas where smartphone adoption outpaced the ability for the telecom providers to update their networks. The result is a less-than-fast experience for the users, and a deep need to re-evaluate how our current internet protocols are working on the modern web.

This is some of the fundamental reasoning behind Google’s development of the QUIC protocol.

QUIC (Quick UDP Internet Connections) is a modern transport layer protocol that provides congestion control similar to TCP and security equivalent to SSL/TLS, along with improved performance (reduced connection and transport latency, higher throughput, etc.).

And recently the networking group in Google Cloud Platform released QUIC support for the HTTPS load balancers, which means it’s time to take it for a spin, and see what kind of performance we can get ;)

QUIC recap

A good way to decrease connection latency for an efficiently routed connection is to make fewer round-trips. Much of the work on QUIC has concentrated on reducing the number of round-trips required when establishing a new connection, including the handshake, encryption setup, and initial data requests. QUIC clients, for example, include the session negotiation information in the initial packet. QUIC servers aid this by publishing a static configuration record that clients can refer to concisely. The client also stores a synchronization cookie received from the server, so that subsequent connections can, in the best case, incur zero round-trip overhead.

One of the motivations for developing QUIC was that in TCP the delay of a single packet induces head-of-line blocking for an entire set of SPDY streams; QUIC’s improved multiplexing support means that only the one affected stream pauses.

Originally developed back in 2012, the protocol works by multiplexing a set of connections between two endpoints over UDP, with a focus on SSL/TLS-equivalent security and reduced application latency.

Setting up the test

If you’ve already got an HTTPS Load Balancer up and running, enabling QUIC is very straightforward.

In the cloud console, simply go back to your Load Balancer, and enable the “QUIC negotiation” button in the frontend dialog.
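If you’d rather do this from the command line, the same setting lives on the load balancer’s target HTTPS proxy and can be flipped with gcloud. (`my-https-proxy` here is a placeholder; substitute the name of your own target proxy.)

```shell
# Enable QUIC negotiation on an existing HTTPS load balancer frontend.
# "my-https-proxy" is a hypothetical name -- use your own target proxy.
gcloud compute target-https-proxies update my-https-proxy \
    --quic-override=ENABLE

# Verify the setting took effect.
gcloud compute target-https-proxies describe my-https-proxy \
    --format="value(quicOverride)"
```

Setting `--quic-override=DISABLE` (or `NONE` for the default behavior) reverses the change.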

Much like BBR, QUIC works best in challenged connectivity, so once again we’ll have our clients simulate a bad network connection using a tc command:
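The exact command from the original test isn’t shown here, but a typical netem setup along these lines adds latency and packet loss on the client’s interface (assuming `eth0` is the client’s network device; tune the numbers to the conditions you want to simulate):

```shell
# Simulate a poor network: 100ms of added delay and 2% packet loss on eth0.
sudo tc qdisc add dev eth0 root netem delay 100ms loss 2%

# Inspect the active queueing discipline to confirm the impairment.
tc qdisc show dev eth0

# Remove the impairment once the test run is finished.
sudo tc qdisc del dev eth0 root netem
```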

Our test is to fetch 10 MB worth of files of various sizes from a GCS bucket through the load balancer, and chart the throughput. We’ll have one client connecting through an HTTPS front-end (no QUIC support) and the other client connecting through QUIC.
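As a rough sketch of that measurement, something like the following fetches each object through the frontend and averages curl’s reported download speed. The hostname and object names are placeholders, not values from the original test, and the script assumes a live deployment to run against:

```shell
# Placeholders: substitute your load balancer's hostname and object names.
LB_HOST="https://lb.example.com"

total=0
count=0
for obj in file-1mb.bin file-2mb.bin file-3mb.bin file-4mb.bin; do
  # %{speed_download} is curl's average transfer rate in bytes/sec.
  speed=$(curl -s -o /dev/null -w '%{speed_download}' "$LB_HOST/$obj")
  total=$(awk -v t="$total" -v s="$speed" 'BEGIN { print t + s }')
  count=$((count + 1))
done

awk -v t="$total" -v c="$count" 'BEGIN { printf "avg: %.0f bytes/sec\n", t / c }'
```

Running this once through the plain HTTPS frontend and once through the QUIC-enabled one gives the two throughput series to chart.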

The results

The results are pretty clear in this test: QUIC sees a 1.3x improvement in overall performance vs. standard HTTPS on the load balancer, which is pretty nice for just setting one flag!

Where to use QUIC?

As mentioned, QUIC really shines in low-connectivity environments, where packet loss and delay are high. Much like BBR, there’s no downside to turning it on. Your QUIC enabled clients on high-performing networks will get a small boost, here and there, but you’ll see your biggest gains in the regions where connectivity needs some help.

Likewise, remember that your HTTPS Load Balancer is already getting the benefits of BBR from the Google Front Ends, so turning on QUIC just helps that even more!

Now it’s worth noting that QUIC requires a client that supports the protocol, so unless you’re writing your own client with support, check which browsers support the protocol to make sure you’re reaching your target market.
