A QUIC Intro to HTTP/3

David Gerchikov
ironSource Tech Blog
7 min read · Dec 17, 2020

Now that the IETF QUIC and HTTP/3 working groups have reached a significant milestone and the development of these protocols is nearly finished (the success rates of the various client/server implementations in the QUIC Interop Runner serve as strong evidence of that), it's a perfect time to get to know these shiny brand new technologies, which will play a major role in Smart Home, IoT, and the 5G era.

If you have no prior knowledge of UDP, TCP, and HTTP protocols, the following articles may serve as a good starting point before reading this one:

In my opinion, before you get to know a new technology, you should ask the following questions:

  • What problem does it solve?
  • Where can it be useful?
  • Why should I care?
  • Will it be popular?

The advantages of popular technologies are clear: they usually have an active community or a large customer base, which promotes development, new features, better testing, and a proven track record in live environments. Super popular technologies end up at the center of an ecosystem with additional complementary projects.

Take GraphQL and Falcor, for example. Both technologies solve the same problem, yet many teams choose GraphQL over Falcor even though Falcor is really straightforward and easy to learn. Even Netflix, the original developer of Falcor, often chooses GraphQL, as illustrated in the post Our learnings from adopting GraphQL, mainly for the reasons described above.

In this article, we will answer the following questions: Why should you care about HTTP/3 and QUIC? What problems do they solve? Are they useful? Will they be popular?

We will start our journey with short overviews of TCP and past versions of HTTP, then proceed to a short introduction to the QUIC and HTTP/3 protocols, and conclude with a simple HTTP/3 client/server implementation example in Go.

A Quick Dive Into TCP

The HTTP spec does not mandate using TCP as its transport layer: if you wanted to deliver HTTP requests/responses over a different protocol such as UDP, or maybe one you invented on your own, you could. In practice, HTTP uses TCP mainly because of its ability to deliver packets reliably across an unreliable network, using mechanisms such as the three-way handshake, retransmission, slow-start, and others (see Building Blocks of TCP for more information). However, TCP's reliable delivery does not come for free; it comes at the expense of performance. It's also important to note that TCP is designed for long-lived connections, while HTTP requests are short-lived.
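To make the layering concrete, here is a minimal Go sketch of issuing an HTTP/1.1 request by hand over a plain TCP socket. The host example.com and the hand-written request are just illustrative; the point is that HTTP is simply text framed on top of whatever transport carries it.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Open a plain TCP connection; this is where the three-way handshake happens.
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// HTTP/1.1 is just text on top of the transport: write the request by hand.
	fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

	// Read and print the status line of the response, e.g. "HTTP/1.1 200 OK".
	status, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		panic(err)
	}
	fmt.Print(status)
}
```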

The history of HTTP

  • HTTP/1.0 — was introduced in 1996 as an extension of HTTP/0.9. For all its simplicity, it had a few weaknesses and limitations. Two of them stand out among them all:
  1. Connection design — HTTP/1.0 sends one HTTP request per TCP connection, and the connection is closed as soon as the caller receives the response. You may notice that this is a poor fit for TCP's design: because each connection is so short-lived, in most cases it never gets past TCP slow-start, which causes poor link capacity utilization and bursty traffic.
  2. Concurrent requests — HTTP/1.0 can establish parallel TCP connections, but TCP connection establishment is an expensive process, mainly because of the handshake round trip. Additionally, in most cases, browsers impose a limit of 6 parallel connections per domain to prevent DoS attacks.
[Image: taken from MDN Web Docs]
  • HTTP/1.1 — was introduced in 1999 to overcome HTTP/1.0's concurrent requests problem. It brought two concepts: the first, called persistent connections, allows reusing the same TCP channel for a chain of requests, and the second, called HTTP pipelining, allows sending multiple HTTP requests on the same TCP connection without waiting for each response. These concepts reduced web page load time, but at the same time introduced problems like head-of-line (HOL) blocking.
[Image: taken from MDN Web Docs]
  • HTTP/2 — was announced in 2015 with latency reduction as its primary goal. It introduced new features such as request prioritization, server push, an efficient header compression mechanism (HPACK), and full request/response multiplexing over a single TCP channel. The latter was supposed to solve the HOL problem, but instead moved it from the application layer to the transport layer: now packet loss on one of the HTTP/2 streams blocks all other streams until the lost packets are retransmitted. In practice, it improved performance on good-quality connections but made things even worse than HTTP/1.x on low-quality connections with more than 5% packet loss. (A small client-side sketch follows this list.)
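As a client-side illustration of the points above, the sketch below uses Go's standard net/http client, which keeps HTTP/1.1 connections alive for reuse and transparently negotiates HTTP/2 over TLS when the server supports it. The URL is just an example host; which protocol you actually see depends on the server.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The default client maintains persistent connections (HTTP/1.1 keep-alive)
	// and negotiates HTTP/2 via the TLS ALPN extension when the server supports it.
	for i := 0; i < 2; i++ {
		resp, err := http.Get("https://www.google.com/") // example host, not part of the original article
		if err != nil {
			panic(err)
		}
		// Draining the body lets the client return the connection to its pool,
		// so the second request can reuse it instead of opening a new one.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		fmt.Println(resp.Proto) // typically prints "HTTP/2.0" for servers with HTTP/2 enabled
	}
}
```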

It’s time to get to know QUIC && HTTP/3 🥁

QUIC — the new approach to network packet delivery

QUIC was introduced by Google in 2013 and later submitted to the IETF for standardization. Today, there are two flavours of the protocol, IETF QUIC and gQUIC; the latter was already used to connect Chrome users to Google edge services, covering a significant part of Google's traffic. The motivation for defining a new protocol stems from several reasons:

  1. Web communication latency reduction (e.g. solving the TCP HOL problem that HTTP/2 suffers from)
  2. TCP evolves slowly, mainly because of protocol ossification: all kinds of middleware equipment across the network, such as switches, NATs, routers, and deep packet inspection tools, receive infrequent updates (if any) and rely heavily on current implementations of TCP. Even small changes to the protocol may cause these boxes to drop packets of a new TCP version and, as a result, break the protocol and make it useless.
  3. Although there are alternatives, the kernel TCP stack is the preferred choice (for reasons described well in this blog post: Why we use the Linux kernel's TCP stack). Being a kernel module makes the TCP implementation inseparable from the OS and kernel version, so updating it requires a kernel upgrade, which may be a problem for many enterprises.

The initial decision to embed the TCP stack in the kernel seems strange to me, as it stretches the kernel's original purpose as a layer between the hardware and the user. It would seem that deciding how to send a packet through the network belongs more to the application than to the OS. Interestingly enough, this decision was influenced by the spirit of the period, when monolithic design was mainstream, and it's also interesting to see how QUIC's developers take a more modern, "separation of concerns" minded approach. Surprisingly, QUIC uses UDP as its underlying layer and moves all the well-known TCP mechanisms, such as congestion control, flow control, and retransmission of lost packets, to user space, making them a property of the application. The use of TLS 1.3 with 0-RTT and 1-RTT early data transfer makes it fast and secure, and identifying a connection by a connection ID (instead of the IP address and port) makes network-to-network roaming easy. All these features make QUIC fit well with today's needs.

HTTP + QUIC = HTTP/3

As you may have noticed, the HOL and request parallelization problems are rooted in the very beginning of HTTP's evolution. The biggest innovation of HTTP/3 is the fact that it's the first HTTP version that uses QUIC as its transport layer. QUIC's ability to handle each stream of a connection separately, allowing different HTTP/3 frames to be transferred independently, de facto solves the HOL problem. Thanks to QUIC's dynamic design, the number of parallel streams is determined by the receiver, allowing the sender to make as many parallel HTTP requests as the peer is able to handle. This and other features, such as 0-RTT and 1-RTT, help HTTP/3 achieve the best possible network utilization regardless of the network type or packet loss percentage.

Alongside this radical change, the IETF made a huge effort to preserve existing concepts such as headers, cookies, and the request/response pattern. This allows app developers to adopt HTTP/3 without drastically changing their application code or business logic. Despite the similarity, there are a few differences:

  1. Unlike HTTP/2, there is no request prioritization scheme in HTTP/3, as HTTP/2's turned out to be complicated and barely used.
  2. Unlike previous versions, HTTP/3 does not have an unencrypted variant; encryption is always present because it is handled by QUIC itself (which builds in TLS 1.3).
  3. The Alt-Svc header — HTTP/3 is served over QUIC, and the need for backward compatibility means that protocol negotiation cannot be performed via the TLS ALPN extension as it is in the HTTP/2 case (both HTTP/2 and HTTP/1.1 run over TCP, so they can share the same TLS handshake). Therefore, a different approach is needed. In the HTTP/3 case, the client first accesses the server's domain over HTTP/1.1 or HTTP/2, and the server's response includes an Alt-Svc header as follows:
Alt-Svc: h3="<optionally-domain>:<port>"

This indicates that HTTP/3 is available at the given domain and port; the client may then establish a QUIC connection and continue communicating with the server over HTTP/3.
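As a rough illustration of how a server might advertise HTTP/3 from its existing HTTP/1.1 or HTTP/2 endpoint, here is a minimal Go sketch using only the standard library. The port, the certificate file names, and the max-age value are arbitrary placeholders, and a real deployment would also need an actual HTTP/3 listener on the advertised UDP port.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Advertise that HTTP/3 is available on UDP port 443 of the same host;
		// ma (max-age) tells the client how long, in seconds, to cache this hint.
		w.Header().Set("Alt-Svc", `h3=":443"; ma=86400`)
		fmt.Fprintln(w, "served over", r.Proto)
	})

	// cert.pem and key.pem are placeholders for your own TLS certificate.
	err := http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil)
	if err != nil {
		panic(err)
	}
}
```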

Currently, there are a dozen QUIC library implementations, some of them with HTTP/3 support and some without. Here's a simple client/server example:

Client:
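A minimal sketch of such a client, assuming the quic-go library (github.com/quic-go/quic-go/http3, previously hosted under github.com/lucas-clemente/quic-go); the address, port, and the InsecureSkipVerify setting are illustrative only, and the exact API may differ between library versions:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"

	"github.com/quic-go/quic-go/http3" // assumed library; older releases lived under github.com/lucas-clemente/quic-go
)

func main() {
	// http3.RoundTripper implements http.RoundTripper on top of QUIC,
	// so the standard http.Client can be reused as-is.
	roundTripper := &http3.RoundTripper{
		TLSClientConfig: &tls.Config{
			InsecureSkipVerify: true, // only for local testing with a self-signed certificate
		},
	}
	defer roundTripper.Close()

	client := &http.Client{Transport: roundTripper}

	resp, err := client.Get("https://localhost:4433/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s\n%s", resp.Proto, resp.Status, body)
}
```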

Server:
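And a matching server sketch under the same assumptions (quic-go's http3 package, a self-signed cert.pem/key.pem pair, and port 4433); again, treat the exact function names as version-dependent:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/quic-go/quic-go/http3" // assumed library; older releases lived under github.com/lucas-clemente/quic-go
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// For requests arriving over QUIC, r.Proto reports the HTTP/3 version.
		fmt.Fprintf(w, "Hello over %s\n", r.Proto)
	})

	// ListenAndServe serves HTTP/3 over UDP (and, in this library, also regular
	// HTTPS over TCP on the same port, advertising HTTP/3 via Alt-Svc).
	// cert.pem and key.pem are placeholders for your own certificate.
	log.Fatal(http3.ListenAndServe("localhost:4433", "cert.pem", "key.pem", mux))
}
```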

Summary:

Alongside the IETF QUIC and HTTP/3 working groups and the support of the open source community, big vendors such as Google, Microsoft, Apple, Facebook, and Mozilla have invested a huge effort in these protocols' development, a fact that changes the question from "will it become mainstream?" to "when will it become mainstream?". And as I mentioned at the beginning, popularity is the key to a successful technology. Diving into this tech now makes you well prepared for the foreseeable future.

References

Introduction to HTTP/2: https://developers.google.com/web/fundamentals/performance/http2

High Performance Browser Networking: https://hpbn.co/#toc

HTTP/3 explained: https://http3-explained.haxx.se

Why we use the Linux kernel’s TCP stack: https://blog.cloudflare.com/why-we-use-the-linux-kernels-tcp-stack/

Our learnings from adopting GraphQL: https://netflixtechblog.com/our-learnings-from-adopting-graphql-f099de39ae5f
