
Evolution of HTTP: HTTP/3 Deep Dive

Maryann Gitonga
9 min read · Jun 19, 2023

A few months ago, Pinterest made a significant shift by adopting HTTP/3 for their primary production domains and upgrading their client apps’ network stack to align with this new protocol. This development prompted my deep dive into the evolution of HTTP, from its inception to the latest version embraced by Pinterest.

What is HTTP?

HTTP (Hypertext Transfer Protocol) is an application-layer protocol that facilitates the transfer of diverse content such as web pages, text, images, media, and binary files. It enables communication between two key entities: the client and the server. Typically, our web browser acts as the client, making requests to the server on our behalf, though other applications can fulfil this role as well.

Over time, HTTP has undergone significant upgrades to accommodate the exponential growth of the internet and the increasingly diverse nature of web content. The evolution of HTTP spans from its initial versions, HTTP/1.0, HTTP/1.1 and HTTP/2, to the most recent HTTP/3. With each iteration, new features have been introduced to address contemporary requirements and rectify limitations from previous versions.

Before delving into the intricacies of HTTP/3 and QUIC, it is crucial to understand the foundation on which they stand.

HTTP/1

HTTP/1 is built on the Transmission Control Protocol (TCP), with each request sent over a separate TCP connection, even when the requests are directed to the same server. The client establishes a TCP connection, sends a request, receives the response from the server, and then closes the connection. When a single task required multiple requests, this approach introduced inefficiency and latency: opening numerous TCP connections added significant overhead, compounded by TCP mechanisms like slow-start, which delayed data transmission because each new connection required a conservative ramp-up of the sending rate. Buffering could also pose challenges with large payloads, as the server might need to accumulate an entire response before sending it back to the client.
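To make this concrete, here is a minimal Python sketch of the HTTP/1.0 pattern, with example.com as a placeholder host and hypothetical resource paths; every resource pays for its own TCP handshake and slow-start ramp-up:

```python
import socket

def fetch(path):
    # A fresh TCP connection per request: handshake + slow-start every time.
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(f"GET {path} HTTP/1.0\r\nHost: example.com\r\n\r\n".encode())
        chunks = []
        while data := sock.recv(4096):  # HTTP/1.0: the server closes when done
            chunks.append(data)
        return b"".join(chunks)

# Three resources for one page -> three separate connections.
for path in ("/", "/style.css", "/app.js"):
    print(path, len(fetch(path)), "bytes")
```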

HTTP/1 over TCP: Multiple TCP connections for multiple requests of a single task

HTTP/1.1

Multiple requests & responses over a single TCP connection

To address the issue of multiple TCP connections in HTTP/1, HTTP/1.1 made persistent connections the standard behaviour, building on the keep-alive mechanism that clients had previously opted into with a “Connection: Keep-Alive” header. Reusing one connection for multiple requests reduced latency by eliminating repeated TCP handshakes and the slow-start ramp-up that comes with each new connection.
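As a rough illustration, Python’s built-in http.client module speaks HTTP/1.1 with persistent connections, so the same fetches can share a single TCP connection (example.com again as a placeholder host):

```python
from http.client import HTTPConnection

# One connection, one handshake, several requests in turn.
conn = HTTPConnection("example.com", 80)
for path in ("/", "/style.css", "/app.js"):
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()  # drain the body so the connection can be reused
    print(path, resp.status, len(body), "bytes")
conn.close()
```

Note that the exchanges are still strictly sequential: each response must be read in full before the next request goes out.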

Transmission of multiple requests without waiting for responses

HTTP/1.1 also introduced HTTP pipelining, enabling clients to send multiple requests without waiting for each response individually. However, pipelining suffered from head-of-line (HoL) blocking: responses had to be returned in the exact order of the requests, so a slow response delayed every response queued behind it. Due to these challenges, support for HTTP pipelining was eventually removed from or disabled by default in most major web browsers.

Head-of-line blocking in HTTP/1.1 pipelining
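A raw-socket sketch makes the failure mode visible (example.com as a placeholder host; most servers no longer honour pipelining, so treat this purely as an illustration):

```python
import socket

def request(path):
    return f"GET {path} HTTP/1.1\r\nHost: example.com\r\n\r\n".encode()

with socket.create_connection(("example.com", 80)) as sock:
    # Both requests leave immediately, before any response arrives...
    sock.sendall(request("/a") + request("/b"))
    # ...but responses must return in request order: if /a is slow, /b's
    # response queues up behind it. That queueing is head-of-line blocking.
    print(sock.recv(4096).decode(errors="replace")[:200])
```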

HTTP/2

Request-Response multiplexing in a single TCP connection

HTTP/2 introduced a binary framing layer that divides communication between the client and server into smaller “chunks.” This approach creates an interleaved bidirectional stream of communication, allowing independent frames to be transmitted and received out of order. This effectively resolved the issue of head-of-line blocking at the application layer.

HTTP/2 multiplexing prevents head-of-line blocking at the application layer
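For a byte-level feel of the framing layer: every HTTP/2 frame begins with a fixed 9-octet header carrying a 24-bit payload length, an 8-bit type, 8 bits of flags, and a 31-bit stream identifier (RFC 9113). A small Python sketch that parses one:

```python
import struct

def parse_frame_header(header: bytes):
    # 9 octets: 24-bit length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream ID.
    length_high, length_low, ftype, flags, stream_id = struct.unpack("!BHBBI", header)
    return {
        "length": (length_high << 16) | length_low,
        "type": ftype,
        "flags": flags,
        "stream_id": stream_id & 0x7FFFFFFF,  # clear the reserved bit
    }

# A DATA frame (type 0x0) carrying 5 bytes on stream 3:
print(parse_frame_header(bytes([0, 0, 5, 0, 0, 0, 0, 0, 3])))
```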

In addition to the framing layer, HTTP/2 enables better control over concurrent streams. Both the client and server can specify the maximum number of simultaneous streams the other peer can initiate. This flexibility empowers peers to adjust the number of concurrent streams dynamically, either reducing or increasing it as required.
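To make this concrete, here is a minimal sketch with the Python hyper-h2 library; the limit of 10 is an arbitrary example value:

```python
from h2.config import H2Configuration
from h2.connection import H2Connection
from h2.settings import SettingCodes

conn = H2Connection(H2Configuration(client_side=True))
conn.initiate_connection()
# Advertise that the peer may keep at most 10 streams open towards us; the
# peer's own SETTINGS frame carries the limit that applies in the other direction.
conn.update_settings({SettingCodes.MAX_CONCURRENT_STREAMS: 10})
wire_bytes = conn.data_to_send()  # connection preface + SETTINGS, ready for the socket
```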

Optimal resource allocation & congestion management

HTTP/2 also introduced a flow-control scheme to keep streams from starving or blocking one another. Each stream operates independently, without the need for strict ordering during transmission or reception. A crucial component of this scheme is the flow-control window, which each receiver advertises to signal how much data it is prepared to accept without overwhelming its buffers. If one stream stalls, for example because its sender has exhausted that stream’s window, the peers can continue exchanging frames on other streams that still have window budget, fostering parallelism and enhancing overall efficiency.
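Continuing with hyper-h2 as an illustration (the stream ID and window increments are arbitrary), a receiver re-opens its window with WINDOW_UPDATE frames at either the connection or the stream level:

```python
from h2.config import H2Configuration
from h2.connection import H2Connection

conn = H2Connection(H2Configuration(client_side=True))
conn.initiate_connection()
conn.send_headers(1, [(":method", "GET"), (":path", "/big-download"),
                      (":scheme", "https"), (":authority", "example.com")])
# As the application drains its receive buffers, it grants the sender more
# budget, for the whole connection or for one stream at a time:
conn.increment_flow_control_window(65535)               # connection-level window
conn.increment_flow_control_window(32768, stream_id=1)  # just stream 1's window
wire_bytes = conn.data_to_send()
```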

Efficient request & response metadata processing

Another key feature of HTTP/2 is header compression, which targets request and response metadata such as header fields and cookie data. The compression uses the HPACK format, which significantly reduces the overhead of transmitting this metadata. String literals are encoded with Huffman coding, a lossless data compression technique, and both the client and server maintain a continuously updated, indexed list of header fields encountered before. This indexed list serves as a reference, allowing frequently transmitted values to be represented far more compactly.
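As a sketch, the Python hpack library (the HPACK implementation underneath hyper-h2) makes the indexed-list effect easy to observe: encoding the same headers a second time yields a much smaller block, because they are now sent as table references:

```python
from hpack import Encoder, Decoder

encoder, decoder = Encoder(), Decoder()
headers = [(":method", "GET"), (":path", "/"), ("user-agent", "demo-client")]

first = encoder.encode(headers)
second = encoder.encode(headers)  # same headers, now indexed in the dynamic table
print(len(first), "bytes first, then", len(second), "bytes")

# The decoder maintains a mirrored table and recovers the original headers.
assert [(name, value) for name, value in decoder.decode(first)] == headers
```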

Proactive resource delivery to the client

Furthermore, HTTP/2 introduced a valuable capability known as server push. With server push, a server can proactively send resources it expects the client to need alongside a response, without waiting for the client to request each one. This push-based approach optimises the delivery of resources and further improves the overall efficiency and responsiveness of web applications.

HTTP/3

While HTTP/3 retains familiar syntax and semantics inherited from HTTP/2, its significant deviation lies in the underlying protocol, QUIC. QUIC operates on UDP instead of TCP, changing the stacking order of the protocol layers built on top of the Internet Protocol.

Protocol stacking: HTTP/2 vs HTTP/3

QUIC

QUIC, being based on UDP and implemented at the user level, offers the advantage of not requiring modifications at the kernel level (the Internet Protocol and UDP layers). This makes it easy to adopt and deploy, since the underlying protocol is already widely known and implemented by nearly all devices on the Internet. The key enhancements brought about by QUIC are as follows:

Multiple streams at the transport layer

QUIC introduces the concept of multiple byte streams at the transport layer, along with per-stream packet loss handling. This means that QUIC streams are treated as separate entities, and any packet loss affecting one stream does not impact others. QUIC also implements individual flow control mechanisms for each stream. This effectively addresses the issue of head-of-line blocking at the transport layer.
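A toy model, which deliberately ignores real QUIC framing, acknowledgements, and retransmission logic, captures the idea: each stream has its own reassembly buffer, so a gap in one stream never stalls delivery on another:

```python
from collections import defaultdict

class ToyQuicReceiver:
    def __init__(self):
        self.buffers = defaultdict(dict)   # stream_id -> {offset: data}
        self.delivered = defaultdict(int)  # stream_id -> next offset to deliver

    def on_packet(self, stream_id, offset, data):
        self.buffers[stream_id][offset] = data
        out = b""
        # Deliver any contiguous prefix for this stream only.
        while self.delivered[stream_id] in self.buffers[stream_id]:
            chunk = self.buffers[stream_id].pop(self.delivered[stream_id])
            self.delivered[stream_id] += len(chunk)
            out += chunk
        return out

rx = ToyQuicReceiver()
print(rx.on_packet(0, 0, b"he"))     # b'he' delivered on stream 0
# Stream 0's next packet (offset 2) is lost in transit...
print(rx.on_packet(4, 0, b"world"))  # b'world' -- stream 4 is unaffected
print(rx.on_packet(0, 2, b"llo"))    # b'llo' -- a retransmission fills the gap
```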

Connection migration

A connection over the internet is defined by four parameters known as the 4-tuple: client IP address, client port, server IP address, and server port. In TCP, if any of these parameters change, the connection becomes invalid and requires re-establishment, causing downtime. For instance, during a network switch, such as moving from the office to the parking lot, one might experience a brief blackout during a live video conference.

To overcome this limitation, QUIC introduces a concept called the connection ID. In addition to the 4-tuple, each QUIC connection is assigned a unique connection ID, enabling the identification of a connection between two peers. This means that even when moving across different networks, the connection can be maintained between the same known peers. The connection ID is defined within the QUIC transport layer itself, allowing it to remain unchanged when transitioning between networks. This allows connections to be seamlessly and quickly moved between networks while maintaining reliability.

To safeguard user privacy and security, the connection ID changes whenever a client shifts to a new network. This prevents hackers and eavesdroppers from tracking a user’s movement across networks and inferring their approximate physical locations. The client and server agree upon a shared list of randomly generated connection IDs that all correspond to the same connection. Both the QUIC client and server possess knowledge of the mapping between the connection IDs and connections, enhancing robustness against network changes.
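As a toy sketch (with made-up connection IDs and addresses, and none of QUIC’s real packet protection), keying session state on the connection ID rather than the 4-tuple is what lets a session survive an address change:

```python
sessions = {}  # connection_id -> session state

def on_datagram(src_addr, connection_id, payload):
    session = sessions.setdefault(connection_id, {"bytes": 0})
    session["bytes"] += len(payload)
    session["addr"] = src_addr  # note the new path, keep the session
    return session

on_datagram(("203.0.113.7", 51000), b"\x1a\x2b", b"hello")       # office Wi-Fi
s = on_datagram(("198.51.100.9", 47000), b"\x1a\x2b", b"again")  # mobile network
print(s["bytes"])  # 10 -- the same session survived the address change
```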

Flexible congestion control

TCP’s classic congestion control (Reno and its descendants) reacts to packet loss by cutting the congestion window roughly in half.

QUIC’s congestion control, by contrast, is designed to be more flexible and responsive. RFC 9002 specifies a NewReno-based algorithm as the default but explicitly allows implementations to plug in alternatives such as CUBIC or BBR, and because QUIC runs in user space, these algorithms can be tuned and redeployed without kernel changes. QUIC also improves the signals congestion control depends on: packet numbers are never reused, which removes retransmission ambiguity and enables more precise round-trip-time measurement. Together, these properties help QUIC adapt to varying network conditions and make better use of the available bandwidth.
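As a point of reference, here is a toy simulation of the loss-based additive-increase/multiplicative-decrease behaviour described above, with made-up loss rounds; a user-space QUIC stack is free to replace this entire policy without a kernel upgrade:

```python
cwnd, loss_rounds = 10.0, {5, 11}  # hypothetical loss events at rounds 5 and 11
for rtt in range(1, 15):
    if rtt in loss_rounds:
        cwnd = max(cwnd / 2, 1.0)  # multiplicative decrease on loss
    else:
        cwnd += 1.0                # additive increase per round trip
    print(f"rtt={rtt:2d}  cwnd={cwnd:4.1f}")
```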

Fully integrated encryption

When using HTTP/2, the specification allows both a cleartext route (h2c) and a secure route (HTTPS), although browsers only ever implemented the latter. With HTTPS, your HTTP plaintext data is encrypted using TLS (Transport Layer Security) before being transmitted over TCP.

In contrast, QUIC, the underlying protocol of HTTP/3, always encrypts: TLS 1.3 is integrated directly into the QUIC handshake rather than layered on top. Almost all packet header fields in QUIC are encrypted or cryptographically protected, including some of the packet header flags. As a result, intermediaries no longer have access to most transport-layer information, significantly enhancing QUIC’s security and privacy.
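As a small illustration with the Python aioquic library (the CA-bundle path below is hypothetical), the TLS parameters are part of the QUIC connection configuration itself rather than a separate, optional layer:

```python
from aioquic.quic.configuration import QuicConfiguration

# There is no cleartext mode to fall back to: the configuration that defines
# the QUIC connection carries the TLS settings.
config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
config.server_name = "example.com"        # sent in the TLS 1.3 handshake (SNI)
# config.load_verify_locations("ca.pem")  # hypothetical CA bundle for verification
```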

Improved header compression

HTTP/3 introduces a new header compression mechanism called QPACK, which is a modification of HPACK used in HTTP/2. Unlike in HTTP/2, where TCP ensures the headers arrive in order due to its in-order packet delivery, QPACK allows HTTP headers to be received out of order across different QUIC streams.

To handle this, QPACK employs a lookup table mechanism for both encoding and decoding the headers. This mechanism enables efficient compression and decompression of the headers, even when they arrive in a non-sequential fashion.

This optimises the transmission of header data, reducing the overhead associated with header compression and facilitating faster and more streamlined communication between clients and servers.
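As a toy sketch that ignores QPACK’s actual wire format, static table, and the dedicated encoder/decoder streams it uses for synchronisation, the core lookup-table idea looks like this: once a header has been sent literally, later occurrences shrink to a small index:

```python
table = []  # the dynamic table both endpoints build up as headers are seen

def encode(headers):
    out = []
    for header in headers:
        if header in table:
            out.append(("index", table.index(header)))  # a repeat: a tiny integer
        else:
            table.append(header)
            out.append(("literal", header))             # first sight: full name/value
    return out

print(encode([("user-agent", "demo"), (":path", "/a")]))
print(encode([("user-agent", "demo"), (":path", "/b")]))  # user-agent is now an index
```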

HTTP/3 Use Cases

Now that we have explored the capabilities of QUIC (and HTTP/3), we can envision several potential use cases where this protocol can excel in the near future. Here are a few ideas:

  1. Real-time applications: HTTP/3’s improved congestion control makes it an ideal choice for real-time communication applications such as video conferencing, voice-over-IP (VoIP), and live streaming. The reduced connection establishment time and efficient data transmission enable seamless, high-quality, and uninterrupted communication experiences.
  2. Mobile Networks: With its ability to handle seamless connection migration, HTTP/3 is well-suited for mobile networks. As users move across different networks or experience intermittent connectivity, HTTP/3’s robustness and adaptive mechanisms ensure uninterrupted service delivery, improved performance, and enhanced user experience on mobile devices.
  3. Content Delivery: HTTP/3’s multiplexing and parallelism capabilities, along with its improved congestion control, make it a valuable asset for content delivery networks (CDNs) and large-scale media distribution. It enables faster and more efficient delivery of multimedia content, including images, videos, and streaming media, to end-users, enhancing content accessibility and reducing buffering time. Companies like Pinterest have already harnessed the power of HTTP/3 to enhance their content delivery capabilities.
  4. IoT and Edge Computing: With the rapid increase of Internet of Things (IoT) devices and the increasing need for edge computing, HTTP/3 offers advantages in terms of lower latency, reduced overhead, and enhanced security. It can facilitate efficient communication between IoT devices, edge servers, and cloud services, enabling seamless data exchange and real-time interactions in IoT ecosystems.

These are just a few examples of potential use cases for HTTP/3. As the adoption of this protocol continues to grow, we can expect to see its benefits leveraged in various domains to enhance performance, security, and user experience in the evolving landscape of internet-based services and applications.

The intricacies of HTTP/3 and QUIC extend far beyond what we have covered in this blog. To delve deeper into the new protocol, there are comprehensive resources that provide an in-depth breakdown, such as the three-part blog series by Robin Marx.

To gain a practical glimpse into the potential future use cases of HTTP/3, I recommend exploring this blog by Jeff Posnick on WebTransport, a web API utilising HTTP/3. The protocol is still in its early stages of implementation, with pioneering organisations like Pinterest leading the way in this transformative journey. I also recommend reading Pinterest’s official blog, where they share their experiences and thoughts on the adoption of this innovative protocol.

I hope this blog has provided you with valuable insights into the world of HTTP and inspired you to further explore this evolving space. 😁


Maryann Gitonga

A Software Engineer who loves trying out different tooling & engineering concepts in Back-end Engineering & DevOps. An explorer in the ML world too!