HTTP/3: Shiny New Thing, or More Issues?

Tech Internals Conf
17 min read · Feb 15, 2024

Nick Shadrin, NGINX

Basics

All HTTP protocols (HTTP/1, HTTP/2, and HTTP/3) share the same idea and the same concepts of request and response. These semantics stay the same across all HTTP versions: you still have a request, headers, methods, protocol versioning, response codes, and so on.

Request and Response

This has been the case since HTTP/1.0 in 1996 and has continued through all subsequent versions of the protocol. The step from HTTP/1.1 to HTTP/2 took over a decade of development. Notably, work on what became HTTP/3, which uses a UDP-based transport for HTTP, began even before the HTTP/2 protocol was released. Google's development team worked extensively on HTTP/2, which includes multiplexing, prioritization features, and header compression. These features are helpful, but they can also cause issues in some environments.

HTTP/3 is already used by around a quarter of the internet, while HTTP/2 usage is declining: from 39.8% to 35.6% over the year ending February 1.

At the same time, there is a huge split between those who use an external provider, such as Cloudflare, Akamai, or CloudFront, as a shield in front of their servers, and those who own the actual NGINX server or another reverse proxy that faces the Internet. The choice and utilization of protocols depend on whether you own your front end. For instance, Google.com, Facebook.com, Youtube.com, Netflix.com, Instagram.com, Live.com, Pinterest.com, Bing.com, Blogspot.com, and Mozilla.org fully own their front ends and have the freedom to adopt protocols as they see fit. Similarly, those who own the NGINX server have the same level of control.

Client-side support for HTTP/3 is very broad: it is currently available in all major browsers.

It’s not supported by very old browsers and by some specialty clients. If you are developing a website for browsers, this is what matters. However, if you are developing a mobile app or an API for non-browser clients, those client libraries may choose and use protocols differently than Chrome, Firefox, or other conventional browsers.

Here are a few conclusions. HTTP/1 is still a living protocol, and a lot of systems (weird clients, bots, search engines, your apps and API clients) are still HTTP/1-only. When you are developing something that sits behind a CDN, your clients are actually the nodes of that CDN, and if that CDN downgrades to HTTP/1, HTTP/2 and HTTP/3 will not be available. Finally, internal connectivity between your servers is very likely to stay on HTTP/1, so you will still keep HTTP/1 on.

HTTP/3 features and benefits

HTTP/3’s major feature is the change in transport. Unlike the TCP-based HTTP/1 and HTTP/2, HTTP/3 uses the UDP protocol, which brings both benefits and challenges.

NOTE. QUIC is a system that enables encrypted streams to be transmitted over the network using basic UDP transport, while also providing connection handling over UDP. HTTP/3 utilizes the underlying QUIC technology for its HTTP semantics.

— Fast connection establishment is a major consideration in HTTP/3 traffic.

— While the HTTP/2 standard allows for non-encrypted use, browsers only implement it with encryption, so using HTTP/2 without encryption is not realistically possible. HTTP/3 goes further and mandates encryption in the standard itself.

— Connection migration. QUIC defines a connection differently than TCP does.

— No HOL blocking. Head-of-line blocking occurs at both the HTTP and the TCP layer. In QUIC there is no TCP, so there is no TCP-level HOL blocking, and retransmission applies only to the frames of the affected stream. This means that any unaffected stream continues without interruption: there is no need to maintain strict packet ordering across the whole connection, as a single TCP connection would require.

Protocol negotiation

Selecting a protocol when both client and server support multiple options is a crucial topic, and the negotiation process must be carefully considered. One familiar example of protocol negotiation is the transition from HTTP to HTTPS traffic.

Negotiation can occur at different levels: at the protocol level, with various types of 3XX redirects, or at a higher level, such as a link on the web page. The latter, however, may not be applicable to certain API traffic and apps.

Another way to negotiate between HTTP and HTTPS is the Strict-Transport-Security (HSTS) header. It specifies that a particular web domain should only be accessed through HTTPS and not HTTP. However, it is important to note that this is merely a suggestion to browsers or compliant clients, as defined by the header; it does not prevent other actions from being taken.
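As a minimal sketch, an HSTS policy can be set in an nginx configuration like this (the domain and the max-age value are arbitrary examples):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Ask compliant clients to use HTTPS only for the next year,
    # including subdomains. This is advisory: the client enforces
    # the policy, the server cannot.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```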

The Upgrade header provides yet another method for negotiating and changing protocols, beyond just WebSocket traffic. Using the Upgrade header, various protocols can be negotiated: it can be used to switch from HTTP to other protocols when you control both a custom server and a custom client.

Negotiating between HTTP/1 and HTTP/2 is different because they both use the same TCP connection. There is no need to create a new connection, as with switching from port 80 to port 443, because both protocols use port 443. The protocol negotiation for HTTP/2 can be done through extensions to the encryption process, since it realistically only runs over encrypted traffic. TLS extensions include ALPN (Application-Layer Protocol Negotiation), which sends a few characters during the handshake to define the protocols supported by the client and server. This ensures that the highest common version of the protocol is chosen for the connection. During the establishment of the encrypted connection, the content of the connection is not yet known; the TLS extension defines the content and the format of the data. Once the protocol is agreed upon, HTTP/2 data is sent instead of HTTP/1.

Understanding HTTP/3 negotiation can be challenging because it uses UDP traffic instead of TCP. Despite not establishing a TCP connection and not residing within the same connection, the URL scheme remains “https://”. Since a redirect cannot express a change of transport, 301 or 302 redirects cannot be used here.

The Alt-Svc header is a standard that defines a special format for indicating alternative protocols or versions of a protocol. This header can be used to specify how compliant clients or browsers should connect, depending on the desired outcome. The connection can be defined using an alternate port or even a domain name with a port. The official standard allows for the use of any port. In practice, it is not recommended to use non-standard ports on the public Internet due to potential issues with firewalls and local networks. The general recommendation is to use port 443 for this type of connectivity, which is in this case a UDP port instead of a TCP port. This way, when checking for open ports using netstat, the port will be identified as UDP port 443.
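As a sketch, advertising HTTP/3 on UDP port 443 via the Alt-Svc header looks like this in an nginx configuration (the max-age of 86400 seconds is an arbitrary example value):

```nginx
# "h3" names the HTTP/3 protocol; ":443" is the alternative (UDP) port;
# "ma" tells the client how long, in seconds, to remember this alternative.
add_header Alt-Svc 'h3=":443"; ma=86400';
```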

Protocol internals

Inside the protocol, we have a faster connection establishment.

Here, we are using an HTTP request over QUIC. The request data can be sent immediately in the first datagram, but some caveats exist regarding the certificate details and establishing connectivity. However, using this method allows for faster access to the first bit of data compared to using the TCP-based protocol.

Standards related to QUIC and HTTP/3

— RFC 9000 QUIC: a UDP-Based Multiplexed and Secure Transport.

— RFC 9001 Using TLS to Secure QUIC.

— RFC 9002 QUIC Loss Detection and Congestion Control.

— RFC 9114 HTTP/3.

— RFC 9204 QPACK: Field Compression for HTTP/3.

Connection ID

In TCP-based connectivity, a connection is identified by the tuple of source and destination IP addresses and ports. QUIC instead introduces the concept of a Connection ID, which is not tied to a specific IP address or port of the client. This means that the server can receive data for the same Connection ID from different source IP addresses and ports, and it must combine the different pieces of data wherever the Connection ID matches.

In a conventional TCP server, the operating system manages the data delivery to a certain socket, and the web server developer can deal with that socket without any extra logic. With Connection ID in HTTP/3, the concept of a connection is spread across multiple processes, and the web server developer must create the logic of combining different pieces of data from various processes into one logical stream.

The Connection ID concept allows for client migration. For instance, if you are downloading something and your Wi-Fi signal drops as you walk outside, your device will automatically switch to your 4G or 5G connection. In an ideal world, you would still use the same connection ID and seamlessly transfer the same client from one physical media, such as Wi-Fi, to another, such as 4G. We are yet to see a practical working example of this.

HTTP/3 streams

HTTP/3 has a similar wire format to HTTP/2, with streams. It has bidirectional client streams for normal data flow, as well as special streams, such as the one related to header compression using the QPACK algorithm for encoding and decoding header data.

In HTTP/1, the request line, the first line of the HTTP request, includes the method (such as GET), the URL, and the protocol version. This data comes before the headers. In HTTP/2 and HTTP/3, these pieces of data, known as pseudo-headers, are encoded in the same way as other headers: the pseudo-header for the URL is transmitted just like the header for a cookie, whereas in HTTP/1 it was part of the request line rather than a header. This newer approach provides consistency in the data exchanged between server and client, regardless of the type of data being exchanged.

Similar to HPACK for HTTP/2, HTTP/3 uses field compression, called QPACK. The URL, method, and other common headers are defined in a static table of 99 well-known entries that is known to all clients and servers and never changes. The dynamic table behaves differently in HTTP/2 and HTTP/3: in HTTP/2, the table is filled based on the traffic passing through the connection, while in HTTP/3, the table can be pre-filled separately from the data streams, resembling a server push.

In the example below, we set up the dynamic table in a separate stream from where the actual request and response will be created. This can be done outside of where the actual request will occur. If the server knows what requests to expect from the client, the data can be pre-filled. In the request stream, once the data is filled in, it only needs to be referenced.

Here, the highlighted data on the left is equivalent to the highlighted data on the right in terms of how it is supposed to be understood by the server and the client. This is an example of compression achieved through encoding and decoding using a specific table.

The level of compression achieved is significant. When does it make sense? If you have a workload that involves transferring large files, such as videos or lengthy documents, the size of the headers is not a major concern since they make up a minimal portion of the overall file size. However, if you are dealing with API-heavy chatting applications, you may encounter headers that are several kilobytes in size just to send a few bytes of data in response to your APIs. If you frequently perform this action, the large amount of data being transmitted can cause unnecessary network traffic.

For applications that rely heavily on APIs, it is important to consider whether your clients are browsers that support HTTP/3, or HTTP/1 clients for which this compression does not apply.

Challenges

Infrastructure challenges

The idea is similar to that of HTTP/2. The concept of having a client and server without any intermediary works well during application development. However, in real-world scenarios, a reverse proxy is often present along with connectivity to the backend, which typically uses HTTP/1.

The real-world architecture looks more frustrating.

One of the major frustrations here is that the boxes are not yours. Even if you own the frontend box, there may be other boxes in the middle or on the client side that you do not own. This is especially true if your client is not a browser; in such cases, you might not own the methods of connectivity from the client side.

Web systems are not well-equipped to handle UDP traffic, as the focus for the past few decades has been on TCP optimizations. However, there is a growing need to enable UDP traffic on larger Internet entities. Additionally, the negotiation into the HTTP/3 protocol, specifically h3, is a delicate concept. If there is an issue with the UDP layer between the client and server, fallback to HTTP/2 or HTTP/1 may be necessary. Therefore, when designing the system and network, it is important to account for the possibility of network fallback from the UDP protocol.

Tooling challenges

There is a debate between binary protocols and plaintext protocols. In my opinion, plaintext protocols are superior because they are human-readable and can be easily analyzed using tools like Telnet, Wireshark, or tcpdump. When a protocol is encrypted and heavily binary, it is difficult for humans to troubleshoot without complex debugging tools and decoding features that turn the protocol back into a human-readable form.

How do those tools work? There are methods of debugging binary protocols, but they can be challenging, especially when dealing with real-life issues. The same applies to monitoring and visibility of your protocol through third parties, such as CDN providers and local proxies. These parties may have less visibility into network activity due to the use of UDP traffic instead of TCP connections.

Security challenges

There is also a set of security challenges. For example, the UDP protocol is not trusted by many Internet entities. So, it might happen that when you connect to Wi-Fi in a coffee shop, you only have TCP traffic and not UDP, except for some ports for the DNS protocol, which has its own set of problems.

The same design problem applies to security devices at every level: CDNs, application firewalls, DDoS protection features. Here, too, a deeper understanding of the traffic is required.

HTTP/3 configuration tutorial

Now, you can use the HTTP/3 protocol in the mainline packages of NGINX. The experimental stage is over: the protocol has been merged into the mainline branch.

However, there are some limitations. It is not possible to build HTTP/3-capable servers on older Linux distributions: because of the embedded encryption, newer SSL libraries are required to compile encryption support correctly, so those older distributions will not be supported.

When you are compiling your own NGINX, look for the configure parameter --with-http_v3_module. There are also further configuration parameters that define which encryption library you use for this protocol. New versions of NGINX provide the $http3 variable, which you can use in log_format to see in the logs whether a request was made over HTTP/3; you can even use that variable for more complicated logic.

nginx.conf

To enable HTTP/3 correctly, you take your listen directive and add the quic and reuseport parameters. There is also a new http2 directive in very recent versions of NGINX: the http2 parameter moved out of the listen directive and into the server block, which means you can now define which server block supports HTTP/2. Before, you could only define that for the entire socket; now it is per server.

In the main (“/”) location of those servers, you have to add the alternative service header “Alt-Svc”. SSL certificate and key remain basically the same and are used to ensure the proper configuration of encryption.
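Putting these pieces together, a minimal server block might look like the sketch below (the domain name, certificate paths, and Alt-Svc max-age are placeholder examples):

```nginx
server {
    # QUIC listener for HTTP/3 (UDP 443); reuseport is set once per port.
    listen 443 quic reuseport;
    # Regular TCP listener for HTTP/1.1 and HTTP/2 on the same port number.
    listen 443 ssl;
    http2 on;

    server_name example.com;

    # The same certificate and key serve both the TCP and QUIC listeners.
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        # Advertise HTTP/3 to clients currently connected over TCP.
        add_header Alt-Svc 'h3=":443"; ma=86400';
        root /var/www/html;
    }
}
```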

Defaults

When you configure NGINX in a very simple form without setting anything up, the listen directive with a port will not enable quic and reuseport by default, but you need them to use HTTP/3.

For the http2 directive, the default is off, which means that if you set up your server with a simple “listen 443 ssl”, HTTP/2 will not be enabled.

For the http3 directive, the default is on; however, the listen socket does not get the QUIC transport by default, which means HTTP/3 is effectively disabled.

When you configure nginx with “listen 443 quic reuseport”, you don’t have to set the http3 directive separately, as it is enabled by default once quic and reuseport are enabled on the listener socket. For individual server blocks sharing the same socket, you can disable it with “http3 off”.
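The defaults described above can be summarized in a small sketch (not a complete configuration):

```nginx
server {
    listen 443 ssl;            # TCP only: no quic, so HTTP/3 cannot work here
    # http2 on;                # HTTP/2 is off by default; enable it explicitly
}

server {
    listen 443 quic reuseport; # http3 defaults to on once quic is enabled
    listen 443 ssl;
    # http3 off;               # uncomment to disable HTTP/3 for this block
}
```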

NOTE. I recommend using the $http2 and $http3 variables and checking the logs to see how the protocols are being used on your network and whether clients are connecting through a given protocol version. It is important to understand how it works and to collect data on the percentage of usage and the benefits of the protocol. Conducting these visibility checks helps you understand your users.
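As a sketch, a log format using these variables might look like this (the format name and field layout are arbitrary):

```nginx
# $http2 and $http3 are empty when a request arrives over another
# protocol version, so each log line shows what was actually negotiated.
log_format protocols '$remote_addr "$request" $status '
                     'http2="$http2" http3="$http3"';

access_log /var/log/nginx/access.log protocols;
```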

Conclusions

I cannot recommend using HTTP/3 everywhere, nor can I say that HTTP/3 is bad, as it is helpful in some cases. We are trying to give you some context on how to choose protocol versions and make sure they actually fit your network.

You need to test it well, not just by connecting from your client or your browser straight to the server, but by enabling it through the actual infrastructure going through the entirety of the boxes in between. That also includes the test for the clients — clients which are browsers, clients which you don’t control, clients using HTTP/1, or the clients that you might create from scratch.

So, prepare for the unknown. This is a new thing. We don’t have a lot of data on how it will actually be used in practice.

Questions & Answers

Question. Does the HTTP/3 support Mutual TLS connection?

Answer. Mutual TLS connection in terms of the client certificates? I have never checked it on the client certificates.

Note from further research: Yes, it does. Same configuration and same features as the previous protocols.

Question. In terms of migration and WiFi connections in coffee shops, how soon will the global infrastructure and small business in particular upgrade hardware to enable HTTP/3?

Answer. So, there’s the functionality defined in the protocol that is theoretically and academically beautiful. However, in the real world you have a bunch of boxes in the middle and a bunch of people configuring those boxes in weird ways, and you do not control that part of the infrastructure. Even if the providers of those boxes implement this functionality, it will take a few years to implement it correctly in all of those consumer-level boxes.

The lifetime of those boxes in the wild can easily be 5–10 years. Once these are replaced with newer boxes, provided their makers actually listened to the promise of HTTP/3 and believed it is going to be the best thing for the world, we might see this functionality in about a decade. But we might never see it, as something like HTTP/4 may arrive first, or people might abandon HTTP/3 just as they abandoned server push.

Question. In terms of controlling the infrastructure, can transition to HTTP/3 start from the edge servers, your reverse proxies, and then your logic servers, thus enabling HTTP/3 there first because you have much more control?

Answer. It’s absolutely correct if you own the front level of your infrastructure, if you are the actual owner of that reverse proxy in front. However, if your security department makes you use some application firewall in front of the firewall you own, you’ll have to put another box in front of that box and move the box that supports the protocol further to the edge of your environment. If you are able to do that, then that’s where you should start supporting it. If there is a mandate, a security requirement, or some networking requirement that prevents you from owning the frontend, then you might never use the protocol.

Question. You mentioned that we can’t pick any port on the server side for the HTTP/3 QUIC protocol. If somebody does decide to use some other port, wouldn’t that fall apart because the client or the browser would expect 443? Do we need to have negotiation on HTTP/2 or can we use some higher level like DNS to tell it on what port to expect?

Answer. It is important to understand that the Alt-Svc header is the official way of negotiating the HTTP/3 protocol, and you must provide a port number. From the point of view of protocol negotiation, any port number works equally well. It makes sense to use port 443 because your local coffee shop’s Wi-Fi access point is likely to allow port 443 and to block unusual ports. However, inside your own network or intranet, there might be cases when UDP port 443 is busy with something else; for example, WebRTC might already use it.

Question. Are there any known scenarios or specific use cases where HTTP/2 outperforms HTTP/3 in terms of overall performance metrics? If so, what factors contribute to this? What are the trade-offs between the protocols in such situations?

Answer. We don’t have an ideal comparison scenario because there are different kinds of traffic and loads. There is a test between HTTP/2 and HTTP/1 which shows how HTTP/1 can be significantly faster than HTTP/2, because HTTP/2 is not very good when the network has a lot of packet loss and jitter. With multiple connections under heavy packet loss, HTTP/1 might actually recover better per connection than HTTP/2 can recover its single multiplexed connection.

We don’t have such data for HTTP/3. We will look into it when it is more broadly adopted.

Question. You have mentioned a feature of HTTP/3, rooted in QUIC, of opening new streams whenever clients, as we may define them, change location, switch from a Wi-Fi network to a mobile network, and so on. I think it is called linkability resistance in QUIC: adding new connection IDs and client IDs whenever something changes slightly. This might even happen from the same device and the same IP address, when, for example, a new component of the client-side application opens a new connection to the backend. Should we consider larger connection tables for the purposes of load balancing and other tasks? I’m not even talking about DDoS mitigation, where for QUIC and HTTP/3 under high load we need much bigger connection tables, that is, more memory. At the same time, current QUIC and HTTP/3 implementation libraries are very CPU-heavy compared to the established protocol stack.

With all these things in mind, are we ready globally for HTTP/3 migration or should we do it slowly to take effect and wait for a couple of new hardware production cycles in order for the infrastructure to catch up with the new requirements?

Answer. I think another cycle for the hardware will be very much needed for the overall Internet to understand QUIC correctly. I absolutely agree with the larger connection tables and with the understanding of the concept of connection ID instead of IP addresses. However, in your own infrastructure the main load is usually in the backend logic. If you are using the same kind of hardware for the frontend and network, as well as your backend load, you are able to carve out resources differently. If you are using some specialized hardware, some of those old Cisco routers or F5 boxes, the physical upgrade or some significant firmware updates might be actually needed.


Tech Internals Conf

Our conferences are a space for learning, networking and exchanging practical cases.