gRPC over HTTP/3

Jean-Marie Joly
SafetyCulture Engineering
Jul 14, 2020

In this article, we discuss how to serve HTTP/3 traffic while focusing specifically on gRPC and gRPC-Web.

HTTP/3

HTTP/3 aims to significantly improve HTTP/2 in terms of performance. HTTP/3 is based on the QUIC Transport Protocol, which is built atop UDP. The idea behind using UDP is to remove the head-of-line blocking phenomenon present in TCP.

In short, HTTP/3 is a relatively straightforward adaptation of HTTP/2 running over QUIC, which does the heavy lifting. QUIC is a remarkable protocol in two respects. The first is low-latency connection establishment, which, for example, makes connecting to a website faster.

At SafetyCulture, we have estimated that using HTTP/3 would save thousands of hours of latency across our combined user base. We came to this conclusion based on micro-benchmarks over WiFi internet connections, in which latency improved by 20 to 40 ms with HTTP/3.

Our findings are aligned with the outcomes of Cloudflare’s research comparing HTTP/2 and HTTP/3. The interesting aspect of this extensive study is that the picture is not uniformly bright: HTTP/3 can actually perform worse than HTTP/2 depending on the size of the payload.

However, these studies do not take into account the substantial latency improvement when migrating between mobile and WiFi networks. This improvement could have a massive impact on our users, since SafetyCulture’s main product, iAuditor, is used in a wide variety of places with disparate quality of internet access (e.g., Ethernet, WiFi, mobile networks). We are still assessing ways to gauge those benefits for our customers.

The second major advantage of QUIC is connection migration and resilience to NAT rebinding: because QUIC identifies a connection by a connection ID rather than by the source and destination addresses, a migration from WiFi to a mobile network is seamless, with no need to renegotiate the connection. Relatedly, QUIC supports 0-RTT connection establishment, which lets a client resuming a session with a known server send application data in its very first flight.

Image by Google

As of writing, HTTP/3 is still an IETF draft. The major downside is the limited support for, and awareness of, HTTP/3 and QUIC. Since UDP has not been as popular as TCP so far, internet routers may very well treat that traffic differently. This results in a rather long tail of latency and an elevated rate of connection failures (close to 1%).

At SafetyCulture, we use gRPC and gRPC-Web for mobile and Web applications, respectively. This is why introducing HTTP/3 also requires adjusting our existing infrastructure and applications to fully support gRPC protocols.

gRPC-Web

What is gRPC-Web? In short, it is a JavaScript implementation of gRPC for browser clients. The protocol carries Protobuf-serialized messages framed so that browsers can send them over regular HTTP requests, and it can be used over HTTP/1.1 as well as HTTP/2. By contrast, regular gRPC only runs on top of HTTP/2. Because the gRPC and gRPC-Web protocols differ, some translation must happen so that gRPC-Web client requests are readable by gRPC servers. At SafetyCulture, Envoy Proxy converts gRPC-Web requests to gRPC and back.
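As an illustration, the translation step in Envoy is enabled by its gRPC-Web HTTP filter. Below is a minimal sketch of the relevant filter chain, assuming Envoy’s v3 API; the surrounding listener and cluster configuration is elided.

```yaml
# Sketch: excerpt of an Envoy HTTP connection manager configuration.
http_filters:
- name: envoy.filters.http.grpc_web   # translates gRPC-Web requests to gRPC and back
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
- name: envoy.filters.http.cors       # browsers need CORS for cross-origin gRPC-Web calls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

The filter order matters: gRPC-Web translation must happen before routing so that upstream clusters only ever see plain gRPC.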

gRPC over HTTP/3

Ultimately, we are trying to run both gRPC and gRPC-Web over HTTP/3 to benefit from QUIC’s performance. gRPC-Web over HTTP/3 is the easier win, since modern Web browsers already support HTTP/3 (at least experimentally), whereas gRPC over HTTP/3 will require fundamental client-side adjustments. In this article, we want to leave the door open for both protocols.

Since Envoy doesn’t support HTTP/3 yet, we created a proof-of-concept proxy to gauge the potential performance improvement of HTTP/3. That proxy was implemented in Go and leveraged quic-go to enable HTTP/3 capability. The proxy could run in parallel with Envoy, translating HTTP/3 requests to HTTP/2, without interfering with any existing bindings on the systems.

However, creating even a very basic HTTP/3 proxy is far from trivial if it is to be production-ready. Accurate request proxying, performance, and proper observability significantly increase the complexity of the task, and other major organizations are already adding HTTP/3 to existing proxies. It is nonetheless a very relevant exercise for understanding the HTTP protocol in general as well as the nitty-gritty of HTTP/3.

Solutions

In the following sections, we detail how to enable HTTP/3 support for an existing proxy or server, focusing specifically on gRPC.

Several solutions are available. The most obvious is to use a CDN provider that supports HTTP/3. For instance, if you use Cloudflare, toggle the HTTP/3 button, and you’re done! At least for gRPC-Web… This enables HTTP/3 traffic to Cloudflare’s edge, which is then translated into HTTP/1.1 traffic when going upstream.

In other words, this traffic flow is not suitable for regular gRPC clients, since gRPC requires (at least) end-to-end HTTP/2 support. Besides, it makes the gRPC-Web request flow too convoluted: requests traverse every major version of the HTTP standard before reaching the upstream server. (Envoy communicates with upstream gRPC servers via HTTP/2.)

Naive approach via CDN provider

A workaround is to use Cloudflare Spectrum, which allows for TCP/UDP proxy configuration. But this means you still have to run an HTTP/3 proxy or server as your origin.

A rather singular option is Caddy, an open-source HTTP server written in Go that supports HTTP/3 thanks to quic-go. However, its HTTP/3 support is still experimental and cannot be enabled for HTTP/3 traffic only: Caddy must also proxy regular HTTP traffic. This would add an extra HTTP hop on the request path, increasing the system’s complexity and risk of failure.

A non-intrusive solution comes from Nginx and Cloudflare. In this scenario, Nginx runs in parallel with an existing proxy or server, accepting only HTTP/3 traffic on a UDP socket. HTTP/3 requests are then simply forwarded to the existing proxy or server via HTTP/1.1 or HTTP/2.

Dedicated HTTP/3 proxy approach

Cloudflare and Nginx

Cloudflare and Nginx have both proposed HTTP/3 adaptations of Nginx. Cloudflare provides a patch for Nginx, along with compilation instructions, that enables HTTP/3 capability. After compiling the patched Nginx, you can run it with the configuration example below to serve HTTP/3 only.

error_log /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log combined;

    server {
        # Enable QUIC and HTTP/3.
        listen 443 quic reuseport;

        ssl_certificate     server.crt;
        ssl_certificate_key server.key;

        # Enable recent TLS versions (TLSv1.3 is required for QUIC).
        ssl_protocols TLSv1.2 TLSv1.3;

        location / {
            # Enable HTTP/1.1 to upstream service
            #proxy_ssl_protocols TLSv1.2 TLSv1.3;
            #proxy_ssl_server_name on;
            #proxy_ssl_name $host;
            #proxy_http_version 1.1;
            #proxy_set_header Connection "";
            #proxy_set_header Host $host;
            #proxy_pass https://127.0.0.1:443;

            # Enable HTTP/2 to upstream service
            grpc_ssl_protocols TLSv1.2 TLSv1.3;
            grpc_ssl_server_name on;
            grpc_ssl_name $host;
            grpc_set_header Host $host;
            grpc_pass grpcs://127.0.0.1:443;
        }
    }
}

In this configuration example, Nginx connects to the upstream proxy or server via localhost, on TCP Port 443. For instance, Nginx may run in a sidecar container while sharing the network namespace with the existing proxy or server. This has the advantage of preserving good performance.
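As a sketch of that sidecar layout, assuming a Kubernetes deployment (image names and versions here are placeholders, not our actual setup): containers in a pod share the pod’s network namespace, so Nginx reaches Envoy on 127.0.0.1:443 without leaving the host.

```yaml
# Hypothetical pod spec: the HTTP/3 sidecar and the existing proxy
# share the pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: grpc-gateway
spec:
  containers:
  - name: envoy                # existing proxy, TCP 443 (HTTP/1.1 and HTTP/2)
    image: envoyproxy/envoy:v1.14.1
    ports:
    - containerPort: 443
      protocol: TCP
  - name: nginx-http3          # HTTP/3 sidecar, UDP 443 only
    image: nginx-quic:latest   # placeholder for an Nginx build with the HTTP/3 patch
    ports:
    - containerPort: 443
      protocol: UDP
```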

Depending on the HTTP version you wish to run upstream, choose between proxy_pass and grpc_pass to enable HTTP/1.1 or HTTP/2, respectively. The latter is required for gRPC.

Finally, you must advertise HTTP/3 support in the response header of your vanilla HTTP proxy or server. This can be done via the Lua filter in Envoy by using the envoy_on_response primitive.

alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400
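A minimal sketch of such an Envoy Lua filter follows, assuming Envoy’s v3 filter API; the advertised draft versions should match what your HTTP/3 build actually supports.

```yaml
# Sketch: Lua filter adding the alt-svc header to every response.
http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_response(response_handle)
        -- Advertise HTTP/3 on UDP 443 for 24 hours (ma=86400).
        response_handle:headers():add("alt-svc", 'h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400')
      end
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```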

Infrastructure

You must configure your firewall and load balancer to receive traffic on UDP Port 443. If you’re using AWS, you may face some limitations while leveraging a Network Load Balancer (NLB).

In short, an AWS NLB doesn’t allow TCP and UDP traffic on the same port number (here, 443) to be routed to different target groups. In this scenario, you must use a single TCP_UDP target group and listener. The only downside is that the health check probe can only be configured for TCP, which leaves the Nginx HTTP/3 endpoint without proper monitoring at the NLB level. This can easily be mitigated by running direct HTTP/3 checks against Nginx (e.g., with cURL).

How do I test my setup?

There are a few clients that support HTTP/3, and any of the options below can also be used to test gRPC-Web.

  • Google Chrome Canary with --enable-quic --quic-version=h3-27
  • Firefox Nightly with network.http.http3.enabled set to true in about:config
  • Cloudflare Quiche HTTP/3 client
$ cargo build --examples
$ RUST_LOG=info target/debug/examples/http3-client https://example.com/
  • cURL built with HTTP/3 support
$ src/curl --http3 -i https://example.com/

Here’s what gRPC-Web requests over HTTP/3 look like in Chrome Canary:

gRPC-Web over HTTP/3

Next steps

Serving gRPC and gRPC-Web over HTTP/3 is only the first step to supporting end-to-end gRPC over HTTP/3. This requires changing the transport layer used by the client’s middleware. In the case of SafetyCulture, this is still a work in progress and may be the subject of a future article.
