HTTP/2 for Modern Webapps

Avichal Pandey
6 min read · Jan 26, 2017


This post emphasises the benefits of upgrading to HTTP/2 for a large-scale modern web application. It also covers how to configure HTTP/2 in nginx, AWS ALB and AWS CloudFront.

Listed below are the key features HTTP/2 introduces that we should take advantage of in order to achieve maximum performance gains in the web layer of our applications.

  1. Binary protocol
  2. Header compression
  3. Single multiplexed TCP connection
  4. Stream Prioritisation
  5. Server push

We should note that not all of these features are implemented in all the popular web servers yet. In part one of this post we will look at the first four features listed above; server push will be covered in the next post. Most of the popular web servers, such as nginx and Apache HTTP Server, now support these four features. Cloudflare has recently announced its support for server push along with the other features.

Before going into each of these features in detail, let's look at how HTTP/2 differs from HTTP/1.x and how this difference laid the foundation for the features listed above. HTTP/2 introduced a new binary framing mechanism for data exchange. To understand binary framing, let's first go through its basic terminology.

Frame: The smallest unit of communication in HTTP/2. Each frame carries a frame header, which includes the id of the stream it belongs to.

Message: A message is a sequence of frames that maps to a logical request or response.

Stream: A bidirectional flow of bytes within the HTTP/2 connection. A stream consists of one or more messages.

[Image: streams, messages and frames within an HTTP/2 connection. Taken from the book High Performance Browser Networking.]

HTTP/2 breaks HTTP messages down into binary-encoded frames. In HTTP/2, multiple requests share a single TCP connection that carries any number of bidirectional streams. Each stream has a unique identifier and optional priority information associated with it. A message consists of one or more frames and maps to a logical request or response. A frame carries a specific type of data, e.g. HTTP headers or the message payload. Frames from different streams are interleaved on the wire and then reassembled using their stream identifiers. This binary framing scheme is the foundation of HTTP/2 and is what enables the rich features listed above. Let's look at these features one by one.

Binary protocol

The binary format is not human readable, but it is efficient to parse and more compact on the wire. Read more about it here.
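
To make the framing concrete, here is a minimal sketch in Python that decodes the fixed 9-octet HTTP/2 frame header (a 24-bit payload length, 8-bit type, 8-bit flags and a 31-bit stream identifier). It is illustrative only, not a full parser, and the example bytes are made up.

import struct

# Frame type codes defined by the HTTP/2 specification (RFC 7540).
FRAME_TYPES = {
    0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY", 0x3: "RST_STREAM",
    0x4: "SETTINGS", 0x5: "PUSH_PROMISE", 0x6: "PING", 0x7: "GOAWAY",
    0x8: "WINDOW_UPDATE", 0x9: "CONTINUATION",
}

def parse_frame_header(data: bytes) -> dict:
    """Parse the fixed 9-octet HTTP/2 frame header."""
    length_hi, length_lo, frame_type, flags, stream_id = struct.unpack(">HBBBI", data[:9])
    length = (length_hi << 8) | length_lo   # 24-bit payload length
    stream_id &= 0x7FFFFFFF                 # drop the reserved bit
    return {
        "length": length,
        "type": FRAME_TYPES.get(frame_type, "UNKNOWN"),
        "flags": flags,
        "stream_id": stream_id,
    }

# A made-up HEADERS frame header: 13-byte payload, type 0x1, END_HEADERS flag, stream 1.
example = bytes([0x00, 0x00, 0x0D, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(example))
# {'length': 13, 'type': 'HEADERS', 'flags': 4, 'stream_id': 1}

This stream id in every frame header is exactly the metadata the browser and server use to reassemble interleaved frames back into messages.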

Header compression

In HTTP/1.x a lot of bandwidth is wasted on transmitting headers upstream and downstream. HTTP/2 introduces its own header compression mechanism; its compression algorithm is called HPACK. In this way HTTP/2 utilises the available bandwidth more efficiently and, as a result, improves resource download times. This post doesn't delve into how the compression works, but if you are interested in knowing more, refer to RFC 7541.
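
As a quick illustration, here is a small sketch that compresses a header list with the third-party hpack package (pip install hpack); the header values are arbitrary examples. Because HPACK also keeps a dynamic table per connection, repeated headers on subsequent requests compress even further.

from hpack import Encoder, Decoder

headers = [
    (":method", "GET"),
    (":scheme", "https"),
    (":path", "/index.html"),
    ("user-agent", "example-client/1.0"),
    ("accept-encoding", "gzip, deflate"),
]

encoder = Encoder()
compressed = encoder.encode(headers)

# Rough HTTP/1.x-style size for comparison: "name: value\r\n" per header.
plain_size = sum(len(f"{name}: {value}\r\n") for name, value in headers)
print(f"plain text: {plain_size} bytes, HPACK encoded: {len(compressed)} bytes")

# The decoder on the receiving side recovers the full header list.
print(Decoder().decode(compressed))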

Multiplexing

This aspect of the new protocol gives us the most in terms of optimisation benefits. Using the streams and messages introduced in HTTP/2, resources from one host are all downloaded in parallel over a single long-lived TCP connection. This way the underlying TCP connection is utilised more efficiently compared to opening a new TCP connection for every HTTP resource. HTTP/1.1 tried to solve the problem of TCP connection reuse by introducing the keep-alive header and pipelining of HTTP resources through a single TCP connection. This reduced the time taken to establish an HTTP connection, since an existing TCP connection was reused when available. But it introduced a new problem of head-of-line (HOL) blocking, where a large resource blocks smaller resources on the shared TCP connection until it has been completely transmitted. HTTP/2 solves TCP connection reuse and HOL blocking at the same time by multiplexing different resources on the same underlying TCP connection. The frames may arrive out of order on the socket held by the web browser; using the stream ids present in the frames, the browser reassembles the original HTTP messages and renders the web page.
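
The sketch below shows multiplexing from the client side using the third-party httpx library (pip install "httpx[http2]"); the URLs are placeholders for assets on any HTTP/2-enabled host. All three requests are issued concurrently and travel as separate streams over one connection.

import asyncio
import httpx

URLS = [
    "https://example.com/styles.css",
    "https://example.com/app.js",
    "https://example.com/logo.png",
]

async def fetch_all():
    # One client holds the connection pool; with http2=True, requests to the
    # same host are multiplexed as streams over a single TCP connection.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(url) for url in URLS))
        for response in responses:
            print(response.url, response.http_version, len(response.content), "bytes")

asyncio.run(fetch_all())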

Stream Prioritisation

Multiplexing did a phenomenal job of fixing the HOL blocking problem, but it introduced a new one. Since all the streams use the same underlying TCP connection, there can be contention for bandwidth. To avoid this contention, HTTP/2 introduced stream prioritisation. Also, dependencies may exist between the various frames that are interleaved by the new protocol. To avoid a performance hit, the server must send frames in a way that respects the dependencies among them. The HTTP/2 standard allows streams to have an associated weight and dependency: each stream may be assigned an integer weight between 1 and 256, and each stream may be given an explicit dependency on another stream. Using these weights and dependencies, the client builds a priority tree and communicates it to the server. The server then uses this priority tree to allocate the available bandwidth to streams, which solves the problem of contention. We should also note that the priority tree is just a transport preference, not a requirement; the client cannot enforce prioritisation on the server.
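
To see what the weights mean in practice, here is a toy calculation (not a real scheduler) of how a server might split bandwidth between sibling streams in proportion to their weights; the resource names and weights are made up.

def bandwidth_shares(stream_weights, total_kbps):
    # Sibling streams receive bandwidth in proportion to their weights (1-256).
    total_weight = sum(stream_weights.values())
    return {name: total_kbps * weight / total_weight
            for name, weight in stream_weights.items()}

# Example: the browser marks the HTML and CSS as more important than an image.
siblings = {"index.html": 256, "styles.css": 128, "hero.jpg": 32}
print(bandwidth_shares(siblings, total_kbps=1000))
# {'index.html': 615.4, 'styles.css': 307.7, 'hero.jpg': 76.9} (approximately)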

Revisiting HTTP/1.1 optimisation techniques

In order to take full advantage of HTTP/1.x we built a lot of workarounds into our applications and deployment infrastructure.
For example, we concatenated our static files into one to minimise the number of resources that need to be downloaded to render the page, and similarly we used image sprites to load a single image instead of many. To achieve parallelism we used domain sharding, since browsers would only allow 6 to 8 connections per domain. HTTP/2 multiplexing addresses all of these concerns without any extra effort, so we can safely remove those workarounds. With server push in HTTP/2 we no longer need to inline our CSS and images; we can have the server push those assets instead. More about server push will be covered in the next post.

Configuring nginx

In your nginx.conf file, find the listen directive and make the following change:

listen 443 ssl;

to

listen 443 ssl http2;

Some caveats with nginx:

  • You cannot use it without SSL.
  • It is only available in nginx version 1.9.5 and above.
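
After reloading nginx, it is worth confirming that clients actually negotiate HTTP/2. A quick way to check from Python, assuming the httpx library with its http2 extra is installed, is shown below; replace the URL with your own host.

import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://your-domain.example/")
    # Expect "HTTP/2" here once the listen directive change has taken effect.
    print(response.http_version)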

Configuring AWS ALB

If you use AWS ELB as your load balancer, it's probably time to migrate. AWS has recently launched a next-generation load balancer called the ALB, or Application Load Balancer. Here is a fantastic blog post by AWS introducing it. AWS does not provide support for HTTP/2 in ELB; however, support for HTTP/2 is available out of the box in the new ALB. Also, ALB is cheaper: the hourly rate for an Application Load Balancer is 10% lower than the cost of an ELB.
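
HTTP/2 is on by default for an ALB, but it is exposed as the load balancer attribute routing.http2.enabled, so you can confirm or toggle it explicitly. Here is a short boto3 sketch; the ARN is a placeholder for your own load balancer.

import boto3

elbv2 = boto3.client("elbv2")
alb_arn = "arn:aws:elasticloadbalancing:region:account-id:loadbalancer/app/my-alb/1234567890abcdef"

# Ensure HTTP/2 is enabled on the Application Load Balancer.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=alb_arn,
    Attributes=[{"Key": "routing.http2.enabled", "Value": "true"}],
)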

Configuring CloudFront CDN

HTTP/2 is enabled by default for all new Amazon CloudFront distributions. For existing distributions, HTTP/2 can be enabled in the distribution configuration. AWS doesn't charge anything extra for using HTTP/2. If a client does not support HTTP/2, it will still be able to communicate using HTTP/1.1. You can read more about it here.
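
For an existing distribution, this can also be done programmatically. The boto3 sketch below flips the HttpVersion field in the distribution configuration; the distribution id is a placeholder, and you should review the fetched configuration before updating it.

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLEDISTID"

# Fetch the current configuration together with its ETag (required for updates).
response = cloudfront.get_distribution_config(Id=dist_id)
config = response["DistributionConfig"]
config["HttpVersion"] = "http2"

cloudfront.update_distribution(
    Id=dist_id,
    DistributionConfig=config,
    IfMatch=response["ETag"],
)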

In the next part of this post we will look at server push in detail and understand how it can help us optimise our web applications.

Further reading

  • High Performance Browser Networking by Ilya Grigorik
  • RFC 7541 (HPACK: Header Compression for HTTP/2)
