Introduction to HTTP/2

Mohammad Shaved
Jul 26 · 7 min read

A Brief History Of HTTP

HTTP is an old protocol: it was initially defined in 1991, and its last major revision, HTTP/1.1, was published in 1999. Websites in 1999 were very different from the websites we develop today. The home page of an average website now requires about 1.9 MB of data and over 100 individual resources, where a resource is anything from an image or a font to a JavaScript or CSS file. HTTP/1.1 does not perform well when retrieving the large number of resources required to display a modern website.

A Brief Introduction to HTTP/2

HTTP/2 makes our applications faster, simpler, and more robust by allowing us to undo many of the HTTP/1.1 workarounds previously built into our applications and to address these concerns within the transport layer itself.

The primary goals for HTTP/2 are to reduce latency by enabling full request and response multiplexing, minimise protocol overhead via efficient compression of HTTP header fields, and add support for request prioritisation and server push.

HTTP/2 does not modify the application semantics of HTTP in any way. All the core concepts, such as HTTP methods, status codes, URIs, and header fields, remain in place, which means all existing applications can be delivered without any modification.

Features of HTTP/2

Binary framing layer

HTTP/2 introduces a new binary framing layer, which dictates how HTTP messages are encapsulated and transferred between the client and the server.

HTTP/1.1 is a newline-delimited plaintext protocol, while HTTP/2 communication is split into smaller messages and frames, each of which is encoded in binary format.

Streams, messages, and frames

To understand the new binary framing mechanism used to exchange data between the client and server, we need to familiarise ourselves with the HTTP/2 terminology:

  • Stream: A bidirectional flow of bytes within an established connection, which may carry one or more messages.
  • Message: A complete sequence of frames that map to a logical request or response message.
  • Frame: The smallest unit of communication in HTTP/2, each containing a frame header, which at a minimum identifies the stream to which the frame belongs.

The relationship between these terms is described below:

  • All communication is performed over a single TCP connection that can carry any number of bidirectional streams.
  • Each stream has a unique identifier and optional priority information that is used to carry bidirectional messages.
  • Each message is a logical HTTP message, such as a request, or response, which consists of one or more frames.
  • The frame is the smallest unit of communication that carries a specific type of data, e.g. HTTP headers, message payload, and so on. Frames from different streams may be interleaved and then reassembled via the embedded stream identifier in the header of each frame.
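Concretely, the stream identifier that ties frames back together lives in a fixed 9-byte header at the front of every frame (RFC 7540, Section 4.1): a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (the top bit is reserved). A minimal decoder sketch in Node.js:

```javascript
// Decode the fixed 9-byte HTTP/2 frame header (RFC 7540, Section 4.1).
function parseFrameHeader(buf) {
  return {
    length: (buf[0] << 16) | (buf[1] << 8) | buf[2], // payload size in bytes (24 bits)
    type: buf[3],                                    // e.g. 0x0 DATA, 0x1 HEADERS
    flags: buf[4],                                   // e.g. 0x4 END_HEADERS
    streamId: buf.readUInt32BE(5) & 0x7fffffff,      // stream this frame belongs to
  };
}

// Example: a HEADERS frame (type 0x1) with END_HEADERS set, on stream 1,
// carrying a 42-byte payload.
const raw = Buffer.from([0x00, 0x00, 0x2a, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01]);
console.log(parseFrameHeader(raw));
// { length: 42, type: 1, flags: 4, streamId: 1 }
```

Because every frame carries its stream identifier in this header, frames from different streams can be interleaved on the wire and still be reassembled correctly at the other end.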

Request and response multiplexing

With HTTP/1.1, the client uses multiple TCP connections to make multiple parallel requests and improve performance, but only one response can be delivered at a time (response queuing) per connection.

With HTTP/2, the new binary framing layer removes these limitations and enables full request and response multiplexing by allowing the client and server to break down an HTTP message into independent frames, interleave them, and then reassemble them on the other end.

One connection per origin

Thanks to the new binary framing mechanism, HTTP/2 no longer needs multiple TCP connections to multiplex streams in parallel: each stream is split into multiple frames, which can be interleaved and prioritised. Hence, with HTTP/2 only one connection per origin is required, which offers numerous performance benefits.

Server push

Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses for a single client request. That is, in addition to the response to the original request, the server can push additional resources to the client without the client having to request each one explicitly.

A typical web application consists of multiple resources, which the client discovers by examining the document provided by the server. Since the server already knows which resources the client will require, it can push those resources ahead of time and eliminate the extra latency; that is server push.

Header compression

Each HTTP transfer carries a set of headers that describe the transferred resource and its properties. In HTTP/1.1, this metadata is always sent as plain text and adds anywhere from 500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are being used. To reduce this overhead and improve performance, HTTP/2 compresses request and response header metadata using the HPACK compression format that uses two simple but powerful techniques.

  1. It allows the transmitted header fields to be encoded via a static Huffman code, which reduces their individual transfer size.
  2. It requires that both the client and server maintain and update an indexed list of previously seen header fields (in other words, it establishes a shared compression context), which is then used as a reference to efficiently encode previously transmitted values.
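The second technique can be illustrated with a toy indexed table. This is a deliberately simplified sketch, not real HPACK (which adds a static table, Huffman coding, table-size limits, and eviction rules), but it shows why repeated header fields become cheap:

```javascript
// Toy version of HPACK's shared-context idea: both sides keep an indexed list
// of header fields they have already exchanged, so a repeated field is sent
// as a tiny index instead of the full name/value pair.
function makeCodec() {
  const table = [];
  return {
    encode(field) {
      const i = table.indexOf(field);
      if (i !== -1) return { index: i };  // seen before: send only the index
      table.push(field);
      return { literal: field };          // first time: send it literally
    },
    decode(sym) {
      if ('index' in sym) return table[sym.index];
      table.push(sym.literal);            // decoder mirrors the encoder's table
      return sym.literal;
    },
  };
}

const enc = makeCodec(), dec = makeCodec();
const h1 = 'user-agent: curl/7.64';
const first = enc.encode(h1);   // { literal: 'user-agent: curl/7.64' }
const second = enc.encode(h1);  // { index: 0 }, a repeat costs one small index
console.log(dec.decode(first), dec.decode(second)); // both decode to the full field
```

Since real request headers (user-agent, cookies, accept headers) rarely change between requests on the same connection, most of them collapse to index references after the first exchange.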

Pros and Cons

Pros:

  • HTTP/2 is binary, instead of textual.
  • HTTP/2 enables more efficient use of network resources and a reduced perception of latency by introducing header field compression.
  • HTTP/2 is fully multiplexed. We can make multiple parallel requests within a single TCP connection to improve performance, which in turn leads to better utilisation of available network capacity.
  • With the new server push feature, the server can send resources such as CSS or JS files that it knows the client will require, without the client having to request each one explicitly.
  • Overall, HTTP/2 can drastically decrease the load time of our applications.

Cons:

First of all, there really isn’t an alternative available today that’s superior to HTTP/2. But, as an IT pro, you should still know the protocol’s weak points. Some experts believe that these issues may be fixed in the future with the release of the “HTTP/3” protocol, but for now, these are a few of the cons.

  • Encryption is not required.
  • Cookie security is still an issue.
  • It still runs over a single TCP connection, so a lost packet can delay all streams (TCP head-of-line blocking).

Enabling HTTP/2 on Azure App Service

HTTP/2 is disabled by default for all customers. However, if you would like to enable HTTP/2 for your site, follow the steps below:

Open the Azure portal, go to your App Service, and search for “Configuration”.

In Configuration, go to the “General settings” tab.

In General settings there will be a drop-down called “HTTP version”. Select 2.0 and save.

Restart your app service, and you’re done!
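If you prefer the Azure CLI over the portal, the same setting can be flipped with a single command (substitute your own app and resource group names for the placeholders):

```shell
# Enable HTTP/2 for an App Service app via the Azure CLI.
az webapp config set --http20-enabled true \
  --name <app-name> --resource-group <resource-group>
```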

Note: Most modern browsers only support the HTTP/2 protocol over SSL, while non-SSL traffic continues to use HTTP/1.1. App Service makes it easy to get up and running with SSL. Learn how to configure a new SSL cert for your app, or how to bind an existing SSL cert to your app.

Backward Compatibility with HTTP/1.1

HTTP/2 is backwards compatible: browsers that do not support HTTP/2 will fall back to HTTP/1.1. We will test this backward compatibility with curl requests.

Curl Request with HTTP/1.1

curl -I --http1.1 <hostname>

Response:

HTTP/1.1 200 OK
Content-Length: 3318
Content-Type: text/html
Last-Modified: Wed, 15 May 2019 11:19:57 GMT
Accept-Ranges: bytes
Date: Wed, 15 May 2019 18:18:48 GMT

Curl Request with HTTP/2

curl -I --http2 <hostname>

Response:

HTTP/2 200
content-length: 3318
content-type: text/html
last-modified: Wed, 15 May 2019 11:19:57 GMT
accept-ranges: bytes
date: Wed, 15 May 2019 18:22:09 GMT

As the example above shows, when we send a curl request to an HTTP/2-enabled server using the HTTP/1.1 protocol, we get an HTTP/1.1 response; when we send the request using HTTP/2, we get an HTTP/2 response. This test shows that HTTP/2 is backward compatible with HTTP/1.1.

Performance improvement with HTTP/2

I have created a simple page that makes multiple REST API calls, which I will use to demonstrate the performance improvement with HTTP/2.

Code Snippet (JS controller)

In the controller above, we create a someFunction() method that calls a resource (a REST API deployed on an Azure App Service), and then call someFunction() multiple times via q.all() to test the load time.
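The snippet itself was embedded as an image in the original post; an AngularJS-style reconstruction of the idea described above (someFunction, the request count, and the API URL are illustrative, not the author's exact code) looks roughly like this:

```javascript
// Reconstructed sketch: one REST call wrapped in someFunction(), then many
// parallel calls joined with $q.all() so the whole batch can be timed.
angular.module('demoApp', []).controller('DemoController', function ($http, $q) {
  function someFunction() {
    // REST API deployed on the Azure App Service (placeholder URL).
    return $http.get('https://<your-app>.azurewebsites.net/api/resource');
  }

  var start = Date.now();
  var calls = [];
  for (var i = 0; i < 50; i++) {
    calls.push(someFunction());
  }

  // Resolves once every response has arrived; this is the "load time" measured
  // in the waterfall comparisons below.
  $q.all(calls).then(function () {
    console.log('all responses received in', Date.now() - start, 'ms');
  });
});
```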

Page load time with HTTP/1.1

In the waterfall diagram we can see how the requests are made in multiple batches (one per TCP connection), bringing the page load time to 9.408 seconds.

Page load time with HTTP/2

In the waterfall diagram we can see how the requests are made in just 2 batches (TCP connections), decreasing the page load time to 4.196 seconds, a significant difference compared to HTTP/1.1.

As the comparison above shows, HTTP/2 can drastically boost web page performance. HTTP/2 is now widely supported by almost all web clients, so adopting it is painless. Although enabling the protocol is easy, keep in mind that you will probably have to change your application mechanics (such as how resources are served to the client) to use the full potential of this protocol.


Using technology, data and design to change the way the world shops. Learn more about us -

Mohammad Shaved

Written by

Development Engineer III at WalmartLabs, Bangalore


