How HTTP/2 reduces Server CPU and Bandwidth
First, English is not my first language, so feel free to correct me or ask me to rephrase anything that might cause confusion.
There are already several studies on the page-load speed of HTTP/2, but not much on server resources yet, so I decided to run simple experiments by load-testing a server. I found that HTTP/2 does improve CPU and bandwidth usage compared to legacy HTTP/1.1 with encryption, and is EVEN ABLE TO COMPETE with HTTP/1.1 without encryption. This experiment shows that HTTP/2 does not only benefit page-load time: by its protocol mechanics alone, it also reduces CPU and bandwidth usage.
The experiments were done by load-testing a server against a static webpage, with results collected as CPU usage (in %), ingress bandwidth (in KB/sec) and egress bandwidth (in KB/sec). I then compared these results to legacy HTTP/1.1 with and without encryption.
Note that because this is a load test against a static webpage, the proportions of CPU usage and ingress/egress bandwidth in the comparison might be exaggerated in HTTP/2's favor; however, I am going to argue that the reduction in resource usage, whether large or small, should hold on every server.
There is actually a problem with memory usage: I could not find a way to measure memory appropriately. Any suggestion is appreciated here. (PS: I collected metrics via 'collectl'.)
TL;DR: Read the conclusion at the end.
I compared four protocols: HTTP/2, HTTP/2 with a high number of streams, HTTP/1.1 with TLS, and HTTP/1.1 without TLS. I also varied the test variables; for instance, the number of simulated users, the page size, and the number of keep-alive requests.
Estimating from httparchive.org stats, the settings for each protocol are:
HTTP/2: 1 connection, 15 streams and 5 requests per stream
HTTP/2, high streams: 1 connection, 75 streams and 1 request per stream
HTTP/1.1, both with and without TLS: 15 connections, 5 requests per connection
The actual average page size from the stats is 30 KB; however, my device is limited by 1 Gb Ethernet, so I reluctantly had to reduce the page size to 1 KB. I will show a separate experiment on page size later.
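As a sanity check, the three settings above all issue the same total number of requests per simulated user, which is what keeps the protocols comparable (a small sketch using the connection/stream/request numbers from the settings above):

```python
# Requests per simulated user = connections x streams x requests-per-stream.
# (HTTP/1.1 has one implicit "stream" per connection.)
configs = {
    "HTTP/2":                (1, 15, 5),   # 1 conn, 15 streams, 5 req/stream
    "HTTP/2, high streams":  (1, 75, 1),   # 1 conn, 75 streams, 1 req/stream
    "HTTP/1.1 (TLS or not)": (15, 1, 5),   # 15 conns, 5 req/connection
}

totals = {name: c * s * r for name, (c, s, r) in configs.items()}
print(totals)  # every setting works out to 75 requests per user
```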
I will put in-depth information and analysis into grey boxes so that people can skip them if they wish.
Number of Users per Second
As you can see, HTTP/1.1 without TLS uses less than half the CPU and about 1.5 times less bandwidth than HTTP/1.1 with TLS. However, HTTP/2 (both red and yellow), even with mandatory TLS, surprisingly uses even less CPU and bandwidth than HTTP/1.1 without TLS. I would say this is because HTTP/2 benefits from using fewer TCP connections and from being a binary protocol. HTTP/2's bandwidth usage is also helped by header compression (HPACK). Notice that HTTP/2 with 75 streams doesn't differ much from 15 streams; only its ingress bandwidth is slightly lower.
Actually, the CPU slope difference between HTTP/1.1 and HTTP/2 should only be analyzed beyond 60 users, since the slope of HTTP/1.1 becomes constant around 60+ users. I think the slope between 10-60 users is an Apache2 caching phenomenon. Perhaps HTTP/2 has better stream management? Haha. If you happen to know why, please mail me. :)
By the way, if we only focus on 60+ users (where the black HTTPS/1.1 slope is now constant), the slope of HTTPS/1.1 is about 1.8 times that of HTTP/2. I couldn't test beyond this, since my HTTP/1.1 server already starts to fail at 100+ users.
Keep-alive Requests
Here I varied the number of keep-alive requests. Since HTTP/2 benefits from using fewer TCP connections, what happens if we set a higher number of keep-alive requests per connection?
Also, I tested both ingress and egress bandwidth, and they always follow the same trend; however, egress bandwidth shows less observable results (smaller ratios between HTTP/2 and HTTP/1.1) because of the response sizes. I will skip egress bandwidth in the rest of the experiments.
HTTP/2's CPU usage now has almost the same slope as HTTP/1.1. (They might intersect at some point, but I couldn't get that far on my server.) Also, 180 keep-alive requests is absurdly high; most servers should just use more streams for HTTP/2 or open a new TCP connection for HTTP/1.1. I assume here that HTTP/2's CPU usage mainly benefits from bypassing the TCP handshake and TLS key exchange; beyond that, it does no better than HTTP/1.1. HTTP/2's bandwidth usage, on the other hand, is much lower, since header compression becomes more effective with more requests: the compression rate reported by h2load almost hit 85%.
Actually, per the RFC, HTTP/2 has many more features than HTTP/1.1 that could cause higher CPU usage (e.g. flow control). However, it seems that HTTP/2 also benefits from being a binary protocol.
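The amortization argument above can be sketched with made-up numbers (the setup cost H below is purely hypothetical, not a measured value): the per-request share of connection setup shrinks as keep-alive requests grow, which is why the HTTP/1.1 CPU slope approaches HTTP/2's.

```python
# Per-request share of connection-setup cost (illustrative only).
# H is a hypothetical cost of one TCP handshake + TLS key exchange.
def setup_cost_per_request(H, keepalive_requests):
    return H / keepalive_requests

H = 100.0  # arbitrary units
for n in (1, 5, 45, 180):
    print(n, setup_cost_per_request(H, n))
# At 180 keep-alive requests the setup cost is almost amortized away,
# matching the near-identical CPU slopes observed above.
```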
Next, I varied page sizes to fill the gap left by the last experiment. It doesn't fill the gap perfectly, I know, but hopefully it's better than nothing.
I varied the page size from 200 bytes up to 30 KB. The connection/stream settings are the same as in the last test, but at a constant 10 users.
There isn't much difference in the CPU usage and bandwidth trend lines, other than that HTTP/2 always uses less CPU and bandwidth than HTTP/1.1.
The minor differences here come from the TCP window size (which affects both HTTP/1.1 and HTTP/2) and from HTTP/2's fragmentation behavior. I will explain this later in 'HTTP/2 Stream and Connection v2'.
It's quite clear that HTTP/2 gives better server performance than HTTP/1.1 most, if not all, of the time. Let's move on to the next experiment, which might help you decide your HTTP/2 server's stream and connection configuration.
HTTP/2 Stream and Connection
Here I tried every combination of HTTP/2 stream and connection settings. Because the combination of streams and connections varies the number of requests, I had to normalize the CPU and bandwidth results to "per request".
There is no keep-alive in this experiment. At 1 stream, the CPU usage per request is around 16 times that of 25 streams. That 16x factor is the main reason I had to leave 1 stream out of these graphs; otherwise it would shrink all the other results.
If we hold the number of connections constant, at any point, more streams result in less CPU and bandwidth per request. I don't think there is anything controversial here: more streams directly increase the effectiveness of TCP connection reuse and of header compression. This also applies to egress bandwidth; however, egress bandwidth is dominated by the size of the resources, so we won't see much advantage there.
If you look closely, holding streams constant, the CPU usage per request at 50 connections is much lower than at 100 connections. I believe that's another Apache processing-flow or caching effect. In any case, my analysis still applies: no matter how many connections your server has, at the same number of requests, more streams are still better.
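The normalization used in these graphs can be sketched like this (assuming, as in this experiment, exactly one request per stream and no keep-alive, so requests = connections x streams; the sample numbers are made up):

```python
# Normalize a raw metric (CPU % or KB/s) to a per-request value.
def per_request(metric_total, connections, streams):
    requests = connections * streams  # one request per stream, no keep-alive
    return metric_total / requests

# The same raw CPU reading looks very different per request
# at 1 stream versus 25 streams across 10 connections:
print(per_request(50.0, 10, 1), per_request(50.0, 10, 25))
```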
HTTP/2 Stream and Connection v2
Here I fixed requests per second at 18,000 and then varied the possible combinations of streams and connections:
1. 600 connections x 1 stream
2. 300 connections x 2 streams
3. 200 connections x 3 streams
…
11. 20 connections x 30 streams
12. 10 connections x 60 streams
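The combinations listed above all keep the product of connections and streams constant (600 concurrent streams in total), which is what holds the offered load at the same 18,000 requests per second; a quick check over the listed pairs:

```python
# Every (connections, streams) pair above multiplies to the same number
# of concurrent streams, keeping the request rate fixed.
pairs = [(600, 1), (300, 2), (200, 3), (20, 30), (10, 60)]
assert all(c * s == 600 for c, s in pairs)
```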
Since I have covered almost everything already, you can see that the CPU and bandwidth trends are consistent with the previous analysis. Again, the egress bandwidth advantage is much smaller and harder to notice.
As promised, this experiment is about packet fragmentation. If you look at the ingress bandwidth graph, larger payloads use more ingress bandwidth than smaller ones. This strange behavior doesn't exist in HTTP/1.1, where the size of a request DOES NOT grow with the payload size. (e.g. GET /index.html: no matter whether index.html is 1 KB or 1 MB, the request is still GET /index.html.) So what really happened here?
Take a look at the pictures below to understand HTTP/2's fragmentation and pipelining behavior.
- A yellow area means the frames are in the same TCP packet.
In case 1, pages 'a', 'b', 'c', 'd' and 'e' are small enough to be processed very fast, so all the responses can be answered within one TCP packet. This case also applies to HTTP/1.1 pipelining (since HTTP/1.1 pipelining has to wait for all the responses to be completed and sent at once).
In case 2, when pages 'a' through 'e' are large or require a lot of processing time, HTTP/2 might fragment these responses into two (or more) TCP packets. This prevents head-of-line blocking: the client can receive pages 'a' and 'b' and proceed to render without waiting for all the responses. Now imagine the client also has requests 'f' through 'j' in its queue. After receiving the responses for 'a' and 'b', the client has only 2 available streams, so it cannot send 'f' through 'j' all at once. It can only send 'f' and 'g', then wait for the responses to 'c', 'd' and 'e' (it is possible for 'f' and 'g' to finish before 'c', 'd' and 'e') before sending 'h', 'i' and 'j' later. This is how fragmentation happens, and it slightly increases both ingress and egress bandwidth.
In regular HTTP/1.1, each request gets its own TCP packet, so this fragmentation phenomenon doesn't exist. This also means that even if HTTP/2 fragments every request into a different TCP packet, its efficiency merely equals HTTP/1.1's; it gets no worse.
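A rough way to see why fragmentation costs bandwidth: every extra TCP segment carries its own IP and TCP headers, about 40 bytes for IPv4 plus TCP without options (the exact figure varies with options and IP version, so treat it as an assumption). Coalescing five responses into one segment versus sending five separate ones:

```python
IP_TCP_HEADER = 40  # bytes: 20 (IPv4) + 20 (TCP, no options) -- approximate

def header_overhead(num_segments):
    # Total header bytes spent on a burst split into num_segments packets.
    return num_segments * IP_TCP_HEADER

coalesced = header_overhead(1)   # HTTP/2: five responses in one segment
fragmented = header_overhead(5)  # worst case: one segment per response
print(fragmented - coalesced)    # 160 bytes of extra overhead
```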
There are also HTTP/2 prioritization and server push. They are application-specific, so I didn't run experiments on them. However, with proper prioritization, not only is page-load latency decreased, but you can also save bandwidth by reducing the fragmentation described here. With a proper server-push configuration, you can cut some requests from the client entirely, which further reduces latency and bandwidth too.
My overall analysis (Geek version)
After my analysis, I reached the following conclusions:
- Because of HTTP/2's multiplexed streams, only one TCP three-way handshake and one key exchange (in the encryption layer) are needed per communication, which reduces both CPU usage and bandwidth. Both ingress and egress bandwidth are reduced; however, the advantage is easier to observe on ingress bandwidth (GET/POST requests) than on egress bandwidth (responses), since a request is usually smaller than its response.
- CPU usage probably also benefits from HTTP/2 being a binary protocol, which would explain why HTTP/2's CPU usage (with encryption) is slightly lower than that of HTTP/1.1 without encryption.
- Ingress and egress bandwidth are both reduced by (1) and also by header compression. With constant headers (except date and path), header compression can easily reach 40%; in my experiment, it easily compressed headers by up to 80%. However, because this is just a load test, the headers do not change. The "seconds" field in the date header also affects the compression percentage. (https://github.com/nghttp2/nghttp2/issues/519)
- Bandwidth decreases by roughly 40 bytes for every stream that replaces a connection. Since HTTP/2 can wrap many streams into the same TCP transfer, it saves a lot of bandwidth, especially on websites that serve lots of small resources.
- (3) and (4) are the reasons that HTTP/2's ingress and egress bandwidth are lower than those of HTTP/1.1 (both encrypted and plaintext).
- With more streams, HTTP/2 also gains from a better stream-to-connection ratio, as an effect of (1). Bandwidth is lower as an effect of (3) and (4), and header compression works more efficiently at higher stream counts.
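Point (3) can be illustrated with a toy calculation. This is not a real HPACK encoder: the header values are made up, and "one byte per indexed header" is a simplification of HPACK's dynamic-table indexing.

```python
# Toy HPACK-style estimate: the first request sends headers literally;
# repeated identical requests can reference the dynamic table instead.
headers = {
    ":method": "GET",
    ":path": "/index.html",
    "user-agent": "h2load nghttp2/1.x",  # hypothetical value
    "accept": "*/*",
}

literal_bytes = sum(len(k) + len(v) + 2 for k, v in headers.items())
indexed_bytes = len(headers)  # ~1 byte per fully-indexed header field
ratio = 1 - indexed_bytes / literal_bytes
print(f"compression on repeated requests: {ratio:.0%}")
```

In practice the changing "seconds" field in the date header keeps some literals in play, which is one reason h2load reported ~85% rather than this idealized figure.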
Conclusion (Non-geek version)
- Upgrading from HTTP/1.1 with encryption to HTTP/2 with encryption will reduce both your server's CPU usage and its bandwidth.
- HTTP/2 with encryption uses approximately the same resources as HTTP/1.1 without encryption, so don't be afraid of having to use encryption along with HTTP/2.
- Static web servers that serve a high number of resources will save the most CPU and bandwidth by switching to HTTP/2.
- Using more streams reduces CPU usage and bandwidth even further. Don't be afraid to use a high number of streams for your sites. (I haven't tested 100+ streams, though.)
- That said, because HTTP/2's nature differs from HTTP/1.1's, make sure your infrastructure (or your cloud service) supports HTTP/2 before upgrading.
Any discussion and further analysis are welcome. :)
The versions of the applications I used in this experiment are also a bit out of date (I didn't have time to write this article sooner), but the overall results should hold, since there have been no changes to HTTP/2's core behavior to date.
-28 February, 2017-