Akamai’s Adaptive Acceleration product analyzes mPulse RUM data to determine, for each web site, the combination of page sub-resources to Server Push that maximizes performance.
The full slide set presented at the IETF 102 conference describes the methodology used to gather the performance statistics, and this slide in particular summarizes the results:
As can be seen above, the results are generally positive or neutral.
Since last year, we have continued to evolve on a couple of fronts:
- Further development of the Adaptive Acceleration algorithm that selects which page sub-resources will be Server Pushed to the browser.
- Refinement of the statistical methodology used to measure Server Push performance data.
Recent research by Akamai’s Service Performance team into measuring Server Push performance has yielded very interesting results. One of the great challenges of measuring A/B network performance data is the natural variability of response times across the data set, caused by a myriad of external factors. This variability makes it difficult to attribute specific performance improvements or degradations to the A/B change itself.
A linear regression model (the approach used in our case) based on a number of dimensions that can influence performance (e.g. geographic location, hour of day, day of week, ISP) helps isolate the effect of the A/B change. However, this requires more samples in the data set in order to maintain a reasonable confidence interval for the A/B result.
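To illustrate why a regression-based approach helps, here is a minimal sketch on synthetic data (the dimensions and effect sizes are made up for illustration; this is not Akamai's actual model). By including the known covariates in the design matrix, ordinary least squares can recover a small treatment effect that would otherwise be buried in the variability:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical covariates that influence response time.
push_enabled = rng.integers(0, 2, n)       # A/B flag: 1 = Server Push on
hour_of_day = rng.integers(0, 24, n)
is_weekend = rng.integers(0, 2, n)

# Simulated log response times: baseline + covariate effects
# + a -5% effect from Server Push + noise.
log_rt = (6.0
          + 0.02 * np.abs(hour_of_day - 14)  # slower far from mid-day
          - 0.10 * is_weekend                # faster on weekends
          - 0.05 * push_enabled              # the effect we want to recover
          + rng.normal(0, 0.3, n))

# OLS with the covariates as controls isolates the push coefficient
# from the other sources of variability.
X = np.column_stack([np.ones(n), push_enabled,
                     np.abs(hour_of_day - 14), is_weekend])
coef, *_ = np.linalg.lstsq(X, log_rt, rcond=None)
print(f"estimated Server Push effect: {coef[1]:+.3f} (true effect: -0.050)")
```

With enough samples the estimated coefficient lands close to the true effect; with fewer samples (or more noise) its confidence interval widens, which is exactly the trade-off noted above.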
Digging deeper into the results, we found that the confidence intervals became significantly larger for subsets of the data with very slow response times.
The following is a quantile plot showcasing Server Push performance data for a specific customer production web site served by Akamai.
x-axis — The quantiles of the response time distribution in the data set (0.01 is the fastest quantile, 0.99 the slowest).
y-axis — The mean performance effect of Server Push on response times (negative values indicate performance improvement due to Server Push, positive values indicate performance degradation due to Server Push).
bar size — The confidence interval for the performance difference for each quantile.
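The widening of the confidence intervals in the slowest quantiles can be reproduced with a simple sketch on synthetic data (log-normal response times, which loosely resemble heavy-tailed RUM data; deciles rather than finer quantiles, for brevity). The within-bucket variance explodes in the tail, and the confidence interval with it:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic log-normal response times (ms) for the push and no-push arms;
# heavy-tailed, loosely resembling real RUM data. The push arm is ~5% faster.
push = rng.lognormal(mean=6.0, sigma=0.9, size=20000) * 0.95
base = rng.lognormal(mean=6.0, sigma=0.9, size=20000)

# Bucket each arm by its own response-time quantiles, then compare the
# per-bucket means with a normal-approximation 95% confidence interval.
edges = np.linspace(0, 1, 11)
deltas, half_widths = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    pa = push[(push >= np.quantile(push, lo)) & (push <= np.quantile(push, hi))]
    pb = base[(base >= np.quantile(base, lo)) & (base <= np.quantile(base, hi))]
    delta = pa.mean() - pb.mean()
    se = np.sqrt(pa.var(ddof=1) / len(pa) + pb.var(ddof=1) / len(pb))
    deltas.append(delta)
    half_widths.append(1.96 * se)
    print(f"quantile {hi:0.1f}: delta={delta:+9.1f} ms, 95% CI ±{1.96*se:8.1f} ms")
```

Each bucket holds the same number of samples, yet the top bucket's interval is by far the widest: the tail simply spans a much larger range of response times, so the mean there is far less certain.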
As can be seen in this plot, the confidence intervals grow rapidly in size towards the highest quantiles. This is because fewer samples are available to calculate the performance delta in these highest response time quantiles, so we have less confidence in the measured effect of Server Push. It is also possible that other factors dominate response times when they are that high, making the effect of Server Push insignificant in this scenario.
In the above quantile plot chart, we can see that the very last quantile has a massive confidence interval, resulting in a statistically insignificant result.
What happens if we filter out this quantile’s underlying response time data from the aggregate data set and re-compute the analysis?
Before we dive into filtering the data, let’s first take a look at some fresh data from 2019.
The following chart is based on Server Push performance data collected in April 2019, using all quantile data.
Note: This is based on a larger set of customer web sites than what was used in the 2018 IETF presentation, and therefore cannot be compared directly to that data set.
One thing that is immediately striking is that there appears to be statistically significant performance degradation (red bars) across some sites, compared to the results presented in 2018. However, as pointed out, this year’s data set is substantially different in that:
- This 2019 data set is twice the size of the 2018 data set (longer data collection period)
- More data has resulted in narrower confidence intervals and fewer statistically insignificant results
- As a result of more data in general, the data set includes more very high response times
OK, so what happens if we take this same data set, but filter out the quantile representing the slowest 1% of response times?
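The filtering step itself is straightforward; sketched here on synthetic heavy-tailed data (the variable names are hypothetical), it also shows how much of the data set's overall variance the slowest 1% contributes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical heavy-tailed response times (ms).
response_ms = rng.lognormal(mean=6.0, sigma=0.9, size=100_000)

# Drop the slowest 1% of responses before re-running the analysis.
cutoff = np.quantile(response_ms, 0.99)
filtered = response_ms[response_ms <= cutoff]

print(f"99th percentile cutoff: {cutoff:.0f} ms")
print(f"kept {len(filtered)} of {len(response_ms)} samples")
print(f"std before: {response_ms.std():.0f} ms, after: {filtered.std():.0f} ms")
```

Removing just 1% of the samples cuts the standard deviation substantially, which is why the confidence intervals in the remaining quantiles tighten so much.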
Here is what that chart looks like:
Filtering out the slowest 1% of responses in the data set changes the results dramatically. All but two sites show a statistically significant performance improvement, and the remaining two have statistically insignificant results.
What are the key takeaways from this research?
HTTP/2 Server Push is generally helping to improve web page performance for Akamai’s customers
Using the latest methodology that involves filtering out the slowest 1% of responses, HTTP/2 Server Push (as applied by Akamai on Chrome browsers) is demonstrating a clear benefit in the vast majority of cases when it is applied.
No evidence yet that HTTP/2 Server Push helps or hinders in the slowest 1% of response times
Since the slowest 1% of responses were filtered out prior to analysis, the question remains as to the effectiveness of Server Push for these responses. Determining a statistically significant result for this quantile is very challenging due to the wide range of high response times and the relatively smaller scale of performance delta that can be correlated to Server Push. Essentially, any performance change due to Server Push gets “lost in the noise”. It is possible that Server Push causes degradation in these very high-latency scenarios, but we cannot come to that conclusion given the high uncertainty levels involved with the data.
Our teams continue this performance research, and we are finding new ways to optimize the set of resources that should be Server Pushed to the browser.
Additionally, we have undertaken a new branch of research into the effectiveness of Preload hints for third-party page resources, with the Adaptive Acceleration product now issuing Preload hints for third-party font resources.
Of particular interest to our team is the draft Early Hints feature, as we think this would be a very powerful performance feature when used in conjunction with Preload hints, in order to enable early fetching of third-party resources prior to delivery of the base page.