Test Pilot graduation report: Pulse

Chuck Harmston
Published in Firefox Test Pilot · Oct 9, 2017

As we began ramping up for our release of Firefox 57 in November, our product intelligence team faced a difficult problem: they had plenty of telemetry to help them understand Firefox’s performance, but no way of knowing exactly what those numbers meant to users. Pulse was designed to bridge the gap between hard performance numbers and the incredibly vague concept of user satisfaction.

At what point does page size affect user satisfaction? On which sites do users report the most problems? To what extent can users perceive the impact of new Firefox features like e10s and multiple content processes? These were all questions we hoped to answer.

How it worked

When installed, Pulse tracked performance metrics for every page visited. The page weight, time to load, content process count, and much more were recorded. These were submitted to the Test Pilot team whenever the user filled out a short, four-question survey, which asked for details about their experience. This survey was presented to users in two ways:

  1. A pageAction was registered with the browser, putting an icon in the URL bar that allowed users to report their sentiment at any time (sketched below).
  2. Around once a day, users were directly prompted by the browser, using a <notificationbox> element.

The Pulse notification box
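
Pulse itself shipped as a Test Pilot add-on whose source isn’t reproduced here, but a minimal WebExtension-style sketch of the first entry point (the pageAction) might look like the following. The manifest assumption and the survey-opening helper are illustrative, not Pulse’s actual code.

```typescript
// background.ts: a minimal, WebExtension-style sketch of the URL-bar entry
// point. Everything here is illustrative; it is not the Pulse add-on's code.

// Minimal typing for the WebExtension `browser` global (normally supplied by
// webextension-polyfill typings). Assumes a `page_action` entry in manifest.json.
declare const browser: {
  pageAction: {
    show(tabId: number): void;
    onClicked: { addListener(cb: (tab: { id?: number }) => void): void };
  };
  tabs: {
    onUpdated: {
      addListener(cb: (tabId: number, info: { status?: string }) => void): void;
    };
  };
};

// Show the URL-bar icon once a page finishes loading, so the user can
// report sentiment at any time.
browser.tabs.onUpdated.addListener((tabId, info) => {
  if (info.status === "complete") {
    browser.pageAction.show(tabId);
  }
});

// Clicking the icon opens the short, four-question survey.
browser.pageAction.onClicked.addListener((tab) => {
  if (tab.id !== undefined) {
    openSurveyForTab(tab.id); // hypothetical helper, not part of any API
  }
});

function openSurveyForTab(tabId: number): void {
  // Placeholder: a real add-on would open its survey panel or popup here.
  console.log(`Would open the survey for tab ${tabId}`);
}
```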

This arrangement allowed us to collect two distinct but equally important types of data: random-sampled data, reflecting the average experience of the user, and outlier data, reflecting when a user’s experience was so good or bad that they felt compelled to report it.

Since user sentiment is notoriously difficult to measure, we tried to restrict our analysis to submissions that cited speed (“fast” or “slow”) as the primary reason for the score. This reduced noise from users who may have been rating, for example, whether they liked or disliked a website. We also used a segmentation methodology similar to Net Promoter Score to cluster users into positive- and negative-sentiment groups: those who gave a rating of one to three stars were considered detractors, while those who gave five stars were considered promoters.
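
To illustrate that segmentation concretely, here is a small sketch. The Submission shape, the field names, and the treatment of four-star ratings as passive are assumptions made for illustration, not the actual Pulse schema or analysis code.

```typescript
// A minimal sketch of the NPS-style segmentation described above.
// The Submission shape and field names are assumptions for illustration.
interface Submission {
  rating: 1 | 2 | 3 | 4 | 5; // the star rating from the survey
  reason: string;            // primary reason given for the score
}

type Segment = "detractor" | "passive" | "promoter";

// 1-3 stars -> detractor, 5 stars -> promoter; 4 stars treated as passive
// (an assumption consistent with NPS-style clustering, not stated in the post).
function segment(rating: Submission["rating"]): Segment {
  if (rating <= 3) return "detractor";
  if (rating === 5) return "promoter";
  return "passive";
}

// Restrict analysis to submissions that cite speed as the primary reason,
// then count each segment.
function summarize(submissions: Submission[]) {
  const speedOnly = submissions.filter(
    (s) => s.reason === "fast" || s.reason === "slow"
  );
  const counts = { detractor: 0, passive: 0, promoter: 0 };
  for (const s of speedOnly) counts[segment(s.rating)] += 1;
  return counts;
}

// Example usage with made-up data:
console.log(
  summarize([
    { rating: 5, reason: "fast" },
    { rating: 2, reason: "slow" },
    { rating: 4, reason: "design" }, // filtered out: not a speed reason
  ])
);
// -> { detractor: 1, passive: 0, promoter: 1 }
```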

What we learned

In short: a lot. Thanks to the dedication of our users, we collected over 37,000 submissions in just a few months. We can’t thank you enough for all the time and effort you put into helping us understand how to make Firefox great.

Cumulative Pulse submissions, by date

Our product intelligence team is continuing to comb through the data for meaningful findings. A few that stand out:

  1. Performance matters. Nearly every metric showed significant evidence that poor performance negatively affects user sentiment.
  2. Ads hurt user sentiment. One of the strongest effects on sentiment came from our proxy for the number of ads: requests made by the page to hostnames on the Disconnect.me tracking protection list. This covaried with the overall number of requests, total page weight, and each of the timers, so it’s unclear whether the effect reflects a specific aversion to ads or the performance cost they impose.
  3. Developers should focus on DOMContentLoaded to improve perceived performance. Timers were placed on a number of events in the page load cycle: time to first byte, time to first paint, time to the window.load event, and time to the DOMContentLoaded event. The one that most consistently affected perceived performance was the time to DOMContentLoaded. If you have limited time to tune your site’s performance, reducing that number offers the strongest returns (a sketch of how to read these timers follows this list). Your users will thank you.
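
To make that last finding actionable, here is a minimal sketch of how a page can read those timers for itself using the standard Navigation Timing and Paint Timing APIs. This is illustrative only and is not the instrumentation Pulse used; first-contentful-paint is used as the closest standard stand-in for “time to first paint.”

```typescript
// A minimal sketch of reading the page-load timers named above from a page's
// own context, using the standard Navigation Timing and Paint Timing APIs.
// Illustrative only; this is not the instrumentation Pulse used.
function reportLoadTimers(): void {
  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  // Closest standard proxy for "time to first paint".
  const fcp = performance
    .getEntriesByType("paint")
    .find((e) => e.name === "first-contentful-paint");

  // All values are in milliseconds, relative to the start of navigation.
  console.log("time to first byte:", nav.responseStart);
  console.log("time to first paint (FCP):", fcp ? fcp.startTime : "n/a");
  console.log("time to DOMContentLoaded:", nav.domContentLoadedEventStart);
  console.log("time to load event:", nav.loadEventStart);
}

// Wait for the load event so the load-event timer has been recorded.
window.addEventListener("load", () => setTimeout(reportLoadTimers, 0));
```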

As we move closer to the Firefox 57 release in November, the team will continue using the data collected by Pulse to make sure that it’s the fastest Firefox yet.
