Building a JS Performance Monitoring Tool — Part II

Sep 11, 2020 · 6 min read


JavaScript has become as crucial to many websites as the HTML it's embedded in. The language may be regarded as the Toby Flenderson of programming languages by many developers, but without it, the functionality of so many of our favorite websites would crumble. And as the saying goes, with great power comes great responsibility. How performant the JavaScript on a page is plays a large part in the quality of a user's experience, and we're in an age when milliseconds matter. This is why a performance monitoring tool was deemed necessary. Sure, tools of this nature already exist, but when looking for specific data points and specific ways to visualize them, a tailor-made tool created in-house was our best option.

To read about the process of architecting this tool and what third-party products we did leverage, head over to Part I of this article here. Otherwise, we’ll continue on now to find out what data we decided to collect and how we would use it to visualize our JavaScript’s performance.

We Can Collect Data, Now What?

Part I of this article goes over the process and tools we use in order to collect data, store it, parse it, and make it query-ready. Now that we had a workflow for all this, we needed to determine what kind of values we wanted to store and how to do so.

We opted to use JavaScript’s Performance interface as a model for how we wanted to collect our data points, and to use the interface itself for getting time-based values, since it provides incredible granularity. Using the timestamps provided by the interface’s methods, we created custom methods for setting Marks and Measures. The value of a Mark is the number of milliseconds that passed between the page loading and the method being called. A Measure is the number of milliseconds between the occurrence of any two given Marks.
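A minimal sketch of what such helpers could look like, built on the standard Performance API. The names `setMark` and `setMeasure` are illustrative, not the tool's actual internals:

```javascript
// Illustrative Mark/Measure helpers built on the standard Performance API.
// (Function names are hypothetical, not the tool's actual internals.)

function setMark(name) {
  // Records a high-resolution timestamp, in milliseconds relative to the
  // page's time origin (effectively, since the page loaded).
  performance.mark(name);
}

function setMeasure(name, startMark, endMark) {
  // Creates a measure spanning two existing Marks and returns its
  // duration in milliseconds.
  performance.measure(name, startMark, endMark);
  const entries = performance.getEntriesByName(name, 'measure');
  return entries[entries.length - 1].duration;
}
```

In the browser, entries created this way also appear in the DevTools Performance panel alongside native timings, which is handy when debugging locally.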

Meaningful Data Points

Now we needed to determine the points in our codebase’s processes that would provide the most telling information and help us pinpoint areas where we could reduce latency and improve performance.

Since this particular code is used to make ad requests and render the response as an ad on a publisher’s page, some initial data points were pretty self-evident. Of course, we started with a Mark when our file first loads on the page. Then, as we send ad requests and receive responses, we set a Mark for each of those and use those Marks to create Measures for how long our requests are taking. However, these values speak more to the user’s network speed and the performance of our back end.
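Bracketing a request with Marks like this might be sketched as a small wrapper; `withRequestTiming` and the mark names below are made up for illustration:

```javascript
// Hypothetical wrapper that brackets any async request with Marks and
// records a Measure for its round-trip time. Names are illustrative.
async function withRequestTiming(label, requestFn) {
  performance.mark(`${label}-request-sent`);
  const result = await requestFn();
  performance.mark(`${label}-response-received`);
  performance.measure(
    `${label}-request-duration`,
    `${label}-request-sent`,
    `${label}-response-received`
  );
  return result;
}

// e.g. const adResponse = await withRequestTiming('ad', () => fetch(adServerUrl));
```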

To really begin seeing data that reflects the front-end codebase’s performance, we needed to add Measures (and the Marks required to create them) for the processes that occur between responses and subsequent requests, as well as for the creation and finalization of the objects used for rendering. The final Mark occurs when the ad has fully rendered on the page.

Timeline of critical data-points we’re capturing

As we began to store and analyze data from these points, additional metrics naturally presented themselves as useful or necessary inclusions. One example is when we receive consent data for regions that require users’ knowledge of, and opt-in to, the usage of their data (such as the GDPR in the European Union and the CCPA in California). We also found it better to assign specific Marks to the different ad products and implementation methods (e.g. Prebid.js versus a direct relationship with the publisher page). Later on, this helps us narrow down problematic areas and troubleshoot issues as they arise.

Finding What Matters Most…

In order to create meaningful visualizations of the data we had begun collecting, we first needed to determine which metrics should be used.

After analyzing the initial information we received, we quickly found that the Marks alone were too heavily influenced by outside factors. For instance, many pages do not load our initial script file until a user has interacted with the page. On a page that has a fair amount of content above the fold, a user can read for a good amount of time before actually interacting with the page, causing the values of all Marks to be higher than if our initial script file had been loaded as soon as the page was. Another outside factor that influenced our Marks was whether or not the ad was served below the fold. For ad products that are viewed only after a user has scrolled, we do not finish creating or rendering the ad until the ad space is within the user’s viewport. Again, this inflated the values of some Marks.

“Above the fold” refers to content seen before a user has scrolled. “Below the fold” is content that exists below the user’s viewport when the page loads.

We did, however, find that these Marks would still prove useful in finding information between two different parts of our process. For example, some of our ads require asynchronous requests to be made with the first needing to be parsed and processed before making the second request. In these cases, we can subtract the value of the Mark created on receipt of the first response from the value of the Mark created when sending the second request in order to find how long it took our code to process the response of the first request.
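That subtraction is straightforward with the Performance API; a sketch, with hypothetical mark names:

```javascript
// Sketch: time our code spent processing the first response before sending
// the second request, derived by subtracting two Marks' startTime values.
function elapsedBetweenMarks(earlierName, laterName) {
  const [earlier] = performance.getEntriesByName(earlierName, 'mark');
  const [later] = performance.getEntriesByName(laterName, 'mark');
  return later.startTime - earlier.startTime; // milliseconds
}

// e.g. elapsedBetweenMarks('first-response-received', 'second-request-sent')
```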

…And Visualizing It

Once we had identified the metrics and calculations that would yield the most meaningful reports, the final step for this performance monitoring tool was to see the outcomes in graph form. For this, we decided to use the third-party tool our company already uses for visualizing other reports: Looker. Looker allows our data team to run SQL queries against our data stored in Athena (how it gets there is covered in Part I).

Within Looker, custom reports can be created with built-in equations to handle computations, like the example above, for calculating time between processes. It also lets us see metrics over time, view averages, and compare multiple metrics at once on the same graph, with different graph types.

Data visualization of our JS performance in Looker

Bonus Feature

As an additional feature, we added the ability to see the data for any ad we were viewing on any page in real-time.

We already have a tool for displaying information about a page and any of our ads that appear on it for users who are currently logged into our company-wide dashboard. This tool allows the user to see more information about each individual ad on the page by expanding an overlay with tabbed sections of information. All that was needed was to add a new tab for the performance data we could now collect.

In this view, the values of the most important Marks for this particular ad instance are printed into a table. For the most informative Measures, we added a waterfall visualization that would feel familiar to those who have used browser developer tools.

Performance data that can be viewed for all ad instances


At the end of the day, whether you find JavaScript beautiful despite its flaws or a beast you’d rather not encounter, it has become the backbone of the modern web. Being mindful of performance can play a large part in the success of a site. And when the sites your code runs on are not your own, being mindful of performance may be the only way your code stays on those sites. For this reason, having an at-a-glance view of our performance became the driving force behind creating our JavaScript Performance Monitoring Tool.



Thoughts from the GumGum tech team
