Intro to Memory Profiling & Chrome DevTools Memory Tab explained

Visualizations for Memory profiles

I think the Homer hiding gif best captures my immediate reaction when I saw the Memory tab for the first time.

Chrome DevTools Memory tab

As I wrote in my intro to this publication, my attempts at hiding from performance engineering somehow instead got me hired onto the profiling team at Datadog. Life has its own sense of humor.

After a year or so of working on the software, through the process of osmosis, the Chrome DevTools Memory tab finally started to make sense.

If you’ve used a profiling tool like Datadog’s Continuous Profiler, you might have seen these different profile types:

Different profile types of the Datadog Continuous Profiler

The above shows all the available profile types for a Go runtime. There’s a section titled “Memory” with four different profile types: allocations, allocated memory, heap live objects, and heap live size.

In the Datadog UI, selecting a profile type lets you visualize that profile in a Flame Graph. If you’re not familiar with Flame Graphs, read my blog post here!

Heap Live Size Profile Type for Go

Although Go and Javascript are different languages, and the process by which profiles are collected varies between languages, the concepts in memory profiling are interchangeable.

The Chrome DevTools Memory tab lets you choose between different “profiling types.”

Memory tab

Chrome runs a Javascript runtime, and these profiling types produce profiles analogous to the Go profile types presented in the Datadog UI.

Let’s see how it works.

As a prerequisite to understanding the different memory profile types, let’s first look at how memory is managed.

How is memory managed?

Memory refers to the physical space where data is stored. Most of the time, as an application developer, you’re dealing with either RAM or the drive.

When you’re running an application in Chrome, Chrome uses the V8 Javascript engine, which parses your code and uses your computer’s RAM or drive to store data. Most of what you see is stored in RAM; things in localStorage and IndexedDB are stored on the drive.

There are several types of memory that reside in RAM, among them:

  1. Stack memory
  2. Heap memory

Stack memory holds variables that exist only for the duration of a single function call. They live in the RAM’s stack.

Heap memory holds objects that must persist beyond the lifetime of a single function call. They live in the RAM’s heap.
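As a rough Javascript sketch (the names here are purely illustrative, and real engines like V8 make their own allocation decisions): a plain local primitive can conceptually live on the stack, while anything that must survive after the call returns, such as a value captured by a closure, has to live on the heap.

```javascript
function add(a, b) {
  // `sum` only exists for the duration of this call: conceptually stack memory.
  const sum = a + b;
  return sum;
}

function makeCounter() {
  // `count` is captured by the returned closure, so it must outlive this
  // call. It lives on the heap until the closure itself becomes unreachable.
  let count = 0;
  return () => ++count;
}

const counter = makeCounter();
counter(); // 1
counter(); // 2
```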

Most of the time, memory leaks happen when some buggy code causes objects to accumulate in the heap. In both the frontend (browser) and the backend (server), this can lead to out-of-memory crashes.
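A minimal sketch of such a bug, with hypothetical names: a module-level array that accumulates objects and never evicts them, so the garbage collector can never free them.

```javascript
// Hypothetical leak: a module-level "cache" that only ever grows.
const seenMessages = [];

function handleMessage(message) {
  // Bug: every message is retained "for debugging" and never evicted,
  // so these objects stay reachable and the GC can never free them.
  seenMessages.push({ message, receivedAt: Date.now() });
}

// Simulate a long-running page receiving events:
for (let i = 0; i < 10000; i++) {
  handleMessage(`event ${i}`);
}
// All 10,000 wrapper objects are still live on the heap at this point.
```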

Now, back onto memory profiles and the different profile types.

JS Heap (i.e. heap live size)

Before we look at the Memory tab, let’s first look at the Performance tab. When you go to the Performance tab, you have an option to check the “memory” checkbox when collecting profiles.

If you check that “memory” box and run the profiler, it will give you some graphs that look like this:

This graph shows the size of JS heap over time as I was interacting with the page.

As you interact with a page, memory can grow for many reasons. React, for instance, maintains a virtual DOM representation of the actual DOM, and this virtual DOM lives in heap memory. Thus, as you add more and more elements to the application (e.g. via infinite scroll), you increase the size of this virtual DOM, and thus your JS heap. When you create Redux stores, they live in the heap too.

Looking at which user interactions are associated with JS heap growth is a good starting point to investigating memory leaks.
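As an aside, Chrome also exposes the current JS heap size programmatically through the non-standard performance.memory API, which can be handy for quick, scriptable checks while reproducing a suspected leak. A guarded sketch (the helper name is made up):

```javascript
// Chrome-only, non-standard: performance.memory.usedJSHeapSize reports the
// currently used JS heap in bytes. Other environments don't have it, so guard.
function getUsedJSHeapSize() {
  if (typeof performance !== 'undefined' && performance.memory) {
    return performance.memory.usedJSHeapSize;
  }
  return null; // not available (e.g. Firefox, Safari, Node)
}
```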

The “JS heap” graph here tells you the size of Javascript heap memory currently in use over time. However, it doesn’t tell you what objects in memory have grown over time.

For this, you need the Heap snapshot from the Memory tab.

Heap snapshot (i.e. heap live size / live objects)

Chrome DevTools Memory tab

The Heap snapshot gives you a detailed view of your JS heap.

Below, I took a Heap snapshot of Medium.com after the page load. And I took one again after scrolling down its infinite scroll for a bit.

Side by side comparison of 2 heap snapshots

This gives you a rough idea of what kinds of objects have increased from one snapshot to the other. As you can see, the shallow size of Object has increased by 26 MB, and the number of Object objects has grown from 200,936 to 1,145,575. Shallow size means the size of the object itself. Retained size means the size of the object itself plus all the objects that only it references. In other words, it is the size that would be freed once this object is deleted from the heap.
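To make the shallow vs. retained distinction concrete, here is a tiny sketch (the names are illustrative):

```javascript
// `child` holds a large array; `parent` holds only a reference to `child`.
const child = { data: new Array(100000).fill(0) };
const parent = { child };

// parent's *shallow* size is tiny: just its own object, one reference inside.
// parent's *retained* size is large if `parent` is the only thing keeping
// `child` alive: deleting `parent` would also free `child` and its array.
```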

I will write a detailed step-by-step with examples of how to investigate memory leaks in a future article. Stay tuned!

The information from Heap Snapshot is analogous to the “heap live objects” and “heap live size” profile types from the Datadog UI for Go runtime.

Datadog’s Heap live objects Flame Graph for Go

However, the Heap snapshot from Chrome doesn’t tell you what functions created these objects. In other words, you can’t build a Flame Graph with this information.

If you want to look into how functions are contributing to the JS heap as you interact with the page (e.g. infinite scrolling on the Medium homepage), you need to select “allocation sampling.”

Allocation Sampling (i.e. Allocated Memory)

Select Allocation sampling, and record a profile as you interact with the page. Then stop it. This produces an “Allocated Memory” profile type.

Allocation sampling

The allocated memory profile type shows you the amount of heap memory allocated by each function over the duration of the profile, including allocations that were subsequently freed. That’s the big difference between this profile type and heap live size, which tells you the live size at the moment the heap snapshot was taken.
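A sketch of why the two can differ, with hypothetical names: a function that allocates large temporaries on every call shows up prominently in an allocated memory profile, even though a heap snapshot taken afterwards finds almost none of it live.

```javascript
function buildReport(rows) {
  // A fresh temporary array of objects is allocated on every call...
  const temp = rows.map((r) => ({ ...r, formatted: String(r.value) }));
  return temp.length; // ...and becomes unreachable as soon as we return.
}

let total = 0;
for (let i = 0; i < 1000; i++) {
  total += buildReport([{ value: i }]);
}
// Allocation sampling would attribute all those temporaries to buildReport;
// a heap snapshot taken now would show none of them as live.
```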

Look at the first line: the function Pa has allocated 3.4 MB. Self size is analogous to the shallow size used in the Heap snapshot, and is the size of the objects the function allocated itself. Total size is analogous to the retained size mentioned earlier.

Because the Javascript is minified, we can’t actually tell what functions Pa etc. refer to. You’d need a source map or a non-minified development build.

Notice the select field at the top that reads “Heavy (Bottom Up)”? This refers to how the functions are displayed.

This way of visualizing data is called a Call Tree.

Bottom up indicates the direction of the Call Tree, which refers to the direction of the stack trace.

Let’s say you have function apple calling banana and beer, both of which call carrot:

const apple = () => {
  banana();
  beer();
};

const banana = () => {
  carrot();
};

const beer = () => {
  carrot();
};

const carrot = () => {
  return 'carrot';
};

Bottom up tree view will look like:

Bottom up Call Tree of code snippet

Top down will look like:

Top down Call Tree of code snippet

A (top down) Flame Graph, on the other hand, will look like:

Flame Graph of code snippet

Call Trees are not as user-friendly as Flame Graphs, are they?

This goes to show that having the right visualization is critical for making profiling accessible to non-experts. Want to learn more about how to visualize profiling data? Read my blog here.

In our real life example with Pa:

Pa is a leaf node, and is called from several places, among them t.useSyncExternalStoreWithSelector, e.useQuery, i, and some “anonymous” functions like callbacks. Any Javascript function without a name will show up as “anonymous.” E.g.:

const message = 'hello world';

setTimeout(() => {
  [1, 2, 3].forEach((i) => {
    sayHello(message, i);
  });
});

Above, there are two anonymous functions: one for the closure passed into setTimeout, and the other for the forEach callback. And yes, anonymous functions make debugging difficult.
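One mitigation is to use named function expressions; V8 then reports those names in stack traces and profiles instead of “anonymous.” A sketch of the same idea with names (greetLater, greetOne, and the greetings array are made up for illustration):

```javascript
const message = 'hello world';
const greetings = [];

function sayHello(msg, i) {
  greetings.push(`${msg} ${i}`);
}

// Named function expressions keep their names in profiles:
const greetLater = function greetLater() {
  [1, 2, 3].forEach(function greetOne(i) {
    sayHello(message, i);
  });
};

greetLater();
```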

The last option, “Allocation instrumentation on timeline”, kind of combines the Heap snapshot and Allocation sampling.

Allocation instrumentation on timeline

The bottom part should look familiar: it’s what’s displayed when you take a “Heap snapshot.” With the allocation timeline, you can select a small time frame and look at the snapshot for just that window.

The dark blue and light gray bars at the top show the memory size of objects that were allocated and deallocated. The entire bar length (dark blue + gray) indicates the total size of objects allocated. The gray part indicates the size that has since been deallocated, as of the current time (or when you ended the profile). Thus, the heights of the blue and gray parts change over time. If you switch between <Route>s of a single-<Router> React app but much of the blue remains, it might be an indication of a memory leak, especially if the new page contains vastly different information.

Conclusion

This is an intro article aimed at giving an overview of the different memory profiles. If you’re a frontend engineer, I hope the Memory tab now makes more sense.

There are, however, still two important and interesting pieces of information missing from this article.

  1. A step-by-step example on how to investigate memory leaks with the Chrome DevTools.
  2. A high level explanation of how the data for these memory profiles is collected.

I will be creating these blog posts soon. Stay tuned!

I’m Lily Chen, senior software engineer on the Profiling team at Datadog. Don’t want to miss an article from me? Give my publication a follow :)
