How to profile a React application?

A guide through the Chrome Profiler & React Profiler

Emmanuel Meric
partoo
14 min read · Oct 27, 2021


Writing web applications in React is efficient and fast thanks to its declarative API. But as your application grows in scale, you may encounter performance bottlenecks, caused by React's inner workings, that can be hard to debug.

In this guide, we'll go step by step through how to identify and solve performance bottlenecks using the Chrome developer tools.

We should note that this requires a solid understanding of React and the Javascript runtime in general. While we'll give some basic explanations, you may need to be familiar with concepts such as: the event loop, rendering engine architecture, the reconciliation algorithm, or referential equality in Javascript.

How does React work under the hood?

All you need to remember is that React always runs in three phases:

  • Update phase: a callback (“onClick”, “onKeyPress”, etc.) triggers a state change.
  • Render phase: since the state changed, some nodes are marked as “dirty” in the virtual DOM and some components are rendered recursively. The render method (or the function body for functional components) is called; changes since the previous render are recorded.
  • Commit phase: changes are applied to the real DOM, which makes them visible. Lifecycle methods like componentDidMount and componentWillUnmount are called.

This well-known schema from wojtekmaj.pl sums up the different lifecycle methods called during each phase:

Seeing everything through the Chrome Profiler

All of these phases are nothing more than React's own Javascript code running in your application when it performs an update. We can easily see this through the Chrome Profiler.

Running the Profiler

To do so, make sure you are running your app on a development bundle, so that the actual function names are preserved for easier debugging.

Then, open the devtools using Ctrl+Shift+I, and go to the Performance tab.

Also, click on the three dots icon and hit “Undock into separate window”, which will give you much more space to debug your application.

Then, click the record button, perform any action on your application to be recorded, and hit the stop button to stop recording.

Now, you should see all the code that was executed during the recording:

What on earth is all of this?

While there's a lot of information, we'll focus on the most interesting parts here.

First, notice that the executed code forms a series of columns, each headed by a dark yellow rectangle. This is because Javascript is event-driven: apart from the initial loading of the page, anything that makes Javascript run is an event.

It can be a user event like mouseclick, mousemove, keyup, or keydown, or another type of event, like an xhr event that fires when an Ajax request resolves, or an event scheduled by setTimeout or setInterval.

In the dark yellow rectangle at the top of a column, we can see the name of the event that triggered the execution of Javascript code:

Now we can start to understand why certain code was executed at certain moments, in relation to the user actions we performed while recording:

Measuring performance

Now that we have a basic understanding of why code runs, we should ask ourselves: how does it impact performance?

In particular, how do we measure the performance of our application in the profiler?

The main metric we're interested in is the FPS rate. Indeed, browsers act as a video engine: they simply try to render the website frame by frame, like a video player would.

We can take a look at the frame rate by scrolling up above the flame graph, in the Frames section:

Each image shown here is a frame, headed by a rectangle showing how many milliseconds the frame lasted.

We can easily calculate the frame rate from a frame duration:
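
In other words, with the duration expressed in milliseconds:

const frameDurationMs = 16;
const fps = 1000 / frameDurationMs; // ≈ 60 fps (a 33 ms frame would give ≈ 30 fps)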

It's also shown when hovering over the rectangle.

Dropped frames and idle frames

Ideally, browsers try to maintain a frame rate of 60 fps, which corresponds to a frame duration of at most 16 ms.

It's important to understand that while the browser is executing Javascript, it cannot show any new frame. That's the key point behind frontend optimization: if a Javascript event takes too long to execute, no new frame will be shown and the UI will be frozen for a noticeable amount of time.

If a frame is stuck for too long, it is called a dropped frame. They appear in red in the screenshot above. Usually, we want to avoid dropped frames in order to keep a smooth user experience.

If the browser has nothing to do for a while (simply because no events are happening), there is no reason for it to keep computing new frames at 60 fps. In this case, it just keeps showing the same frame until a new event occurs.

These are called idle frames, and they can last as long as needed; they are not a problem. It just means the image stays fixed on the screen. They appear in white in the screenshot above.

What it takes to render a frame

We insisted on the fact that running Javascript prevents the browser from rendering new frames on the screen. But in fact, a few more things are involved in rendering a frame.

Modern browsers follow a multi-step pipeline to render frames on the screen. This is what is known as the rendering engine.

It can be summarized in the following way:

A (simplified) schema representing stages of a browser rendering engine
  1. Run Javascript event
    When an event occurs, if a Javascript handler is defined for that event, its code runs. It may call a lot of functions, perform some calculations and eventually update the DOM. This is where the React code and our own code execute.
  2. Recalculate styles
    The browser maintains an internal tree structure in addition to the DOM, called the render tree. It is mainly a fusion of the DOM with the CSS stylesheets, and it contains the computed styles for each element. This tree has to be updated if some Javascript mutated the DOM. This step mainly consists of matching CSS rules to elements and computing the resulting styles.
  3. Layout
    Another internal structure is known as the layout tree. It contains the dimensions and positions of each element after applying layout styling rules (such as display: flex or float: left). This tree has to be updated if some Javascript mutated the DOM.
  4. Paint
    Once we have a fresh render tree and layout tree, we need to produce images to show on the screen. The paint phase is about determining what to draw on the screen and scheduling work on the rasterizer threads for the next stage.
  5. Rasterize
    Rasterizing is the process of converting geometric shapes into pixels. Because pixels don't overlap, this can be highly parallelized across multiple threads. At the end of this stage, we finally have pixels to draw on the screen.
  6. Composite layers
    Because most frame changes on the web come from scrolling, or translations in general, browsers implement an optimization called compositing. Instead of computing a single image at once for each frame, the browser splits the UI into different layers. These layers are computed independently and finally assembled by the compositor (exactly like layers in Photoshop) during the composite layers phase. Then, on each update, it may not need to re-compute every layer (see the short example after this list).
  7. GPU
    Finally, images are sent to the GPU to be rendered on the screen. The GPU may perform compositing itself, with the help of the compositor thread, for better performance.
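
As a rough illustration of how different DOM updates hit this pipeline (element below stands for any DOM node and is purely hypothetical):

// changing a geometry property forces style, layout, paint and compositing
// to run again for the affected part of the page
element.style.width = '300px';

// changing transform or opacity can usually be handled by the compositor alone,
// skipping the layout and paint stages for that element
element.style.transform = 'translateX(100px)';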

Where can performance bottlenecks happen?

Now that we've seen a high-level overview of the rendering pipeline, you should be able to see every step in the Chrome Profiler, and thus find out what takes “too much time” before frames get rendered on the screen when your UI is freezing.

While identifying bottlenecks, you will realize that:

90% of the time, performance bottlenecks come from too much Javascript execution, which prevents new frames from being rendered on the screen

The other 10% of the time, the bottleneck will be outside of Javascript, in the rendering engine, so it is still useful to know how the browser works beyond Javascript. For example, the paint stage is known to sometimes be a performance bottleneck.

A zoom on the Javascript execution in React

Because 90% of bottlenecks come from Javascript, let's zoom in on the processing of a Javascript event and see what happens in there. This is where the three React phases (update, render and commit) happen:

What happens in the Javascript execution of an event with React

First, you can see that we can find the three phases inside the Javascript code execution. Reading the function names may give you hints about which phase a call belongs to. For example, the commit phase is generally wrapped in a commitRoot function call.

Also, you can see that multiple renders can occur while processing a single event. This comes from React 16, which introduced a scheduler that decides how many renders can be done in a single frame, and that can delay or cancel some renders if it considers that the main thread should be given back to the browser engine.

Then you can start to understand, on a case-by-case basis, what takes too much time; in particular, which phase seems bloated and could be the source of a bottleneck.

As a general rule, you should avoid having events that last more than 100ms, because this is the point at which users start to perceive a delay in the UI. In the picture above, we have a 130ms event, which is not ideal but not too bad either. Clearly, the first render phase or the first commit phase is taking quite a long time and could be shortened.

Common performance pitfalls

Here is a list of the most common performance pitfalls in React, along with how to solve them. Let's start!

Unnecessary renders

Fundamentally, a render is an indication that something may have changed. If the generated virtual DOM is actually different from the real DOM, there will be some relevant DOM updates in the commit phase. But it is really common in React apps to have renders that just end up producing the same virtual DOM, which makes them useless.

This is not a bad thing if they don't take too much time. But sometimes, large renders are unnecessary and cause performance problems. It may then be time to memoize some of your components, using React.memo, useCallback, or useMemo. Also, pay attention to referential equality in your props.
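
As a minimal sketch of what that can look like (ItemList, Page and their props are made up for the example):

import React, { memo, useCallback, useState } from 'react';

// Memoized child: it re-renders only when its props change (shallow comparison).
const ItemList = memo(function ItemList({ items, onSelect }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id} onClick={() => onSelect(item.id)}>
          {item.label}
        </li>
      ))}
    </ul>
  );
});

function Page({ items }) {
  const [selectedId, setSelectedId] = useState(null);

  // Without useCallback, a new function reference would be created on every
  // render of Page, which would defeat the memoization of ItemList.
  const handleSelect = useCallback((id) => setSelectedId(id), []);

  return (
    <>
      <p>Selected item: {selectedId}</p>
      <ItemList items={items} onSelect={handleSelect} />
    </>
  );
}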

We'll detail render-phase troubleshooting in the last chapter of this article, using the React Profiler.

Long update phase & unnecessary renders (Redux)

When using Redux, there's a common pitfall related to the connect HOC. You should keep the following two points in mind while profiling Redux applications, and when designing Redux architectures in general:

1. When dispatching an action in Redux, the mapStateToProps of every mounted component is called.

2. If the object returned by a mapStateToProps differs from the previous props (using a shallow equality check), the corresponding component re-renders.

First, this means that dispatching an action is quite heavy, so only use the global state when it is really necessary. Prefer local state or contexts in general.

Secondly, it's very easy to kill performance if a mapStateToProps is not implemented correctly. Let's look at some examples:
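
For instance (the component, selector and state shape below are hypothetical):

import React from 'react';
import { connect } from 'react-redux';

// Hypothetical component, for illustration only.
const StatsPanel = ({ stats }) => <div>{stats.total} orders</div>;

// Imagine a heavy aggregation over thousands of items.
const computeExpensiveStats = (orders) =>
  orders.reduce((acc, order) => ({ total: acc.total + order.amount }), { total: 0 });

const mapStateToProps = (state) => ({
  // this runs again on EVERY dispatched action, for every mounted instance,
  // even when the action has nothing to do with orders
  stats: computeExpensiveStats(state.orders),
});

export default connect(mapStateToProps)(StatsPanel);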

Here, we perform an expensive task inside a mapStateToProps. It will execute every time we dispatch an action, as long as the component is mounted, even if the action has nothing to do with the component and relates to other parts of the application.
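
Another example (again with a made-up state shape):

const mapStateToProps = (state) => ({
  // a brand-new object reference is created on every call, so the shallow
  // equality check always fails and the component re-renders on every dispatch
  stats: { total: state.total, average: state.average },
});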

Here, we're going to create a new object for the value of stats each time mapStateToProps is called. Thus, the reference of that object will be different on each call, and the component will re-render every time an action is dispatched, just like before.

This happens with objects, but also with arrays, arrow functions, or anything that returns a new reference that differs from the store:
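
For example (isVisible and fetchItems are hypothetical helpers):

const mapStateToProps = (state) => ({
  stats: { total: state.total },               // new object on every call
  visibleItems: state.items.filter(isVisible), // new array on every call
  onRefresh: () => fetchItems(state.userId),   // new function on every call
});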

What not to do in a mapStateToProps

You can run various tests in a Javascript runtime to see what keeps the same reference and what doesn't:

{} === {} // false
[] === [] // false
(() => {}) === (() => {}) // false
const myArray = []
myArray === myArray // true
myArray.map(x => x) === myArray.map(x => x) // false

Overall, the solution is to always return direct references to the store:
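
For instance, with the same hypothetical state shape as above:

const mapStateToProps = (state) => ({
  // values read directly from the store keep the same reference as long as
  // the corresponding slice of state has not changed
  total: state.total,
  items: state.items,
  userId: state.userId,
});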

Always return direct references to the store in mapStateToProps

If you're using selectors to compute complex derived state from the store, you may want to memoize them using reselect, as recommended by Redux's official documentation.
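
A minimal sketch with reselect (the state shape and selector names are hypothetical):

import { createSelector } from 'reselect';

// the derived array is recomputed only when state.items changes; otherwise the
// memoized reference is returned, so connected components don't re-render needlessly
const selectVisibleItems = createSelector(
  (state) => state.items,
  (items) => items.filter((item) => item.visible)
);

const mapStateToProps = (state) => ({
  visibleItems: selectVisibleItems(state),
});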

Long commit phase due to many lifecycle methods called

Lifecycle methods such as componentDidMount, componentDidUpdate, or componentWillUnmount are called synchronously each time a component mounts, updates, or unmounts, respectively.

If they perform non-trivial tasks on many components at once, they can create huge bottlenecks.

This often happens in lists of items, where the user can load hundreds of items using an infinite scroll. When leaving the page, they must all be unmounted synchronously, which can freeze the page. Here is an example of a bloated commit phase when leaving a page with a lot of mounted items:

A bloated commit phase when unmounting a long list of items

The solution is simply to remove what's done in the lifecycle method. If you end up with this many calls, there's certainly a better way to implement the feature. You can also consider using list virtualization, with react-window for example.
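
As a minimal sketch of list virtualization with react-window (the items prop and its shape are made up for the example):

import React from 'react';
import { FixedSizeList } from 'react-window';

// items is assumed to be an array of { id, label } objects
function VirtualizedItemList({ items }) {
  return (
    <FixedSizeList
      height={600}   // height of the scrollable viewport, in pixels
      width="100%"
      itemCount={items.length}
      itemSize={48}  // height of a single row, in pixels
    >
      {({ index, style }) => (
        // only the rows currently visible are mounted, so mounting and
        // unmounting the list stays cheap even with thousands of items
        <div style={style}>{items[index].label}</div>
      )}
    </FixedSizeList>
  );
}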

Forced synchronous layout

Since we've seen that the browser always:

  1. runs the Javascript
  2. then updates the style tree and the layout tree, which contain each element's position and dimensions,

what happens in the following snippet?
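
Something along these lines, where element is any DOM node already on the page:

// write to the DOM…
element.style.width = '600px';

// …then immediately read a layout property: to answer, the browser has to
// recompute the layout synchronously, inside the same Javascript event
const { width } = element.getBoundingClientRect();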

Some Javascript updating and reading the DOM synchronously

Since all these operations happen synchronously, we never leave the “run Javascript event” phase, yet we change the width of an element and then query the layout tree.

This is a special case where some layout tree calculations are performed within the execution of the Javascript event. This is what we call a Forced Synchronous Layout (FSL), or Reflow.

As a general rule, this happens whenever the DOM is updated and then read within a single frame. getBoundingClientRect is not the only way to trigger an FSL: many other properties are listed in this gist.

The Chrome Profiler will warn you when this happens:

Sometimes, such synchronous layout calculations are intentional, for example when implementing FLIP animations. But this is usually a performance bottleneck that extends the Javascript execution time and prevents new frames from being shown on the screen.

To prevent these, avoid reading the DOM in methods such as componentDidUpdate or componentDidMount, which run synchronously after the DOM updates. useEffect is fine because it runs asynchronously, in another frame.
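
A small sketch of that pattern (MeasuredBox is a made-up component):

import React, { useEffect, useRef, useState } from 'react';

function MeasuredBox({ children }) {
  const ref = useRef(null);
  const [height, setHeight] = useState(0);

  useEffect(() => {
    // runs after the commit, once the browser has had a chance to paint,
    // so reading layout here does not extend the original Javascript event
    setHeight(ref.current.getBoundingClientRect().height);
  }, [children]);

  return (
    <div ref={ref}>
      {children} (measured height: {height}px)
    </div>
  );
}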

Debugging render phase performance with the React Profiler

Among these different types of bottlenecks, one is pretty hard to debug with the Chrome Profiler: the render phase. That's because it's hard to see in the Chrome Profiler what exactly renders, and why it renders.

Thankfully, React Dev Tools comes with a profiler specific to React, as you can see in your developer tools:

This profiler covers the render phase specifically; it won't show you what happens in the update and commit phases. So if you're trying to debug a performance issue, you should first take a look at the Chrome Profiler, and if the issue seems related to rendering, then look at the React Profiler.

Before running the Profiler, let's configure it so that we get more information. Click on the ⚙ icon, then in the Profiler tab, check “Record why each component rendered while profiling”.

Then close the modal and click the “start profiling” button. Perform some actions on your app and click the button again to stop profiling. Here is what we got:

  • Zone 1
    We get one rectangle for each render that occurred during the profiling session. The more yellow the rectangle, the longer the render took. We can click on each rectangle to show, in zone 2, which components were involved.
  • Zone 2
    This is the flame chart for a specific render. It is basically the component tree of your app. Grey components did not render; green and yellow components did, and yellow ones took more time to render.
  • Zone 3
    React implements a priority system for renders. In this pane, you can see the priority level for that event.

By hovering over a specific element, you can see why the component rendered:

Usually, this is either:

  • a prop changed
  • the parent component rendered
  • initial render

If it is an initial render, there's nothing wasteful here. We simply need to generate the virtual DOM to know what to put in the real DOM.

If it is a prop change or a parent render, you may ask yourself: did the component generate a different output this time? Did the UI change at this place while you were recording?

If not, this is probably a useless render. Pay attention to props referential equality, memoize props using useMemo and useCallback, and wrap your component with React.memo to prevent renders triggered by the parent.

With that, you should be able to track down fairly easily the long rendering tasks that make your application slower, and solve them. Long renders often happen in lists of items: for example, updating one item may trigger a render of every item in the list in a non-optimized React application.

Conclusion

We’ve seen an overview of the browser rendering engine pipeline, the different React phases, and how we could observe these inside the Chrome Profiler.

Understanding “how things work” in the browser allows you to perform a strong analysis of why some performance bottlenecks happen. They can occur in one of the three React phases (update, render, commit), or in the browser engine.

We also provided a few solutions for common bottlenecks in React applications. If your bottleneck happens in the render phase, you can debug it more easily with the React Profiler.
