Profiling React Server-Side Rendering to Free the Node.js Event Loop

Túbal Martín Pérez
Expedia Group Technology
Feb 18, 2019

At HomeAway, our front-end applications are isomorphic/universal React/Redux applications running on Node.js instances using hapi as the server framework. That means we render all the HTML on the server and serve it to the client (a web browser), and when the client receives the HTML, React hydrates the received HTML in order to make it interactive for the user.

Server side rendering (SSR) an application offers several advantages over client side rendering (CSR). Some of those advantages are:

  • Users get to see the content earlier, especially the critical-path content.
  • SEO performance is consistent.

But SSR also comes with caveats:

  • Slower time to first byte (TTFB).
  • Lower server throughput. The longer it takes to process and render a page, the lower the server throughput.

In this post we’ll focus on improving the server throughput a bit by reducing the event loop lag when server side rendering a React application.

Profiling tools

We’ll make use of the node-clinic and autocannon npm packages to profile the application I developed with my HomeAway team, which is responsible for merchandising landing pages (MLP).

To keep this article short, we won’t explain how to use node-clinic and understand its multiple outputs. The node-clinic official website and the Flame Walkthrough are good sources for more detail.

It’s all about not blocking the Node.js event loop

The authors of Node.js warn in its documentation about blocking the event loop or the worker pool. Since Node.js uses a very small number of threads (1 main thread + 4 libuv worker threads by default) for all client requests, it’s critical not to block those threads with long-running or CPU-intensive tasks. They also remind us Node.js is fast when the work associated with each client at any given time is “small.” Keep this in mind.
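To make that guideline concrete, here is a minimal, self-contained sketch (illustrative only, not from the MLP codebase) contrasting a long synchronous loop with the same work split into small chunks that yield back to the event loop via setImmediate():

```javascript
// Synchronous version: nothing else (timers, I/O callbacks) can run
// until the whole loop finishes. This is "blocking the event loop".
function sumBlocking(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

// Chunked version: do a slice of work, then let setImmediate() hand
// control back to the event loop before the next slice runs.
function sumChunked(n, chunkSize, done) {
  let total = 0;
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, n);
    for (; i < end; i++) total += i;
    if (i < n) setImmediate(step); // yield so pending callbacks can fire
    else done(total);
  }
  step();
}
```

Each call to step() keeps the per-tick work “small,” which is exactly what the Node.js guideline above recommends.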

If you’d like to learn more about how Node.js internals work, I found that this article and the rest of its series explain their mechanics well.

React v16 offers several methods to render a React application to HTML markup. The renderToString() method is the most widely used and best known. Before we began profiling the event loop, our Node applications were using that method to render React to HTML on the server.

As we observed after trying the excellent node-clinic profiling suite, renderToString() turns out to be a synchronous operation that blocks the event loop for a considerable amount of time, because server-side rendering a React application isn’t cheap. So we looked for an event-loop-friendlier way of rendering React and realized React v16 already offered a renderToNodeStream() method, which uses Node.js streams under the hood.

In Node.js, streams belong to the family of asynchronous, non-blocking I/O operations, so this made sense from the very beginning, but we still needed to put both methods to the test and check for ourselves whether there was any difference at all.

Profiling time

So we profiled our MLP application using both rendering methods, one at a time, modifying just the rendering part a bit.

This is a barebones version of the code we use to render React in Node.js using renderToNodeStream():

We profiled the app and used clinic flame to produce MLP’s SSR flame graph for the implementation based on React’s renderToString():

MLP’s corresponding SSR flame graph when React’s renderToNodeStream() is used:

Note that in both flame graphs, only the app button is selected in order to display exclusively the MLP app code being run.

Let me explain what the colored boxes mean before getting to the analysis of the flame graphs. With respect to server-side rendering React, two operations always take place:

  1. Creating the React Virtual DOM (VDOM) tree, that is, executing all the React.createElement calls that create objects representing the user interface. It's a synchronous, blocking operation. The box colored in yellow represents the components that took quite some time to process. Not all of the components the MLP app imports are represented inside that stack.
  2. Converting or rendering that VDOM tree to an HTML string using renderToString(), renderToNodeStream(), or other methods. The green box contains this operation, which is more expensive than creating the VDOM tree. That is the focus of this article.

Both operations are expensive and take time. They are usually the most expensive work Node.js performs in a universal React application.

If we take a look at the first flame graph using React’s renderToString method, node-clinic reports that rendering the React Virtual DOM tree to an HTML string is an operation that sits at the top of the stack 11.4% of the time. That means it is actually blocking the event loop.

Quoting node-clinic authors:

If a function is frequently observed at the top of the stack, it means it is spending more time executing its own code than calling other functions or allowing function callbacks to trigger.

In Node.js only one function can execute at any one time (ignoring the possibility of Worker threads). So if a function takes a long time to execute, nothing else can happen, including the triggering of I/O callbacks. This is the essence of the phrase “blocking the event loop.”

Now let’s take a look at the second flame graph, which uses React’s renderToNodeStream method. The story there is completely different: the previously wide red stack (the green-colored box) has been replaced by a narrow pink stack. Rendering the React Virtual DOM tree to an HTML string no longer blocks the event loop as heavily (it’s no longer observed at the top of the stack), thanks to the asynchronous I/O nature of Node.js streams.

Instead of rendering the whole React application in one shot, blocking other operations/callbacks until it’s done, rendering it is now happening in smaller chunks when the event loop is available, without blocking it.

Soon after we noticed this behavior, we deployed a new version of the MLP application to production using React’s renderToNodeStream method. We observed that the event loop lag got cut roughly in half compared to the previous version deployed, which used the renderToString method. Overall, we observed a 10% improvement in server concurrency/throughput down the funnel.
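Event loop lag itself is easy to observe. Here is a simple, hand-rolled monitor in the spirit of what lag-metrics libraries do (a sketch, not the instrumentation we used in production):

```javascript
// Schedules a timer at a fixed interval and reports how late each tick
// actually fires. Long synchronous work (like renderToString on a big
// tree) shows up as large lag values.
function monitorEventLoopLag(intervalMs, onLag) {
  let expected = Date.now() + intervalMs;
  const timer = setInterval(() => {
    const now = Date.now();
    onLag(Math.max(0, now - expected)); // milliseconds the tick was delayed
    expected = now + intervalMs;
  }, intervalMs);
  timer.unref(); // don't keep the process alive just for monitoring
  return timer;
}
```

Newer Node.js versions (12+) also ship perf_hooks.monitorEventLoopDelay(), which measures the same thing with histogram precision.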

Conclusion

By reducing the amount of time the Node.js event loop is blocked, we reduce Node.js event loop lag and allow the server to process more events/callbacks, that is, more “requests” in the same time window.
