Measuring JS code performance. Part I: using react-benchmark.

Alexander Gusman
7 min read · Sep 20, 2022

--

Photo by Indira Tjokorda on Unsplash

Have you ever argued within your team about whether <code snippet A> should be more performant than <code snippet B>? It is hard to be objective in such disputes without a thorough investigation.

Were you ever in the process of comparing two different libraries in terms of performance? For example, is react-leftpad more performant than react-left-pad😂? In the modern world of library diversity, it is always hard to choose the best one.

So how can you compare the performance of any given JS code?

If your code is simple, and you do not have any dependencies, you can just use JSbench or anything similar to benchmark it easily.

But what if your use case is much more complex and requires 3rd party dependencies? Is it possible to benchmark how fast a given React component is rendered in a real browser? And how much RAM is consumed?

Answers to those questions await you in this series of articles, in which we will cover two different methods to measure the performance of any given JS code snippet.

Practical example: measuring OpenReplay performance penalty

The objective of this article is to teach you how to accurately compare the performance of two different React components. I will show how to do this using a specific problem I was dealing with some time ago.

Once, I was researching how we can improve the bug discoverability and user visibility of our enterprise web app. One of the ideas was to integrate our application with OpenReplay: an open-source session recording tool.

But then a rational question arises: How can we be sure that this 3rd party system does not affect the FE performance? The obvious method comes first: we can integrate it and deploy it to production, and then we will be able to monitor our performance metrics. But this method comes at a cost: First we need to integrate the system, and only then will we know if it is relevant for this use-case.

Are there any ways to measure the performance effect in advance?

Without deploying real code to the end users? Let’s consider two different ideas:

  1. Create a synthetic benchmark in a headless Chrome: a component which renders a lot of DOM nodes (500–25000) should be measured with and w/out OpenReplay enabled. This part is covered in this article.
  2. Measure button interaction speed on a real page of our application: A button should render 500 DOM nodes when clicked, and we can use Chrome DevTools to measure the reaction time of this interaction with and w/out OpenReplay enabled. This part is available in Part II of this blog post series.

Pro tip!

Why is a large number of DOM nodes important? It's because of how OpenReplay works: it takes a snapshot of the DOM when initialized and then watches for DOM node changes with MutationObserver.
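To make the pattern concrete, here is a minimal sketch of DOM observation with MutationObserver. The `summarizeMutations` helper and the observer options are my own illustration, not OpenReplay's actual code; the observer itself needs a browser DOM, so it is shown in a comment while the callback logic stays as plain, testable JS.

```javascript
// Hypothetical illustration of reacting to DOM changes.
// Counts nodes added and removed across a batch of MutationRecords.
function summarizeMutations(mutations) {
  let added = 0;
  let removed = 0;
  for (const m of mutations) {
    added += m.addedNodes.length;
    removed += m.removedNodes.length;
  }
  return { added, removed };
}

// In a browser, a session recorder could wire this up like so:
//   const observer = new MutationObserver((records) => {
//     console.log(summarizeMutations(records));
//   });
//   observer.observe(document.body, { childList: true, subtree: true });
```

The more nodes change per render, the more records the observer's callback has to process, which is exactly why node count matters for the benchmark.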

So to simulate how OpenReplay will work in our enterprise web app, which often has >2000 DOM nodes on the page, I created tons and tons of DOM nodes inside the benchmarks.

Synthetic benchmark

Instrument: react-benchmark

Benchmarking is great, because it is done automatically, and you can easily run 100 tests without any need to perform any manual actions. But what tools should you use?

Of course, you can build something on your own. But before starting from scratch, it is always useful to research what has already been created by the open-source community. And luckily for us, I found a useful tool: react-benchmark.

How should it be used? You just pass any React component to the CLI and the tool:

  1. Builds a JS bundle of your component + code of benchmark.js to run the benchmark
  2. Starts headless Chrome with puppeteer
  3. Runs the test X times (usually around 30)
  4. Displays the result in the console

Yes, this tool was missing some of the features I needed, but c'mon, this is open source, and you can easily contribute to add everything you need.

Pro tip!

There is an interesting problem in RAM measurement which I discovered from this wonderful article by Christoph Guttandin: RAM consumption is not consistent between tests, because of different optimizations in the V8 engine.

If you create an object in JavaScript there is no guarantee on how much memory it will be using. A browser (or its JavaScript engine) may choose to store it in an extremely memory efficient way first, but when you use it heavily the browser might decide to switch to an alternative version which consumes more space on your machine but is much faster to access. Who knows?

Because of that, my RAM measurement PR not only checks how much RAM was occupied by inspecting the JSHeapUsedSize property, but also checks how many objects inherited from Object.prototype existed in RAM at the time.
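For illustration, here is how two heap samples could be compared. The `JSHeapUsedSize` field matches what Puppeteer's `page.metrics()` reports; the `objectCount` field stands in for the Object.prototype-descendant count and, like the `heapDelta` helper itself, is my own sketch rather than the actual PR's code.

```javascript
// Hypothetical sketch: diff two heap samples taken before and after a test
// run. Tracking the object count alongside raw bytes helps compensate for
// V8 choosing different internal representations between runs.
function heapDelta(before, after) {
  const usedBytes = after.JSHeapUsedSize - before.JSHeapUsedSize;
  return {
    usedBytes,
    usedMb: usedBytes / (1024 * 1024),
    objects: after.objectCount - before.objectCount,
  };
}
```

A large byte delta with a small object delta hints at a representation change rather than a real leak, which is why both numbers are worth reporting.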

Test components

For my use case I've created two different components: one renders a massive amount of DOM nodes inside the render function; the other adds the same amount in several batches asynchronously.

To make those examples more interesting, let’s reference a famous US Scouts’ song:

10 bottles of beer on the wall

10 bottles of beer!

Take one down, pass it around

9 bottles of beer on the wall!

An example of a synchronous component implementation:
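The embedded code is not reproduced here, so the sketch below shows only the general shape: a pure `getNodes` helper generating one lyric line per node, rendered in a single synchronous pass. The names and exact markup are my own illustration, not the article's original listing.

```javascript
// Hypothetical sketch of the synchronous component: every lyric line is
// produced in one render pass, so all DOM nodes appear at once.
function bottlesLine(n) {
  return `${n} bottles of beer on the wall`;
}

function getNodes(count) {
  // One lyric line per DOM node, counting down from `count`.
  return Array.from({ length: count }, (_, i) => bottlesLine(count - i));
}

// In the real component these lines become DOM nodes inside render(), e.g.:
//   const BeerWall = ({ count }) => (
//     <div>
//       {getNodes(count).map((line, i) => <div key={i}>{line}</div>)}
//     </div>
//   );
```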

You can run it with react-benchmark --cpuThrottle 4 --ram

An example of an asynchronous component:
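Again, the original listing is not reproduced here; the sketch below only illustrates the batching idea, with names of my own choosing. The total node count is split into chunks that the component would append one timer tick at a time.

```javascript
// Hypothetical sketch of the asynchronous variant: the same total number of
// nodes is split into batches to be appended one setTimeout at a time.
function makeBatches(total, batchSize) {
  const batches = [];
  for (let start = 0; start < total; start += batchSize) {
    const size = Math.min(batchSize, total - start);
    batches.push(Array.from({ length: size }, (_, i) => start + i));
  }
  return batches;
}

// In the real component each batch would be appended via setState inside a
// setTimeout, and the component would signal completion after the last
// batch renders (which is why the --onReady flag is needed), e.g.:
//   setTimeout(() => setNodes((prev) => [...prev, ...nextBatch]), 0);
```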

You can run it with react-benchmark --cpuThrottle 4 --ram --onReady

Yes, I could move the getNodes abstraction to a separate reusable file. But c'mon, nobody should be so strict about the DRY principle, especially for a demo project that will be thrown in the trash tomorrow.

Too many bottles of beer, my friend…

Running it!

For each of the components (sync and async) I’ve run the benchmark six times: two times for 250 DOM nodes, two times for 2500 DOM nodes, and two times for 25000 DOM nodes, to measure performance for different application sizes.

I really like the interface of this CLI tool

Then I repeated the same process, but this time with OpenReplay enabled. By using different tests with different numbers of DOM nodes, we can test the performance effect OpenReplay has in correlation with the number of nodes.

And the winner is…

Photo by Chris Bulilan on Unsplash

Let’s take a look at the results received after running this benchmark twenty-four times in a row.

Sync test result

Then let’s build a graph with those results to visualize it.

To get an even broader picture, let's also run the async test and compare the results.

Async test result

After all the tests are done, it is easy to reach a conclusion: OpenReplay does not increase RAM consumption that much (usually by around 33%). But it increases CPU consumption by 2–3 times depending on the use case, which could be a noticeable performance hit for users with low-end devices.

What have I learned today?

Today, we’ve learned that you can benchmark any given React component automatically, by running it inside headless Chrome with react-benchmark. The result of the measurement is:

  1. CPU benchmark in ops/sec (higher values are better).
  2. RAM benchmark in MB (heap size) and the number of objects inheriting from Object.prototype in memory (lower values are better).

To get more accurate results, please remember these simple rules:

  1. Enable CPU throttling to make test results less dependent on other processes of the OS.
  2. Think about the use case you are benchmarking. If you test an error reporting system, your test component should throw a lot of Error objects. If you test an analytics system, your test component should have a lot of events triggered. If you test some data transforming function, please test it on a really huge data structure.
  3. Close all the applications on your computer before starting the benchmark. Of course, you cannot close everything (e.g. you cannot close Finder, or your firewall installed by the security department), but try to close as many programs as you can.

Also, if you are interested in measuring performance in terms of user action-interaction time, please continue to Part II of this blog post series.
