Some of our developers recently came to us with the question: when should I use useMemo in React? That is an excellent question. In this article, we will take a scientific approach: define a hypothesis and test it with real-life benchmarks in React.
Read on to find out what the performance impact of useMemo is.
What is useMemo?
useMemo is one of the hooks offered by React. This hook allows developers to cache the value of a variable along with a dependency list. If any variable in this dependency list changes, React re-runs the computation and re-caches the result. If the values in the dependency list are unchanged since the last render, React returns the value from the cache instead.
This mostly matters on re-renders of a component. When the component re-renders, it fetches the value from the cache instead of looping through an array or reprocessing the data over and over.
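To make the mechanics concrete, here is a framework-free sketch of the caching idea. This is not React's actual implementation (which lives inside the component's internals), but it mirrors the observable behavior: React compares each dependency with Object.is and only recomputes when one changed.

```javascript
// A simplified, framework-free sketch of the caching idea behind useMemo.
// createMemoCell is a hypothetical helper, not a React API.
function createMemoCell() {
  let cached = null; // { deps, value } once something has been computed

  return function memo(compute, deps) {
    // Cache hit: same number of dependencies and each one is
    // Object.is-equal to the previous render's dependency.
    const hit =
      cached !== null &&
      cached.deps.length === deps.length &&
      cached.deps.every((dep, i) => Object.is(dep, deps[i]));

    if (hit) return cached.value; // reuse the cached value

    const value = compute(); // dependencies changed: recompute
    cached = { deps, value };
    return value;
  };
}
```

Calling `memo(() => expensiveWork(x), [x])` twice with the same `x` runs `expensiveWork` only once; changing `x` triggers a recompute, just as a changed dependency does for useMemo.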
What does React say about useMemo?
If we look at the React documentation regarding useMemo, it does not mention when the hook should be used. It simply describes what it does and how it can be used.
You may rely on useMemo as a performance optimization
The question here is, from what point on is useMemo interesting? How complex or big should the data be before we see performance advantages in using useMemo? When should developers actually use useMemo?
Before we start our experiment, let’s define a hypothesis.
Let’s first define the complexity of the object and the processing we want to perform as n. If n = 100, then we need to loop through an array of 100 items in order to get the final value of the memoized variable.
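As a sketch, the workload for a given n could be a hypothetical helper like the following, where the cost grows linearly with n (the exact per-item work in the original benchmark may differ):

```javascript
// Hypothetical stand-in for the benchmark's workload: loop n times and
// collect a derived value for each item, so cost scales linearly with n.
function expensiveCalculation(n) {
  const values = [];
  for (let i = 0; i < n; i++) {
    values.push(i * 2); // any per-item work; doubling keeps the sketch simple
  }
  return values;
}
```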
We then also need to separate two actions. The first action is the initial render of the component. Here it makes no difference whether a variable uses useMemo or not: both versions have to calculate the initial value. Once the first render is done, subsequent re-renders (the second action we need to measure) with useMemo can retrieve the value from the cache; this is where the performance benefit should become visible compared to the non-memoized version.
In all cases, I would expect an overhead of about 5–10% during initial renders to set up the memo cache and store the value. I expect to see a performance loss for useMemo when n < 1000. For n > 1000, I would expect similar or better performance on re-renders with useMemo, but the initial render should still be slightly slower due to the extra caching work. What is your hypothesis?
We set up a small React component as follows, which generates an object with complexity n as described; the complexity is passed in as the level prop.
This is our normal benchmark component; we’ll also make a benchmark component for useMemo, BenchmarkMemo.
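The original listings are not reproduced here, but a minimal sketch of the two components could look like this (the names Benchmark and BenchmarkMemo come from the article; the workload helper and the rendered markup are assumptions):

```jsx
import { useMemo } from "react";

// Hypothetical workload: loop `level` times, as described above.
function expensiveCalculation(level) {
  const values = [];
  for (let i = 0; i < level; i++) {
    values.push(i * 2);
  }
  return values;
}

// Plain version: recalculates on every render.
export function Benchmark({ level }) {
  const values = expensiveCalculation(level);
  return <p>{values.length} values</p>;
}

// Memoized version: recalculates only when `level` changes.
export function BenchmarkMemo({ level }) {
  const values = useMemo(() => expensiveCalculation(level), [level]);
  return <p>{values.length} values</p>;
}
```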
We then set up these components to be displayed when pressing a button in our App.js. We also use React’s <Profiler> to record the render times.
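The App.js listing is not shown here either; one way such a harness could look is sketched below, with a start button, a re-render button that does not touch the dependency list, and a <Profiler> averaging over 10 000 component instances (file layout, prop values, and log format are assumptions):

```jsx
import { Profiler, useState } from "react";
import { Benchmark } from "./Benchmark"; // swap in for the non-memo run
import { BenchmarkMemo } from "./BenchmarkMemo";

const RUNS = 10000;

// actualDuration covers the whole Profiler subtree, so dividing by RUNS
// approximates the average render time per benchmark component.
function onRender(id, phase, actualDuration) {
  console.log(`${id} (${phase}): ${(actualDuration / RUNS).toFixed(4)}ms per component`);
}

export default function App() {
  const [shown, setShown] = useState(false);
  // Bumping this counter forces a re-render without touching `level`,
  // so the useMemo dependency list stays unchanged and the cache stays warm.
  const [, forceRender] = useState(0);

  return (
    <>
      <button onClick={() => setShown(true)}>Start benchmark</button>
      <button onClick={() => forceRender((c) => c + 1)}>Trigger re-render</button>
      {shown && (
        <Profiler id="benchmark" onRender={onRender}>
          {Array.from({ length: RUNS }, (_, i) => (
            <BenchmarkMemo key={i} level={1000} />
          ))}
        </Profiler>
      )}
    </>
  );
}
```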
As you can see, we render the component 10 000 times and take the average render time across those renders. Now we need a mechanism to trigger a re-render of our components on demand without recalculating the useMemo value, so we must not modify any of the values in its dependency list.
In order to keep the results clean, we always start out with a fresh web browser page before starting a test (except for re-renders), to clean out any cache that may still be on the page and affecting our results.
Results with complexity n = 1
The complexity is shown in the left column, with the first test being the initial render, the second test being the first re-render and the final test being the second re-render. The second column shows the results for the normal benchmark, without useMemo. The final column shows the results for the benchmark with useMemo. The values are the average render time over 10 000 renders of our benchmark component.
The initial render is 19% slower when using useMemo, which is a lot higher than the expected 5–10%. Subsequent renders are still slower, as the overhead of going through the useMemo cache costs more than recalculating the actual value.
In conclusion, for complexity n = 1 it is always faster not to use useMemo, as the overhead is always more expensive than the performance gain.
Results with complexity n = 100
With a complexity of 100, the initial render with useMemo becomes 62% slower, which is a significant amount. Subsequent re-renders seem to be slightly faster or similar on average.
In conclusion with a complexity of 100, the initial render is significantly slower, while the subsequent re-renders are quite similar and at best slightly faster. At this point, useMemo does not seem interesting yet.
Results with complexity n = 1000
With a complexity of 1000, we notice the initial render with useMemo becomes 183% slower as, presumably, the useMemo cache works harder to store the values. Subsequent renders are about 37% faster!
At this point we can see a real performance increase during re-renders, but it does not come for free. In conclusion, with a complexity of 1000 we see a bigger performance loss during the initial render (183%), while subsequent re-renders are about 37% faster.
Whether this is already interesting or not will highly depend on your use case. A 183% performance loss during the initial render is a tough sell, but might be justifiable in case of a lot of re-renders in the component.
Results with complexity n = 5000
With a complexity of 5000, we notice the initial render being 545% slower with useMemo. It seems that the more complex the data and its processing, the slower the initial render with useMemo is compared to without it.
The interesting part comes when looking at the subsequent renders. Here, we notice a 437% to 609% performance increase with useMemo on every subsequent render.
In conclusion, the initial render is a lot more expensive with useMemo, but subsequent re-renders have an even bigger performance increase. In case your application has data/processing of complexity >5000 and has a few re-renders, we can see the benefits of using useMemo.
Notes on Results
The friendly reader community has pointed out some possible reasons why the initial render could be much slower, such as not running in production mode. We re-tested all our experiments and found the results to be similar: the ratios stay the same, while the absolute values can be lower. All in all, the same conclusions apply.
These are our results with components having values of complexity n, where the application will loop and add values to an array n times. Please note, results will vary depending on how exactly you are processing data along with the amounts of data. This however should be able to give you an idea of the performance differences with different sizes of datasets.
Whether or not you should use useMemo will highly depend on your use case, but with a complexity of < 100, useMemo hardly seems interesting.
It is worth noting that the initial renders with useMemo take quite a performance hit. We expected a consistent initial performance loss of around 5–10%, but found that it highly depends on the data/processing complexity and can run as high as 500%, roughly 100x the loss we expected.
We re-ran the tests a few times, even after we had our results, and the subsequent runs were very consistent with the initial results we noted down.
We can all agree that useMemo can be useful to avoid unnecessary re-renders, by keeping the same object reference for a variable across renders.
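A framework-free sketch of why the stable reference matters: a value rebuilt on every render is a new object each time, and reference-based checks (such as React.memo's prop comparison or a dependency list) treat it as changed even when its contents are identical. The buildOptions helper below is hypothetical:

```javascript
// Without memoization, a value rebuilt on every render is a new reference
// each time, even when its contents are identical.
function buildOptions() {
  return { pageSize: 20, sort: "asc" };
}

const first = buildOptions();  // e.g. the value from render 1
const second = buildOptions(); // e.g. the same value rebuilt on render 2

console.log(first.pageSize === second.pageSize); // true: same contents
console.log(first === second); // false: different references
```

useMemo would return the `first` object again on the second render (as long as the dependencies are unchanged), so a reference comparison sees no change and a memoized child skips its re-render.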
For the case of using useMemo to cache the actual calculation, where the main goal is not to avoid re-renders in subcomponents:
- useMemo should be used when there is a high amount of processing
- The threshold from when useMemo becomes interesting for avoiding extra processing highly depends on your application
- Using useMemo in cases with very little processing adds extra overhead that can outweigh any gain
When do you use useMemo? Will these findings change your mind on when to use useMemo? Let us know in the comments!