We had a leak! Identifying and fixing Memory Leaks in Next.js


Hi, I’m Joshua. I recently joined the John Lewis Partnership as a Product Engineer in the “Front End Foundations” team, working on waitrose.com within the larger Waitrose Digital team.

We use Next.js to power parts of our digital estate at the Partnership. Next.js is a popular open-source framework for building server-side rendered React applications, and it is generally known for its good performance and out-of-the-box developer experience. However, this can all change when a memory leak occurs: resource use increases, performance decreases, and what was previously a good developer experience quickly becomes frustrating and tedious.

We have been moving to a new micro frontend architecture and recently discovered a server-side memory leak in our production environment. After resolving the issue, we wanted to share some general advice for finding common causes of leaks, plus reveal what caused our own leak and how we fixed it.

What are memory leaks?

A memory leak occurs when your application holds on to memory it no longer needs, so its memory usage keeps growing until it eventually overwhelms the system.

In a healthy application, memory usage fluctuates based on your application usage. More memory is used when more people visit your application. When those people leave, memory is “cleaned up” and released, decreasing overall usage. However, when a memory leak occurs, some of the memory used is never “cleaned up”; over time, there is a sustained upward trend in usage.

It can be challenging to know if you have a memory leak. At Waitrose, we only noticed a memory leak in our application after it had been in our production environment for several days. We use Grafana, an open-source analytics and monitoring tool, to keep track of our applications.

A chart from a Grafana dashboard. There are various lines plotted on the chart all showing an increase in memory usage.
You can see from this zoomed-out graph that we had a sustained upward trend in memory usage, indicating a memory leak.

What can cause a memory leak?

Unfortunately, various code patterns can cause server-side and client-side memory leaks, so it is essential to be aware of some of the most common causes.

Global variables

Variables declared outside of functions typically persist for the application’s lifetime. So, if you push to a global array every time your application processes a server-side request, that array will continue growing. To avoid a memory leak, limit global variables or clear the variable when you have finished with the data.

const express = require('express');
const app = express();

// Global variable
let myGlobalArray = [];

app.get('/', (req, res) => {

  // The global array will increase in size for every request
  // It will not be cleaned up, and will cause a memory leak
  myGlobalArray = myGlobalArray.concat(
    new Array(1000000).fill('Random data')
  );

  // Return my global array
  res.status(200).send(JSON.stringify(myGlobalArray));
});

// To prevent a memory leak, avoid updating global variables for each request
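
For comparison, here is a sketch of the same handler without the leak: the data is kept local to the request, so nothing references it once the response has been sent and the garbage collector can release it.

app.get('/', (req, res) => {

  // Local variable, scoped to this request
  // Once the response has been sent, nothing references it and it can be garbage collected
  const requestData = new Array(1000000).fill('Random data');

  res.status(200).send(JSON.stringify(requestData));
});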

Closures

A “closure” is a function defined inside another function. Closures continue to have access to the outer function’s scope and “remember” the outer function’s variables they use, even after the outer function has returned. That memory cannot be released while something still references the closure, so to avoid a memory leak you need to clear the reference once you have finished with the closure, allowing the garbage collector to release the memory.

function productImages(productId) {

  // This is a variable in the "outer" scope
  const productData = getProduct(productId);

  // This is the closure
  return function getFirstProductPhoto() {

    // This function will continue to have access to `productData`
    // `productData` cannot be cleaned up while this closure is still referenced
    return productData.photos[0];
  };
}

// `productImages` returns the closure function `getFirstProductPhoto`
// We can use this first product photo as a cover photo
let getCoverPhoto = productImages(1234);

// Execute the closure to get our cover photo
const coverPhoto = getCoverPhoto();

// We have our cover photo, we are done with the closure
// To prevent a memory leak, release the closure
getCoverPhoto = null;

Timers and Intervals

Timers and intervals are a common source of memory leaks. The callback you pass to setInterval, along with anything it references, is kept in memory until you call clearInterval; similarly, a pending setTimeout holds on to its callback until it fires or you call clearTimeout. Additionally, be careful of using global variables and closures with setInterval, as they could increase memory usage every time the function executes.

const myArray = [];

// Run an interval every 100 milliseconds
const myIntervalId = setInterval(() => {
  myArray.push('This interval could cause a leak if it is never cleared');
}, 100);

// To prevent a memory leak, clear the interval
clearInterval(myIntervalId);

Event Listeners

If you attach an event listener to an element and then remove that element from the DOM, the listener, and anything it references, can keep that memory from being released. To avoid a memory leak, remove the event listener once it is no longer needed.

const myButton = document.querySelector('#my-button');

function buttonClick() {
  console.log('Button clicked!');
}

// Add an event listener to the button
myButton.addEventListener('click', buttonClick);

// If you removed the button now, the event listener would not be removed
// myButton.remove();

// To prevent a memory leak, remove the event listener first
myButton.removeEventListener('click', buttonClick);

// Remove the button
myButton.remove();
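
The same principle applies in React and Next.js components. As a rough sketch (the component and handler names here are illustrative), a listener added in useEffect should be removed in the effect’s cleanup function so it does not outlive the component:

import { useEffect } from 'react';

function ClickTracker() {
  useEffect(() => {
    function handleClick() {
      console.log('Document clicked!');
    }

    // Add the listener when the component mounts
    document.addEventListener('click', handleClick);

    // The cleanup function runs when the component unmounts, removing the listener
    return () => {
      document.removeEventListener('click', handleClick);
    };
  }, []);

  return null;
}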

Third-Party libraries

Imported packages can introduce memory leaks, as their code could include any of the issues mentioned above. Be especially careful with Next.js: not all packages are designed to work with server-side rendering, and using one that is not can introduce memory leaks on the server.

If you notice a memory leak after installing a new package, investigate any existing GitHub issues or raise one yourself. If you can identify the cause of the memory leak, consider opening a pull request that addresses it. Alternatively, switch to a different package designed for use with server-side rendering.

Issues with Next.js

Next.js itself, or the dependencies pulled in by your particular version, can also have memory leaks; leaks in some Next.js versions have been reported on GitHub. As with third-party packages, investigate any existing GitHub issues or raise one about your specific version. If you are not using the latest long-term support version of Next.js, consider upgrading.

How can I find out what is causing a memory leak?

As we knew our memory leak was server-side, we needed to debug our server-side code. You can debug Next.js server-side code using Chrome DevTools by passing the --inspect flag when starting Next.js. We recommend first building your application for production with next build and then running the application with NODE_OPTIONS='--inspect' next start to more closely resemble your production environment. Initially, we had trouble launching the inspector, which we resolved by upgrading to Next.js v14.

Once your application has started, you can open Google Chrome and visit chrome://inspect. On this page, within the “Remote Target” section, you can click “Inspect” under your application’s name, opening a separate DevTools window to investigate your server-side code with mapped source files. If you don’t see your application, try restarting your application.

Switching to the “Memory” tab of the new DevTools window provides options to profile server-side memory usage, which we will need to identify the cause of the memory leak. By default, “Heap snapshots” are selected. While investigating our memory leak, we followed a process of “Collect garbage” (Trash icon), “Take heap snapshot”, simulate some traffic, and repeat until we had ten heap snapshots.

A screenshot of the Chrome Developer Tools on the “Memory” tab where memory profiling can be selected.

You want to “Collect garbage” between snapshots to release any memory that can still be freed. While the garbage collector runs automatically from time to time, forcing a collection before each snapshot cleans up as much memory as possible, so anything that remains is memory that genuinely cannot be released, which is exactly what a memory leak leaves behind.

You can simulate traffic by opening the client side of your application in a new Google Chrome window and refreshing it a few times. We used Vegeta, an HTTP load testing tool, to simulate two requests per second for 60 seconds (120 requests in total) to our application running locally. You can choose how much traffic to send to your application. As memory leaks can be challenging to spot, the more traffic you simulate, the more likely you are to replicate the issue and identify where the leak originates when comparing snapshots.
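
If you would rather not install a separate load testing tool, a rough equivalent can be scripted in Node.js. This sketch assumes Node 18+ (for the built-in fetch) and that your application is running locally on port 3000; the rate and duration mirror what we used with Vegeta.

// simulate-traffic.js: send 2 requests per second for 60 seconds (120 requests in total)
const TARGET = 'http://localhost:3000/';
const RATE_MS = 500; // one request every 500 milliseconds = 2 requests per second
const DURATION_MS = 60000;

const intervalId = setInterval(() => {
  fetch(TARGET).catch((error) => console.error('Request failed:', error));
}, RATE_MS);

// Stop after 60 seconds, clearing the interval so the script can exit
setTimeout(() => clearInterval(intervalId), DURATION_MS);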

Once you have around ten snapshots, you can select each snapshot individually and switch from “Summary” to “Comparison” (towards the top of the DevTools window) and compare each snapshot with the previous.

You are generally looking for anything that consistently increases between your snapshot comparisons. You can see this from the “Delta” column, which shows the change in object count between the two snapshots. If you sort “Delta” from high to low, it is easier to see what has increased between snapshots. Ideally, you are looking for something that grows in proportion to the number of requests you made to your application, so it is helpful to track how much traffic you sent between snapshots. As memory usage fluctuates, it is also worth comparing your last snapshot against the first, not just the previous one, to confirm an overall increase since you started your application.

A screenshot of the Chrome Dev Tools. A “heap memory” snapshot is selected, showing an increase of 120 Side Effects.

While debugging our Next.js application, we noticed a proportional increase in “SideEffects”. Every request to our application created a new SideEffect. As we simulated 120 requests between snapshots, each comparison always showed the same increase of 120 SideEffects. From running Next.js in inspect mode with source maps, we could select each new SideEffect and determine that the “React Helmet” package created them. We used React Helmet as we ported over the code from our larger monolithic application as part of our migration to a micro frontend architecture. After an online investigation, we discovered that React Helmet had known memory leak issues if used server-side.

The author of React Helmet no longer actively maintains the package, so we replaced it with “React Helmet Async”, a fork that Scott Taylor created while at The New York Times after they ran into the same server-side memory leak.
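
If you are making the same switch, the change is largely mechanical. This is a minimal sketch rather than our actual code: individual components keep using Helmet as before, and the application is wrapped once in a HelmetProvider.

// Before: import { Helmet } from 'react-helmet';
import { Helmet, HelmetProvider } from 'react-helmet-async';

// Wrap the application once, for example in a custom App component
function MyApp({ Component, pageProps }) {
  return (
    <HelmetProvider>
      <Component {...pageProps} />
    </HelmetProvider>
  );
}

// Components keep using <Helmet> exactly as before
function ProductHead({ product }) {
  return (
    <Helmet>
      <title>{product.name}</title>
    </Helmet>
  );
}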

After replacing the package, we repeated the --inspect debugging, and no new SideEffects were created. That gave us a good idea from our pre-production environment that we had solved the issue. However, because memory leaks take time to become noticeable, and because you can unfortunately have more than one leak at once, we were only really sure once we had deployed our changes to production and monitored our Grafana dashboards. We are pleased to say that the changes fixed the memory leak.

If you struggle to find the source of your memory leak, you could try alternative tools such as heapdump or memwatch.

Memory leaks are frustrating and tedious to resolve, and they are incredibly challenging to diagnose in large monolithic applications. This particular leak occurred in one of our new micro frontend applications. Because micro frontends split large monolithic codebases into smaller, more manageable projects, we solved this leak significantly faster than previous memory leaks in our older monolithic application. This is another way a micro frontend architecture can improve your digital estate’s developer experience and stability.

We look forward to sharing more in the future. Thank you for reading!
