What profiling my Universal React-Redux app revealed

My app showed no bad symptoms, so who knew it was hiding a secret issue? In this post I show you how I dug into the running application, and why you should use the same process to look for issues in your own app.

CPU profiling with Chrome Devtools

A little truth that not everybody acknowledges is that you’re unlikely to know what happens in your application until you actually look at it while it is running. It doesn’t matter if you are the developer who wrote most of the application code: today’s applications can be so complex that what happens behind the scenes is hard to picture.

A story that supports this statement happened a week ago, when I started looking into the code that performs the server side rendering of an application. “It’s straightforward and it works fine”, I thought, “after all it’s less than one hundred lines”. But I was just guessing.
What I saw looking at the code was different from what I saw running the code with the proper investigation tools.

Long story short, this process allowed me to track down a mistake I had made that was hurting the application’s performance. Here is how I did it.

Enter profiling

Profiling is the process of capturing significant data while the program is running, so it can later be displayed in a human-readable form. Profiling an application gives you a detailed analysis of what happens; if that’s not enough, you can complement this process with debugging to dig further into your code at runtime.

The server side of my application is based on Express. For Node apps, there are usually two kinds of profiling we can do:

  • Heap profiling, to view and optimise the memory usage of your application at runtime, and to detect memory leaks.
  • CPU profiling, to see how much CPU time is spent in each function, find bottlenecks and optimise performance.

In this post I will focus exclusively on CPU profiling, which is what led me to the hidden bug I didn’t know I had.

There are many tools to profile a Node application, but starting from Node v6.3.0 we can use the excellent Chrome Devtools remotely for this task.

There’s a caveat: in the most recent Chrome versions the profiler has been extended with other metrics under a new Performance tab. Profiling a Node application remotely shows the old interface instead. Not a problem, but it’s worth mentioning to avoid confusion.

Setup profiling

To profile our Node app we need to run it with the --inspect flag to activate the debugger:

node --inspect server.js

You can configure Node to open the debugger on a specific IP and port with --inspect=0.0.0.0:9229, or to start the debugger with a break on the first statement using --inspect-brk (on older Node versions, --inspect combined with --debug-brk).
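For quick reference, these variants look like this (server.js, the host and the port are just placeholders):

# bind the debugger to a specific host and port
node --inspect=0.0.0.0:9229 server.js

# activate the debugger and pause on the first statement
node --inspect-brk server.js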
Node will then start with a message like the following:

Debugger listening on ws://0.0.0.0:9229/d04ffc7a-bc92-43f6-b69b-d0874bf1f1ae

Sometimes you want to profile or debug a running application and you can’t restart it, because you would lose the context that makes profiling interesting. In this case you can send a Unix signal to Node in order to activate the debugger (node-pid is the process ID of the running Node process):

kill -s SIGUSR1 node-pid
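If you don’t know the process ID, something like pgrep can find it for you (a sketch, assuming the process was started as node server.js):

kill -s SIGUSR1 "$(pgrep -f 'node server.js')"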

After the debugger has been activated, open Chrome and type about:inspect in the URL bar. A page like the following appears:

Opening about:inspect.

Under Remote Targets you should see the Node application running. Click on Inspect and a popup will open with the remote Devtools.

If you don’t see anything under Remote Targets, check that Discover network targets is enabled, and also check that the host and port you used while starting the debugger are listed under Configure (host and port should be reachable).

Capturing data

The popup window has a Profiler tab.

The Profiler tab

This is where you start recording data about the CPU at runtime. Clicking on the Memory tab will show you the options to profile the memory usage of your application.

The Memory tab

To start profiling, click on the record button at the top left (the grey circle); it will turn red. Then interact with your application for some time, and click the same button again to stop the profiling and view the captured data.
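To exercise the app while the profiler is recording, you can click around in the browser or generate some traffic from a terminal. A minimal sketch (assuming the app listens on port 3000):

# send 50 requests to the running app while the profiler records
for i in $(seq 1 50); do curl -s http://localhost:3000/ > /dev/null; done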

Analysing the CPU profiler data

With my application running, I clicked on Record to begin the CPU profiling, and Devtools started collecting the stack of functions being executed together with their execution times. At the end of the recording, Devtools generated a profile containing all the collected information, which is then parsed and visualised.

There are three visualisations for this data: the Chart visualisation, the Heavy (bottom up) visualisation and the Tree (top down) visualisation.

The chart visualisation

Chart visualisation

The Chart visualisation shows two mirrored charts on a timeline. The top chart is a smaller and less detailed version of the bottom one. The height at any point is determined by the depth of the call stack at that moment in time.

The lower chart (also called a flame graph) is more interesting because it visualises the function calls made by the application, stacked one above the other. If a function A calls a function B, B is printed below A. You can see at a glance how much time each function takes compared to its caller (remember that you can zoom in to highlight particular sections).
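As a tiny sketch (not code from the app in this post), you would read a profile of code like the following with a on top, b drawn below it and c below b, with c accounting for almost all of the time:

// a calls b, which calls c: the profiler stacks them top to bottom
function c() { let s = 0; for (let i = 0; i < 1e7; i++) s += i; return s; } // leaf doing the actual work
function b() { return c(); }
function a() { return b(); }
a();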

Hovering a function

Hovering over a function gives you more details about it, like its execution time and the file where it is defined.

The Heavy (bottom up) visualisation

Heavy (bottom up) visualisation

The Heavy (bottom up) visualisation shows a tabular list of the functions executed during the profiling. Every row has three columns: Total time, Self time and Function name. When Devtools can retrieve it, the line of code where the function is defined is also shown.

Total time is the time spent inside the function, including the time spent in the functions it calls.
Self time is the time spent inside the function itself, excluding the time spent in its child functions.
For example, if a function runs for 100ms in total and 60ms of that is spent in the functions it calls, its Total time is 100ms and its Self time is 40ms.

You can sort by Total time and Self time to understand which functions are heaviest on your CPU. You’ll notice that some rows can be expanded: a small triangle appears next to the function name. Clicking on it displays the callers of that function (hence the Bottom up naming: you see the leaf first and, by expanding it, you see its callers).

The Tree (top down) visualisation

Tree (Top down) visualisation

The Tree (top down) visualisation also shows a tabular list of functions, but instead of showing all of them it shows only the root functions, the ones not called by other functions. You can then expand each function to display the functions it calls. It’s the opposite of what you see in the Heavy visualisation (hence the Top down naming).

Data analysis and the culprit

Back to my story: switching between the Tree and the Heavy visualisations showed me something weird. I noticed an unexpected function call, apparently not related to my own code.

Heavy view: onStatError

I had no idea what that function was, but clearly the name wasn’t friendly. Moving to the Tree view showed me the same function call, taking 57ms of CPU time.

Tree view: onStatError

I then clicked on the indicator of the line of code where this function is defined and placed a breakpoint. Navigating the app the same way I did during the profiling, I hit the breakpoint and gathered more information.

According to the right panel, this call comes from serveStatic, the Express middleware that serves static files. The error variable at the breakpoint tells me that there is no file or directory called newsflow. This reveals that something must be wrong in my Express configuration: /newsflow is not supposed to be handled by the static files middleware, it’s a request that should be handled by the server side rendering middleware.

Here is the code that contains this error:

app.use(express.static());
app.use('*', () => {
...
})

This code looked correct to me, but only by running the profiler was I able to see that every request to the application was also triggering a file lookup in the static files middleware, generating an avoidable error and wasting precious CPU time. Someone else would probably not have made the same mistake, but are we sure that, just by guessing what the code does, we have the confidence needed to deliver bug-free software?
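For reference, here is a minimal sketch of one way to avoid this, assuming the assets live in a ./public directory and are requested under /assets (both names are placeholders, not the actual configuration of my app). Mounting the static middleware on a dedicated path keeps routes like /newsflow away from serveStatic:

const express = require('express');
const path = require('path');

const app = express();

// Static assets are served only for URLs starting with /assets,
// so application routes like /newsflow never reach serveStatic.
app.use('/assets', express.static(path.join(__dirname, 'public')));

// Everything else falls through to the server side rendering middleware.
app.use('*', (req, res) => {
  res.send('<!doctype html>...server rendered markup...');
});

app.listen(3000);

The exact fix depends on how the app serves its bundles; the point is simply that a route like /newsflow should never trigger a filesystem lookup.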

Conclusion

Guessing what happens when running an application was not enough; worse, it gave me a false sense of confidence. I proved to myself that there is no better way than profiling the app to identify issues and bottlenecks.

My suggestion is to do the same with your apps: in the JavaScript ecosystem we can rely on the excellent Chrome Devtools, profiling is easy and there is no reason to skip it.

Do you have a story about how profiling an app saved you from a bug that was killing performance? Or about how you found a hidden issue that was incredibly difficult to trace?
Feel free to share it in the comments!

If you like this post, subscribe to this publication on Medium to get all the updates and stay tuned. You can also click “Follow” below, next to my account to receive email updates from Medium.
And don’t forget to say “hello” on Twitter, my account is @darioghilardi!