React Native’s bridge, under the hood

Sam Robinson-Adams
7 min read · Mar 18, 2022

React Native is widely used because it lets developers write familiar, declarative markup — much like, though generally not identical to, the React code they write for the web. Its performance is generally regarded as ‘tolerable’, but it tends to operate at the boundaries of feasible performance, carrying a gigabytes-large ballast of memory, with particularly expensive work — like transitioning to a new screen, or animating a view — requiring handwritten native code.

But what is React Native doing that allocates so much memory, and does so much work? Why is it slower than native code? Can it theoretically be equally performant — or more?

This is a deep dive into React Native’s internals: the scary C++ code that lives inside /facebook/react-native. After reading this, I hope you’ll come away with an understanding that lets you really dig into your app’s performance. And, equally importantly, a familiarity with where the important discussions take place, and how you can keep up with the project’s direction and the semi-secret hacks that make your app perform well.

A bridge to nativeland

React Native, at its simplest, basically consists of a for loop which calls your topmost function 60 times per second. Let’s call that function App(). That App() function, in turn, calls all the functions within it. And, when this very deep function stack finally reaches the bottom, App() returns a deeply nested object containing everything to be rendered to the screen. This is your application code, the code that you yourself write.
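
To make that mental model concrete, here is a toy sketch (a deliberate simplification: not React Native’s real scheduler, element types, or render loop):

```ts
// A toy model of the render loop described above (purely illustrative).
type RNElement = { type: string; props: Record<string, unknown>; children: RNElement[] };

// Your topmost function: returns a deeply nested description of the whole UI.
function App(): RNElement {
  return {
    type: 'View',
    props: {},
    children: [{ type: 'Text', props: { value: 'Hello' }, children: [] }],
  };
}

// Stand-in for the renderer discussed in the next section.
function render(tree: RNElement): void {
  console.log(JSON.stringify(tree));
}

// Each 'tick' (roughly 60 per second), call App() and hand off the resulting tree.
setInterval(() => render(App()), 1000 / 60);
```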

The process by which this huge UI-describing object is rendered to the screen is a little more complex. React Native is this: a native app that starts a JS interpreter and calls a main function with some initial arguments; you can see this in your AppDelegate.m or MainActivity.java. That JS program uses the React library, the same way any other JS program might. But here, React is wrapped by the React Native library, which contains some special code intended to communicate with the (quite separate) native code that create-react-native-app generates in your iOS and Android repos.

The main difference here is in what happens when your component tree is rendered. For both web and native, React calls your root component, which recursively calls all the sub-components — and transitive sub-components — eventually returning a massive nested tree that specifies what should be rendered at this specific ‘tick’ (there are 60 ticks a second).

In normal web React, that tree is rendered to the web DOM via web APIs that append divs and ps and suchlike. The principle is the same in React Native, but, because no ‘native DOM API’ exists for JavaScript, React Native essentially creates one. This is what React Native does on top of React: it implements a generic DOM for JavaScript, building on the existing native APIs of both iOS and Android (and theoretically some other platforms, used by approximately 17 people).

React Native’s DOM API is based on a structure called the ‘bridge’. This is just a FIFO — first-in, first-out — queue. Think of a conveyor belt, as opposed to a stack of plates. A JS thread named mqt_js runs continuously, taking the output of the render function — that enormous nested object representing your current layout, down to the output of the tiniest <Text/> functions — and adding it to a queue. If the last queue flush was more than 5ms ago, this code will call global.nativeFlushQueueImmediate, a C++ function bound to that name in JSCore when the JS runtime was set up.
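
Roughly speaking, the enqueue-and-flush logic behaves like the sketch below. This is a paraphrase of the behaviour just described, not code copied from MessageQueue.js; the names and queue shape are approximate.

```ts
// Simplified sketch of the batched bridge queue (approximate, not the real code).
// nativeFlushQueueImmediate is the C++ binding installed when the JS runtime is set up.
declare const nativeFlushQueueImmediate: (queue: unknown) => void;

const MIN_TIME_BETWEEN_FLUSHES_MS = 5;

// The batched queue: [moduleIDs, methodIDs, params, callID]
let queue: [number[], number[], unknown[][], number] = [[], [], [], 0];
let lastFlush = 0;
let callID = 0;

function enqueueNativeCall(moduleID: number, methodID: number, params: unknown[]): void {
  queue[0].push(moduleID);
  queue[1].push(methodID);
  queue[2].push(params);
  queue[3] = callID++;

  const now = Date.now();
  if (now - lastFlush >= MIN_TIME_BETWEEN_FLUSHES_MS) {
    const toFlush = queue;
    queue = [[], [], [], callID];
    lastFlush = now;
    nativeFlushQueueImmediate(toFlush); // hop across to C++
  }
}
```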

[Diagram] Two asynchronous bridges between three threads: JS talks to C++ asynchronously through shared memory (JS → C++) and callback invocation (C++ → JS); C++ talks to native (ObjC/Java) by FFI both ways; _callID identifies the message through the pipeline.

Let’s stop here and take a step back. This interface between JS and C++ (recall that the C++ code is shared between iOS and Android, runnable on both) is asynchronous. Each message sent from JS to C++, via shared memory, has a _callID, initialised to zero at launch and incremented each time a message is enqueued. It is this _callID, carried all the way into the ‘native’ call from C++ to ObjC/Java, that allows JS to map each message dumped onto the queue to the callback that C++ eventually invokes once the message has finally finished processing in nativeland.

The ‘bridge’ between C++ and native is not terribly interesting. The callNativeModule function FFIs into native asynchronously (returning immediately without receiving a result), and eventually invokeCallback is called with a callbackID whose rightmost bit indicates success (1) or failure (0) — inverting the old C convention — and whose remaining bits contain the _callID mentioned above. That callbackID is returned to JS, bit-shifted to recover both values, and processed accordingly.
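
In other words, the callbackID is the _callID shifted left by one, with the success flag packed into the low bit. Here is a reconstruction of that packing for illustration (my own sketch of the convention, not code lifted from the repo):

```ts
// Reconstruction of the callbackID packing described above (illustrative).
function encodeCallbackID(callID: number, success: boolean): number {
  return (callID << 1) | (success ? 1 : 0); // low bit: 1 = success, 0 = failure
}

function decodeCallbackID(callbackID: number): { callID: number; isSuccess: boolean } {
  return { callID: callbackID >>> 1, isSuccess: (callbackID & 1) === 1 };
}

// decodeCallbackID(encodeCallbackID(42, true)) -> { callID: 42, isSuccess: true }
```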

So here we have two asynchronous bridges between three (among more) threads. JS communicates with C++ asynchronously through shared memory (on the JS->C++ side) and callback invocation (on the C++->JS side). C++ communicates with native (ObjC/Java) by FFI both ways. _callID is the tracer ID which identifies the same message through this whole pipeline.

Traffic jams and JSON spam

All communication from your JS code, anything that is not pure JS computation, is sent over this queue: not only render instructions, but also timer calls (JS relies on native to schedule and fire its timers), network and disk I/O, etc.

All this bridge traffic predictably causes a tremendous traffic jam at the very point — generally during transitions and user interactions — when your app is most busy trying to compute and communicate render instructions, 60 times per second. Those render instructions will be stuck in traffic, with plenty being dropped due to being stale, accounting for the janky UI you’re likely to be seeing.

This behaviour can be seen in X-ray vision by installing Detox Instruments on your React Native app and observing the bridge traffic at the point of your UI transition. If you’re reluctant to set about adding build phases in Xcode, you can use react-native-devtools-spy (itself a timesaving wrapper around the spy method in react-native/Libraries/BatchedBridge/MessageQueue which you can use yourself) to log all the messages being passed through the MessageQueue, both js->native and native->js.
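
If you would rather wire up the spy yourself, something like the following works; the import path and callback shape reflect recent React Native versions and may shift between releases.

```ts
// Log every message crossing the bridge, in both directions (dev builds only).
// MessageQueue is an untyped internal module, so we require() it and type the
// spy callback ourselves.
const MessageQueue = require('react-native/Libraries/BatchedBridge/MessageQueue');

type SpyInfo = { type: number; module?: string | null; method: string | number; args: unknown[] };

if (__DEV__) {
  MessageQueue.spy((info: SpyInfo) => {
    const direction = info.type === 0 ? 'native -> JS' : 'JS -> native';
    console.log(`${direction}: ${info.module ?? '?'}.${info.method}`, info.args);
  });
}
```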

What Is To Be Done? Burning Questions of Our Movement

The core team are gradually making changes to phase out the bridge, but this is a slow process. The new JavaScript Interface (JSI) introduces a global namespace shared between the JS and native threads (currently, the reader will recall, memory is shared only between JS and C++). Native code will be able to declare a given function foo(a, b) in the global namespace, which can be accessed within JS at global.foo("hello", "world").
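
On the JS side, that looks roughly like the snippet below. foo here is purely hypothetical; the point is that the call is an ordinary, synchronous function call, with no queue in between.

```ts
// `foo` is a hypothetical host function installed on the global object by native
// code through JSI; it is not a real React Native API.
declare global {
  var foo: ((a: string, b: string) => string) | undefined;
}

// A plain, synchronous call: no bridge queue, no JSON serialisation.
const greeting = globalThis.foo?.('hello', 'world'); // globalThis is `global` in React Native
console.log(greeting);

export {}; // keep this file a module so the `declare global` block applies cleanly
```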

If you think of Go’s concurrency mantra — “share memory by communicating, don’t communicate by sharing memory” — this flips it around and communicates by sharing memory. This, for React Native, is a tremendous improvement. That being said, it’s likely to take years for these improvements to fully materialise.

What Is To Be Done (For Now)?

Your best strategy, independent of React Native releases, is to decouple state reconciliation from UI logic. What I mean by this slightly obscure technobabble is:

  • When your UI is changing, and you need some data to populate it, read only from the cache.
  • Separately, a periodic background job should be responsible for keeping your local state up-to-date with your remote state (i.e. your database).

Implementing periodic low-priority background tasks in React Native is, surprisingly yet unsurprisingly, still quite difficult. Your simplest option right now is to use React Native’s InteractionManager API, which can be called within a setInterval callback to schedule a block of code to run at the next available opportunity, once all important UI work has been handled.
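
A minimal sketch of that pattern follows; syncRemoteState and the 30-second interval are illustrative assumptions rather than a prescribed API.

```ts
// Periodic low-priority sync: defer the work until interactions and animations
// have finished, so it never competes with render traffic on the bridge.
import { InteractionManager } from 'react-native';

const SYNC_INTERVAL_MS = 30_000; // assumption: refresh every 30 seconds

async function syncRemoteState(): Promise<void> {
  // e.g. fetch the latest data from your API and write it into your local cache.
}

setInterval(() => {
  InteractionManager.runAfterInteractions(() => {
    void syncRemoteState();
  });
}, SYNC_INTERVAL_MS);
```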

Jury-rigged solutions like third party thread packages are discouraged, since they solve the issue of concurrent computation in JS, but not concurrent communication with the native thread, which is our main concern here.

Skipping the queue

This may be a bit advanced for most purposes. If you’re not interested in spending days hyper-optimising your code, you probably ought to call it a day right here.

For those willing to dive into React Native’s internals, there are ways to implement the state synchronisation logic I described above without using the bridge at all. One solid candidate is the new Turbo Modules functionality, which lets you communicate with native code via direct FFI — or rather JSI, the interface written for Fabric, the ongoing rewrite of React Native’s rendering layer.

This FFI/JSI is obviously not recommended for classic render instructions, but rather for our second lane of traffic: requesting new data from the server, and then sending it back to native code to be written to disk. Of course, if you’re proficient in both Swift|ObjC and Java|Kotlin, and unconstrained by any need to interface with React state, you can equally implement all of the state synchronisation logic in native-only code.
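
For illustration, the JS-side spec of such a Turbo Module might look roughly like this; the module name and its methods are hypothetical, and the exact spec format depends on your React Native version and codegen setup.

```ts
// Hypothetical Turbo Module spec for the state-sync lane described above.
// 'NativeStateSync' and its methods are illustrative, not a real module.
import type { TurboModule } from 'react-native';
import { TurboModuleRegistry } from 'react-native';

export interface Spec extends TurboModule {
  fetchAndPersist(endpoint: string): Promise<boolean>; // native does the network call and disk write
  readCached(key: string): Promise<string | null>;
}

export default TurboModuleRegistry.get<Spec>('NativeStateSync');
```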

If you’re excited by the idea of working on hard technical problems like this, Finimize is currently hiring. We’re on the lookout for engineers who are motivated by solving problems and scaling our product to the next million users. Check out our careers page for current opportunities!
