Your app is a hybrid of native and React Native code. Your CI pipeline takes forever because it builds everything from scratch. As an RN engineer, you sense there must be a better way to save your time. You're not crazy. Let's find a way to make our pipelines smarter.
If you are developing apps with a hybrid native/React Native model, you are likely to face slow builds, tests and deployments in your CI pipeline. Some of your RN engineers are probably complaining about how slow it is compared to pure React Native deployment, which requires no compiling and linking. And your pipeline genuinely suffers from being slow: the build process for native code is complex by nature, and it is further slowed down by native/RN integration tests.
There is an ongoing historical fight between native and React Native developers; part of that is just folklore, as it usually is in IT. But RN developers do have some points. Put yourself in the shoes of those RN guys for once, and note that:
- they don’t have to compile and link their code; at most they byte-compile their bundle, and usually reuse the same old binary, compiled a long time ago;
- they play with RN and see changes applied instantly at run-time;
- they run unit-tests without recompiling;
- they don’t care much about native code, because RN code is boxed into components and is mostly independent from native.
We can confidently assume that RN developers are set up for a quicker development-to-deployment cycle, and would live happily ever after with their code.
If only a giant, clumsy evil had not boldly appeared on the horizon: the Terrifying Pipeline Monster!
That bunch of tremendously slow scripts will probably blight their plans. RN developers will experience long waits on the CI build server because someone once thundered from far behind: “it’s the toll your code has to pay to be safe”. Sure, safe from fast development and delivery, you think.
Have any of the following situations ever happened to your React Native team?
- “We sometimes need to pre-merge all of our branches to a temporary git branch, then merge that into the main develop stream, to reduce the number of jobs in the pipeline and have a chance to go home by 7pm”
- “We baby-sit our merge requests at night to make sure the pipeline does not break because of flaky native tests we actually don’t care about”
- “We check-in less often because we have to sit around for ages waiting for the software to build and the tests to run”
- “Pipelines take a long time to run, and this prevents us from using a fully automatic process to check code quality”
- “The CI process takes so long that multiple commits will have taken place by the time you can run the build again, so you won’t know which check-in broke the build”
- “We experienced peak times of hours waiting for a job to be finished, especially on busy days such as before the code freeze for a release, or when RC-fixing. Those are crucial times for our success; we definitely need a faster response”
If so, please read on. There is probably a way out of this pain.
Fast and slow lanes
I guess every RN developer would intuitively agree that the time spent on rebuilding and retesting unchanged native code is a waste of resources, especially if you develop pure RN code. Equivalent operations for RN code take a negligible time if compared to a native build.
Most of the issues listed above may stem from an inefficient or inappropriate pipeline implementation. Even so, allowing some ‘lucky’ builds to skip native-code operations and focus just on RN would still save a lot of time.
In other words, if it were possible to identify fast jobs (those that do not require rebuilding and retesting of invariant binary code) and put them in a separate queue, that queue’s throughput would increase dramatically, resulting in a faster development cycle for RN engineers.
Let’s then imagine a pipeline with two lanes:
- a fast lane, where jobs that don’t require any recompilation and testing of binaries are queued, and
- a slow lane, the standard, full and exhaustive path, able to build an app from scratch to its final bundle.
If fast jobs were queued to a fast lane, their throughput would dramatically increase: with no unnecessary operations, we would save a lot of time. On the other hand, even the slow queue’s throughput would rise, because the small fast jobs are out of the way.
The next key points then become:
- how can we decide when to mark a job as fast?
- how do we arrange the pipeline to host two lanes and combine their result in a consistent way?
I’ll go bold and say: a job that touches only RN code and assets does not need to rebuild or retest the binary, so it can take the fast lane.
Hold on… is it that simple? Meh. In a way.
Let’s first quickly review the diagram below to refresh how a RN/native app is structured.
The binary makes use of many frameworks (i.e. native libraries with possible media assets); in particular, it also uses the React Native framework. The latter, in turn, loads the RN bundle, a byte-compiled version of your RN source code. The whole infrastructure constitutes the app.
As explained later in more detail, we can fairly say that RN components and native objects are in a loosely coupled relationship. They live in the same ecosystem, but in separate environments: a native object can use several RN components, just as an RN component can use several native views. They communicate via a predefined protocol, which constitutes the only dependency.
Put otherwise, the dependency between the two environments relies on a protocol roughly made of one contract per component, whose integrity is the responsibility of the integration tests. Any change to the protocol requires changing both sides at the same time, that is, in the same merge request. Other than that, objects in different environments can be thought of as independent.
It’s key to read integration tests as the inflection point on which to pivot our lanes, because they guarantee the required behavioural invariance.
So, by ensuring the I/O protocol is honoured by integration tests, we can safely assume that RN components are independent of native objects and vice versa.
As a direct consequence, modifying React Native source code will not affect any binary behaviour based on RN, and vice versa, so we can build the binary and develop RN code independently.
We can seriously cut time short by having a pipeline with a slow lane for the binaries, and a fast lane just for RN code. Let’s see how.
A smarter pipeline
Let’s assume we are developing a hybrid application, where some developers usually work on native code, and others usually work on RN.
The regular (slow) pipeline for native code might look like this:
- RN unit tests. Faster than binary tests, they do not require compilation. Having them run first makes RN development faster, because RN unit tests fail early.
- Compile binaries. Compile, then link. After this phase, we will have a binary ready for testing.
- Native unit tests. Unit tests for the freshly compiled binary run in this phase.
- Integration tests. These include native <-> RN test suites to ensure the I/O protocol between the two is consistent (bridge and view integrations are tested).
- e2e RN tests. Automation tests take place here on simulated/emulated devices. We say RN end to end tests here, but in general terms you might have your own native UI automation tests.
- Merge phase. Upon success of automation tests, the code is eventually merged to the development streamline.
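The ordering above can be sketched as a fail-fast sequence of stages. This is a minimal illustration, not a real CI config: stage bodies are stubs, and in a real pipeline each would invoke your actual tooling (e.g. `yarn test`, `xcodebuild`, your e2e runner).

```shell
#!/bin/sh
# Sketch of the slow lane as an ordered, fail-fast stage sequence.

run_stage() {
  echo "running: $1"
  # A real command would run here; a non-zero exit aborts the lane.
}

slow_lane() {
  set -e
  run_stage "rn-unit-tests"       # no compilation needed, so fail early
  run_stage "compile-binaries"    # compile, then link
  run_stage "native-unit-tests"
  run_stage "integration-tests"   # native <-> RN I/O protocol checks
  run_stage "e2e-tests"           # automation on simulators/emulators
  run_stage "merge"               # merge to the development stream
}

slow_lane
```

Putting the RN unit tests first costs nothing and gives RN developers the earliest possible failure signal.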
This pipeline is good both for native code changes and for modifications to the I/O protocol between RN and native.
On the other hand, if we modified only RN code/assets, the optimised pipeline path is taken. The optimisation skips part of the queue by picking an already validated binary instead of building and testing one from scratch.
The retrieved binary is guaranteed to have passed all tests by definition, because it was built using the slow lane. It is supposed to adhere to the same I/O protocol as the RN bundle it will be tested against; we verify that in the integration test phase of the pipeline to enforce this assumption.
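The fast/slow decision itself can be as simple as inspecting the merge request’s changed files. The sketch below assumes a hypothetical repository layout where RN sources live under `src/` and RN assets under `assets/`; anything else is treated as native and forces the slow lane.

```shell
#!/bin/sh
# Decide the lane for a change set by inspecting its file paths.
# src/ and assets/ are assumed (hypothetical) RN-only directories.

lane_for_changes() {
  for f in "$@"; do
    case "$f" in
      src/*|assets/*) ;;          # pure RN change: still a fast candidate
      *) echo "slow"; return 0 ;; # a native (or unknown) file was touched
    esac
  done
  echo "fast"
}

# In CI you would feed it the merge request's diff, e.g.:
#   lane_for_changes $(git diff --name-only origin/develop...HEAD)
```

Being conservative here matters: any path you cannot classify should fall into the slow lane, never the fast one.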
The smart pipeline performs the following steps:
- RN unit tests. These tests do not need any binary because they can run in a pure React environment. So let’s run them first!
- Combine phase. Upon tests success, a relevant precompiled binary is picked from the build server and its RN bundle is replaced by the current RN code. The new RN bundle can thus be tested against an already validated and (supposedly) compatible binary.
- Integration tests. Precompiled integration tests must run, from the native side, to ensure that the new RN code does not break the adopted I/O protocol between RN and native. This is where we assess the hypothesis of binary compatibility. Executing this phase in both lanes is of the utmost importance: it makes sure the inflection point where the two lanes pivot stays consistently invariant. I’ve seen similar ideas fail just because they were missing this bit.
- e2e tests. Automation tests are performed on simulated/emulated devices.
- Merge phase. Upon success, the code is merged.
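The combine phase is the only genuinely new step, and its core is tiny: take an unpacked, already validated app package from the slow lane and swap in a freshly built RN bundle. In the sketch below, all paths and the bundle name (`main.jsbundle`) are hypothetical placeholders for your project’s real layout.

```shell
#!/bin/sh
# Sketch of the fast lane's combine phase.

combine() {
  prebuilt_dir="$1"   # unpacked binary from the latest slow-lane build
  new_bundle="$2"     # bundle just built from the current RN source
  cp "$new_bundle" "$prebuilt_dir/main.jsbundle"
}

# A fast lane would then roughly do:
#   yarn test                          # RN unit tests first
#   npx react-native bundle ...        # build the new RN bundle
#   combine app.prebuilt main.jsbundle # combine phase
#   ...then run integration and e2e tests against the combined app
```

Everything after the copy is validation: the integration and e2e suites are what turn “supposedly compatible” into “verified compatible”.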
Finding the right candidate
It’s now time to look at one of the most delicate actions in this deployment model: choosing the candidate binary to combine with the fresh RN code. How to do it depends on the adopted deployment strategy.
As an example, let’s assume your teams push their branches against a develop streamline, so all merge requests flow into that branch. This is indeed one of the most common scenarios.
Let’s first follow the full path of the pipeline, through the slow lane. This lane will perform an exhaustive build, and if successful will deploy a working build. The produced binary is by definition compatible with the RN bundle it was deployed with.
Because your merge requests go against the develop branch, then, regardless of your changes in RN, the I/O protocol between RN and native must adhere to the latest binary code in the develop branch.
As a consequence, the right binary to retrieve is actually the latest made available by the slow lane of the pipeline. Simple, and effective.
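“Latest from the slow lane” is easy to implement if the slow lane archives its output under a predictable name. The sketch below assumes a hypothetical convention of `<artifacts-dir>/develop-<unix-timestamp>.app.tgz`; since Unix timestamps are fixed-width, a plain lexical sort also orders the archives chronologically.

```shell
#!/bin/sh
# Sketch: pick the newest validated binary archived by the slow lane.
# The develop-<unix-timestamp>.app.tgz naming scheme is an assumption.

latest_candidate() {
  ls "$1"/develop-*.app.tgz 2>/dev/null | sort | tail -n 1
}

# Usage in the fast lane (paths hypothetical):
#   candidate=$(latest_candidate /builds/artifacts)
```

If your CI server already exposes a “latest successful build” artifact per branch, that API is an even simpler way to achieve the same thing.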
Digression on objects composition
React Native and platform native objects are loosely coupled by nature.
Communication happens between different threads: native and RN talk asynchronously via property exchange and the RN event bus. Passed objects are serialised/deserialised as JSON, and communication happens mostly via promises.
React Native and native objects are thus two worlds apart that depend only on the I/O protocol (made of properties and events). In other words, we just need to make sure that meaningful integration tests are in place between native and React Native, checking that all our objects correctly adhere to the adopted I/O protocol.
If integration tests for the I/O protocol fail, then either RN or native broke the contract. If successful, there should be no need to worry about how native and RN components are composed one on the other, because the contract is satisfied.
To see that from another point of view, let’s try to compose native objects and RN components. An application can mix up native and RN code in different hybrid styles, which we can summarise into two significant categories:
- RN provides a set of components to native, pretty much as a library or framework, that are used to build up native views;
- or the opposite, native code provides a set of objects to RN, to be consumed by RN. This resembles a framework, though roles are actually reversed.
Spotting a framework-like structure in these models is a way to understand there are two independent opponents playing the same game. We can test them apart, pretty much as we would test and deliver an isolated framework or library, which is by definition distinct from the application, as well as its test suites.
Note that, eventually, a more complex model is just a composition of the two. As an example, you might have an RN component with a native subview, which in turn has an RN component as a subview. But decomposing that to the minimal level, you would get just two cases:
- RN importing a native object
- Native importing a RN component
And these are, in the end, just the base cases for the integration tests required to check the I/O protocol conformance of the imported object.
While your actual pipeline and deployment strategy might look different from the one I showed here, my main objective was to outline the picture and give you the main idea.
There have been some simplifications, for the sake of readability, but I’m happy to elaborate further or integrate some other material if someone is interested.
As a suggestion, I would recommend running the optimised pipeline only to speed up development work. Everything else, especially builds going to real testers and eventually to the store, should go through the slow lane.
Please let me know about your experiments and further suggestions!
- Humble, J., Farley, D., Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Addison-Wesley, 2010.
Originally published at https://www.linkedin.com.