“Can it be done in React Native?” — a case for declarative gestures and animations
Not long after I started developing with React Native, I began to look at the apps on my phone in a very different way. For every delightful user experience, I would wonder: can we implement the same experience using primitives from the React Native world? This is how the “Can it be done in React Native?” YouTube series was born.
In each episode, we look at a user experience from an app people know and love, like Instagram, Snapchat, and so on. We discuss what makes the example exciting to look at, how we would implement it using React Native, and finally we write the actual implementation. The source code of every episode is also available on GitHub.
In this story, we will state the rule of the game: what qualifies something as doable or not doable in React Native? Then we will go over some of the recipes and common patterns that emerged while doing the show.
🎲 The rule of the game
“When speaking of animations, the key to success is to avoid frame drops”.
In his talk at React Europe 2018, Krzysztof Magiera made a strong and elegant case for declarative gestures and animations in React Native. I strongly recommend you check it out.
In the example above, we overlay two components, one of them a ScrollView. The y animation value from the ScrollView drives the animation of the cards. Everything is executed on the native thread, providing a butter-smooth user experience even on low-end Android devices. Life is good. But of course, life is not always that easy. Sometimes we do need to cross the React Native bridge. And when we do, we need to be clever about it.
In the example above, we need to update the text value inside each circle. The first step is to use setNativeProps in order to avoid re-rendering on the React side. On top of that, we also need some sort of scheduling, depending on the type of effect we are trying to achieve: throttle(), for instance.
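As a rough sketch of that scheduling idea (the throttle window and the way the update is wired are assumptions for illustration, not the episode's exact code):

```javascript
// A minimal throttle: the wrapped function runs at most once per `wait`
// milliseconds, so we do not flood the bridge with updates.
function throttle(fn, wait) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}

// In a component, you might then push values imperatively, bypassing
// React's render cycle (textRef and the 100 ms window are hypothetical):
//
// const updateLabel = throttle((value) => {
//   textRef.current.setNativeProps({ text: `${Math.round(value)}%` });
// }, 100);
// animatedValue.addListener(({ value }) => updateLabel(value));
```

The point is that each bridge crossing has a cost, so we bound how often it happens rather than forwarding every frame.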
Or sometimes being clever means finding a way to not cross the bridge at all.
In the case of the play button from the Headspace meditation app, we first thought of a solution that would require us to cross the bridge between the JS and UI threads. The circle is defined as an SVG path; we have different versions of the circle, and we use SVG path morphing with some scheduling in order to display the current state of the circle. This works great on iOS because iPhones are powerful devices, but it looks somewhat clunky on low-end Android devices. Can we avoid crossing the bridge in this case? One solution would be to build the animation with Adobe After Effects and use Lottie to run it in React Native. Or, as a subscriber of the channel suggested, we can achieve the same effect by overlaying four circles on top of each other and applying slight translations to each of them. That way, we get a great result on every device.
Below are some of the recipes and common patterns that have emerged so far while doing the show.
Gestures in React Native have a barrier to entry because the APIs are somewhat low-level, unless you can use a higher-level construct like a ScrollView. The ScrollView component gives you a lot of things for free: gesture state handling, snap points, and so on. If you can use a ScrollView for your gesture, life is easy. And you can use a ScrollView for more gestures than you might think.
Above is an example from the Flutter demo app. It looks overwhelming at first: how would we implement this? It looks complex. But with hidden ScrollViews, we get a lot of things for free. We use two hidden nested ScrollViews to drive both the x and y animation values. And in the Section component, we build interpolations from these two values. Lots of interpolations are needed, but they are all simply derived from x and y. Suddenly the example doesn't look so hard to implement anymore.
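To make the "everything is derived from x and y" idea concrete, here is a plain-JavaScript version of the kind of linear interpolation Reanimated evaluates natively; the input and output ranges below are invented for the example:

```javascript
// Map a value from an input range to an output range, clamping at the
// edges. This mirrors the behavior of an interpolate() node: every
// derived style is a pure function of the driving animation value.
function interpolate(value, inputRange, outputRange) {
  const [i0, i1] = inputRange;
  const [o0, o1] = outputRange;
  const t = Math.min(Math.max((value - i0) / (i1 - i0), 0), 1);
  return o0 + t * (o1 - o0);
}

// e.g. fade a section header out over the first 100 px of scroll:
// const opacity = interpolate(y, [0, 100], [1, 0]);
```

Once every visual property is expressed this way, the whole screen is just a set of pure functions of the two scroll offsets, which is what makes the example tractable.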
In the series, we use hidden ScrollViews in a lot of examples and they will be used in future episodes in ways that might look surprising.
React Native GestureHandler and Reanimated
Without react-native-gesture-handler and react-native-reanimated, the spectrum of what can be done in React Native would be quite small and my videos would be quite boring. We use these libraries to declare complex gestures and animations that run at 60 fps on any device. They are replacements for the Gesture Responder System and Animated APIs from React Native.
React Native Gesture Handler is not only easier to learn and use than the Gesture Responder System API; it is also a declarative API for gestures that leads to much better performance, even in the simplest examples.
React Native Reanimated is a low-level abstraction to deal with Animation values. There is definitely a learning curve with Reanimated but using this library is extremely rewarding. In the example folder of the library, you will find many examples to learn from and components that you can reuse in your own app.
React Native SVG also plays an important role in expanding the spectrum of what is doable in React Native. Not only is the SVG support provided by this library great, but it also plays extremely well with animated values.
Above is the implementation of the refresh button from the Google Chrome app. We use Gesture Handler for the gesture and Reanimated to animate the SVG ellipse. Such an example can look daunting at first. The gesture drives the ellipse animation, but there is also a transition driving the animations when going from one button to the other, even though the gesture has not been released yet. How do we go about building such an example?
First, we write imperative code that uses setNativeProps to set the correct values on the ellipse. We build the values case by case: if the active index is 0, these are the animation values; if the active index is 1, these are the animation values; and so on. From there, we can look at the common patterns in our code and factor them out so that the animation values can be inferred for any index. That makes our code look much cleaner. Once this works, we can finally remove our imperative code and translate it into a Reanimated declaration. Now we are almost there, and we can look at the transition effect. We will see that our code depends on two animation values, one of them being center. And instead of treating these values as discrete, we can treat them as continuous. So instead of set(index, index + 1) we will have set(index, runSpring(index + 1)), for instance. Et voilà!
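To illustrate the discrete-to-continuous idea, here is a tiny plain-JavaScript spring simulation; the stiffness, damping, and time step are made-up constants, and Reanimated's runSpring runs natively with its own configuration:

```javascript
// Instead of jumping a value from `from` to `to` in one frame, a spring
// produces a whole trajectory of intermediate values that settles on `to`.
function springTo(from, to, { stiffness = 120, damping = 14, dt = 1 / 60 } = {}) {
  let position = from;
  let velocity = 0;
  const frames = [];
  for (let i = 0; i < 300; i++) {
    const springForce = -stiffness * (position - to); // pull toward target
    const dampingForce = -damping * velocity;         // bleed off energy
    velocity += (springForce + dampingForce) * dt;
    position += velocity * dt;
    frames.push(position);
    if (Math.abs(position - to) < 1e-3 && Math.abs(velocity) < 1e-3) break;
  }
  return frames;
}

// set(index, index + 1) would jump instantly; the spring version yields
// every in-between frame, so the ellipse animates smoothly:
const trajectory = springTo(0, 1);
```

This is why treating the index as continuous gives the transition for free: every interpolation already downstream of the index simply follows it along the trajectory.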
I always find it interesting to see how seemingly complex animations can be broken down into small digestible parts.
3D animations are common in modern mobile apps. Reanimated is great for 3D use cases because these animations often rely on trigonometry and other mathematical functions, which are provided out of the box. And while it is easy to make them look pixel-perfect on iOS, they are much more difficult to build on Android.
3D transformations often depend on the perspective applied to the scene, and while this is not a problem on iOS, it doesn't seem possible at the moment to have full control over the perspective applied to a view on Android.
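To see why the perspective value matters so much, here is a small plain-JavaScript sketch of the projection involved; the sign convention and the distance of 800 are assumptions for illustration, not values from any episode:

```javascript
// Project the screen-space x of a point rotated around the Y axis.
// `d` plays the role of the perspective transform value: the camera
// distance used for the perspective divide.
function projectRotateY(x, angle, d = 800) {
  // Rotate the point (x, 0, 0) around the Y axis...
  const rx = x * Math.cos(angle);
  const rz = x * Math.sin(angle); // positive rz = toward the camera here
  // ...then apply the perspective divide: closer points appear larger.
  const scale = d / (d - rz);
  return rx * scale;
}
```

Because the rendered position depends directly on d, a platform where you cannot control that value cannot match the same scene pixel for pixel, which is exactly the Android limitation described above.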
The View component also has some other behavioural differences between Android and iOS. These differences are often not noticeable when doing regular React Native development, but they become much more obvious when doing 3D transformations.
Some people have suggested that the goal of the YouTube series is to try to prove that everything can be done in React Native. But I actually find it much more interesting to spend time on the edges of what is possible to do. And these 3D animations are definitely on the edge.
So what do you guys think?
I’m looking forward to reading your thoughts on this. And please send me your awesome suggestions for the show. We have a lot of great episodes planned for the series, so don’t forget to subscribe on YouTube. And in the meantime: 🎉Happy Hacking!