Crafting AR apps with React Native
Augmented Reality (AR) is an emerging technology that overlays computer-generated content on the real world. It combines digital image processing, 3D graphics, artificial intelligence, and several other fields to achieve this.
What is under the hood?
There are several popular AR development libraries and frameworks available for mobile app development. To date, most of them rely on the ARKit and ARCore SDKs provided by Apple and Google respectively.
iOS ARKit and Android ARCore are two native SDKs that utilize the device’s camera, motion sensors, and processing power to overlay virtual objects onto the real world, creating interactive and realistic AR content.
Both were initially released in mid-2017. The following are some common features/capabilities they provide for AR development.
1. World Tracking
- Uses the device’s camera to track the movement of the device and detect features in the environment, such as flat surfaces and objects.
2. Scene Understanding
- Understands the surrounding environment, including recognizing horizontal and vertical planes and estimating lighting conditions.
3. Motion Tracking
- Tracks the device’s orientation and movement, allowing virtual objects to remain stable and aligned with the physical environment as the user moves the device.
4. Face Tracking
- Recognizes and accurately tracks facial features.
5. Image and Object Recognition
- Can recognize and track specific 2D images or 3D objects in the real world.
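Scene understanding (plane detection) is the capability most AR apps build on. As a preview of how it surfaces in ViroReact, the library introduced below, here is a minimal sketch; it assumes a React Native project with `@viro-community/react-viro` installed, and the values (sizes, positions) are illustrative only.

```javascript
import React from 'react';
import {
  ViroARScene,
  ViroARPlaneSelector,
  ViroBox,
} from '@viro-community/react-viro';

// Minimal sketch: the user taps a detected horizontal plane and a
// small box is anchored onto it. Plane detection is the native SDKs'
// "scene understanding" feature, surfaced through ViroReact.
const PlaneDemoSceneAR = () => (
  <ViroARScene>
    <ViroARPlaneSelector minHeight={0.3} minWidth={0.3}>
      {/* Dimensions are in meters; position is relative to the plane anchor */}
      <ViroBox position={[0, 0.1, 0]} scale={[0.2, 0.2, 0.2]} />
    </ViroARPlaneSelector>
  </ViroARScene>
);

export default PlaneDemoSceneAR;
```

The components used here are described in detail later in the article.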
React-native AR libraries
ViroReact is one of the most popular libraries for React Native AR development. It uses the native AR capabilities of both iOS and Android and provides a high-quality, immersive experience, with comprehensive support for 3D models, animations, physics, and user interactions.
Documentation: https://viro-community.readme.io/docs/overview
Here, in brief, are some key milestones for ViroReact:
- Initial release in March 2016, targeting VR, by a startup called Viro Media
- AR integration added as AR technology gained traction
- Open-sourced in August 2018 under the MIT License
- Viro Media was acquired by Facebook in November 2019
Let’s do some code
If this is your first time trying AR with React Native, I recommend starting with the StarterKit.
(It is the best way to experience the features without spending time on initial configuration and hunting for 3D assets.)
git clone https://github.com/OmalPerera/viro-react-starter-kit.git
cd viro-react-starter-kit
npm install or yarn install
npx pod-install
Running the code
You need a physical mobile device to run the app; AR cannot run on simulators because it depends heavily on device sensors.
We can run the app using the following commands, as described in the README file.
> npx react-native run-android or npx react-native run-ios
But I prefer to open the app in Xcode (via viro-react-starter-kit > ios > myviroapp.xcworkspace), as it makes any build-time issues easier to identify.
Possible errors at the first run
- No matching development team in Xcode when trying to run on a physical iPhone
Fix: Select your Team from the dropdown and set a Bundle Identifier under 'Signing & Capabilities'.
Congratulations! You have completed the first milestone. 🎉
Let’s try to understand the code
The starting point is <ViroARSceneNavigator>, rendered from App.js.
import React, {useState} from 'react';
import {
  ViroARScene,
  ViroARSceneNavigator,
  ViroText,
} from '@viro-community/react-viro';

const HelloWorldSceneAR = () => {
  // `text` was not defined in the original snippet; a simple state value works
  const [text] = useState('Hello World!');

  return (
    <ViroARScene>
      <ViroText
        text={text}
        scale={[0.5, 0.5, 0.5]}
        position={[0, 0, -1]}
        style={styles.helloWorldTextStyle}
      />
    </ViroARScene>
  );
};

// styles.f1 and styles.helloWorldTextStyle come from the StyleSheet
// defined in the starter kit's App.js
export default () => {
  return (
    <ViroARSceneNavigator
      autofocus={true}
      initialScene={{
        scene: HelloWorldSceneAR,
      }}
      style={styles.f1}
    />
  );
};
<ViroARSceneNavigator>
handles the transitions between <ViroARScene> objects. It enables the 3D equivalent of a navigation stack.
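As a navigation stack, the navigator lets one scene push another. Here is a minimal sketch, assuming the `arSceneNavigator` prop that Viro injects into every scene; the `DetailSceneAR` name is made up for illustration.

```javascript
import React from 'react';
import {ViroARScene, ViroText} from '@viro-community/react-viro';

// Hypothetical second scene, just to illustrate the stack
const DetailSceneAR = () => (
  <ViroARScene>
    <ViroText text={'Detail scene'} position={[0, 0, -1]} />
  </ViroARScene>
);

// The navigator injects `arSceneNavigator` (with push/pop helpers)
// into every scene's props
const HomeSceneAR = (props) => (
  <ViroARScene>
    <ViroText
      text={'Tap to open the detail scene'}
      position={[0, 0, -1]}
      onClick={() => props.arSceneNavigator.push({scene: DetailSceneAR})}
    />
  </ViroARScene>
);
```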
<ViroARScene>
contains all the ViroReact elements, such as UI controls, 3D objects, and lights. It is where we arrange our digital AR content to suit our requirements.
For example, if the requirement is to show a 3D dragon on the floor, the steps are:
1. Position the dragon correctly in the scene
2. Attach the scene to <ViroARSceneNavigator>
Here is how positioning works in a scene: as mentioned above, <ViroARScene> is a 3D space. By default, the camera is positioned at [0, 0, 0] and looks in the direction [0, 0, -1].
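The coordinate units are meters, with +x to the right, +y up, and -z pointing away from the camera. A tiny plain-JS helper (my own, not part of Viro) makes this concrete:

```javascript
// Positions are [x, y, z] in meters, relative to the camera's start pose.
// The camera looks down -z, so "in front of the user" means a negative z.
function inFrontOfCamera(distanceMeters, height = 0) {
  return [0, height, -distanceMeters];
}

console.log(inFrontOfCamera(1));      // [0, 0, -1], same as the ViroText above
console.log(inFrontOfCamera(2, 0.5)); // [0, 0.5, -2], 2 m ahead and 0.5 m up
```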
Positioning a 3D Object
// onInitialized is a tracking-state callback defined in the starter kit
<ViroARScene onTrackingUpdated={onInitialized}>
  <ViroAmbientLight color={'#aaaaaa'} />
  <Viro3DObject
    source={objects_3D.pug_animated.obj}
    type={objects_3D.pug_animated.type}
    position={objects_3D.pug_animated.position}
    scale={objects_3D.pug_animated.scale}
    rotation={[0, 0, 0]}
    animation={{...objects_3D.pug_animated.animation, run: true}}
    dragType="FixedToWorld"
    onDrag={() => {}}
  />
</ViroARScene>
<ViroAmbientLight>
is a light object that emits ambient light, affecting all objects in the scene equally.
<Viro3DObject>
can be used to render any 3D object. The behavior of any Viro component can be controlled by passing props, as with normal React Native components. All the props are well explained in the Viro Community documentation.
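The `objects_3D` object used above comes from the starter kit; it is just a plain configuration map. Here is a hedged sketch of what one entry might look like; the file names and values are illustrative, not the kit's exact contents.

```javascript
// Illustrative shape of a starter-kit entry; adjust paths and values
// to match your own assets.
const objects_3D = {
  pug_animated: {
    obj: require('./res/pug/pug.vrx'), // the 3D model file
    type: 'VRX',                       // Viro supports VRX, OBJ, and GLTF/GLB
    position: [0, -0.5, -1],           // meters, relative to the camera
    scale: [0.05, 0.05, 0.05],
    animation: {name: 'Take 001', loop: true},
  },
};
```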
Adding a 360° background to ViroARScene
Replace the existing <Viro3DObject>
component with the following code block.
<ViroARScene onTrackingUpdated={onInitialized}>
<ViroAmbientLight color={'#aaaaaa'} />
<Viro360Image source={objects_3D.diving_360.background} />
</ViroARScene>
Image recognition
Image recognition is a key component of AR: it enables you to interpret the real world and respond to it accordingly. Let’s work on a simple example.
Here we will try to identify a real-world 2D or 3D object and render a 3D object on it.
We can use the <ViroARImageMarker> component for real-world object identification. We also need to provide a reference image that Viro will recognize and track (registered with ViroARTrackingTargets).
const HelloWorldSceneAR = () => {
  return (
    // <ViroARImageMarker> must live inside a <ViroARScene>
    <ViroARScene>
      <ViroARImageMarker target={'pug2D_img'}>
        <Viro3DObject
          source={objects_3D.pug_animated.obj}
          type={objects_3D.pug_animated.type}
          position={objects_3D.pug_animated.position}
          scale={objects_3D.pug_animated.scale}
          rotation={[0, 0, 0]}
          animation={{...objects_3D.pug_animated.animation, run: true}}
          dragType="FixedToWorld"
          onDrag={() => {}}
        />
      </ViroARImageMarker>
    </ViroARScene>
  );
};
export default () => {
  return (
    <ViroARSceneNavigator
      autofocus={true}
      initialScene={{
        scene: HelloWorldSceneAR,
      }}
      style={styles.f1}
    />
  );
};

ViroARTrackingTargets.createTargets({
  pug2D_img: {
    source: objects_3D.pug_animated.img,
    orientation: 'Up',
    physicalWidth: 0.12, // real-world width in meters
  },
});
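ViroARImageMarker also reports tracking events, which is useful for reacting when the target image appears or disappears. Here is a minimal sketch using the marker's anchor callbacks; the import path for `objects_3D` is hypothetical, standing in for wherever the starter kit defines that config.

```javascript
import React, {useState} from 'react';
import {
  ViroARScene,
  ViroARImageMarker,
  Viro3DObject,
} from '@viro-community/react-viro';
import {objects_3D} from './assets'; // hypothetical path to the config map

const MarkerSceneAR = () => {
  // Track whether the reference image is currently in view
  const [tracking, setTracking] = useState(false);

  return (
    <ViroARScene>
      <ViroARImageMarker
        target={'pug2D_img'}
        onAnchorFound={() => setTracking(true)}
        onAnchorRemoved={() => setTracking(false)}>
        <Viro3DObject
          source={objects_3D.pug_animated.obj}
          type={objects_3D.pug_animated.type}
          scale={objects_3D.pug_animated.scale}
          // Run the animation only while the target image is tracked
          animation={{...objects_3D.pug_animated.animation, run: tracking}}
        />
      </ViroARImageMarker>
    </ViroARScene>
  );
};
```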
What is next?
So far, we have used a few components from ViroReact (ViroARScene, Viro360Image, Viro3DObject, ViroAmbientLight, ViroARImageMarker, etc.), but the library offers many more components that enable different experiences.
To build a rich AR experience, these components need to be combined and placed hierarchically in scenes, as we did in the Image Recognition section.
Keep in mind that ViroReact is one of many options for building AR apps in React Native. It is a mature and stable library with both capabilities and limitations, especially compared with the native SDKs, Apple's ARKit and Google's ARCore.
Wrap up
With ViroReact, we have seen how AR can seamlessly blend physical and digital environments. The technology is evolving rapidly, and the potential for innovation seems boundless: it keeps pushing the limits of what is possible and making the virtual world more tangible and interactive than ever before.
Thanks for reading! Let me know in the comments if you found this information useful and applicable to your projects.
Resources
The full code is available at: https://github.com/OmalPerera/viro-react-starter-kit/