New cool things you can do with ARKit 2.0 and React Native

AR has recently evolved from a gimmick into a valuable addition to businesses. It enables users to preview and interact with products in the real world, powers experiences based on physical-world context, drives sales, and much more.

The AR scene is changing constantly. Not only are new AR headsets and accessories released every once in a while, but mobile operating systems are constantly updated with new features, some of them targeting AR.

Since their release, ARKit and ARCore have been the major players in the mobile AR market. They ship with iOS and Android respectively and are enabled by default on the latest phones. Recently iOS 12 was released, and with it came ARKit 2.0, which added several important features. In this blog post I want to discuss them, as well as how they apply to the ViroMedia platform.

If you are not familiar with the ViroMedia platform, and specifically ViroReact: it is a framework on top of ARKit and ARCore that leverages the power of React Native to create rich UIs for both AR and VR.

I won’t dive into how to get started with Viro here, but you can read about it in the very thorough guide here, or check out my talk about AR, and specifically ViroReact, here:

ARKit 2.0 features

ARKit 2 enabled lots of really cool features: the new USDZ file format for AR, a mesh for face tracking, gaze tracking, tongue detection, multi-user experiences, reflection mapping, image tracking, object detection, and much more. Even though several of these features are still on the ViroMedia roadmap, there are some really cool ones you can already use quite easily with ViroReact.

Continuous image tracking

Because you can now continuously track 2D reference images with ARKit 2.0, creating realistic content attached to real-world images is much easier. This enables developers to create content dynamically intertwined with the existing world, such as in the example below.

While the business card use case is fairly limited and looks more like a gimmick, think about other creative uses of this technology; at the end of this blog post we will discuss real-world examples and other ideas.

So, assuming we want to implement basic image recognition, doing it in Swift would look like this:
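Here is a condensed sketch based on Apple’s image detection sample (boilerplate and error handling omitted):

```swift
import ARKit
import SceneKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Load the reference images from the "AR Resources" asset catalog group
        guard let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects one of the reference images
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage

        // Create a plane matching the physical size of the detected image
        let plane = SCNPlane(width: referenceImage.physicalSize.width,
                             height: referenceImage.physicalSize.height)
        let planeNode = SCNNode(geometry: plane)
        planeNode.opacity = 0.25

        // SCNPlane is vertical by default, so we have to rotate it
        // ourselves to lie flat on the image anchor
        planeNode.eulerAngles.x = -.pi / 2

        node.addChildNode(planeNode)
    }
}
```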

This is a basic example provided by Apple, but you can already notice the imperative nature of this code, as well as the need to calculate eulerAngles for the plane orientation.

With ViroReact this is much easier. Let’s check out the code for the business card example. First of all, we need to define our reference images. With Swift we would need to import the images into the asset catalog in Xcode. In ViroReact we just put our business card image in our resource folder and reference it when defining our tracking targets.
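A minimal sketch of the target definition (the target name, image file, and physical width are placeholders; use your own values):

```jsx
import { ViroARTrackingTargets } from 'react-viro';

ViroARTrackingTargets.createTargets({
  businessCard: {
    source: require('./res/business_card.png'),
    orientation: 'Up',
    physicalWidth: 0.09, // real-world width in meters (~9 cm card)
  },
});
```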

If you’ve used ViroReact before, you might say that this looks pretty much the same as before, and you would be right. Nothing changed in the API for defining tracking targets, but now you have an extra prop on your ViroARSceneNavigator called numberOfTrackedImages. When you define it, you basically tell Viro to track up to 5 images continuously in your AR scene. Like this:
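Something like the following sketch (the scene component and API key are placeholders):

```jsx
import React from 'react';
import { ViroARSceneNavigator } from 'react-viro';
import BusinessCardScene from './BusinessCardScene'; // hypothetical scene component

export default () => (
  <ViroARSceneNavigator
    apiKey="YOUR_VIRO_API_KEY"
    numberOfTrackedImages={5} // track up to 5 images at the same time
    initialScene={{ scene: BusinessCardScene }}
  />
);
```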

Now that you’ve defined everything, you can start creating your animations and AR content. The business card rendering will look as follows:
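A simplified sketch of the scene (the target name must match the one registered earlier; text and layout values are illustrative):

```jsx
import React from 'react';
import {
  ViroARScene,
  ViroARImageMarker,
  ViroNode,
  ViroFlexView,
  ViroText,
} from 'react-viro';

export default class BusinessCardScene extends React.Component {
  state = { cardFound: false };

  render() {
    return (
      <ViroARScene>
        <ViroARImageMarker
          target="businessCard"
          onAnchorFound={() => this.setState({ cardFound: true })}
        >
          {/* Content is positioned relative to the detected card */}
          <ViroNode rotation={[-90, 0, 0]}>
            <ViroFlexView
              style={{ flexDirection: 'column' }}
              width={0.09}
              height={0.05}
            >
              <ViroText text="Jane Doe" />
              <ViroText text="React Native Developer" />
            </ViroFlexView>
          </ViroNode>
        </ViroARImageMarker>
      </ViroARScene>
    );
  }
}
```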

The only thing left is to add a bit of animation and styling, and our example is basically done.

In our scene we have only two animations. We register them the same way we would register any animations in ViroReact:
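For illustration, two simple scale animations registered via ViroAnimations (names and values are placeholders):

```jsx
import { ViroAnimations } from 'react-viro';

ViroAnimations.registerAnimations({
  // Scale the content up when the card is found
  scaleUp: {
    properties: { scaleX: 1, scaleY: 1, scaleZ: 1 },
    duration: 500,
    easing: 'Bounce',
  },
  // Scale it back down when the card is lost
  scaleDown: {
    properties: { scaleX: 0, scaleY: 0, scaleZ: 0 },
    duration: 200,
  },
});
```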

I won’t go into the definitions of styles and materials here, since they are really straightforward, but you can check the example code in the following repo:

3D Object detection

ARKit 2 gives us the ability not only to detect 2D images and use them as markers for placing AR content in the real world, but also to scan and track real-world objects and use those as markers. In this section we will build the following basic AR experience with ViroReact.

So where should we start? Dealing with 2D images is rather simple: we can download, scan, or photograph them. With 3D objects it’s much more difficult. The AR engine needs to recognise specific features of the 3D object and needs to understand how to distinguish one feature from another.

To be able to capture these features, we first need to scan the object we want to use later as a marker. For that you will need the scanning app provided by Apple here:

First of all, let’s download it and open it in Xcode.

We will need an Apple developer account for this so we can build on a device. Let’s do that, not forgetting to change the signing:

Choose your phone as a target

Now the app will be installed on your phone and you will be able to scan our Coca-Cola can. The main idea is that you point your phone at the object and move around it. The app will detect features of your 3D object (the more points you capture, the better recognition will work). Then you move to the next step of scanning the object’s surfaces, and finally you get an option to share the result. We will choose sharing it via email.

The resulting file will have an .arobject extension, and even though it’s sometimes mistaken for a 3D model of your scanned object, it is not one. In practice it’s an ARReferenceObject instance in serialized form. This object is not displayable; it’s basically opaque data representing “spatial points”.

Now it’s time to get this object into Viro. Just like the reference images we used as markers, we put this .arobject file inside our js/res folder.

Creating the object target looks similar to what we did with our 2D markers, but with one change: we need to set the type of our 3D marker to Object.
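A sketch of the target definition (the target name and file name are placeholders for your scanned .arobject):

```jsx
import { ViroARTrackingTargets } from 'react-viro';

// The only difference from an image target is type: 'Object'
ViroARTrackingTargets.createTargets({
  cokeCan: {
    source: require('./res/coke_can.arobject'),
    type: 'Object',
  },
});
```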

Now, to put this marker in the scene, we will use ViroARObjectMarker and provide it the target we defined earlier. As you can see, the API looks similar to the one for 2D image markers.
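A minimal sketch of such a scene (the target name must match the one registered earlier; the text content is illustrative):

```jsx
import React from 'react';
import { ViroARScene, ViroARObjectMarker, ViroText } from 'react-viro';

export default () => (
  <ViroARScene>
    <ViroARObjectMarker target="cokeCan">
      {/* Shown once the physical object is recognised */}
      <ViroText
        text="Coca-Cola, 330 ml"
        scale={[0.1, 0.1, 0.1]}
        position={[0, 0.1, 0]}
      />
    </ViroARObjectMarker>
  </ViroARScene>
);
```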

You can check all the code in the following repo:

What practical applications does this have?

This is a common question when dealing with AR. People still perceive it as a gimmick and think it’s only used for game development. It’s not. AR has lots of use cases in retail, education, location-based apps, and much more. It gives users a different kind of engagement with digital content.

It’s much better to check how a product will look in real-world physical space than to look at a fancy picture on a website.

Speaking of the new features of ARKit 2, think about the fact that you can show the user product information, tutorials, or learning materials based on real-world objects. Forget printed manuals: you can have interactive manuals in AR. Continuous image tracking opens up all sorts of implementations, from business cards and games to interactive magazines or learning cards.

In the current mobile AR scene, and with this year’s technological advancements, there is no reason not to add AR features to your app. Even small features can skyrocket your app’s sales and put it among the top-ranked apps.