ARKit: How to Detect, Track & Display a Video with Alpha on top of an Image Plane
Detecting and displaying a video on top of an image using ARKit can be tricky when you are first starting out. This guide is intended to help developers get there quickly by combining ARKit with Apple’s image recognition sample project and a few lines of custom code.
First of all, to implement ARKit we must add a storyboard with an ARSCNView. This view uses the device camera to track the real world around the user. We must also add an AR Resources asset folder to store all the images that are going to be detected and tracked.
Image Tracking
It’s important to add the images you want to detect inside the project’s AR Resources folder:
Note: images to be recognized can also be obtained from an API and created programmatically.
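As a rough sketch of that programmatic path, an ARReferenceImage can be built at runtime from a CGImage (for example, one decoded from a downloaded asset). The function name and parameters below are illustrative, not part of the sample project:

```swift
import ARKit

// Hypothetical helper: build an ARReferenceImage from a UIImage obtained
// at runtime (e.g. downloaded from an API) instead of the asset catalog.
func makeReferenceImage(from image: UIImage,
                        name: String,
                        physicalWidth: CGFloat) -> ARReferenceImage? {
    guard let cgImage = image.cgImage else { return nil }
    // physicalWidth is the real-world width of the printed image, in meters.
    let referenceImage = ARReferenceImage(cgImage,
                                          orientation: .up,
                                          physicalWidth: physicalWidth)
    referenceImage.name = name
    return referenceImage
}
```

Images created this way can then be passed to the session configuration’s detectionImages set, just like the ones loaded from the asset catalog.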
In our ViewController we must add a reference to the ARSCNView, set its delegate in viewDidLoad, and add a convenience accessor for the sceneView session.
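A minimal sketch of that setup, following the structure of Apple’s image recognition sample (property and outlet names are assumptions):

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    // Convenience accessor for the view's AR session.
    var session: ARSession {
        return sceneView.session
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }
}
```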
In our viewDidAppear we must call a resetTracking method:
The resetTracking method is in charge of:
- Starting an ARSession
- Setting the ARSession configuration
- Telling the ARSession which known images are going to be tracked
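The steps above can be sketched like this, closely following Apple’s sample project (the asset group is assumed to be named "AR Resources"):

```swift
func resetTracking() {
    // Load the reference images from the "AR Resources" asset group.
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing expected asset catalog resources.")
    }

    // Configure world tracking to detect the known images.
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages

    // (Re)start the session, discarding any previous anchors.
    sceneView.session.run(configuration,
                          options: [.resetTracking, .removeExistingAnchors])
}
```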
When any of the images added to the AR Resources folder is detected by the device camera, the ARSCNView calls the following delegate method:
The delegate receives information about the detected image inside the anchor; once an image is detected, its name is printed:
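A sketch of that delegate method, assuming the standard ARSCNViewDelegate callback:

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Only image anchors are of interest here.
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    // The anchor carries the matched reference image; print its name.
    let referenceImage = imageAnchor.referenceImage
    print("Detected image: \(referenceImage.name ?? "unnamed")")
}
```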
Displaying a video on top of a detected image plane
With that code we can already detect an image. Now the idea is to display a video on top of it. Take another look at the delegate where we receive the detected image:
The idea is to call a displayVideo method inside a helper, passing three parameters:
- the detected reference image
- the node
- the video to be shown
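Wiring that up inside the delegate could look like this (the video filename and the VideoHelper call signature are assumptions for illustration):

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    // Load the video to be shown on top of the detected image.
    guard let videoURL = Bundle.main.url(forResource: "video", withExtension: "mp4") else { return }

    // Hand off to the helper: reference image, detected node, and video.
    VideoHelper.displayVideo(on: node,
                             referenceImage: imageAnchor.referenceImage,
                             videoURL: videoURL)
}
```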
The displayVideo method inside the VideoHelper is in charge of displaying the video. Basically it does the following:
- Get the physical width and height of the reference image
- Create a node to hold the video player
- Create the video player
- Add the node holding the video player to the original detected node
- Setup the video node
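A sketch of displayVideo covering those steps (the exact signature is an assumption; setupVideoOnNode is the method described next in the article):

```swift
import ARKit
import SceneKit

enum VideoHelper {

    static func displayVideo(on node: SCNNode,
                             referenceImage: ARReferenceImage,
                             videoURL: URL) {
        // Use the physical size of the reference image for the video plane.
        let size = referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)

        // Node that will hold the video player.
        let videoHolder = SCNNode(geometry: plane)
        // SCNPlane is vertical by default; rotate it to lie flat on the image.
        videoHolder.eulerAngles.x = -.pi / 2

        // Set up the video on the plane, then attach it to the detected node.
        setupVideoOnNode(videoHolder, fromURL: videoURL)
        node.addChildNode(videoHolder)
    }
}
```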
The setupVideoOnNode method is in charge of setting up the video inside the video holder plane:
- Create a videoPlayer
- Create an SKVideoNode with the videoPlayer holding the video
- Create a SpriteKit scene to position the video inside
- Apply an alpha transparency
- Play the video
- Loop the video
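Those steps could be sketched as follows; the scene size is illustrative, and the y-flip accounts for SpriteKit’s coordinate system being inverted relative to SceneKit texture mapping:

```swift
import SceneKit
import SpriteKit
import AVFoundation

func setupVideoOnNode(_ node: SCNNode, fromURL url: URL) {
    // 1. Create the video player.
    let player = AVPlayer(url: url)

    // 2. Wrap it in an SKVideoNode.
    let videoNode = SKVideoNode(avPlayer: player)

    // 3. Create a SpriteKit scene to position the video inside.
    let sceneSize = CGSize(width: 1024, height: 576)
    let videoScene = SKScene(size: sceneSize)
    videoScene.backgroundColor = .clear
    videoNode.position = CGPoint(x: sceneSize.width / 2, y: sceneSize.height / 2)
    videoNode.size = sceneSize
    videoNode.yScale = -1 // flip to match SceneKit's texture orientation
    videoScene.addChild(videoNode)

    // 4. Map the scene onto the holder plane's material.
    node.geometry?.firstMaterial?.diffuse.contents = videoScene

    // 5. Play the video.
    videoNode.play()

    // 6. Loop it by seeking back to the start when playback ends.
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemDidPlayToEndTime,
        object: player.currentItem,
        queue: .main) { _ in
            player.seek(to: .zero)
            player.play()
    }
}
```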
Video alpha transparency
Notice that the getAlphaShader is added by a helper class, EffectNodeHelper, which is in charge of applying an SKShader. An SKShader object holds a custom OpenGL ES fragment shader used to customize the drawing behavior of many different kinds of nodes. In this case, it applies an alpha.
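One way EffectNodeHelper might look: a fragment shader that multiplies the sampled color by an alpha uniform (scaling the whole vector keeps premultiplied-alpha textures looking correct). This is a minimal sketch, not the sample project’s exact shader:

```swift
import SpriteKit

enum EffectNodeHelper {

    // Returns an SKShader that scales the node's texture by an alpha factor.
    static func getAlphaShader(alpha: Float) -> SKShader {
        let source = """
        void main() {
            vec4 color = texture2D(u_texture, v_tex_coord);
            // Scale the whole (premultiplied) color by the alpha uniform.
            gl_FragColor = color * u_alpha;
        }
        """
        return SKShader(source: source,
                        uniforms: [SKUniform(name: "u_alpha", float: alpha)])
    }
}
```

It would be applied to the SKVideoNode, for example: `videoNode.shader = EffectNodeHelper.getAlphaShader(alpha: 0.8)`.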
Result
As a result, when the recognizable image is detected, a video with alpha transparency is shown on top of the detected plane.
It’s important to mention that the images need good reference points and contrast in order to work with image tracking in iOS 12.
A demo project is available here. It’s basically Apple’s image recognizer sample project with some changes to add the video on top of the image.
Do you have any suggestions? Leave a comment! We really appreciate it.
Major League is a Staffing and Sourcing agency by Lateral View.