Getting started with ARKit 2
ARKit 2 enables developers to create innovative AR apps for the world’s largest AR platform.
With ARKit 2, Apple introduced a platform that lets developers integrate shared experiences, persistent AR experiences tied to a specific location, object detection, and image tracking, making AR apps even more dynamic.
One of the most interesting features in ARKit 2 is image tracking.
Image recognition and tracking
“A photo is like a thousand words” — words are fine, but ARKit 2 turns a photo into thousands of stories.
Among the new developer features introduced at WWDC 2018, image detection and tracking is one of the coolest. Imagine being able to attach and deliver contextual information to any static image you see.
Image detection was introduced in ARKit 1.5, but the functionality and maturity of the framework were limited. With this release, you can build amazing AR experiences. Take a look at the demo below:
Let’s dive into the technical implementation
Step #1: Load ARImageTrackingConfiguration
- We can load images into ARImageTrackingConfiguration using ARReferenceImage. Store the image in the Assets folder.
- With this configuration set up, ARKit searches for that image in the real world (see the sketch below).
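Here is a minimal sketch of that setup. The group name "AR Resources" is an assumption for where the reference image lives in Assets.xcassets, and `sceneView` is assumed to be an ARSCNView already on screen:

```swift
import ARKit

// A minimal sketch: load reference images from Assets.xcassets and
// start an image-tracking session. "AR Resources" is an assumed
// asset-catalog group name; use whatever your project defines.
func runImageTracking(on sceneView: ARSCNView) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing reference image group in Assets.xcassets")
    }

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    // How many images ARKit should track at the same time.
    configuration.maximumNumberOfTrackedImages = 1

    sceneView.session.run(configuration)
}
```

Unlike a world-tracking configuration, ARImageTrackingConfiguration tracks only the known images, so it keeps working even when the surrounding environment is moving.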
The ARSCNViewDelegate method below is called when the configured image is found in the real world.
When the image is found, we first get its anchor. Using that anchor, we create a clear plane surface, create an SCNMaterial with our custom view (our GIF), attach that material to an SCNPlane, add the SCNPlane to an SCNNode, and return that node from the method.
Note: we can render almost anything through that SCNMaterial; material.diffuse.contents accepts a UIView.
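A sketch of that flow, assuming a view controller that owns a hypothetical `gifView` (for example, a UIImageView playing the animated GIF):

```swift
import ARKit
import SceneKit
import UIKit

final class ImageTrackingViewController: UIViewController, ARSCNViewDelegate {
    // Hypothetical view that plays our GIF (e.g. an animated UIImageView).
    let gifView = UIImageView()

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let imageAnchor = anchor as? ARImageAnchor else { return nil }

        // Build a plane matching the physical size of the detected image.
        let referenceImage = imageAnchor.referenceImage
        let plane = SCNPlane(width: referenceImage.physicalSize.width,
                             height: referenceImage.physicalSize.height)

        // Use our custom view (the GIF) as the plane's material contents.
        let material = SCNMaterial()
        material.diffuse.contents = gifView
        plane.materials = [material]

        let planeNode = SCNNode(geometry: plane)
        // SCNPlane is vertical by default; rotate it to lie flat on the image.
        planeNode.eulerAngles.x = -.pi / 2

        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
```

Because the returned node is attached to the image anchor, SceneKit keeps the GIF pinned to the printed image as the camera or the image moves.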
You can download the full demo from GitHub. The demo includes:
- Image tracking
- Save and load maps
- Detect objects
- Environmental texturing

Inspired by:
This demo was created from Apple’s ARKit 2 sample [demo].