Building The Future of Advertising Using ARKit

Advertising is the practice of presenting products of value to people. Whether it is a billboard on the highway, a poster on the wall, or a commercial on TV, all of these are ways of advertising a product to customers.

In the future, Augmented Reality is going to play a very important role in how ads are presented to consumers. In this post I will present a vision of that future, where simply looking at an image of a product lets you experience the product itself in three dimensions.

In the ARKit 1.5 framework, which ships with Xcode 9.3 and iOS 11.3, Apple added a new feature called image detection. This means your ARKit app can detect an image and react to it. Consider a scenario where you are looking at a movie poster and it suddenly comes to life by playing the movie's trailer. In this post we will configure an image of a Google Pixel phone; once it is detected, the app will add a 3D model of the phone on top of the detected image. Check out the live demo below:

As you can see in the video above, the image of the Google Pixel phone was detected, and as soon as it was detected a 3D model was added on top of it. Our first task is to configure the image to be detected.

Configuring Image Recognition

There are multiple ways of configuring image recognition in ARKit 1.5, but the easiest one is to add the images as AR Reference Images using the asset catalog. Create a new Augmented Reality Application in Xcode. Click on the Assets.xcassets file and then add a new item using the “+” sign. Make sure the new item is an “AR Resource Group”. Once you have added the AR Resource Group, simply drag and drop the image you want ARKit to recognize. We are going to use an image of a Google Pixel phone, but you can use any image you like. Make sure you specify the physical width and height of the image, or Xcode will complain.
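If your images are not known at build time (for example, they are downloaded from a server), ARKit also lets you create reference images in code instead of the asset catalog. Here is a minimal sketch; the image name and the 0.07-meter physical width are just example values, not part of this project:

```swift
import ARKit
import UIKit

// Hypothetical image bundled with (or downloaded by) the app
let image = UIImage(named: "pixel-poster")!

// physicalWidth is the real-world width of the printed image, in meters
let referenceImage = ARReferenceImage(image.cgImage!,
                                      orientation: .up,
                                      physicalWidth: 0.07)
referenceImage.name = "pixel-poster"

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = [referenceImage]
```

The accuracy of the detected image's position depends on how accurate this physical width is, so measure the printed image rather than guessing.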

Next, we need to configure the reference images in code. Inside the viewWillAppear function, add the following:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()

    // Load the reference images from the "AR Resources" group in the asset catalog
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
        fatalError("Missing expected asset catalog resources.")
    }
    configuration.detectionImages = referenceImages

    // Run the view's session
    sceneView.session.run(configuration)
}

The above code registers the reference images with the ARKit session configuration. Next, we will see how to add a virtual 3D model of the Google Pixel phone to the real world once the image is detected.

Detecting Images

When ARKit detects a preconfigured image, it adds an anchor for it to the session. The renderer(_:didAdd:for:) delegate function is triggered for each new anchor, and this is where we attach our content. The implementation is shown below:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if anchor is ARImageAnchor {
        // Load the phone model from the SceneKit scene file
        let phoneScene = SCNScene(named: "Phone_01.scn")!
        let phoneNode = phoneScene.rootNode.childNode(withName: "parentNode", recursively: true)!

        // Place the phone at the anchor's world position (the translation column of its transform)
        phoneNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        self.sceneView.scene.rootNode.addChildNode(phoneNode)
    }
}

After making sure that the anchor is of type ARImageAnchor, we extract the phone model from “Phone_01.scn” (a SceneKit scene file). We position the phoneNode right on top of the anchor and then finally add it to the sceneView's root node. The Phone_01.scn file is available as part of the project, which you can download at the end of the post.
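If you want the virtual content to match the detected image exactly, ARImageAnchor also exposes the reference image's physical size. The sketch below overlays a translucent plane on the detected image; the plane and highlight color are my own illustrative additions, not part of the original project:

```swift
import ARKit
import SceneKit
import UIKit

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    // Build a plane the exact physical size of the detected image
    let size = imageAnchor.referenceImage.physicalSize
    let plane = SCNPlane(width: size.width, height: size.height)
    plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.3)

    let planeNode = SCNNode(geometry: plane)
    // SCNPlane is vertical by default; rotate it to lie flat on the image
    planeNode.eulerAngles.x = -.pi / 2

    // Adding the plane to `node` (the anchor's node) keeps it attached to the
    // image even if ARKit refines the anchor's position later
    node.addChildNode(planeNode)
}
```

Note the design difference from the code above: parenting content to the anchor's node lets ARKit keep it aligned with the image, whereas adding to the scene's root node fixes it at the position where the image was first detected.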

Rotating the Phone

The final remaining task is to rotate the phone so it looks attractive in the real world. Rotation can easily be accomplished using the built-in actions in SceneKit. Check out the implementation below:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if anchor is ARImageAnchor {
        let phoneScene = SCNScene(named: "Phone_01.scn")!
        let phoneNode = phoneScene.rootNode.childNode(withName: "parentNode", recursively: true)!

        // Rotate the phone node 0.5 radians around the y-axis every second, forever
        let rotationAction = SCNAction.rotateBy(x: 0, y: 0.5, z: 0, duration: 1)
        let infiniteAction = SCNAction.repeatForever(rotationAction)
        phoneNode.runAction(infiniteAction)

        phoneNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        self.sceneView.scene.rootNode.addChildNode(phoneNode)
    }
}

Now, if you run the app, you will see that as soon as the image is detected, a 3D model of the Google Pixel phone is added and begins rotating indefinitely.
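One thing to keep in mind: with ARWorldTrackingConfiguration, ARKit reports each detection image only once per session. If you want the app to recognize the same image again, one common approach is to re-run the session with reset options, for example from a "reset" button handler (the method name below is a hypothetical example):

```swift
// Hypothetical reset handler; assumes the same sceneView and
// configuration shown in viewWillAppear above
func resetTracking(with configuration: ARWorldTrackingConfiguration) {
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

This discards existing anchors (and any nodes attached to them), so the next time the camera sees the image, renderer(_:didAdd:for:) fires again.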

You can download the source code here.

If you are interested in learning more about building Augmented Reality Apps using ARKit then check out my Udemy course “Mastering ARKit for iOS”. I also do live workshops and presentations for conferences and companies. If you are interested then contact me at azamsharp at gmail.
