Nicoleta Pop

Welcome back to “AR must knows” for creating amazing iOS apps! We’re continuing our previous discussion by looking at what Apple has achieved with this technology up to ARKit 2.0, the current state of the art. But first, let’s see what ARKit 1.5 (a mandatory stop!) brought to the table.

What does ARKit 1.5 bring?

The next version of ARKit was released together with iOS 11.3 and brings several improvements to the table: detection of vertical and irregularly shaped planes, accessible even from the camera of the good ol’ iPhone 6S. It was a much-appreciated feature, considering that until then only horizontal surfaces could be detected, especially since both types can now be detected in the same session, unleashing developers’ ideas and possibilities simply by extending a single line of code.

configuration.planeDetection = [.horizontal, .vertical]
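
As a minimal sketch (assuming an ARSCNView outlet named sceneView, which the snippet above leaves out), the whole session setup is only a few lines:

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
// run the session; sceneView is assumed to be an ARSCNView in your view controller
sceneView.session.run(configuration)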

Another notable aspect is the enhanced face tracking, which uses the ARFaceTrackingConfiguration and ARFaceAnchor objects.

let configuration = ARFaceTrackingConfiguration()

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARFaceAnchor else {
        return nil
    }
    let node = SCNNode()
    // configure the node that will be attached to the current ARFaceAnchor
    return node
}

The most important property on this type of anchor, I’d say, is definitely blendShapes, because it carries a lot of information about the user’s face, each characteristic being mapped to a so-called ARBlendShapeLocation (for example, .mouthSmileRight returns a coefficient between 0.0 and 1.0 that tells you how much the user is smiling on the right side, from the device’s point of view). ARKit 1.5 “defines 50 different ARBlendShapeLocation and has everything from nose sneers to mouth rolls to cheek puffing”, so you’re able to create almost any AR face app you can imagine, with the heavy lifting done by this amazing library.
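
Here is a minimal sketch of reading that coefficient, assuming the standard ARSCNViewDelegate update callback and a running face-tracking session:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // coefficient between 0.0 (neutral) and 1.0 (full right-side smile)
    let smileRight = faceAnchor.blendShapes[.mouthSmileRight]?.floatValue ?? 0
    print("Right-side smile coefficient: \(smileRight)")
}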

Apple’s Animoji

Last, but not least, the upgrade provides 2D image recognition. The Apple documentation says that if “your app provides known 2D images, ARKit tells you when and where those images are detected during an AR session.” The official use cases include wandering through a museum and pointing the camera at a specific piece of art to get information from a virtual curator, or placing virtual board game pieces when the user points the device at the corresponding board. Basically, you can get real-time information, shaped into a virtual object, about a specific concept from a 2D image. An example code snippet demonstrating how easy it is to integrate this great feature can be seen below.

import ARKit
import UIKit

@IBOutlet weak var sceneView: ARSCNView!

guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "referenceImages", bundle: nil) else {
    return
}

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

I strongly recommend taking a look at the two demo apps from the Apple website: in one they did a great job making a ball bounce and hit a target placed on a wall, while the other recognizes images of the Apollo moon landing and recreates that experience.

Transition to ARKit 2.0

At WWDC in June 2018, Apple announced ARKit 2.0, the latest version of its AR framework, introducing improved face tracking, optimized rendering, 3D object detection, and persistent/shared experiences. In short, it brought the following updates:

  • Mapping

The biggest difference from the previous version is that world mapping used to be available only from the ARSession object; the latest version exposes it through a new API object, ARWorldMap, which maps real-world 3D space into an ARScene (a short capture sketch follows below).

WWDC 2018 #ARKit #mapping
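
A minimal sketch of grabbing the map (assuming an ARSCNView outlet named sceneView) could look like this:

sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else {
        print("World map unavailable: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    // `map` can now be archived with NSKeyedArchiver and written to disk,
    // or sent to another device for a shared experience
}
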
  • Object Persistence

Now it is possible to load objects into an ARScene, save them and, when a new session is started, restore them from storage and put them back in the scene.
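
A minimal restore sketch, assuming the map was previously archived to a hypothetical worldMapURL on disk:

if let mapData = try? Data(contentsOf: worldMapURL),
   let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: mapData) {
    let configuration = ARWorldTrackingConfiguration()
    // the restored map brings back the saved anchors (and the objects attached to them)
    configuration.initialWorldMap = worldMap
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}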

  • Multi-User Experience

A major breakthrough in this context is that you can now push the limits of your app to support not just a single device, but multiple devices sharing the same experience.

  • Environment Texturing

This relates to realistically rendering virtual objects into the live camera feed. Improvements were brought to lighting estimation, shadow perception, scaling, textures and reflections and, last but not least, better positioning and tracking.
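
Enabling it is a one-liner on the configuration; a minimal sketch (again assuming a sceneView outlet):

let configuration = ARWorldTrackingConfiguration()
// ARKit generates environment probes so reflective virtual objects pick up real-world textures
configuration.environmentTexturing = .automatic
sceneView.session.run(configuration)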

  • Image Tracking

This feature came as a natural next step, considering that the previous version was already able to perform image detection. Now images can wander in peace through the camera’s field of view, because they no longer have to be static.

WWDC 2018 #ARKit #imageTracking

So the position and orientation of multiple images detected in the same session can be determined at 60 frames per second.
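
A minimal sketch using the dedicated image-tracking configuration (the "referenceImages" asset group name is just an assumption, reused from the earlier snippet):

guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "referenceImages", bundle: nil) else {
    return
}
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = referenceImages
// track up to two moving images at the same time
configuration.maximumNumberOfTrackedImages = 2
sceneView.session.run(configuration)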

  • Object Tracking

Detect and track known static 3D objects in a scene. For now, the feature requires the user to scan the object first; only then can the app detect and track it (position and orientation). Also, the objects have to be no larger than a tabletop and be “well textured, rigid and non-reflective” during both the scanning and the detecting process (Apple). A short detection sketch follows below.

WWDC 2018 #ARKit #objectTracking
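
A minimal detection sketch, assuming the scanned .arobject files live in a hypothetical asset group named "scannedObjects":

guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "scannedObjects", bundle: nil) else {
    return
}
let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = referenceObjects
sceneView.session.run(configuration)
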
  • Face Tracking

Since the last release, which added position and orientation tracking of a human face, directional lighting estimation based on the detected face, and a collection of 50+ blend shapes for specific facial features, the library now adds tongue detection and separate tracking of each eye’s movement in real time, also known as gaze detection. A short sketch of the new properties follows below.

WWDC 2018 #ARKit #faceTracking
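
A minimal sketch of reading the new values from a face anchor, assuming the same ARSCNViewDelegate callback as before:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // 0.0 (tongue in) ... 1.0 (tongue fully out)
    let tongueOut = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0
    // the point the eyes converge on, in the face's coordinate space
    let gaze = faceAnchor.lookAtPoint
    print("Tongue: \(tongueOut), looking at: \(gaze)")
}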

Apple is undoubtedly working hard, and doing a great job, to deliver to the developer community the best and largest variety of AR APIs, and to users the best AR experience they could try. Although the framework is quite large and efficient and covers most of the wanted features, the detection of larger objects does need some work. There are apps that need to, let’s say, beautify random statues and buildings, and for those the position, orientation, and dimensions of the objects are mandatory. I recommend you study what the market has to offer at the moment when it comes to developing AR applications on the iOS platform and choose the solution that suits your needs.


Now, from an outsider’s point of view…

One of the most complete libraries I found is Wikitude, which offers APIs for all mobile platforms, native and hybrid, Unity, and even some smart glasses, such as the Vuzix M100 and Epson Moverio. The team behind it is constantly improving the product: so far the SDK supports many of the desired features, such as image and object recognition and tracking, ARKit and ARCore support, recognition of scenes with rooms and large objects at different resolutions and versions, and a lot more. They even managed to support the iPhone 5s for AR development, while Apple itself set the minimum at the iPhone 6s. There is a free trial version with the full experience, but the camera will always show a watermark until you upgrade to a paid plan.

Google responded, free of charge, to the huge impact that ARKit had on the iOS community by extending ARCore’s capabilities to support both mobile platforms. Although both SDKs do a great job, ARCore added Cloud Anchor capabilities (hosting/loading anchors and, when needed, resolving/posting them), meaning the same AR experience can be shared between Android and iOS devices, even if most of the rendering and scene-understanding tasks on iOS are still done by Apple’s library.

Other great projects concerning Augmented Reality development are Vuforia, Maxst, DeepAR, and ARToolkit. They are definitely worth studying!


Finally

To sum up, iOS will always be one of the most targeted platforms for applying the latest technologies, because all the tech giants are planning to invest heavily in fields like Augmented Reality, which brings so much excitement to the community. ARKit is very easy to use, but you should always keep all the important aspects in mind in order to create and maintain stunning experiences.

If you liked my article, please hit the claps button! Also, here are the links to the tutorials I’ve made for a better understanding of ARKit and ARCore:

ARKit Tutorial

ARCore Tutorial

Happy Coding!


Zipper Studios is a group of passionate engineers helping startups and well-established companies build their mobile products. Our clients are leaders in the fields of health and fitness, AI, and Machine Learning. We love to talk to like-minded people who want to innovate in the world of mobile, so drop us a line here.


Thanks to Raluca Marusca

