Nicoleta Pop
Jun 7 · 6 min read

Howdy! In this article, we will dive into some of the most exciting and emerging inventions that have taken humankind aback over roughly the last five decades. We will learn the basics of bringing Augmented Reality into our own apps and see how the features available on the iOS platform can be applied to a number of real-life use cases.

The first part of this writing will introduce some basic aspects of Augmented Reality along with some historical background. Next, we will present the rapidly rising SDK developed by Apple, namely ARKit, introduced two years ago at Apple’s WWDC, from the first version to the latest released one, along with some other libraries that might come in handy when it comes to its limitations.

Augmented Reality has been around us for a long time, and it has been incorporated into many aspects of everyday life: entertainment (Pokemon GO, Iron Man), home products (IKEA Place), healthcare, and more. The main idea of this technology, which is rapidly rising as one of the most time-saving and efficient creations, is that you can add virtual objects into real-life scenes and interact with them. For example, you could visualize which parts of Notre Dame Cathedral burned during the fire by adding “the missing parts”, or you could tackle Pikachu itself :).

Evolution of Augmented Reality

Apple’s reply to this emerging technology is called ARKit, which lets developers “build unparalleled augmented reality experiences for hundreds of millions of users on iOS”.

Let’s move on to demonstrating why the Cupertino company is so proud of what it has built and why AR apps could become much more important than any other kind of app.

The biggest advantage of ARKit is that it covers pretty much all of the stressful setup of the smartphone’s camera, as well as the configuration of the environment and its objects in real space and real time, making it a very easy and quick tool to use. The framework is able to transform any iOS device running an A9 chip or above into a markerless Augmented Reality machine.

Markerless AR device

With ARKit, it only takes a few lines of code to set up an app for AR, as the framework does the majority of the heavy lifting and the developer can focus on what is really important: creating engaging and immersive apps that entertain and ease people’s everyday tasks.
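To make “a few lines of code” concrete, here is a minimal sketch of such a setup using the SceneKit-backed AR view. The view controller name and the sceneView outlet are illustrative, not part of an actual Xcode template:

import UIKit
import ARKit

class MinimalARViewController: UIViewController {

    // ARSCNView is the SceneKit-backed AR view provided by ARKit.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // World tracking is the standard configuration for back-camera AR.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Pause the session when the view is no longer visible.
        sceneView.session.pause()
    }
}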


Layers and features of ARKit 1.0

The first version of ARKit was introduced along with iOS 11.0 and requires an A9 processor chip or newer. It is based on three main layers that, using motion data and camera images, turn your phone into a sophisticated AR device capable of mixing 2D and 3D virtual objects into its own camera feed of the world.

I. World Tracking — the library’s core functionality, which provides the device’s relative position and tracking information in the physical environment using the iOS camera and motion sensors.

II. Scene Understanding — the ability to detect surfaces/planes, estimate lighting so the virtual object’s appearance matches the physical world and, last but not least, cast rays and get intersections with real-world topology (the process of hit-testing the environment). A short light-estimation sketch follows after this list.

III. Rendering — the last functional layer of ARKit. With access to a constant stream of camera images, it provides custom AR views that can be used with SpriteKit (for 2D rendering), SceneKit (for 3D rendering) or even Metal (low-level rendering), making it very easy to create your first app.
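To make the Scene Understanding layer a bit more concrete, below is a small sketch of how the per-frame light estimate can be read and applied to a SceneKit scene. The class and outlet names are illustrative assumptions:

import UIKit
import ARKit

class LightEstimationViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var sceneView: ARSCNView!   // illustrative outlet name

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Light estimation is on by default for world tracking,
        // but it can also be set explicitly on the configuration.
        let configuration = ARWorldTrackingConfiguration()
        configuration.isLightEstimationEnabled = true

        sceneView.session.delegate = self
        sceneView.session.run(configuration)
    }

    // Read the estimate from every frame and adjust the SceneKit
    // lighting environment to roughly match the physical world.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let lightEstimate = frame.lightEstimate else { return }
        // ambientIntensity is expressed in lumens; ~1000 corresponds to a well-lit scene.
        sceneView.scene.lightingEnvironment.intensity = lightEstimate.ambientIntensity / 1000.0
    }
}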


In terms of technical implementation, the main class used to enable world tracking is ARSession, which needs its ARConfiguration set to an ARWorldTrackingConfiguration for “experiences that augment the user’s view of the world around them through the device’s back camera”. If developers use the built-in AR views, those views already include a shared ARSession instance. Otherwise, it is mandatory to instantiate and maintain the session object themselves.

if ARWorldTrackingConfiguration.isSupported {
    configuration = ARWorldTrackingConfiguration()
} else {
    // Fall back to orientation-only (3DoF) tracking on devices without an A9 chip
    configuration = AROrientationTrackingConfiguration()
}

The base ARConfiguration class has subclasses for both world tracking and basic orientation tracking; checking isSupported tells you whether the chosen configuration is available on the current device, and each subclass enables or disables the features specific to it. Session management is also possible: running (live world tracking), pausing (when the view is no longer visible) and handling session interruptions.

// run session
session.run(configuration)

// pause session
session.pause()

// resume session
session.run(session.configuration)

// change configuration
session.run(otherConfiguration)

// reset current device tracking and remove currently added points (anchors)
session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

// session interruption handling
func sessionWasInterrupted(_ session: ARSession)
func sessionInterruptionEnded(_ session: ARSession)

Through the ARFrame class, one can process every camera frame, which also carries the position-tracking information maintained by the session object.

// access the latest frame
func session(_: ARSession, didUpdate: ARFrame)

Also, ARAnchor objects can be added to the current session; these are points in real space carrying position and orientation, to which we can attach SCNNode objects.

func session(_: ARSession, didAdd: [ARAnchor])
func renderer(_: SCNSceneRenderer, nodeFor: ARAnchor) -> SCNNode?
func renderer(_: SCNSceneRenderer, didAdd: SCNNode, for: ARAnchor)

These are used to add geometries (SCNGeometry subclasses such as a box or a sphere), styled with materials, lighting, and textures, resulting in a genuine 2D or 3D object. Two basic ways of attaching SCNNode objects directly to the current scene are shown below:

I.
let scene = SCNScene(named: "art.scnassets/ship.scn")!
sceneView.scene = scene

II.
let sphere = SCNSphere(radius: 0.2)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "grid.png")
sphere.materials = [material]

let node = SCNNode()
node.position = SCNVector3(0, 0.1, -0.5)
node.geometry = sphere
sceneView.scene.rootNode.addChildNode(node)

The above is an example of a basic app in which a user can add a virtual object to the real-world environment. The virtual object in the first example is the default 3D model attached to the SCNScene in every newly created ARKit project template. The second example adds a sphere at specific 3D coordinates, but it is also possible to place it at a point obtained from a hit-test against a detected horizontal plane.

let point = CGPoint(x: 1.0, y: 3.0)
let results = frame.hitTest(point, types: [.existingPlane, .existingPlaneUsingExtent, .estimatedHorizontalPlane])

if let firstResult = results.first {
    let anchor = ARAnchor(transform: firstResult.worldTransform)
    session.add(anchor: anchor)
}

Hence, the main capability of the first version of ARKit is horizontal plane detection, with a dedicated anchor type to represent it, namely ARPlaneAnchor. It also supports face tracking (ARFaceAnchor) on devices with a TrueDepth front camera.

configuration.planeDetection = .horizontal
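As a rough sketch of how a detected plane might be handled, the renderer(_:didAdd:for:) callback can check whether the new anchor is an ARPlaneAnchor and attach a simple semi-transparent plane to visualize it. The ViewController name, colors and sizes below are illustrative, and the callback assumes sceneView.delegate = self has been set:

import UIKit
import ARKit
import SceneKit

extension ViewController: ARSCNViewDelegate {

    // Called when ARKit adds a node for a newly detected anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // Build a SceneKit plane matching the detected extent.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.3)

        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
        // SCNPlane is vertical by default; rotate it to lie flat on the detected surface.
        planeNode.eulerAngles.x = -.pi / 2

        node.addChildNode(planeNode)
    }
}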

One thing to remember is that, as with many other libraries, ARKit handles events through delegates, both for the scene (rendering) and for the session object (session management). They are well explained in Apple’s official documentation.
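For illustration, here is a brief sketch of the session-side callbacks. The handler class name is an assumption, and only a subset of the available delegate methods is shown:

import ARKit

// Illustrative session delegate; assign it with session.delegate = SessionEventHandler()
final class SessionEventHandler: NSObject, ARSessionDelegate {

    // Reports tracking quality changes (.notAvailable, .limited, .normal).
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        print("Tracking state changed: \(camera.trackingState)")
    }

    // Reports session failures, e.g. when camera access is denied.
    func session(_ session: ARSession, didFailWithError error: Error) {
        print("Session failed: \(error.localizedDescription)")
    }

    func sessionWasInterrupted(_ session: ARSession) {
        print("Session interrupted (e.g. app moved to the background)")
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        print("Interruption ended; consider resetting tracking")
    }
}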

This was the first part of the theoretical aspects of Augmented Reality from Apple’s perspective. In the second part, we’ll discuss the most important updates the company has brought to this technology, in both the next and the latest version of ARKit.

Stay tuned!


Zipper Studios is a group of passionate engineers helping startups and well-established companies build their mobile products. Our clients are leaders in the fields of health and fitness, AI, and Machine Learning. We love to talk to like-minded people who want to innovate in the world of mobile, so drop us a line here.


Thanks to Liviu Iuga, Filip Marusca, and Raluca Marusca

