Get started and learn how to make your first ARKit application


Apple’s ARKit API makes the exciting world of Augmented Reality available to every iOS developer, but where do you get started? Come with us on an Augmented Reality journey to build an AR solar system and learn how to make your first ARKit application.

This post is part of our multi-part series on ARKit, where we talk about designing for AR, building a demo app, and exploring and testing many of the features of ARKit. We previously wrote about designing 3D models for AR apps.

Introduction

AR is the core technology behind amazing apps such as Pokémon Go, Snapchat’s animated emojis, and Instagram’s 3D stickers. Apple’s announcement of ARKit at WWDC in 2017 has already resulted in some impressive software, with a mix of fun and practical apps providing something for everyone. We wanted to have the opportunity to play around with it and see what incredible things we could build with it.

Over the past year Novoda have been investigating the features of ARKit, seeing what we could build and where the limitations of the technology lie. We had tons of fun building things with it and wanted to share some of our findings.

Setting a house as a hat is the best way to test location placement, they say

We will be using a custom 3D model created in the design part of this series for this demo. Even if you cannot create your own custom model, you could use the simple AR cube that Apple provides or download a model from SketchUp or Google’s Poly.

The first thing to understand is how AR perceives the world through the device camera: it translates the camera input into a scene composed of planes, light sources, a virtual camera, and Feature Points.

ARKit recognizes notable features in the scene image, tracks differences in the positions of those features across video frames, and compares that information with motion sensing data. The result is a high-precision model of the device’s position and motion that also analyzes and understands the contents of a scene.

If you want a more in-depth analysis I highly recommend reading Apple’s About Augmented Reality page or watching their WWDC 2017 talk on ARKit. I would also recommend watching the Understanding ARKit Tracking and Detection talk and the ARKit 2 video from WWDC 2018.

How a model with planes and a light source looks in Xcode. This will be added to an AR scene

With World Tracking and Plane Detection, ARKit is able to create Feature Points. Feature Points are used in ARKit to place models in the scene and to keep those models anchored to their “surroundings”. As Apple explains:

These points represent notable features detected in the camera image. Their positions in 3D world coordinate space are extrapolated as part of the image analysis that ARKit performs in order to accurately track the device’s position, orientation, and movement. Taken together, these points loosely correlate to the contours of real-world objects in view of the camera.

Using ARView and ARSCNView

To build the AR app we followed a series of tutorials (AppCoda’s ARKit Introduction, AppCoda’s ARKit with 3D objects, Pusher’s Building AR with ARKit and MarkDaws’ AR by Example), as well as the documentation on AR classes that Apple provides. Since most of the basic setup has already been covered by Apple and by other tutorials we will not post all the code here; instead we will go through some of the logic, issues and solutions we found along the way. All the source code for this and all the following posts related to this project can be found on our GitHub.

The first decision to make when creating an ARKit project is whether to use a standard one-view app template or the AR template Apple provides. We have tried both and found little difference when it came to simple apps/demos. The AR template is set up to use storyboards and has a pre-configured ARSCNView with a model of a plane. If you like playing around with working code before you write your own, we would recommend the AR template, especially as it comes with some clear explanatory comments. Alternatively, if you like having control of every piece of code it is obviously better to start from scratch. For this demo we used the template and storyboards but even if you create the project from scratch you should be able to follow along.

There are some key points every AR app needs:

  • You will need an ARSCNView. Most people name their instance sceneView. This is where all the AR magic happens. You can set it to occupy the whole screen or just a part of the UI.
  • You need to implement the ARSCNViewDelegate protocol, which includes the methods used to render the models into the view. The view controller that owns the sceneView will implement this protocol and be the delegate of the view:
sceneView.delegate = self
  • An ARConfiguration needs to be set up with the type of plane detection you want (we use horizontal) and then passed to the sceneView session’s run() method to actually start the AR scene.
  • On viewWillDisappear we pause the sceneView session to stop the world tracking and device motion tracking the phone performs while AR is running. This allows the device to free up resources.
The session will run in viewWillAppear and pause once we are no longer in the AR experience, as sketched below
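As a rough sketch of that setup (assuming a storyboard-connected ARSCNView outlet named sceneView, as in Apple’s template), the view controller could look something like this:

import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    // Outlet to the ARSCNView set up in the storyboard (name assumed for this sketch)
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Configure world tracking with horizontal plane detection and start the session
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Pause the session to stop world and motion tracking and free up resources
        sceneView.session.pause()
    }
}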

This is the basic configuration you need for every AR scene. None of this code will add any AR object just yet though; it only sets up the view.

Apple’s pre-made template then sets up a scene directly by initialising it with a model asset at launch. That is straightforward and works well if you simply want to open the app and have a model appear in the scene. If you want to let the user choose where to place the object (for example by tapping) then you’ll need to put in a little more work.

Before we move forward I highly recommend you add this to the viewDidLoad method of your view controller:

This is how the viewDidLoad looks with debug options and delegate set up
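Something like the following, as a minimal sketch (the option names come from ARSCNDebugOptions):

override func viewDidLoad() {
    super.viewDidLoad()

    // Become the delegate so we receive rendering callbacks such as renderer(_:nodeFor:)
    sceneView.delegate = self

    // Show the detected feature points and the world origin axes while debugging
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                              ARSCNDebugOptions.showWorldOrigin]
}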

Enabling these options will allow you to see the recognised Feature Points and the XYZ axes of the AR scene. If there is any bug with your model these features are one of the few ways you can debug AR. We’ll dig deeper into how you can test and debug AR and ML applications in an upcoming article of this series.

With the feature points debug option enabled you are able to see what ARKit is recognising as you move around your plane (the yellow dots)

Now for the fun part: adding your 3D model to the sceneView! Instead of creating a scene with an asset you can create an SCNNode and then place that node onto the sceneView at a specific point. We are using nodes here instead of an SCNScene because an SCNScene object occupies the entire sceneView, but we want our model at a specific point in the scene.

To create the SCNNode we first load a temporary SCNScene with an asset and then save the scene’s child node as the node we are going to use. We do this because you can’t initialise a node with an asset, but you can load a node from a loaded scene if you search for the node by name.

Be careful when loading the scene: it takes a few seconds and some processing power, so I recommend doing it on load and storing the nodes you want to show in the scene
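As a sketch, assuming a placeholder asset file named "art.scnassets/planet.scn" containing a node named "planet" (swap in your own file and node names):

// Load a temporary scene from the asset file, then pull out the node we want by name.
func loadModelNode() -> SCNNode? {
    guard let tempScene = SCNScene(named: "art.scnassets/planet.scn") else { return nil }
    // recursively: true searches the whole node hierarchy of the loaded scene
    return tempScene.rootNode.childNode(withName: "planet", recursively: true)
}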

Note that AssetName here is not the fileName of the asset but rather the node name of the model itself. You can find out what node name your model has just by opening the .dae or .scn file in Xcode and toggling the Scene Graph view, which will reveal the layer list of the file.

How to set up the name of the node on the scene

After getting the node, the next step is adding it to the scene. We found two different ways to do it, and choosing one or the other depends on how you want your app to work.

First, we need to know where to render our model within the 3D world. For our demo we get the location by getting the user tap CGPoint from the touchesBegan method:

Getting the closest AR point to the 2D touch location from the user

Getting a location CGPoint and translating it into a float4x4 matrix with the worldTransform property.

The location variable we are getting from the above example is a 2D point which we need to position in the 3D AR scene. This is where the Feature Points mentioned above come into play. They are used to extrapolate the z-coordinate of the anchor by finding the closest Feature Point to the tap location.

sceneView.hitTest(location, types: .featurePoint)

You can also use the cases .existingPlaneUsingExtent and .estimatedHorizontalPlane to get the positions of the planes when using planeDetection

This method gives us an array of ARHitTestResult objects, sorted from nearest to farthest from the camera, so the first result in the array is the closest point. We can then use
let transformHit = hit.worldTransform
which gives us a float4x4 matrix describing the real-world location of the 2D touch point.
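Putting those pieces together, the touch handling could look roughly like this (storing the matrix in a transformHit property is our own choice for this sketch):

// Keeps the latest hit transform around for later use (assumed property)
var transformHit: simd_float4x4?

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesBegan(touches, with: event)

    // The 2D point the user tapped, in the sceneView's coordinate space
    guard let touch = touches.first else { return }
    let location = touch.location(in: sceneView)

    // Ask ARKit for the closest feature point along the ray through that 2D point
    guard let hit = sceneView.hitTest(location, types: .featurePoint).first else { return }

    // worldTransform is a float4x4 describing the 3D position of that feature point
    transformHit = hit.worldTransform
}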

Plane Detection

Now that we have the location of the touch in the 3D world, we can use it to place our object. We can add the model to the scene in two different ways, and choosing one over the other depends on how we have set up our ARSession and whether we have planeDetection enabled. That is because if you run your configuration with planeDetection enabled, set to either horizontal or vertical detection, the ARSCNView will continuously detect the environment and render any changes into the sceneView.

When you run a world-tracking AR session whose planeDetection option is enabled, the session automatically adds to its list of anchors an ARPlaneAnchor object for each flat surface ARKit detects with the rear-facing camera. Each plane anchor provides information about the estimated position and shape of the surface.

We can enable planeDetection in viewWillAppear when adding an ARWorldTrackingConfiguration to the ARSession:

configuration.planeDetection = .horizontal

So while planeDetection is on we can add a new node into the scene by creating a new SCNNode from our scene object and changing the node's position, an SCNVector3, to where we want the model to be in the view. We then add this node as a child node of the sceneView's root node, and since planeDetection is enabled the AR framework will automatically pick up the new anchor and render it in the scene.

Using the same method of getting the 3D location, we add the node we created before to the sceneView
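As a sketch, assuming the node loaded earlier and the transformHit matrix from the touch handler (the translation into an SCNVector3 is done inline here; the extension version is shown further down):

func place(_ node: SCNNode, at transform: simd_float4x4) {
    // The fourth column of the transform matrix holds the translation, i.e. the 3D position
    node.position = SCNVector3(transform.columns.3.x,
                               transform.columns.3.y,
                               transform.columns.3.z)

    // With planeDetection enabled, the framework renders the new child node automatically
    sceneView.scene.rootNode.addChildNode(node)
}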

You can use the .existingPlaneUsingExtent or .estimatedHorizontalPlane cases instead of .featurePoint when trying to find where to place the model. The results differ in each case, and the right choice depends on where and how you want to place your object: existing planes give you a point fixed on a detected plane, like a floor or a table, while feature points give a more specific location around the objects being tracked in the real environment.

To get the correct node position we need to take the float4x4 matrix we created before (transformHit in our example) and translate it into a float3. To do that translation we used an extension that converts our float4x4 matrix into a float3.

Translation extension that converts a float4x4 matrix to a float3
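A sketch of such an extension (the property name translation is ours; float3 is spelled SIMD3<Float> in current Swift):

import simd

extension float4x4 {
    // The translation component of a transform matrix lives in its fourth column
    var translation: SIMD3<Float> {
        return SIMD3<Float>(columns.3.x, columns.3.y, columns.3.z)
    }
}

// Usage, reusing the transformHit matrix from earlier:
// node.simdPosition = transformHit.translation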

Tada 🎉 we just successfully added a 3D model into an AR Scene!

Anchoring

Having the app continuously detect planes is quite resource heavy. Apple recommends disabling planeDetection after you are done detecting the scene. But as we mentioned before, if planeDetection is not enabled the AR scene won't pick up your newly added child node and render it onto the sceneView.

So if you want to be able to add new nodes and models to a scene after you are done detecting planes you will need to add a new ARAnchor manually.

To create an ARAnchor from the tap location we will use the same transformHit float4x4 matrix we created before, without needing to translate it this time, since ARAnchor and ARHitTestResult use the same float4x4 coordinate representation.

Adding an Anchor to a scene to render an object
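A sketch of that step, reusing the transformHit matrix from the touch handler:

func addAnchor(at transform: simd_float4x4) {
    // Adding the anchor makes ARKit call renderer(_:nodeFor:) on the delegate for it
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}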

By adding the new anchor ourselves instead of relying on the session configuration, we trigger the delegate's renderer(_:nodeFor:) function, which returns the node to be rendered for a particular anchor.

Adding a node for the anchor through the renderer method

We need to double-check that the anchor triggering the renderer function is the anchor we just added and not an ARPlaneAnchor, as in the sketch below.
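A sketch of that delegate method, assuming the model node loaded earlier is stored in a modelNode property on the view controller:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Ignore plane anchors added by plane detection; we only care about our own anchor
    guard !(anchor is ARPlaneAnchor) else { return nil }

    // Return the node to render; ARKit positions it according to the anchor's transform
    return modelNode // assumed property holding the SCNNode we loaded earlier
}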

With this in place our model will be rendered at the tap location of the sceneView just as seamlessly as when we had planeDetection enabled.

Tada 🎉 once again we successfully added a 3D model into an AR Scene, this time without relying on plane detection!

Conclusions

To summarise, in this post we went through the basics of Augmented Reality and Apple’s ARKit. We applied the lessons learned and crafted an application that adds our 3D models to the world using two different methods.

The code for this demo can be found on Novoda’s GitHub and you can also check our ARDemoApp repo, where you can import your own models into an AR Scene without having to write a line of code.

If you enjoyed this post make sure to check the rest of the series!

Have any comments or questions? Hit us up on Twitter @bertadevant @KaraviasD


Originally published at blog.novoda.com on January 21, 2019.

Berta Devant

Written by

iOS craftswoman @novoda , swift ❤️, director of @WWCodeBarcelona unequivocally feminist and queer 🏳️‍🌈
