Dan Wyszynski
Mar 30, 2018

Augmented reality has been around for many years, steadily increasing in accuracy for target-based experiences, but the popularity of games such as Pokémon GO and the release of ARKit by Apple and ARCore by Google for Android have now brought AR into the hands of most consumers.

With target-less AR that is cognizant of the world surrounding you, we have the opportunity to craft engaging new AR experiences. With new abilities, though, come new challenges in how we create these experiences. In the next few posts, we’ll establish a pipeline of sorts: taking 3D models, processing them, and using them in our own AR experiences.

The project for this post can be found at https://github.com/AbovegroundDan/ARTutorial_Part1

Basics

Our ARKit application will be using SceneKit for its rendering. SceneKit is the native 3D rendering engine for iOS, with direct hooks into ARKit.

3D model format

There are a few native formats available in SceneKit that we can use to load 3D models, but we will be concentrating on the COLLADA Digital Asset Exchange (DAE) format. The DAE format allows us to have multiple objects in the scene file, including cameras and lights as well as any geometry.

SceneKit has routines for loading scene files, and we will be writing a couple of extensions to make loading simpler.

To edit our 3D models, we will use Blender, since it’s free and serves our needs.

Sizing and units

SceneKit uses meters as its unit of measurement for sizing and physics simulations. So any mention of size, including in the Scene Editor in Xcode, is always in meters.

In Blender, we also need to make sure we’re working in meters. Blender’s default unit is meters, but it’s always safer to check than to deal with a giant or minuscule model in your scene. In the right-hand panel in Blender, choose the Scene tab and make sure the Units are set to the default or to Meters.

Coordinate system

SceneKit operates in a “Y-up” system, meaning the Y-axis is pointing up, whereas Blender works with the Z-axis pointing up. We need to be aware of this when exporting our scene and when loading it into SceneKit. Usually this isn’t a problem, as the exporter normally takes care of the conversion. Depending on whether you’re using a custom exporter or are working in a different coordinate system, you may need to rotate the model inside of your modeling application.

The Pipeline

The secret of being able to work fast when getting models from your artists, or when you’re doing everything yourself, is to have a good workflow. In this case, we need a good pipeline for getting models into our experience with as little massaging and processing as we can get away with.

Blender processing and exporting

The very first thing we have to think about is whether we’re going to use the origin of the models from the file in our AR world. In this example we’re going to create our AR scene directly from the 3D model file, so we won’t be changing too much.

One thing to do for simpler importing later on is to make sure that file paths are relative, so that when you import this file again or transfer it to another computer, the paths to the textures will still point to valid files.

Normalizing

I recommend normalizing the current position, rotation and scale of each model to their base values. For instance, if in the process of modeling you scaled the model to (0.87, 0.92, 0.87) to better fit the scene, then applying the scale will make the current scale (1.0, 1.0, 1.0) but keep the model at the size you had.

This may interfere with animations which are keyed to certain values, but for static models it works well, and lets us make certain assumptions in our code if we are animating scale or rotation values.

To normalize the current values, we have to use Blender’s Apply options to the object. Let’s open the properties panel by hitting the + button next to the scene hierarchy panel. Selecting the model shows us the following properties.

We can see here that the object has a rotation and a scale. We’ll remove the rotation on the objects but keep the scale at its current values (this is how our artist intended the object to be displayed); what we’ll do is make these the default, or identity, values.

With our object selected, we choose the Object->Apply->Rotation & Scale menu item from the bottom menu bar.

After that is done, we can see that our rotation values are (0, 0, 0) and our scale is (1.0, 1.0, 1.0), exactly how we want it.

In this particular case, the objects are separate mesh objects. What we want to do now is group them together so we can manipulate them as a whole. Let’s create a parent transform and make all these objects its children. From the Add menu on the bottom of the 3D view, choose Add->Empty->Plain Axes.

In the Hierarchy view, rename the Empty object to Sphere, and drag each of the ORB objects to the Sphere, so that they become children of Sphere.

Export

We’re now ready to use this model. Choose File->Export->Collada and make sure the following texture options are selected as shown:

This will help keep any textures attached to our models. All the other default options should work as-is. Save the file, and let’s get coding.

The Code

ARKit project setup

Let’s begin the Xcode part of this. For the sake of brevity, we’ll start by using the ARKit Xcode template. Open Xcode, then begin a new project by selecting File->New->Project or using ⇧⌘N, then choosing the Augmented Reality App template.

Enter a name for your project, make sure the code signing and bundle name are correct, click Next, then choose a directory in which to create the project.

With your project created, you can run the app and see the default ship that the sample project displays in AR. One thing to note is that the ship is placed relative to where the camera was facing when the app started. Kill the app and run it a couple more times, each time facing a different direction. The ship always shows up in front of the camera, at the same distance. This is because the ship is positioned at -0.8 meters on the Z-axis in the scene, and the scene is created when the camera opens and the session begins. If you look at the “ship.scn” file and drill down to the shipMesh object, you’ll see that it is placed at (X: 0.0, Y: 0.1, Z: -0.8). Note that the Z-coordinate is negative; that’s because negative Z points forward along the camera’s axis.

Now that we understand how the sample app works, the next thing we’re going to do is delete the scene assets (the ship) and use our own models.

Let’s make a new group in our project and call it Models. In here we will put the sphere.dae file that we modified earlier. Drag and drop our model file and its texture into this group, or Control-click the group name, choose Add Files to Project, and select our files.

SceneKit import

We’re going to create a custom scene in our code, load the Collada file with the model, and put those models into our scene. This allows us to use more than one file if we want to load multiple models into our scene. It also gives us control over what we add, since there are often things in the file that we don’t want to import, such as additional cameras, lights or empty objects. Our goal is to minimize the amount of cleanup we have to do on the source files in the SceneKit editor.

Let’s check out our model file and make sure the rotations and scale are correct, and all the textures are properly connected. Select the model file, then open up the Scene Graph View by clicking on the tab button on the bottom left of the scene view. Select the model and the Node Inspector in the properties panel.

Here we can see that all our values are correct. Our rotations are 0 on all axes, and our scale is at 1.0, just as we intended.

We can also see that our materials came across fine, but let’s switch over to the Materials Inspector in the properties panel to see how things look anyway. The Materials Inspector is the 5th icon from the left, the one that looks like a sphere. If you find yourself with a model whose textures are missing, simply drag the texture file from the project navigator to the proper texture channel. Color texture maps generally go into the Diffuse channel.

Scene creation

Now that we have our model file ready to go, we’re going to create a new scene. This will set us up later on to customize the scene and not have everything in one file.

Create a group in our project called Scenes, and create a new Swift file in that group called HoverScene. We’ll define a new struct which will contain our scene. We’ll initialize it and add a couple of lights, an ambient light, and a directional light.
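Here’s a minimal sketch of what that struct could look like (the setDefaults name and the light colors and angles are just one reasonable setup, not the only one):

import SceneKit
import UIKit

struct HoverScene {

    var scene: SCNScene?

    init() {
        scene = SCNScene()
        setDefaults()
    }

    func setDefaults() {
        guard let scene = self.scene else { return }

        // Ambient light so the unlit side of our models isn't pure black
        let ambientNode = SCNNode()
        ambientNode.light = SCNLight()
        ambientNode.light?.type = .ambient
        ambientNode.light?.color = UIColor(white: 0.6, alpha: 1.0)
        scene.rootNode.addChildNode(ambientNode)

        // Directional light to give the models some shading
        let directionalNode = SCNNode()
        directionalNode.light = SCNLight()
        directionalNode.light?.type = .directional
        directionalNode.light?.color = UIColor(white: 0.8, alpha: 1.0)
        // Angle the light down and to the side (these angles are arbitrary)
        directionalNode.eulerAngles = SCNVector3(-Float.pi / 4, Float.pi / 4, 0)
        scene.rootNode.addChildNode(directionalNode)
    }
}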

Now we’ll change the default View Controller to load up our new scene instead of assigning the contents of the DAE file as the scene.

In our ViewController class, add the following line under the sceneView declaration on top of the class:
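Assuming we just keep an optional reference to our custom scene, that line is:

var scene: HoverScene?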

Next, remove the line that loads the scene:

let scene = SCNScene(named: "art.scnassets/ship.scn")!

And replace it with our own scene creation code:
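Something along these lines works, using the HoverScene struct from above (if you do it this way, the template’s existing sceneView.scene = scene line can be removed, since we assign the scene here):

// Create our custom scene and hand its SCNScene over to the ARSCNView
let hoverScene = HoverScene()
self.scene = hoverScene
if let scene = hoverScene.scene {
    sceneView.scene = scene
}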

Model loading

Let’s create a loading helper function to get all nodes out of a file and into the scene of our choice. Since we’re loading nodes, it makes sense to create an extension to SCNNode. Begin by creating a new group in our project called Utilities. In this group we’ll add a new Swift file called Node+Extensions.swift, whose contents will be the following:
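Here is one possible version (the allNodes(from:) name is just a suggestion; what matters is that it pulls every child node out of a scene file so we can add them wherever we like):

import SceneKit

extension SCNNode {

    // Loads a scene file from the main bundle (for example our "sphere.dae")
    // and returns all of its top-level nodes wrapped in a single parent node,
    // so the whole model can be positioned and added as one unit.
    static func allNodes(from file: String) -> SCNNode? {
        guard let scene = SCNScene(named: file) else { return nil }

        let parentNode = SCNNode()
        for child in scene.rootNode.childNodes {
            parentNode.addChildNode(child)
        }
        return parentNode
    }
}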

That’s quite a bit of setup code, but it will give us great flexibility going forward. Just a bit more to add, and we’ll be able to put all this code to the test.

Object placement

Let’s create a new method in our HoverScene.swift file, called addSphere, that takes in a position of type SCNVector3, which we’ll use to place our item in 3D space. We’ll make use of our handy node loading extension which will keep our code tidy and reusable.
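Inside HoverScene, that method might look like this (assuming the allNodes(from:) helper sketched above and the sphere.dae file we added to the Models group):

func addSphere(position: SCNVector3) {
    guard let scene = self.scene else { return }

    // Load every node from our exported Collada file as one parent node
    guard let sphereNode = SCNNode.allNodes(from: "sphere.dae") else { return }

    sphereNode.position = position
    scene.rootNode.addChildNode(sphereNode)
}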

Now that we have the code written to put something in our scene, we need to know when to call this. One way that will give us some flexibility later is to get a callback from the system when each frame is ready to be shown on the screen. To achieve this, we will hook into the scene renderer’s updateAtTime delegate callback. When we get the first frame, we will set a boolean that indicates that the scene has been rendered for the first time, and is ready for us to work with.

Let’s add a Bool to the ViewController called didInitializeScene and set it to false initially.
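That’s a single line at the top of the class:

var didInitializeScene: Bool = false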

Next, let’s add the render delegate callback.
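A sketch of that callback, following the steps described in the next paragraph (it assumes the scene property and didInitializeScene flag from above; the ARKit template already sets sceneView.delegate = self, so this delegate method will be called every frame):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if !didInitializeScene {
        if let camera = sceneView.session.currentFrame?.camera {
            didInitializeScene = true

            // Build a transform 1 meter in front of the camera
            // (negative Z points forward in camera space)
            var translation = matrix_identity_float4x4
            translation.columns.3.z = -1.0
            let transform = camera.transform * translation

            // Grab just the position from the transform's last column
            let position = SCNVector3(transform.columns.3.x,
                                      transform.columns.3.y,
                                      transform.columns.3.z)
            scene?.addSphere(position: position)
        }
    }
}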

Let’s go over what we did here. First, if the scene hasn’t been initialized, we see if we can get the current frame’s camera. That is our scene’s camera, and it lets us know that we’re actively viewing the world. From that camera, we get its transform property, which gives us its position and rotation. We create a new transform translated by -1.0 on the Z-axis, which means moving it forward 1 unit in SceneKit’s world units (1 meter). We multiply the camera’s transform by that translated transform, essentially placing an object 1 meter in front of the camera, wherever it may be pointing.

We then use that transform and grab only the necessary XYZ values that we need to create a SCNVector3. This gets passed to our addSphere function to place our sphere in the world.

Point your phone at something, and run the project. If all goes well, you should see our model in front of you! Feel free to walk around it, and see it from all angles.

Extra credit

While it’s great that we have the sphere appear in front of wherever the camera happens to be pointing when you run the app, it’s not very practical. Let’s change that creation code a little bit to place the sphere when you tap on the screen.

Create a gesture recognizer in your ViewController’s viewDidLoad() method:
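Something like this will do (didTapScreen is the handler we’ll write next):

// Route taps on the AR view to our didTapScreen method
let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(didTapScreen))
sceneView.addGestureRecognizer(tapRecognizer)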

Next, create our didTapScreen method:
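A sketch, reusing the same placement math from the render callback:

@objc func didTapScreen(recognizer: UITapGestureRecognizer) {
    // Only place objects once the scene has rendered at least one frame
    guard didInitializeScene,
          let camera = sceneView.session.currentFrame?.camera else { return }

    // Same as before: 1 meter in front of wherever the camera is pointing
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -1.0
    let transform = camera.transform * translation
    let position = SCNVector3(transform.columns.3.x,
                              transform.columns.3.y,
                              transform.columns.3.z)
    scene?.addSphere(position: position)
}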

This is almost the same code as before, except in this instance, we are checking that our scene is, in fact, initialized before we allow the rest of the code to execute. We also need to modify our updateAtTime method to the following:
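A minimal version: it now only flips the flag, and placement happens on tap instead.

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if !didInitializeScene {
        if sceneView.session.currentFrame?.camera != nil {
            didInitializeScene = true
        }
    }
}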

That’s it. Run the app again and you’ll see that nothing shows initially, as we intended. But now you are in full control, and can place our spheres wherever you wish by tapping on the screen. Run around your room or office and place them everywhere!

In part 2 of this series, we will go over some SceneKit fundamentals, including actions, animations and working with the camera. We’ll also get into some details about how we can react to user interaction in our augmented reality scene.
