Dan Wyszynski
Jun 22, 2018 · 10 min read

In Part 2 of the series, we built on our AR scene by adding some interactivity and getting more familiar with SceneKit. We also looked at the rendering camera and used its position (i.e. the user’s position in the real world) to drive an animation on our sphere object.

In this post we’re going to begin detecting horizontal planes, and attach objects to these planes.

The code for this tutorial can be found at: https://github.com/AbovegroundDan/ARTutorial_Part3

Detecting Planes

Detecting the plane

ARKit provides full lifecycle callbacks for when a plane is detected, when a plane is updated, and when a plane is removed, by way of the didAdd/didUpdate/didRemove node callbacks:
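In code, these arrive through the ARSCNViewDelegate methods. A rough sketch of the three callbacks as they'd appear in our ViewController (the bodies get filled in as we go):

```swift
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Called when ARKit creates a node for a newly detected anchor
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Called when ARKit refines an existing anchor (size or position changes)
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        // Called when ARKit removes an anchor and its node
    }
}
```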

By checking whether the anchor passed in is of type ARPlaneAnchor, we can tell whether the anchor that was detected is a surface or not.

The plane anchor will also have information on whether the detected plane is oriented horizontally or vertically, depending on the options you passed in when configuring the planeDetection property in the ARWorldTrackingConfiguration when creating the session.

Rendering a representative plane

To visualize the plane that we have detected, we will create a simple translucent grid that we will place at the position of the detected plane. The grid will be of the same size and orientation as that plane.

First, we’ll create the Plane class that we will use to show the grid at the anchor that is passed in on the didAdd callback.

In our Objects group, create a new file called Plane.swift. In it, we’ll add a custom initializer that takes the plane anchor handed to us in the didAdd callback.
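Here's a sketch of what Plane.swift can look like, assuming an SCNNode subclass and a grid texture named "grid" in the asset catalog; the planeGeometry and planeNode property names are this sketch's own:

```swift
import ARKit
import SceneKit

class Plane: SCNNode {
    let anchor: ARPlaneAnchor
    var planeGeometry: SCNPlane!
    var planeNode: SCNNode!

    init(anchor: ARPlaneAnchor) {
        // 1. Save the anchor that we belong to
        self.anchor = anchor
        super.init()

        // 2. Load the grid image that we'll use as the texture
        let gridImage = UIImage(named: "grid")

        // 3. Create the plane geometry, sized to the anchor's extent
        planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x),
                                 height: CGFloat(anchor.extent.z))

        // 4. Apply the texture
        let material = SCNMaterial()
        material.diffuse.contents = gridImage
        planeGeometry.materials = [material]

        // 5. Position the node slightly below the origin and lay it flat
        planeNode = SCNNode(geometry: planeGeometry)
        planeNode.position = SCNVector3(anchor.center.x, -0.002, anchor.center.z)
        planeNode.eulerAngles.x = -Float.pi / 2
        addChildNode(planeNode)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```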

The steps here are simple:

  • Save the anchor that we belong to
  • Load up our grid image that we’ll use as the texture
  • Create the plane geometry
  • Apply the texture
  • Position the node. We position the node slightly underneath the origin, so that if we add objects at a Y value of 0 (that is, at a height of 0 above the floor), the plane won’t interfere with the visuals of the new object.

Now, all we need to do is figure out where we want to put this plane. Let’s create the addNode method to receive the callback from ARKit when it finds a plane anchor.

Since the didAdd callback hands us plain ARAnchor objects, we need to check whether the passed-in anchor is of type ARPlaneAnchor. If it is, we call our addPlane method, passing in the node that ARKit created and added, as well as the plane anchor, which contains metadata describing the detected plane. We want to do this on the main thread, so we wrap the call in a DispatchQueue async block.
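A minimal version of that callback looks something like this:

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Only plane anchors interest us here
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    DispatchQueue.main.async {
        self.addPlane(node: node, anchor: planeAnchor)
    }
}
```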

In our addPlane method, what we want to do is create our plane, and then add it as a child node to the node that ARKit created. This will attach our newly created plane to the node that ARKit is controlling.
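Something along these lines does it:

```swift
func addPlane(node: SCNNode, anchor: ARPlaneAnchor) {
    // Create our visual Plane and attach it to the node ARKit is controlling
    let plane = Plane(anchor: anchor)
    node.addChildNode(plane)
}
```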

Before we can build and run the app, we need to tell ARKit that we want to track planes. Previously in our project we weren’t concerned with tracking planes, so we didn’t have to do anything special. Now, however, we want ARKit to call our didAdd and didUpdate methods when it finds planes, so we need to add a property to the session configuration that specifies what kind of plane we want to receive updates on.

In our viewWillAppear method, where we create the configuration, we will request information on horizontal planes.
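Something like this, assuming the ARSCNView outlet is called sceneView as in the earlier parts:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    let configuration = ARWorldTrackingConfiguration()
    // Ask ARKit to detect horizontal planes and report them through the delegate callbacks
    configuration.planeDetection = .horizontal

    sceneView.session.run(configuration)
}
```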

Optionally, and this is something I like to turn on while developing, we can enable the debug features that visibly show detected feature points and the world origin.

In our viewDidLoad, after the other debug option we turned on to show statistics, add the following line:
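Both flags live in ARSCNDebugOptions:

```swift
sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                          ARSCNDebugOptions.showWorldOrigin]
```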

Now we’re ready to build and run. Point your device to the floor and start scanning. You should see the point cloud appear as yellow dots, and after some time, you should see a plane detected.

One thing to notice is that when the plane is first detected, it may seem to jump a bit. This is ARKit refining its estimate of what it is seeing and adjusting the anchors accordingly.

If you keep scanning the floor, you will continue to see the feature points, but the plane does not grow or change in any way to fill the newly found space. This is because we are not responding to the didUpdate calls from the renderer. We can improve our implementation by keeping track of the anchors and the planes that belong to them. If we get a didUpdate callback, we will look up the anchor, find the plane, and adjust the plane with the new information for that anchor.

Keeping track of planes

To keep track of our planes, we’ll use a simple dictionary that maps anchors to planes. This gives us an easy way to lookup and modify the corresponding plane.

In our view controller, we’ll add the dictionary definition:
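For example (the property name planes is this sketch's own):

```swift
// Map each detected anchor to the Plane node we created for it
var planes = [ARPlaneAnchor: Plane]()
```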

We’ll also change our addPlane method to keep track of the plane that we create. Add the following line after the Plane creation code:
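Assuming the dictionary above, that's just:

```swift
planes[anchor] = plane
```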

Updating Planes

Now that we have our association, we can get to the business of updating the planes when ARKit detects a change. We’ll need an update method in our Plane class that we can call when something changes.

The things we are most interested in are the width, height, and position of the plane. We can get this information directly from the ARPlaneAnchor passed in to the didUpdate call.
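A sketch of that update method, reusing the geometry and node from the initializer:

```swift
func update(anchor: ARPlaneAnchor) {
    // Resize the grid to the refined extent and recenter it on the anchor
    planeGeometry.width = CGFloat(anchor.extent.x)
    planeGeometry.height = CGFloat(anchor.extent.z)
    planeNode.position = SCNVector3(anchor.center.x, -0.002, anchor.center.z)
}
```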

Now that we have our method for updating the Plane object, we need to catch the ARKit callback:
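Mirroring the didAdd callback:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    DispatchQueue.main.async {
        self.updatePlane(anchor: planeAnchor)
    }
}
```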

That will then call our own update method:
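Assuming the planes dictionary from earlier:

```swift
func updatePlane(anchor: ARPlaneAnchor) {
    // Look up the Plane we created for this anchor and pass along the new data
    if let plane = planes[anchor] {
        plane.update(anchor: anchor)
    }
}
```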

We check to see if the passed in anchor has a corresponding plane, and if it does, we call our update method.

Build and run, and you should see the grid expanding to fill the found areas as you scan around.

Placing Objects

Now that we know where the surfaces are, we can begin adding items that are anchored to them. Instead of using an external 3D model as we did in Part 1 and 2, we’re going to use some 3D text.

Let’s begin by creating a method that, given some arbitrary text, will give us back an SCNNode which has the 3D text geometry.
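Here's a sketch of such a method; the material color, flatness, and scale values are illustrative choices, not the project's exact ones:

```swift
func createTextNode(string: String) -> SCNNode {
    let text = SCNText(string: string, extrusionDepth: 0.1)
    // Font size is in scene units (meters here), so keep it at 1.0 and scale the node instead
    text.font = UIFont.systemFont(ofSize: 1.0)
    // Lower flatness = more segments per curve = smoother letters (see the note below)
    text.flatness = 0.01
    text.firstMaterial?.diffuse.contents = UIColor.white

    let textNode = SCNNode(geometry: text)
    // Scale the meter-tall glyphs down to roughly 5 cm
    textNode.scale = SCNVector3(0.05, 0.05, 0.05)

    return textNode
}
```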

There are a few things to note in this code. The font size is set to 1.0 because font size is measured in scene units, which in our case are meters; a font size of, let’s say, 22 would produce text 22 meters tall, far larger than our viewport. Instead, we can either keep the font size at 1.0 and set the node’s scale to the inverse of the point size we want (i.e. 1.0/22.0), or set the font size to that small number directly and leave the scale alone. This might conflict with word wrapping on the container if the text is wider than the container. Apple’s recommendation in that case is to use normal font size values and smaller scale values, so that the container knows how to lay out the text based on the font size.

Another item to note is the flatness property on the text geometry. Flatness controls the smoothness of the rounded parts of the font; it’s essentially the subdivision SceneKit uses when it creates line segments to approximate the curve of a letter. Smaller values mean more line segments, which means smoother letters, but at a cost in performance.

Next, we’ll add a method to add this node parented to another node.
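A small helper along these lines works:

```swift
func addText(string: String, parent: SCNNode) {
    // Create the text node and attach it to whatever node we're handed
    let textNode = createTextNode(string: string)
    parent.addChildNode(textNode)
}
```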

We want this code to be executed when we tap on a plane, so let’s modify our didTapScreen method in our ViewController class. Where we check whether the tapped object is a sphere, we’ll add an additional check for whether we’ve tapped on a Plane object, and if so, add the text to the same anchor parent that the Plane object has.
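Here's a sketch of that extra branch, assuming the hit test from Part 2 already gave us an SCNHitTestResult called hitResult; the exact node traversal depends on how your Plane hierarchy is built (in the sketch above, the grid node's parent is the Plane):

```swift
if let plane = hitResult.node.parent as? Plane,
   let anchorNode = plane.parent {
    // Attach the text to the same ARKit-managed anchor node that owns the Plane
    addText(string: "Hello World!", parent: anchorNode)
}
```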

Let’s build and run and see what we get.

The text looks ok, but it’s floating. This is because the pivot point of the text container is on the lower left of where the actual text is.

Pivots

Pivot points describe the origin of an object. If the object is rotated, the rotation happens around its pivot point; if it is scaled, the scaling happens from that point as well. This can be used to create some interesting effects.

For now, let’s try to “ground” our text object so that it appears to be sitting on the plane itself, rather than floating above it.

In our createTextNode method, right before we return the node, add the following code:
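Something like the following, using the node's bounding box:

```swift
// Shift the pivot to the bottom-center of the text's bounding box
let (minBound, maxBound) = textNode.boundingBox
textNode.pivot = SCNMatrix4MakeTranslation(
    minBound.x + (maxBound.x - minBound.x) / 2,  // center of X
    minBound.y,                                  // lowest point of Y
    minBound.z + (maxBound.z - minBound.z) / 2   // center of Z
)
```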

What we do here is take the bounding box of the object, and set the pivot to be the center of the X and Z axis, and the lowest point of the Y axis.

Running the code now gives us text that’s laid out where we expect it to be.

Shadows

We can see that our text is casting a bit of a shadow, but we can’t see it very well because of the transparent grid. There are a couple of tricks we can use in SceneKit to make the shadow show up better. We’re going to use one of them here and create a plane that won’t render itself, but will receive the shadows cast by other objects. This technique is explained in the WWDC 2017 SceneKit: What’s New (https://developer.apple.com/videos/play/wwdc2017/604/) session.

Let’s go to our directional light definition and add the following lines to it:
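Assuming the SCNLight from our earlier setup is referenced as directionalLight, the additions look like this:

```swift
// Draw shadows in a separate pass, after the objects themselves
directionalLight.shadowMode = .deferred
// Semi-transparent black reads as a softer, more natural shadow
directionalLight.shadowColor = UIColor.black.withAlphaComponent(0.5)
// A bit larger than the default radius softens the shadow's edge
directionalLight.shadowRadius = 5.0
```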

Here we set the shadowMode to deferred rendering, which means the shadows are drawn to the screen after the objects are. That also lets us control the shadow color: we use black with some transparency to simulate a softer shadow. We also set the shadowRadius to a value a bit larger than the default, which gives the shadow a softer edge.

In our Plane object initializer, first define an SCNPlane object and a node for our shadow plane.
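For example (the names shadowPlaneGeometry and shadowNode are this sketch's own):

```swift
var shadowPlaneGeometry: SCNPlane!
var shadowNode: SCNNode!
```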

Next, in our init method, create a new plane for the shadow that we will be projecting onto, and define its material properties.
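A sketch of that setup, using a material that writes depth but no color so only the shadows cast onto it show up:

```swift
shadowPlaneGeometry = SCNPlane(width: CGFloat(anchor.extent.x),
                               height: CGFloat(anchor.extent.z))

let shadowMaterial = SCNMaterial()
shadowMaterial.lightingModel = .constant
shadowMaterial.writesToDepthBuffer = true
// An empty color mask means the plane itself never draws, only the deferred shadows do
shadowMaterial.colorBufferWriteMask = []
shadowPlaneGeometry.materials = [shadowMaterial]
```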

Finally, after adding our debug grid, we’ll add our new shadow node.
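Still inside init(anchor:):

```swift
shadowNode = SCNNode(geometry: shadowPlaneGeometry)
shadowNode.position = SCNVector3(anchor.center.x, -0.002, anchor.center.z)
shadowNode.eulerAngles.x = -Float.pi / 2
// The shadow plane should only receive shadows, never cast them
shadowNode.castsShadow = false
addChildNode(shadowNode)
```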

We specifically set the castsShadow property on the shadowNode to false, since we don’t want that plane casting shadows of its own. Don’t forget to do the same for the planeNode as well. To be complete, set textNode.castsShadow = true in our createTextNode method. Running the app now gives us the effect we were looking for.

Extra Credit

Let’s make the plane grid toggleable so that we can use it just for debugging purposes. Even with the grid hidden, we should still know that ARKit has found a plane, so instead of showing the visual grid, we’ll trigger a haptic effect to give the user an indication that there is a surface available to place items onto.

Add a toggle function to the Plane object, allowing us to toggle the visibility of the grid.
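Something like this (the method name setPlaneVisibility matches how we'll call it from the view controller below):

```swift
// Show or hide the debug grid; the shadow plane stays active either way
func setPlaneVisibility(_ visible: Bool) {
    planeNode.isHidden = !visible
}
```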

Let’s add our double tap gesture recognizer.
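In viewDidLoad, alongside the single-tap recognizer from the earlier parts:

```swift
let doubleTapGesture = UITapGestureRecognizer(target: self,
                                              action: #selector(didDoubleTapScreen))
doubleTapGesture.numberOfTapsRequired = 2
sceneView.addGestureRecognizer(doubleTapGesture)
```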

We’ll call that method on all the found Planes in our ViewController when someone double taps the screen. We’ll also add a Bool to keep track of whether the grids are currently visible.
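A sketch of both, with planesVisible as this sketch's name for that flag:

```swift
// Track whether the debug grids are currently shown
var planesVisible = true

@objc func didDoubleTapScreen(recognizer: UITapGestureRecognizer) {
    planesVisible = !planesVisible
    // Flip every Plane we've created so far
    for plane in planes.values {
        plane.setPlaneVisibility(planesVisible)
    }
}
```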

Now, for the haptic feedback, we want to add a UIFeedbackGenerator to the class.
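UIImpactFeedbackGenerator is one concrete UIFeedbackGenerator subclass; the style here is a matter of taste:

```swift
let feedbackGenerator = UIImpactFeedbackGenerator(style: .medium)
```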

On the renderer didAdd call, let’s add the haptic call after adding the plane.
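For example, inside the DispatchQueue block we already have there:

```swift
DispatchQueue.main.async {
    self.addPlane(node: node, anchor: planeAnchor)
    // Let the user feel that a surface is now available
    self.feedbackGenerator.impactOccurred()
}
```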

And don’t forget to call setPlaneVisibility when a new Plane is being created to set the proper state.
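Pulling the last few changes together, addPlane might now look like this:

```swift
func addPlane(node: SCNNode, anchor: ARPlaneAnchor) {
    let plane = Plane(anchor: anchor)
    // New planes should respect the current visibility toggle
    plane.setPlaneVisibility(planesVisible)
    planes[anchor] = plane
    node.addChildNode(plane)
}
```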

That’s it for that bit of polish.

Extra Credit II

If you look at that GIF, you’ll notice the text is scattered around. To achieve that, let’s place the text where we tap instead of at the plane anchor’s current position. To do this, we’re going to pass a new parameter to the addText call with the position where we want the object to be. We’ll get this position from the SCNHitTestResult object that comes back from our tap hit test.

Modify the didTapScreen call to grab the position of the tap, and pass that information to the addText call:
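A sketch of the updated branch, using the hit result's worldCoordinates (again assuming the hitResult from the existing hit test):

```swift
if let plane = hitResult.node.parent as? Plane,
   let anchorNode = plane.parent {
    // Pass along the world-space point that was tapped
    addText(string: "Hello World!",
            parent: anchorNode,
            position: hitResult.worldCoordinates)
}
```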

Then we’ll use that position to place the text object on the parent object passed in. To do that, we have to convert from the world position coordinates that the hitTestResult holds. In the addText call, after creating the text node, set its new position with the following call:
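The updated addText might look like this; convertPosition(_:from:) with a nil node converts from world space into the parent's local space:

```swift
func addText(string: String, parent: SCNNode, position: SCNVector3) {
    let textNode = createTextNode(string: string)
    // Convert the tap's world-space position into the parent node's local space
    textNode.position = parent.convertPosition(position, from: nil)
    parent.addChildNode(textNode)
}
```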

That’s it for now. Got a request for what you’d like to see next? Let us know in the comments!

Don’t forget to follow the s23NYC: Engineering blog, where a lot of great content by Nike’s Digital Innovation team gets posted.

