Creating an Animoji-Style 3D Character to Use With TrueDepth

How to export from Blender 2.8 to Xcode for ARKit/SceneKit

Jake Holdom
Sep 5
TrueDepth smiley

Recently, a project I was working on required a 3D emoji-style character controlled by the TrueDepth camera on an iPhone X or newer, much like Apple’s own Animoji. I ran into a number of issues when trying to export the 3D character from Blender to SceneKit, so this tutorial outlines the problems I had and the solutions I managed to find, as well as demonstrating how to implement the character in your own project.

This tutorial will assume that you’ve already created your character with all the blend shapes you need. This will go into a lot of detail, so someone with minimal experience of Blender or iOS development can follow along. However, I won’t be going over how to create your own character or blend shape animations.


Exporting Your Character for Use in ARKit

Before you start: please bear in mind that it’s always a good idea to make a backup before beginning, as you don’t want to lose or mess up any work!

When naming your shape keys, I found it a good idea to give them the same names as the corresponding ARKit blend shape keys. For some reason, Apple has named some of the underlying key strings slightly differently from their Swift property names; here is the list of the key names in ARFaceAnchor.
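For example, a quick way to see the raw key strings (these are the names your Blender shape keys should match; the three locations below are just a sample):

```swift
import ARKit

// Illustration only: the Swift constant names and the underlying key strings
// differ for some locations, e.g. .eyeBlinkLeft uses the raw key "eyeBlink_L".
// Name your Blender shape keys after the raw strings so they match one-to-one.
let examples: [ARFaceAnchor.BlendShapeLocation] = [.eyeBlinkLeft, .jawOpen, .mouthSmileRight]
examples.forEach { print($0.rawValue) }   // "eyeBlink_L", "jawOpen", "mouthSmile_R"
```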

Next, you’re going to want to make sure all modifiers are applied, otherwise, you’re bound to run into some issues (like I did) when importing the exported .dae file into Xcode. My character used two different modifiers, Mirror and subdivision. Not applying the Mirror modifier (used for the eyes) resulted in only one eye being visible on the exported object. This is very simple to fix. Simply make sure you’re in object mode on Blender, click on the Mirror modifier and select “Apply.”

Apply mirror modifier

The Subdivision Surface modifier is slightly trickier and requires a Python script to work. Applying this modifier is important; otherwise, the shape keys will not be retained on export, which means none of the animations will work. After some extensive Googling, I came across this very useful thread that contains a script for exactly what we need (big thanks to MorphCider).

Next, open a text editor, save the following script in an appropriate location, and call it apply_with_shape_keys.py:

Open the Scripting workspace, click “Text” > “Open” and open the script you just saved. Now click “Run Script” (found in the top right), which will add the script to the project.

Scripting workspace

You will now need to go back to the “Layout” workspace (making sure you’re still in Object Mode) and select your object with the Subdivision Surface modifier. Now press F3 (“fn” + “f3” on a Mac keyboard), which will bring up a search box. Search for the function you just created by typing “Apply Modifiers With Shapekeys” and click on the corresponding result. Blender will pause for a few seconds while the script runs and then create a new object called (your object name)_APPLIED. This new object should have your subdivision modifiers applied, so you can either hide or remove your old object.

Run apply modifiers script

We’re now ready to export the 3D character out of Blender into a .dae file. First, we want to select and highlight all of the objects we wish to export. In my case, this is the Smiley_APPLIED and eyes objects. Now click “File” > “Export” > “Collada (default) (.dae)”. On the left, we have some settings, for which I chose:

  • “Selection Only”
  • “Include Children”
  • “Include Shape Keys”

The other settings don’t matter too much but may vary depending on your character. Now, make sure it’s exporting to an appropriate location and click “Export COLLADA.”

Unfortunately, Blender does not export the file with the correct keys and values needed to import seamlessly into Xcode. Fortunately, JonAllee has created a fantastic tool that automatically maps the correct keys for you. If you’re using Swift 5, I have forked this repo and converted it to build with Swift 5.

Download this tool from GitHub and open it up in Xcode. Select “Scheme” > “Edit Scheme.”

Edit Scheme

Now go to “Run” > “Arguments” > “Arguments Passed On Launch.” We want to pass three arguments:

  • The path to the input file
  • “-o”
  • The path to the output file

You need to make sure these are in the correct order. Here is an example of how mine looks:

Arguments for ColladaMorphAdjuster

Close the dialog box and then run the code.

If all went well, you should see an output log listing all of the geometries (blend shapes) the tool adjusted, like this:

ColladaMorphAdjuster successful output

If the script hasn’t picked up the geometries, you may want to check that all your objects have exported correctly and all of the modifiers have been applied. You can check the .dae file by previewing it to make sure no obvious objects are missing.

If the script has picked them up, great! Your 3D character is nearly ready. We’re now going to move on to importing the character into Xcode.


Importing Your Character Into Xcode

Create a new project in Xcode using the Single View App template. I’ve called mine EmojiFace.

Right-click on your project directory and select “New File.” Here you want to scroll down to the resource section and create a new SceneKit Catalog called Models.scnassets.

SceneKit Catalog

Now, drag and drop the output .dae file generated from the ColladaMorphAdjuster above into the Models.scnassets folder. Click on the .dae file and you should see your 3D character. If the camera angle is a bit weird, click on the bottom left camera icon and select “Front.” Click on your object with the blend shapes, and you should see a list of them on the right where you can drag the values to change the character’s face:

Smiley .dae file in Xcode

You may notice that the normals look a little weird, which gives the character a low-polygon look when you change the geometry morpher values, but don’t worry: I’ll show you how we can fix this programmatically when we import the object into the code later on.

You now want to convert the .dae to a .scn file. This is done by clicking “Editor” > “Convert to SceneKit scene file format (.scn).” A popup will then appear warning that .scn files are not compatible with some applications. I normally select “Duplicate,” as this keeps the .dae file in case you wish to use it for another reason in your project.

Convert to .scn file

Now, click on your .scn file.

I normally organize my character into a specific node structure in case I want to add additional nodes, such as camera nodes. You can add a new child node by clicking the “+” at the bottom of the scene graph in your .scn file. I name the first child node model, add a child node to it named puppet, and put my character’s objects inside that, like so:

You may find that some of the colors are different from the character that you exported from Blender. You can change the colors to display as you want them inside the Material Inspector.

Material inspector

Congratulations! Your 3D character should be good to go, and we can finally start writing some code!


Importing Your Character Into ARKit/SceneKit

Open up Main.storyboard and drag an SCNView and an ARSCNView into the view controller from the object library. The SCNView is the view that will contain your 3D character, and the ARSCNView is the view that will track your face. The ARSCNView won’t actually display anything on screen unless you configure it to show the camera feed. Set the constraints as you want them. Here’s how my view controller looks:

ViewController setup

Add your views to the ViewController.swift file and call the SCNView faceView and the ARSCNView trackingView. We’ll also want to create the following instance variables that we’ll need for this class:
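Mine look something like this (the scene file name, Smiley.scn, is just a placeholder for whatever you called yours):

```swift
import UIKit
import SceneKit
import ARKit
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var faceView: SCNView!        // displays the 3D character
    @IBOutlet weak var trackingView: ARSCNView!  // runs the TrueDepth face-tracking session

    // Looked up by name from the .scn scene graph during setup.
    private var model: SCNNode?   // wraps the whole character
    private var head: SCNNode?    // the node that owns the blend-shape morpher

    // Placeholder path; point this at your own .scn file.
    private let sceneName = "Models.scnassets/Smiley.scn"

    // viewDidLoad and the setup methods from the following steps go here.
}
```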

The model and head variables correspond to the nodes shown above in the scene graph, and it’s important that the names used to look them up match the scene graph exactly.

Now, we’re going to want to set up an AVCaptureDevice session. In order to do this, we’re first going to want to set up the camera permissions in the .plist file. So go to the Info.plist, click on the “+” button, select “Privacy — Camera Usage Description” and then type something in the text box like so:

Add permissions to .plist

Initialize an AVCaptureDevice request in the viewDidLoad method:
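A minimal sketch looks like this; the actual setup calls get added into the granted branch a few steps further down:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Ask for camera permission up front; face tracking can't run without it.
    AVCaptureDevice.requestAccess(for: .video) { granted in
        guard granted else {
            print("Camera access is required for face tracking")
            return
        }
        // Setup calls are added here in a later step.
    }
}
```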

Next, we need a function to initialize the face tracker. (Note: we haven’t yet made ViewController conform to ARSCNViewDelegate, so you’ll get an error until we do.)
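Something along these lines, using the trackingView outlet from earlier:

```swift
private func setupFaceTracker() {
    // TrueDepth face tracking only works on devices with a front-facing
    // TrueDepth camera (iPhone X and newer).
    guard ARFaceTrackingConfiguration.isSupported else {
        print("Face tracking is not supported on this device")
        return
    }

    let configuration = ARFaceTrackingConfiguration()
    configuration.isLightEstimationEnabled = true

    trackingView.delegate = self   // errors until the ARSCNViewDelegate extension exists
    trackingView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```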

Then we want to set up the SCNView:
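Here’s a sketch using the placeholder scene and node names declared above; swap in your own:

```swift
private func setupFaceView() {
    guard let scene = SCNScene(named: sceneName) else {
        print("Could not load \(sceneName)")
        return
    }

    faceView.scene = scene
    faceView.backgroundColor = .clear
    faceView.autoenablesDefaultLighting = true

    // Grab the nodes we animate later; the names must match your scene graph.
    model = scene.rootNode.childNode(withName: "model", recursively: true)
    head = scene.rootNode.childNode(withName: "head", recursively: true)

    // Recompute the normals after morphing instead of morphing them along with
    // the vertices; this removes the faceted, low-poly shading we saw earlier.
    head?.morpher?.unifiesNormals = true
}
```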

The unifiesNormals line in that setup is the bit of code that fixes the low-poly issue we saw earlier: it tells the morpher to recompute the normals after morphing the vertices instead of interpolating them. This answer gives a good description of why it works.

Now, we want to set up the camera node, which positions the SceneKit view’s point of view in front of, and a little bit above, the character node.
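A sketch of that, with arbitrary distances you’ll want to tweak for your own model:

```swift
private func setupCamera() {
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()

    // In front of the character and raised slightly; adjust to taste.
    cameraNode.position = SCNVector3(x: 0, y: 0.2, z: 1.2)

    faceView.scene?.rootNode.addChildNode(cameraNode)
    faceView.pointOfView = cameraNode
}
```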

These are all the functions you need to create the 3D character SCNNode, so now we want to call them from viewDidLoad:
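So viewDidLoad ends up looking roughly like this:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    AVCaptureDevice.requestAccess(for: .video) { granted in
        guard granted else {
            print("Camera access is required for face tracking")
            return
        }
        // UI and session setup belong on the main thread.
        DispatchQueue.main.async {
            self.setupFaceView()
            self.setupCamera()
            self.setupFaceTracker()
        }
    }
}
```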

If you run the app at this stage, you should see your 3D character looking at the camera front-on, but with no facial recognition animation working yet. If the character is too large or too small, you can change the size of it by setting the scale variables either programmatically or in the Node Inspector.

Scale variables

Now we’re at the part where we can map the facial blend shapes to the 3D characters.

So we want to create an extension of ViewController that conforms to ARSCNViewDelegate and implement the renderer(_:didUpdate:for:) function.

Assuming you have named the blend shape keys/geometry morphers after the keys defined by Apple, all we need to do is a simple for loop to map them together, as sketched below.
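A sketch of the extension, assuming the head node declared earlier is the one carrying the morpher:

```swift
extension ViewController: ARSCNViewDelegate {

    // ARKit calls this every time the face anchor updates.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }

        DispatchQueue.main.async {
            for (key, value) in faceAnchor.blendShapes {
                // Only works if your Blender shape keys use Apple's key names,
                // e.g. "eyeBlink_L", "jawOpen", "mouthSmile_R".
                self.head?.morpher?.setWeight(CGFloat(value.floatValue), forTargetNamed: key.rawValue)
            }
        }
    }
}
```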

If you run the app now, you should see that your facial movements are being tracked and displayed on your 3D character.

But wait… you also want to track the yaw, pitch, and roll of your face and move the 3D character’s head correspondingly. The obvious approach is to take the transform of the ARFaceAnchor and assign it to the character’s head node.

However, this doesn’t work very well as the transform gets affected by the camera position, which gives weird results. So we’re going to have to calculate the SCNVector3 ourselves with some good old-fashioned math.

Now set the model’s Euler angles to the SCNVector3 created in the renderer function:
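Roughly like so, back inside renderer(_:didUpdate:for:) (I’m applying it to the model node here; use the head node instead if only the head should turn):

```swift
// Still inside the DispatchQueue.main.async block, after the blend-shape loop.
self.model?.eulerAngles = self.eulerAngles(from: faceAnchor.transform)
```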

You’re now good to go; you should have a fully-animated 3D character using the iPhone’s True Depth camera!

I hope you have enjoyed this tutorial and have fun playing around with your 3D characters. If you would like the source code, you can view it on my GitHub here, but please bear in mind it doesn’t include the 3D character I used for this project.

Thanks for reading!
