How to create Apple’s Animoji using ARKit?

Ashutosh Dingankar
9 min read · Oct 3, 2019


As a computer vision & deep learning enthusiast, I am always curious to explore areas related to image & video processing.

Recently, Apple's Animoji grabbed my attention, so I started researching how it achieves such good performance.

Problem Statement:

  1. Render a 3D face model in the camera scene whenever the user's face is detected through the front camera.
  2. The model should mimic the user's face movements, expressions & orientation.

Basic building blocks:

  1. Hardware with a camera that can capture images in real time with good performance: an iOS device's TrueDepth front camera (iPhone X & above) is a good choice.
  2. Software that can run face detection algorithms to detect a face along with its facial expressions: you could use OpenCV for this, but thankfully Apple's ARKit does most of the legwork under the hood with good performance. It can detect a face as well as facial expressions in the form of BlendShapes (also known as shape keys or morph targets). Each BlendShape location has a value ranging from 0 to 1, which measures the strength of that particular facial expression. ARKit provides 50 BlendShape locations (see the Swift sketch after this list).
  3. 3D face model: download a face model from https://www.turbosquid.com
  4. 3D modelling software where I can create & define blendshape locations: there are a lot of good applications such as Unity, Maya, etc. to work with, but I used Blender v2.80 as it is free ;)
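
To make the BlendShape idea concrete, here is a minimal Swift sketch that reads one coefficient from a detected face (ARFaceAnchor and blendShapes are ARKit API; the function logSmile is illustrative, not part of any framework):

    import ARKit

    func logSmile(for anchor: ARFaceAnchor) {
        // blendShapes maps each location (e.g. .mouthSmileLeft) to an NSNumber in 0...1
        if let smile = anchor.blendShapes[.mouthSmileLeft] {
            print("mouthSmileLeft =", smile.floatValue)  // 0 = neutral, 1 = full smile
        }
    }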

Approach:

The idea: whenever an ARKit face-tracking session runs, on every rendering cycle it delivers 50 blendshape location values, each ranging from 0 to 1. If I feed these values into the corresponding blendshape locations defined on the 3D model, the model animates at those locations. So the key is to define the appropriate blendshapes (or shape keys) on the model & give each shape key the same name as the one ARKit provides.
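
In SceneKit terms, that mapping is one call per shape key. A minimal sketch, assuming the model's morph targets carry exactly ARKit's names (the helper function apply(_:to:) is mine, not an API):

    import ARKit
    import SceneKit

    // Copy ARKit's 0-to-1 coefficients onto identically named morph targets.
    func apply(_ blendShapes: [ARFaceAnchor.BlendShapeLocation: NSNumber],
               to morpher: SCNMorpher) {
        for (location, weight) in blendShapes {
            // location.rawValue is a string such as "eyeBlink_L"
            morpher.setWeight(CGFloat(truncating: weight),
                              forTargetNamed: location.rawValue)
        }
    }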

Dividing solution into 4 parts:

  1. Create shape keys for 3D model
  2. Export model object as .dae from Blender
  3. Import .dae file into Xcode
  4. Code

1. Create shape keys for 3D model

Since I am a noob at Blender, most of my time was spent getting comfortable with the tool.

There are six expressions for the six basic emotions: anger, happiness, surprise, sadness, fear, and disgust. To draw or illustrate these emotions, we use the eyebrows, eyelids, and mouth. Just by changing these lines we get very different expressions. Many other emotions, such as boredom, shock, deviousness, confusion, and contempt, can be achieved through various permutations and combinations of the eyebrow line, lips, and eyes.

Shape Keys

Shape keys are used when we need to animate the shape of an object by animating the vertices of its mesh. They are also known as morph targets or blend shapes.

Creating Shape Keys

Step 1

Press TAB to exit Edit mode. With the Head object selected, press the + (plus) button to add a Shape Key. This first shape key is automatically named Basis and is the default shape of the object. Do not edit the shape of this key. Shape keys can only be created in Object mode, but we can of course edit the shape of any key by entering Edit mode.

Step 2

We will create a separate shape key for each individual shape or change to the geometry. That means one shape key for the left eyebrow raised, another for the right lip smile, and so on. Add another Shape Key with the + (plus) button. We will start with the upper part of the face, so name this eye-brow-angry or browDown_L for short (or anything else you like).

Step 3

Likewise, I defined 30 shape keys by referring to the ARKit blendshape locations & gave them the same names as the ARKit blendshapes.

Note that ARKit provides 50 blendshape locations, so define all of them if you want smoother animation.

By changing a shape key's value between 0 and 1, you can test the animation of that blendshape.

Basis Shape Key
browOuterUp_L Shape Key
eyeBlink_L Shape Key
mouthSmile_L Shape Key

2. Export model object as .dae from Blender:

ARKit loads 3D models via SceneKit (.scn), while Blender exports objects in the COLLADA Digital Asset Exchange (.dae) format. Xcode can convert a .dae file to a .scn file.

Challenges faced:

A) Make sure all of your modifiers exist in the exported .dae file:

Somehow, my modifiers were not being exported to the .dae file. My 3D model uses a subdivision modifier.

After Googling, I found that applying this modifier is important. Otherwise, the shape keys will not be retained when exporting the character, which means no animations will work. Applying a modifier while keeping shape keys requires a Python script.

Next, open a text editor, save the following script in an appropriate location, and call it apply_with_shape_keys.py:

import bpy

class ApplyWithShapeKeys(bpy.types.Operator):
    """Apply modifiers on selected meshes while preserving their shape keys"""
    bl_idname = "object.applywithshapekeys"
    bl_label = "Apply Modifiers With Shapekeys"

    def execute(self, context):

        selection = bpy.context.selected_objects

        for obj in selection:
            if obj.type == "MESH":

                # lists store temporary objects created from shapekeys
                shapeInstances = []
                shapeValues = []

                # deactivate any armature modifiers
                for mod in obj.modifiers:
                    if mod.type == 'ARMATURE':
                        obj.modifiers[mod.name].show_viewport = False

                for shape_key in obj.data.shape_keys.key_blocks:
                    # save old shapekey value to restore later; set to 0 temporarily
                    shapeValues.append(shape_key.value)
                    shape_key.value = 0.0

                i = 0
                for shape_key in obj.data.shape_keys.key_blocks:

                    # ignore basis shapekey
                    if i != 0:
                        # make sure only the relevant object is selected and active
                        bpy.ops.object.select_all(action="DESELECT")
                        obj.select_set(state=True)
                        context.view_layer.objects.active = obj

                        # make sure only this shape key is set to 1
                        shape_key.value = 1.0

                        # duplicate the object with only one shape key active;
                        # Blender does the rest
                        bpy.ops.object.duplicate(linked=False, mode="TRANSLATION")
                        bpy.ops.object.convert(target="MESH")
                        shapeInstances.append(bpy.context.active_object)

                        bpy.context.object.name = shape_key.name

                        bpy.ops.object.select_all(action="DESELECT")
                        obj.select_set(state=True)
                        context.view_layer.objects.active = obj

                        shape_key.value = 0.0

                    i = i + 1

                context.view_layer.objects.active = obj

                # create the final object
                bpy.ops.object.duplicate(linked=False, mode="TRANSLATION")
                newobj = bpy.context.active_object
                newobj.name = obj.name + "_APPLIED"

                # clear all old shapekeys from the new object
                newobj.shape_key_clear()

                # apply all modifiers on the new object
                for mod in newobj.modifiers:
                    if mod.name != "Armature":
                        bpy.ops.object.modifier_apply(apply_as='DATA', modifier=mod.name)

                # iterate all temporary shapekey objects, select each together with
                # the final object and join them as shapes
                for shapeInstance in shapeInstances:
                    bpy.ops.object.select_all(action="DESELECT")
                    newobj.select_set(state=True)
                    shapeInstance.select_set(state=True)
                    context.view_layer.objects.active = newobj

                    bpy.ops.object.join_shapes()

                # restore old shape key values on the new object
                i = 0
                for shape_key in newobj.data.shape_keys.key_blocks:
                    if i != 0:
                        shape_key.value = shapeValues[i]
                    i = i + 1

                # restore old shape key values on the original object
                i = 0
                for shape_key in obj.data.shape_keys.key_blocks:
                    if i != 0:
                        shape_key.value = shapeValues[i]
                    i = i + 1

                # delete the temporary objects
                bpy.ops.object.select_all(action="DESELECT")
                for shapeInstance in shapeInstances:
                    shapeInstance.select_set(state=True)

                bpy.ops.object.delete(use_global=False)

                # reactivate armature modifiers
                for mod in obj.modifiers:
                    if mod.type == 'ARMATURE':
                        obj.modifiers[mod.name].show_viewport = True

                for mod in newobj.modifiers:
                    if mod.type == 'ARMATURE':
                        newobj.modifiers[mod.name].show_viewport = True

        return {"FINISHED"}


def register():
    bpy.utils.register_class(ApplyWithShapeKeys)


def unregister():
    bpy.utils.unregister_class(ApplyWithShapeKeys)


if __name__ == "__main__":
    register()

In Blender, open the Scripting workspace, click "Text" > "Open" and open the script you just saved.

Now click "Run Script" (found in the top right), which will register the operator with Blender.

You will now need to go back to the "Layout" workspace (making sure you're still in Object mode) and select the object with the subdivision modifier. Now press "fn" + "F3", which will bring up a search box. Search for the operator you just registered by typing "Apply Modifiers With Shapekeys" and click on the corresponding result. Blender will work for a few seconds and then create a new object called (your object name)_APPLIED. This new object should have your subdivision modifiers applied, so you can either hide or remove the old object.

We're now ready to export the 3D character out of Blender into a .dae file. First, select and highlight all of the objects you wish to export. In my case, this is the head_APPLIED object. Now click "File" > "Export" > "Collada (default) (.dae)".

On the left, we have some settings, for which I chose:

  • “Selection Only”
  • “Include Children”
  • “Include Shape Keys”

The other settings don't matter too much but may vary depending on your character. Now, make sure it is exporting to an appropriate location and click "Export COLLADA".

B) Make sure all of your shape keys exist in the exported .dae file:

Somehow Blender does not export the file with the correct keys & values needed to import seamlessly into Xcode.

After Googling, I found JonAllee's ColladaMorphAdjuster, which resolved this issue. Thanks, JonAllee!

Download this tool from GitHub and open it up in Xcode. Select “Scheme” > “Edit Scheme.”

Now go to “Run” > “Arguments” > “Arguments Passed On Launch.” We want to pass three arguments:

  • The path to the input file
  • “-o”
  • The path to the output file

You need to make sure these are in the correct order. Here is an example of how mine looks:
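
For instance, with hypothetical paths (substitute your own input and output locations):

    /Users/me/Desktop/face.dae
    -o
    /Users/me/Desktop/face-adjusted.dae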

Close the dialog box and then run the code.

If all went well, you should see an output log listing all of the geometries (blend shapes).

You can check the .dae file by previewing it to make sure the existence of modifiers & shape keys.

3. Import .dae file into Xcode:

Now, drag and drop the output .dae file generated by ColladaMorphAdjuster into the .scnassets folder. Click on the .dae file and you should see your 3D character.

You now want to export the .dae as a .scn file. This is done by clicking "Editor" > "Convert to SceneKit scene file format (.scn)". I normally select "Duplicate" as this will keep the .dae file in case you wish to use it for another reason in your project.

Now, click on your .scn file.

Check whether all the shape keys (geometry morphers) exist in the .scn file. If not, go back to Blender & repeat the above steps.
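
If you prefer to verify in code, here is a quick hedged sketch that prints every morph target name found in the converted scene (the file name "face.scn" is a placeholder for your own scene):

    import SceneKit

    // Walk the node hierarchy and list each node's morph target names.
    let scene = SCNScene(named: "art.scnassets/face.scn")!
    scene.rootNode.enumerateHierarchy { node, _ in
        if let morpher = node.morpher {
            print(node.name ?? "?", morpher.targets.compactMap { $0.name })
        }
    }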

Modify the Position, Euler & Scale transform properties of the model to place it in front of the camera. I chose a Scale of 0.2 (20 cm).

You can add a material to the model. I chose Shading as "Physically Based" & Diffuse as "White".

4. Code:

You can download my sample project FaceAnimoji.

In viewDidLoad of ViewController.swift,

a) Load the scene & access the scene's rootNode. Also, set ViewController as the delegate of the ARSCNView.
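
A minimal sketch of this setup (the scene file name "face.scn" and the outlet are placeholders from my description; your names may differ):

    import UIKit
    import ARKit
    import SceneKit

    class ViewController: UIViewController, ARSCNViewDelegate {

        @IBOutlet var sceneView: ARSCNView!
        // scene converted from the exported .dae ("face.scn" is a placeholder)
        let scene = SCNScene(named: "art.scnassets/face.scn")!

        override func viewDidLoad() {
            super.viewDidLoad()
            // receive node and rendering callbacks in this controller
            sceneView.delegate = self
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // face tracking requires the TrueDepth camera (iPhone X & above)
            guard ARFaceTrackingConfiguration.isSupported else { return }
            sceneView.session.run(ARFaceTrackingConfiguration())
        }
    }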

b) ARKit asks us to provide a node for each detected anchor. In our case we are interested in faces, i.e. ARFaceAnchor. So, whenever a face is detected, we always return our scene.rootNode to render.
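
Continuing the same sketch inside ViewController:

    // ARKit asks for a node to attach to each new anchor; hand back the
    // face model's root node only for face anchors.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor else { return nil }
        return scene.rootNode
    }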

c) On every scene render, ARKit returns 50 blendshape location values. Since our model's blendshape key names are identical to ARKit's blendshape key names, we just need to set the corresponding weights returned by ARKit.
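
And the per-frame update, again as a sketch (the mesh node name "head" is an assumption; use whatever your model's mesh node is called):

    // Called every frame while the face is tracked; push ARKit's 0-to-1
    // coefficients onto the identically named shape keys.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let mesh = node.childNode(withName: "head", recursively: true)
        else { return }

        for (key, value) in faceAnchor.blendShapes {
            // key.rawValue matches the Blender shape key names, e.g. "eyeBlink_L"
            mesh.morpher?.setWeight(CGFloat(truncating: value), forTargetNamed: key.rawValue)
        }
    }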

Build & run the project on an iPhone X or newer with a TrueDepth camera.

You can see the model mimic the user's face movements. Since I defined only 30 blendshapes in my model, some expressions are missing, like the cheek and tongue-out ones.

I hope you have enjoyed this tutorial and have fun playing around with your 3D characters.

You can download my sample project FaceAnimoji from here.

Thanks for reading!
