

KeenTools FaceTracker Guide


The first thing you need is a 3D model of the face you want to track, created with FaceBuilder; connect it to the geo input of FaceTracker.

The most basic setup: the model is built from the same footage you want to track
Creating a pre-analysis file
Advanced setup: two cameras (photo and video), lens distortion correction, and a model built from photos

Appearance settings

While tracking, you work with a textured wireframe of the head 3D model rendered over the footage. If the default colour scheme is hard to see against your footage, you can change the three wireframe colours in the Colors section of the main settings tab.

FaceTracker mesh appearance settings


Before you start pinning, you have two options to initialise the position of the mesh inside the frame: centring the mesh, or automatic alignment, which detects faces in the frame. We recommend automatic alignment since it's much faster.

Pinning the first keyframe manually


When you’re satisfied with the initial position of the 3D model, you’re ready to launch automatic tracking. It can run on a frame-by-frame basis or continuously to the first or last frame of the footage. On the left side of the first FaceTracker toolbar, you can find four buttons: Track To Start, Track Previous, Track Next, Track To End.

FaceTracker toolbars


In an ideal situation, you get clean and precise tracking. But we don’t live in an ideal world, so at times you need to refine the results of automatic tracking. It’s not difficult with FaceTracker. First, find a frame where the tracking results become noticeably wrong. Then, using the existing pins or adding new ones, correct the position of the 3D model to match the picture. Notice that once you adjust a pin position, a keyframe is created. When the model fits the picture, press the Refine button on the first toolbar of FaceTracker. The refinement process tracks the object from both of the closest keyframes and then merges the two tracks, giving you the best result of the two.

  • Place a model in the footage with the first keyframe
  • Track the model
  • Abort tracking if the track is not ideal
  • Adjust the position or the shape of the face where the track becomes slightly lost, creating a new keyframe
  • Press the Refine button to refine the tracking results between the two keyframes you have
  • Continue tracking from the last keyframe you created
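The merging step in the workflow above can be pictured as blending two tracks of the same parameter, trusting each one more the closer you are to its keyframe. This is only a minimal sketch of the concept, not KeenTools' actual refinement algorithm:

```python
# Illustration of the "refine" idea: two tracks of one parameter (e.g. head
# X position), one tracked forward from the left keyframe and one tracked
# backward from the right keyframe, blended by proximity to each keyframe.
# NOT KeenTools' actual algorithm -- just a sketch of the concept.

def blend_tracks(forward, backward):
    """Blend two equal-length tracks; the first frame trusts `forward` fully,
    the last frame trusts `backward` fully, with linear weights between."""
    n = len(forward)
    if n == 1:
        return list(forward)
    merged = []
    for i, (f, b) in enumerate(zip(forward, backward)):
        w = i / (n - 1)          # 0.0 at the left keyframe, 1.0 at the right
        merged.append((1 - w) * f + w * b)
    return merged

# The two tracks agree at the keyframes but drift apart mid-span.
forward  = [0.0, 1.0, 2.2, 3.5, 4.0]
backward = [0.0, 0.9, 1.8, 3.1, 4.0]
print(blend_tracks(forward, backward))  # endpoints preserved, middle averaged
```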

Removing keyframes

Sometimes you may want to remove a keyframe set before or after the current frame, or all keyframes on one side of it. Use the cleanup buttons in the middle of the first FaceTracker toolbar. Clear Between Keyframes clears all tracking data between the keyframes closest to the current frame on the left and right, leaving the keyframes themselves intact. Clear Backwards removes data and keyframes before the current frame. Clear Forwards clears tracking data and keyframes after the current frame. Clear All clears all data and all keyframes. Note that even when you clear all keyframes and data, the pins you’ve set on the model are kept intact, while the model is reset to its neutral state.
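To make the semantics of these buttons concrete, here is a toy model that mirrors the behaviour described above (frames keyed by number, with a set of keyframes and a dict of tracked data). It illustrates the guide's description only, not FaceTracker internals:

```python
# Toy model of the cleanup buttons' semantics as described in the guide.
# Keyframes are a set of frame numbers; tracked data is a dict keyed by frame.

def clear_backwards(keyframes, data, current):
    """Remove tracked data AND keyframes strictly before the current frame."""
    return ({f for f in keyframes if f >= current},
            {f: v for f, v in data.items() if f >= current})

def clear_forwards(keyframes, data, current):
    """Remove tracked data AND keyframes strictly after the current frame."""
    return ({f for f in keyframes if f <= current},
            {f: v for f, v in data.items() if f <= current})

def clear_between(keyframes, data, current):
    """Remove tracked data between the keyframes nearest to the current frame
    on the left and right, leaving the keyframes themselves intact."""
    left = max((f for f in keyframes if f <= current), default=None)
    right = min((f for f in keyframes if f >= current), default=None)
    if left is None or right is None:
        return keyframes, data
    return keyframes, {f: v for f, v in data.items() if not (left < f < right)}

keyframes = {10, 20, 30}
data = {f: "pose" for f in range(10, 31)}
kf, d = clear_between(keyframes, data, 25)
print(sorted(kf), sorted(d))   # keyframes intact, frames 21..29 cleared
```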

Viewport Stabilisation

One of the newest features of FaceTracker is viewport stabilisation, which makes it easier to keep keyframes consistent during tracking.

Tracking With Masks

There are two kinds of masks that you can use with FaceTracker to improve tracking results.

“Roto” node provides a mask used by FaceTracker to exclude image regions from tracking
Creating surface masks


Another way to improve tracking quality is to smooth the results. Smoothing helps avoid jitter in transformations, rotations and facial expressions; you can find the controls on the Smoothing tab. Note that smoothing changes must be made prior to tracking. If you already have tracked data and want to smooth it, change the smoothing settings and press the Refine button, and the tracking results will be updated.

Smoothing parameters of FaceTracker
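As a rough picture of what smoothing does to a tracked channel, here is a simple moving average applied to a jittery 1-D translation curve. FaceTracker's smoothing parameters are set per channel on the Smoothing tab; this sketch only shows the general effect, not the actual algorithm:

```python
# Minimal illustration of smoothing a jittery tracked value.
# NOT FaceTracker's algorithm -- just a plain moving average for intuition.

def moving_average(values, radius=1):
    """Average each sample with its neighbours within `radius` frames."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - radius)
        hi = min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

jittery = [0.0, 0.5, 0.1, 0.6, 0.2, 0.7]
print(moving_average(jittery))  # same length, reduced frame-to-frame jumps
```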

User Tracks (Helpers)

You can also use tracks created with Nuke’s built-in Tracker node by importing them on the UserTracks tab of FaceTracker.

Using Results

The output of the FaceTracker node is the 3D model you passed to its input, morphed, transformed and rotated. Usually you simply connect the FaceTracker output to the geo input of the next node in the chain.

Results tab of FaceTracker

Exporting Model & Animation

You can export the animated model using the WriteGeo node and the Alembic (ABC) file type with the OGAWA storage format (it works best for animation in Nuke). Just connect the output of FaceTracker to the input of WriteGeo, set up the output parameters and press the Execute button.

Exporting geometry and cameras from Nuke

Transfer animation to another 3D model

You can also export the animation using ARKit-compatible FACS blendshapes — find the FACS as a CSV file option in the exporting menu. This way you can transfer animation to other models with compatible blendshapes.
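If you want to process the exported FACS data in your own pipeline, it can be read with standard CSV tooling. The column layout below (a "frame" column followed by one column per ARKit blendshape) is an assumption for illustration; inspect the file FaceTracker actually writes before relying on it:

```python
# Sketch of reading a FACS blendshape CSV into per-frame weight dicts.
# The column names used here are ASSUMED for illustration -- check the
# actual file exported by FaceTracker before using this in production.
import csv
import io

def read_facs_csv(text):
    rows = csv.DictReader(io.StringIO(text))
    frames = {}
    for row in rows:
        frame = int(row.pop("frame"))          # hypothetical frame column
        frames[frame] = {name: float(v) for name, v in row.items()}
    return frames

sample = """frame,jawOpen,mouthSmileLeft,mouthSmileRight
1,0.10,0.00,0.00
2,0.35,0.12,0.11
"""
weights = read_facs_csv(sample)
print(weights[2]["jawOpen"])   # 0.35
```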

Using FaceTracker Without FaceBuilder (Custom Head Model)

The short answer is: it’s possible. You can export the default head geometry from the FaceBuilder node (this doesn’t require a licence), then modify it while keeping the vertex order intact, since vertex order is how FaceTracker detects the face parts (nose, lips, eyes, etc.). Then import the modified model back using a ReadGeo or ReadRiggedGeo node, connect it to FaceTracker, and you can track your custom head model.
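Since keeping the vertex order is the critical constraint here, a quick sanity check before importing the edited mesh back can save debugging time. The sketch below compares two OBJ meshes (as lists of lines): same vertex count and identical face definitions with different vertex positions means only the shape changed, which is what FaceTracker expects. It is a minimal sketch; real meshes may also carry UVs and normals worth checking:

```python
# Sanity-check that an edited head OBJ kept the topology of the original
# FaceBuilder export. Faces reference vertices by index, so if the faces
# match line-for-line, the vertex ORDER was preserved even though vertex
# POSITIONS may differ. Minimal sketch; ignores UVs, normals and groups.

def prefixed(lines, prefix):
    return [ln for ln in lines if ln.startswith(prefix)]

def same_topology(lines_a, lines_b):
    """lines_a/lines_b: OBJ file contents as lists of lines,
    e.g. open(path).read().splitlines()."""
    same_vert_count = len(prefixed(lines_a, "v ")) == len(prefixed(lines_b, "v "))
    same_faces = prefixed(lines_a, "f ") == prefixed(lines_b, "f ")
    return same_vert_count and same_faces

original  = ["v 0 0 0",   "v 1 0 0",   "v 0 1 0", "f 1 2 3"]
edited    = ["v 0 0 0.2", "v 1 0 0.1", "v 0 1 0", "f 1 2 3"]  # moved, same order
reordered = ["v 1 0 0",   "v 0 0 0",   "v 0 1 0", "f 2 1 3"]  # order broken

print(same_topology(original, edited))     # True
print(same_topology(original, reordered))  # False
```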


Download the KeenTools package here

FaceTracker in action



