KeenTools FaceTracker Guide

Last update: ver. 2021.2

FaceTracker is a Nuke node for facial tracking designed to work with models built with FaceBuilder. It's very useful for clean-up, face replacement, digital makeup and other tasks that require tracking of facial expressions. It doesn't require mocap rigs or markers on faces, and the model needed for tracking can be created from a frame of the footage itself.

Consider visiting our Knobs Reference page to check what every button does.

Setup

The first thing you need is a 3D model of the face you want to track, created with FaceBuilder. Connect it to the input of FaceTracker.

The most basic setup, representing the case when you build a model using the footage where you want to track the model

Then connect the footage where you want to track the face to the input of FaceTracker. Fixing lens distortion in the footage before you pass it to FaceTracker may significantly improve the tracking results. This step is optional, but remember that FaceTracker assumes the image has no distortion at all.
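To see why undistorting first matters, here is a minimal sketch of a one-parameter radial distortion model (the coefficient k1 and the model itself are illustrative assumptions, not FaceTracker settings). FaceTracker assumes a pinhole camera, where straight lines stay straight; radial distortion breaks that assumption and shifts pixels away from their ideal pinhole positions.

```python
def distort(x, y, k1):
    """Apply simple one-parameter radial distortion around the optical
    centre (0, 0). Coordinates are normalised (frame centre = origin)."""
    r2 = x * x + y * y          # squared distance from the optical centre
    scale = 1.0 + k1 * r2       # radial scaling factor
    return x * scale, y * scale

# A point near the frame corner under mild barrel distortion (k1 = -0.1)
xd, yd = distort(0.8, 0.6, -0.1)
print(xd, yd)  # → 0.72 0.54 — noticeably off the pinhole position
```

With k1 = 0 the function is an identity, which is exactly what FaceTracker expects of its input footage.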

Finally, you can either connect a node with the settings of the camera used to film the footage to the input of FaceTracker, or let FaceTracker estimate the camera settings on its main settings tab. The estimation happens each time you move or change the 3D model.

Now we need to pre-analyse the footage.

We isolated the analysis process to make the actual tracking fast and responsive, so you don't have to wait for frames being analysed during the tracking process. You can also automate the pre-analysis of your footage and perform it on servers instead of user machines, so your artists will have everything ready for tracking right from the start.

Creating a pre-analysis file

To launch pre-analysis, find the input field on the main settings tab of FaceTracker and enter the path where FaceTracker should save the analysis file, or press the button with an arrow next to the field to choose the path using a file dialog. Once the path is specified, press the button next to the path field. A confirmation dialogue with a selectable frame range will appear. If you don't want to track the model across the whole footage, you can narrow the frame range here to reduce the analysis time. When the frame range is set according to your needs, press the button to start the analysis; it will take some time. You can pre-analyse different frame ranges using the same file: new information will be added to the file, and frames that were pre-analysed before will be re-analysed.

Advanced setup: two cameras (photo and video), lens distortion correction, model is built using photos.

Appearance settings

While tracking, you work with a textured wireframe of the head 3D model rendered over the footage. If you experience any difficulties with the default colour scheme, you can change the three colours used for the wireframe in the corresponding section of the main settings tab.

FaceTracker mesh appearance settings

To see the mesh, connect the FaceTracker node to a Viewer, choose a frame to start with and press one of the pinning buttons (more about these buttons in the Pinning section below).

You can also find other useful checkboxes nearby. Use them to customise the appearance of the wireframe in the viewer.

Tracking starts with matching the initial position of the 3D model to the picture, which we at KeenTools call pinning. It's best to start pinning with the parts of the face whose positions you can clearly see in the picture: corners of the eyes and lips, the nose, ears or chin.

Pinning

Before you start pinning, you have two options to initialise the position of the mesh inside the frame: centring the mesh or using automatic alignment that finds faces in images. We recommend using automatic alignment since it's much faster.

Starting with version , you can press the automatic alignment button and a couple of built-in neural networks will find a face in the frame and then try to pin the model automatically. It's not yet 100% accurate all the time, so you'll probably need to adjust the result manually in most cases.

Once the mesh is placed in the frame, you can adjust its position. Click any of the pins (small red squares) and drag them to the correct position. You can add pins by clicking anywhere on the mesh and remove them with a right-click.

When you add pins yourself, the first pin allows you to pan the mesh over the frame. With a second pin, you can scale and rotate the model in 2D. A third pin allows rotation of the model in 3D space. With four or more pins, you can morph the model to fit the facial expression.
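The two-pin case has a neat geometric interpretation: two point correspondences fully determine a 2D similarity transform (pan, uniform scale and rotation). The sketch below illustrates that geometry using complex numbers; it is not FaceTracker's actual solver.

```python
def similarity_from_two_pins(p1, p2, q1, q2):
    """Return (a, b) so that q = a*p + b maps pin sources to pin targets.
    Points are complex numbers x + y*1j; 'a' encodes rotation + uniform
    scale, 'b' encodes translation."""
    a = (q2 - q1) / (p2 - p1)
    b = q1 - a * p1
    return a, b

# Example: two pins dragged so the mesh doubles in size and pans right
a, b = similarity_from_two_pins(0 + 0j, 1 + 0j, 1 + 0j, 3 + 0j)
print(a, b)  # → (2+0j) (1+0j): scale x2, no rotation, pan right by 1
```

Any other point on the mesh then follows the same transform, which is why two pins cannot morph the model: a third and fourth pin are needed for 3D rotation and deformation.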

Pinning the first keyframe manually

If you’re familiar with our GeoTracker node, please note that now you’re not pinning a static 3D model. With FaceTracker you can open and close eyes and mouth to various degrees, change the facial expression, etc.

Tracking

When you're satisfied with the initial position of the 3D model, you're ready to launch automatic tracking. It can be done on a frame-by-frame basis or continuously towards the last or first frame of the footage. On the left side of the first FaceTracker toolbar, you can find four tracking buttons.

Two of them launch continuous tracking towards the first and last frames respectively, starting from the current frame. The other two track a single step to the previous and the next frame respectively.

Note that tracking is interrupted by keyframes that you have on the timeline, so continuous tracking cannot go past a keyframe: keyframes are considered ideal and don't require any tracking.

While tracking is happening, you can see its results (almost) in real time in the viewer. If you see something strange happening (wrong expressions, or the subject's head being lost), you can abort the tracking — the results obtained so far won't be lost. And here we come to the refinement process.

FaceTracker toolbars

Refinement

In an ideal situation, you get clean and precise tracking. But we don't live in an ideal world, so at times you need to refine the results of automatic tracking. It's not difficult with FaceTracker. First, find a frame where the tracking results became noticeably wrong. Then, using the existing pins or adding new ones, correct the position of the 3D model to match the picture. You'll notice that once you adjust a pin position, a keyframe is created. When the model fits the picture, press the refine button on the first toolbar of FaceTracker. The refinement process tracks the object from both of the closest keyframes and then merges the two tracks, giving you the best of both.
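Conceptually, the merge can be pictured as a cross-fade between the two tracks, trusting each one more near its own keyframe (this weighting scheme is a hypothetical illustration, not the actual FaceTracker algorithm):

```python
def blend_tracks(forward, backward):
    """Linearly cross-fade two equal-length per-frame tracks of scalar
    values: weight 0 at the left keyframe, weight 1 at the right one."""
    n = len(forward)
    merged = []
    for i, (f, b) in enumerate(zip(forward, backward)):
        w = i / (n - 1) if n > 1 else 0.0
        merged.append((1.0 - w) * f + w * b)
    return merged

# Each track is exact at its own keyframe and drifts at the far end
print(blend_tracks([0.0, 1.0, 2.5], [0.5, 1.5, 3.0]))
# → [0.0, 1.25, 3.0]
```

The merged curve matches both keyframes exactly, which is consistent with keyframes being treated as ideal.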

You can always use the button to set up and fix new keyframes.

A more realistic tracking workflow looks like this:

  • Place a model in the footage with the first keyframe
  • Track the model
  • Abort tracking if the track is not ideal
  • Adjust the position or the shape of the face where the track becomes slightly lost, creating a new keyframe
  • Press the refine button to refine the tracking results between the two keyframes you have
  • Continue tracking from the last keyframe you created

Of course, it’s a bit simplified because sometimes you want to adjust some other settings, but more on that later (see Smoothing, Masks and User Tracks below).

Removing keyframes

Sometimes you may want to remove a keyframe set before or next to the current frame, or all keyframes before or after the current frame. You can use the cleanup buttons in the middle of the first FaceTracker toolbar. One clears all tracking data between the keyframes closest to the current frame on the left and on the right, leaving the keyframes intact. Another removes data and keyframes before the current frame, and another clears tracking data and keyframes after it. One more clears all data and all keyframes. Note that even when you clear all keyframes and data, the pins you've set on the model are kept intact, while the model is reset to its neutral state.

In case you want to restore the neutral facial expression, use the corresponding button.

There's also the button that removes all pins (note that pins are shared between all keyframes, so removing them removes them everywhere), while the 3D model is kept intact, retaining its latest state.

This can help you when you want to move the model without changing its shape or expression. Press the button and you'll notice that all pins turn yellow, which means they are no longer pinned to the picture. After adding a new pin and dragging it, or dragging an existing one (which turns red), you can reposition the model in the frame without morphing it.

There is also a checkbox that changes the behaviour of pins. When it is on, FaceTracker will try to retain each pin on the mesh at any cost, deforming the mesh accordingly. When the model is not precise, or you can't find distinctive features shared by the model and the picture, you most likely want to keep this checkbox on and work iteratively, slightly modifying the model with a number of pins. But when you have a very precise model and can spot a number of very precise features, you might try switching the checkbox off and placing your pins right where those features are in the frame — FaceTracker will not try to snap them back to the model. The point of working with the checkbox off is that you know precisely where some features of the 3D model should be in the picture — so you know that, once you set enough pins, the model will be placed precisely and the pins will land back on the surface of the 3D model.

Viewport Stabilisation

One of the newest features of FaceTracker is viewport stabilisation, which helps you keep keyframes consistent during tracking.

To switch stabilisation on, press the button on the toolbar. With this feature, you can keep the face fixed in the viewport wherever it moves in the frame, at any zoom level and viewport position.

It's also possible to stabilise the view around selected pins. When stabilisation is on, simply select one or more pins and start the playback. To return to stabilisation around the whole face, deselect the pins by left-clicking anywhere in the viewport outside the face. Stabilisation works with any kind of frame change, whether it is playback, jumping between keyframes, manual frame-by-frame stepping, or even the tracking process itself.

Tracking With Masks

There are two kinds of masks that you can use with FaceTracker to improve tracking results.

“Roto” node provides a mask used by FaceTracker to exclude image regions from tracking

The first one should be familiar to you if you have ever used masks in Nuke. It helps you exclude regions of the frame that overlap the subject, so FaceTracker won't be confused by them. You create a mask of any kind, as you usually do, and connect it to the mask input of FaceTracker on the right side of the node shape. Then, in the masking section on the main settings tab of FaceTracker, choose whether the mask should be used as-is or inverted: in the first case FaceTracker excludes the masked regions directly, in the second the mask is inverted before being used. In some special cases, you might want to use the alpha channel of the source passed to the main input of FaceTracker instead — choose the corresponding option according to your needs.
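The straight/inverted distinction can be pictured as turning the mask into per-pixel tracking weights (a conceptual sketch, not FaceTracker internals — the function name and representation are assumptions for illustration):

```python
def tracking_weights(mask, inverted=False):
    """Turn a 0/1 mask into per-pixel tracking weights.
    mask value 1 means 'exclude this pixel from tracking'."""
    if inverted:
        mask = [1 - m for m in mask]            # flip the mask first
    return [0.0 if m else 1.0 for m in mask]    # masked pixels get weight 0

print(tracking_weights([0, 1, 0]))                 # → [1.0, 0.0, 1.0]
print(tracking_weights([0, 1, 0], inverted=True))  # → [0.0, 1.0, 0.0]
```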

Creating surface masks

The second type — surface masks — is built into FaceTracker; the controls for it can be found on the right side of the first FaceTracker toolbar. Use it when you want to exclude some polygons of the 3D model from tracking. To mark polygons for exclusion, press the brush button and paint over the polygons of the model in the viewer (in either 2D or 3D view). You can set the radius of the brush and temporarily switch the mask off using the checkbox. There is also a button that resets the selection.

Smoothing

Another way to improve tracking quality is smoothing the results. It can help to avoid jitter in translations, rotations and facial expressions. You can find the controls for it on a dedicated tab. Note that changes in smoothing have to be made prior to the tracking process. If you already have tracked data and want to smooth it, you can press the refine button after changing the smoothing settings and the tracking results will be updated.

Smoothing parameters of FaceTracker

One important thing to understand here is that smoothing settings are not global: they affect tracking results only while tracking is happening, so you can use different smoothing settings for different parts of the shot.
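To see what smoothing does to a jittery per-frame channel, here is a sketch using exponential smoothing (chosen for illustration only — the document does not specify which filter FaceTracker actually uses). A lower alpha suppresses more jitter; a higher alpha follows the raw track more closely.

```python
def smooth(values, alpha):
    """Exponentially smooth a per-frame curve (e.g. a rotation channel).
    alpha in (0, 1]: 1.0 means no smoothing at all."""
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1.0 - alpha) * out[-1])
    return out

jittery = [0.0, 2.0, 0.0, 2.0, 0.0]
print(smooth(jittery, 0.5))  # → [0.0, 1.0, 0.5, 1.25, 0.625]
```

Note how the frame-to-frame swings shrink while the overall trend survives, which is exactly the trade-off the smoothing controls expose.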

User Tracks (Helpers)

You can also use tracks created with Nuke's built-in Tracker node, importing them on the corresponding tab of FaceTracker.

Keep in mind that all imported tracks have the highest priority and are considered ideal, so they need to be accurate if you actually want to improve tracking quality.

It's also worth mentioning that these tracks are used only if they are 'inside' the geometry in the frame. So if you want to improve the tracking of someone's face and you tracked an almost invisible freckle with Nuke's tracker, that can help; but if you tracked something outside of the face, it just won't be used.

Once again, we recommend not going overboard with creating many custom tracks in Nuke's tracker and feeding them into FaceTracker — most likely, everything you could track there is already covered by the FaceTracker analysis file.

Using Results

The output of the FaceTracker node is the morphed, transformed and rotated 3D model you passed to its input. Usually, you just connect the FaceTracker output to the input of the next node in the chain.

You can also export the geometry transformation and rotation as a separate node. To do this, select the corresponding option in the drop-down of the export section on the FaceTracker results tab, then press the export button. If you switch linking on, the exported node will be linked to the FaceTracker node, and all changes applied to FaceTracker after exporting will be transferred to the exported node. If you don't need this kind of behaviour, uncheck the option. Note that facial deformations are not passed to the exported node.

Results tab of FaceTracker

In case you need the estimated camera settings, you can export them as a Camera node. Select the corresponding option in the export section of the results tab in FaceTracker's settings. If you leave the linking checkbox on, the exported node will receive updates whenever camera settings change inside FaceTracker. Note that there's no point in exporting camera settings if you already have a camera node connected to the input of FaceTracker — just use that camera.

You may also want to use the camera position across the frames where you pinned the 3D model. In that case, you can export the animated camera. The exported node will then contain all transformations as if the model were static in its default position and only the camera moved around it.

When you have a number of keyframes across which a model rotates by more than 360°, we recommend using the corresponding button. The algorithm behind it will go through all of your keyframes and make the rotation continuous. It means that if your model was at 355° at one keyframe and at 10° at the next, pressing the button will add 360° to the second keyframe, so the model will be at 370°. It works backwards as well: 355° followed by 10° will become -5°.
Considering that the algorithm alters the rotation of the object using the closest period (a 180° threshold), be careful if your model rapidly rotates forwards and backwards between keyframes — in that case the button will most likely make things worse.
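The idea described above is standard angle unwrapping, sketched below (the actual button may differ in details): whenever the jump between consecutive keyframes exceeds 180°, a full turn is added or subtracted to keep the curve continuous.

```python
def unwrap_degrees(angles):
    """Make a sequence of keyframe angles continuous by adding or
    subtracting full 360-degree turns whenever the jump exceeds 180."""
    out = [angles[0]]
    for a in angles[1:]:
        prev = out[-1]
        # shortest signed step from prev to a, in (-180, 180]
        delta = (a - prev + 180.0) % 360.0 - 180.0
        out.append(prev + delta)
    return out

print(unwrap_degrees([355.0, 10.0]))  # → [355.0, 370.0]
print(unwrap_degrees([10.0, 355.0]))  # → [10.0, -5.0]
```

This also makes the failure mode clear: a genuine back-and-forth rotation of more than 180° between two keyframes is indistinguishable from a wrap-around, so the unwrap will pick the wrong direction.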

Exporting Model & Animation

You can export the animated model using the WriteGeo node and the Alembic (.abc) file type, which works best for animation in Nuke. Just connect the output of FaceTracker to the input of WriteGeo, set up the output parameters and press the render button.

Here's one possible problem: when exporting animated models from Nuke with WriteGeo, you can get two or more models in the file. That's a bug in Nuke related to the multithreaded nature of the exporting process.

Exporting geometry and cameras from Nuke

Transfer animation to another 3D model

You can also export the animation using ARKit-compatible FACS blendshapes — find the option in the exporting menu. This way you can transfer animation to other models with compatible blendshapes.

Using FaceTracker Without FaceBuilder (Custom Head Model)

The short answer is: it's possible. You can export the default head geometry from the FaceBuilder node (this does not require a licence), then modify it while keeping the vertex order — that order is how FaceTracker detects the face parts (nose, lips, eyes, etc.). Then import the model back using the ReadGeo or ReadRiggedGeo nodes and connect it to FaceTracker. After that you can track your custom head model with FaceTracker.
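Since preserving the vertex order is the critical constraint, a quick sanity check on your modified export can save a lot of debugging. The sketch below (a hypothetical helper, not a KeenTools tool) compares two OBJ sources: vertex positions may change, but the vertex count and the face index lists must stay identical.

```python
def obj_topology(obj_text):
    """Return (vertex_count, face_index_lists) from OBJ source text.
    Moving vertices changes neither value; reordering or deleting
    vertices, or editing faces, changes at least one of them."""
    verts = 0
    faces = []
    for line in obj_text.splitlines():
        if line.startswith("v "):
            verts += 1
        elif line.startswith("f "):
            faces.append(line.split()[1:])
    return verts, faces

original = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
modified = "v 0 0 0.5\nv 1 0 0.2\nv 0 1 0\nf 1 2 3\n"  # vertices moved only
print(obj_topology(original) == obj_topology(modified))  # → True
```

If this comparison fails after your edits, FaceTracker's face-part detection will be looking at the wrong vertices.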

You can also use the exported default model for 3D-wrapping your custom model that you already have, either modelled or scanned.

In the future, we plan to support custom models in a more convenient way, of course, but for now you at least have this workaround.

Links

Download the KeenTools package here
Follow us: Facebook, Twitter, Instagram, YouTube

FaceTracker in action

Smart tools for VFX and 3D artists