FaceTracker is a Nuke node for facial tracking designed to work with models built with FaceBuilder. It’s very useful for clean-up, face replacement, digital makeup and other tasks that require tracking of facial expressions. It doesn’t require mocap rigs or markers on the face. Also, the model needed for tracking can be created using a frame of the footage where the facial expression is neutral or almost neutral.
The first thing you need is a 3D model of the face you want to track, created with FaceBuilder. Connect it to the
geo input of FaceTracker.
Then you need the footage in which you want to track a face, connected to the
bg input. Removing lens distortion from the footage before you pass it to FaceTracker might drastically improve the results depending on the lens quality. It’s optional, but remember that FaceTracker assumes that the image has no distortion at all.
And finally you can either connect a
Camera (3D) node with the settings of the camera that was used to film the footage to the
cam input, or choose
Estimate Focal Length on the main settings tab of FaceTracker. Estimation will happen each time you move or change the 3D model.
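On the lens distortion note above: to make the “no distortion” assumption concrete, here is the standard one-parameter radial model in a minimal pure-Python sketch. This is an illustration only, not how Nuke’s lens tools or FaceTracker are implemented, and the coefficient `k1` is a made-up value:

```python
# Simple radial model: distorted = undistorted * (1 + k1 * r^2),
# where r is the radius in normalized, centered image coordinates.
def distort(xu, yu, k1):
    f = 1.0 + k1 * (xu * xu + yu * yu)
    return xu * f, yu * f

def undistort(xd, yd, k1, iterations=10):
    # Invert the radial model by fixed-point iteration.
    xu, yu = xd, yd
    for _ in range(iterations):
        f = 1.0 + k1 * (xu * xu + yu * yu)
        xu, yu = xd / f, yd / f
    return xu, yu
```

An undistort pass of this kind (in practice, Nuke’s lens distortion tools with coefficients estimated from a lens grid) gives FaceTracker the straight-line image it expects.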
Now we need to pre-analyse the footage.
We isolated the analysis process to make the actual tracking fast and responsive, so you don’t have to wait for frames to be analysed during tracking. You can also automate the pre-analysis of your footage and perform it on servers instead of user machines, so your artists have everything ready for tracking right from the start.
To launch pre-analysis manually, find the
Analysis File input field on the main settings tab of FaceTracker, enter the path where FaceTracker should save the analysis file, or press the button with an arrow next to the field to choose a path using a file dialog. When you’ve specified the path, press the
Analyse button next to the path field. A confirmation dialog with a selectable frame range will appear. If you don’t want to track the model across the whole footage, you can specify the frame range here; it will reduce the analysis time. When the frame range is set according to your needs, press the
OK button. The analysis will take some time. You can pre-analyse different frame ranges using the same file: the information will be added to the file, or re-analysed if you choose frames that were pre-analysed before.
Now everything is ready for tracking. Connect the FaceTracker node to a
Viewer, choose a frame to start with and press the
Center Geo button on the second FaceTracker toolbar. You will see the 3D model you’ve created earlier with FaceBuilder in the middle of the frame.
By default it appears with a textured wireframe, and usually that’s advisable. But in case you experience any difficulties while working with that kind of wireframe on some frames, you can switch off the
Textured Wireframe checkbox on the main settings tab of FaceTracker; the wireframe will then become bright green. You can also find other useful checkboxes like
Adaptive Opacity and
Back-face culling around there. Use them to customize the appearance of the wireframe in the
Viewer. And finally, at the bottom of the main FaceTracker settings tab you can find the
Colors section, where you can change the color of the wireframe, its transparency and a number of other things.
Tracking starts with matching the initial position of the 3D model with the picture; at KeenTools we call it
pinning. It’s better to start pinning with the parts of the face whose positions you can clearly see in the picture: corners of the eyes and lips, nose, ears or chin. Click a corner of an eye or any other point on the mesh and drag it to the corresponding point in the picture. You will see a small red square appear — it’s a
pin, and now it’s ‘pinned’ to the picture. When you add a second pin and start dragging it, the 3D model will be scaled to match the size. The second pin also lets you rotate the model in 2D space if needed. The third pin will allow rotation of the model in 3D space. Starting from the fourth pin you can actually morph the model to fit the facial expression.
If you’re familiar with our GeoTracker node, please note that you’re no longer pinning a static 3D model as you did there. This time, using pins, you can open and close the eyes and mouth to varying degrees, change the facial expression, etc.
When you’re satisfied with the initial position of the 3D model, you can finally launch automatic tracking. It can be done frame by frame or continuously to the first or last frame of the footage. On the left side of the first FaceTracker toolbar you can find four buttons:
Track To Start,
Track Previous,
Track Next and
Track To End.
Track To Start and
Track To End launch continuous tracking to the first and last frames respectively, starting from the current frame.
Track Previous and
Track Next launch tracking on the previous and the next frame respectively.
While tracking is in progress you can see its results (almost) in real time in the
Viewer. If you see that something strange is happening (wrong expressions, or the subject’s head being lost), you can abort the tracking — the results obtained so far won’t be lost. And here we come to the refining process.
In an ideal situation you get clean and precise tracking. But we don’t live in an ideal world, so at times you need to refine the results of automatic tracking. It’s not difficult with FaceTracker. First, find a frame where the tracking results became noticeably wrong. Then, using the existing pins or adding new ones, correct the position of the 3D model to match the picture. You will notice that after you adjust the pin positions, a keyframe is created. When the model fits the picture, press the
Refine button on the first toolbar of FaceTracker. It will run through the frames between the existing keyframes and refine the model position using information from both keyframes. Use this procedure repeatedly to refine the tracking results across all frames of the footage when needed. Usually you don’t need to set a lot of manual keyframes, but it depends on the content of the source footage.
Sometimes you might want to remove a keyframe set just before or after the current frame, or all keyframes before or after the current frame. For that, use the keyframe and tracking data cleanup buttons in the middle of the first FaceTracker toolbar.
Clear Between Keyframes clears all tracking data between the keyframes closest to the current frame on the left and on the right side, leaving the keyframes intact.
Clear Backwards removes data and keyframes before the current frame.
Clear All clears all data and all keyframes.
Clear Forwards clears tracking data and keyframes after the current frame. Note that even when you clear all keyframes and data, the pins you’ve set on the model are kept intact, while the model itself is reset to the neutral state.
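As a mental model, the cleanup buttons can be sketched in a few lines. This is a toy illustration with an assumed data layout (a frame-to-pose dict plus a set of keyframe numbers), not FaceTracker’s actual internals:

```python
# Toy model: `data` maps frame number -> tracked pose,
# `keyframes` is the set of frames with manual keyframes.
def clear_between_keyframes(data, keyframes, current):
    left = max((f for f in keyframes if f <= current), default=None)
    right = min((f for f in keyframes if f >= current), default=None)
    for f in list(data):
        if left is not None and right is not None and left < f < right:
            del data[f]  # the keyframes themselves stay intact

def clear_backwards(data, keyframes, current):
    for f in list(data):
        if f < current:
            del data[f]
    keyframes.difference_update({f for f in keyframes if f < current})

def clear_forwards(data, keyframes, current):
    for f in list(data):
        if f > current:
            del data[f]
    keyframes.difference_update({f for f in keyframes if f > current})
```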
In case you want to restore the neutral face expression manually, use
Unmorph button — it will keep your pins where they are on the mesh, but reset the mesh to the default state of the 3D model.
There’s also the
Remove Pins button, which removes all pins (note that pins are shared between all keyframes, so you remove them everywhere), while the 3D model is kept intact, retaining its latest state.
Pin/Unpin button can help you when you want to move the model without changing its shape or expression. Press the button and you’ll notice that all pins turn yellow: it means they are no longer really pinned to the picture, and by adding a new pin or dragging an existing one (which becomes enabled again) you can reposition the model without morphing it.
Spring Pins Back checkbox changes the behaviour of pins. When it’s on, FaceTracker will try to keep each pin on the mesh at any cost, deforming the mesh accordingly. When the model is not precise, or you can’t find any distinctive features in the model or the picture, you most likely want to keep this checkbox on and work iteratively, slightly modifying the model with a number of pins. But when you have a very precise model and can spot a number of very precise features, you might try switching the checkbox off and then placing your pins right where they should be — FaceTracker will not try to drag them back to the mesh. The point of working with
Spring Pins Back switched off is that you know precisely where certain features of the 3D model should be in the picture — and so you know that when you set enough pins, the model will be placed precisely and the pins will land back on the surface of the 3D model.
Tracking With Masks
There are two kinds of masks that you can use with FaceTracker to improve tracking results.
The first one should be familiar to you if you’ve ever used masks in Nuke. It helps you exclude regions of the frame that overlap the subject, so FaceTracker won’t be confused by them. Just create a mask of any kind in the usual way and connect it to the
mask input of FaceTracker, placed on the right side of the node shape as usual. Then, on the main settings tab of FaceTracker, in the
Mask section choose
mask alpha or
mask alpha inverted. With
mask alpha FaceTracker will exclude masked regions, with
mask alpha inverted the mask will be inverted before being used. In some special cases you might want to use the alpha channel of the source passed to the
bg input of FaceTracker, then choose
source alpha or
source alpha inverted according to your needs.
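The four modes differ only in which alpha channel they read and whether it is inverted first. A toy per-pixel sketch of that choice (the exact semantics are an assumption here; we treat any non-zero alpha after the optional inversion as an excluded region):

```python
def pixel_used_for_tracking(mask_alpha, source_alpha, mode):
    # Pick the alpha source and apply the optional inversion.
    alpha = {
        "mask alpha": mask_alpha,
        "mask alpha inverted": 1.0 - mask_alpha,
        "source alpha": source_alpha,
        "source alpha inverted": 1.0 - source_alpha,
    }[mode]
    # Masked (non-zero alpha) pixels are excluded from tracking.
    return alpha == 0.0
```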
The second type of masks —
Surface Mask — is built into FaceTracker; the controls for it can be found on the right side of the first FaceTracker toolbar. Use it when you want to exclude some surfaces of the 3D model from tracking. To mark a surface as excluded, press the brush button and then paint over the surfaces of the model with the brush in the
Viewer (either in the 2D or the 3D view). You can set the radius of the brush, and switch off the mask temporarily using the
ignore checkbox. The
Clear button resets the selection.
Another way of improving tracking quality is smoothing the results. It can help avoid jitter in motion, transformations, rotations and face changes. You can find the controls for it on the
Smoothing tab. Note that changes to smoothing have to be made prior to tracking. If you’ve already done the tracking and want to add smoothing, you can press the
Refine button and tracking results will be updated.
One important thing to understand here is that smoothing settings are not global: they affect the results only while tracking is happening, so you can use different smoothing settings for different parts of the shot.
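To see why the settings must be set before tracking, think of smoothing as a causal filter applied frame by frame as tracking produces results, rather than a post-process. A toy exponential smoother illustrates the idea (KeenTools’ actual algorithm is not documented here, and `strength` is a made-up parameter):

```python
def track_with_smoothing(raw_poses, strength):
    """Blend each newly tracked value with the previous smoothed one,
    as it arrives; settings in effect at tracking time shape the result."""
    smoothed = []
    for pose in raw_poses:
        if not smoothed:
            smoothed.append(pose)
        else:
            smoothed.append(smoothed[-1] * strength + pose * (1.0 - strength))
    return smoothed
```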
User Tracks (Helpers)
You can also use tracks created with Nuke’s built-in
Tracker node. You can import the tracks you’ve created with the
Tracker node on the
UserTracks tab of FaceTracker. Keep in mind that all imported tracks get the highest priority and are considered ideal, so they need to be accurate in order to improve the tracking quality.
The output of the FaceTracker node is the morphed and translated 3D model you’ve passed to its input. Usually you just connect the FaceTracker output to the
geo input of the next node in the chain.
Also, you can export the geometry transformations as a
TransformGeo node. To do this, select
TransformGeo in the drop-down of the
Export section of FaceTracker’s
Results tab. Then press the
Export button. If you switch on
Link Output, the exported
TransformGeo will be linked to the FaceTracker node, and all the changes applied to it after exporting will be transferred to the exported
TransformGeo node. If you don’t need this kind of behaviour, uncheck
Link Output. Note that facial transformations (morphing) are not passed to the exported TransformGeo node; it carries only the overall position, rotation and scale of the model.
In case you need the estimated camera settings, you can export them as a
Camera (3D) node. Select
Camera in the
Export section of the
Results tab in FaceTracker’s settings. If you leave
Link Output checkbox on, the exported
Camera (3D) node will receive updates whenever the camera settings change inside FaceTracker. Note that there’s no point in exporting camera settings if you already have a
Camera (3D) node connected to the input of FaceTracker — just use that camera.
You might also want to use the camera position across the frames where you’ve pinned the 3D model. In that case you can export
Camera Positions. Then the exported
Camera (3D) node will contain all the transformations at the keyframes where you’ve pinned the 3D model, as if the model were static in the default position and only the camera moved around it.
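The idea can be sketched with plain matrices: if a rigid transform M describes the model’s motion relative to a fixed camera, the same footage is explained by a static model and a camera moved by the inverse of M. A minimal pure-Python inverse of a rigid 4×4 transform, as an illustration only (the exported Camera (3D) node stores these values for you):

```python
def rigid_inverse(m):
    """Invert a row-major 4x4 rigid transform (rotation + translation):
    the inverse rotation is the transpose R^T, and the inverse
    translation is -R^T * t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]  # R^T
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]
```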
For some special occasions you might want to export the 3D model translated in 3D space in such a way that the default camera in its default position would see it fitted to the frame where it was built. Then you can either use the
Output Transformed Geometry checkbox and the output of FaceTracker node, or export the transformations to a
TransformGeo node and then modify the output of FaceTracker with the transformations stored in the exported node. For example, this can be useful when you create textures.
When you have a number of keyframes across which a model rotates by more than 360°, we recommend using the
Unbreak Rotation button. The algorithm behind it will go through all the keyframes you have and make the rotation continuous. It means that if your model was at 355° at the keyframe
#24 and at 10° at the keyframe
#25, pressing the button will add 360° to the keyframe
#25, so the model will be rotated at 370°. It works backwards as well: 355° after 10° will become -5°.
Considering that the
Unbreak Rotation algorithm always picks the equivalent angle closest to the previous keyframe (i.e. within ±180°), you have to be careful if your model rotates forwards and backwards between keyframes — then the
Unbreak Rotation button will most likely make things worse.
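The closest-period logic described above can be sketched in a few lines (an illustration of the behaviour, not KeenTools’ actual implementation):

```python
def unbreak_rotation(angles):
    """Shift each keyframed angle by a multiple of 360 degrees so that
    consecutive keyframes always take the shortest path."""
    if not angles:
        return []
    out = [angles[0]]
    for a in angles[1:]:
        # Choose the multiple of 360 that brings `a` closest to the
        # previous (already unbroken) keyframe value.
        out.append(a + round((out[-1] - a) / 360.0) * 360.0)
    return out
```

With this logic, 355° followed by 10° becomes 370°, and 10° followed by 355° becomes -5°, matching the examples above.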
Exporting Model & Animation
You can export the animated model you’ve got after tracking using a
WriteGeo node and
ABC format (since it works best for animation in Nuke). Just connect the output of FaceTracker to the input of
WriteGeo, set up the output parameters and press the Execute button.
Using FaceTracker Without FaceBuilder (Custom Head Model)
The short answer is: it’s possible. You can export the default head geometry from a FaceBuilder node (which doesn’t require a license), then modify it while keeping the vertices in the same parts of the face — this is how FaceTracker detects the facial parts (e.g. nose, lips, eyes, etc.). Then import it back using ReadGeo or ReadRiggedGeo nodes and connect it to FaceTracker. After that you can track your custom head model with FaceTracker.
You can also use the exported default model for wrapping a custom model that you already have, either modelled or scanned.
In the future we plan to support custom models in a more convenient way, of course, but for now you at least have this workaround.