KeenTools FaceTracker Tutorial
Please note that FaceTracker is currently in beta, which means not only that it’s free to use, but also that the workflow might change significantly.
FaceTracker is a Nuke node for tracking facial gestures and expressions, designed to work with models built with FaceBuilder. It’s very useful for clean-up, face replacement, digital make-up and other tasks that require tracking changes on human faces.
The first thing you need is a 3D model of the face you want to track, created with FaceBuilder. Connect it to the geo input of FaceTracker.
Then you need the footage where you want to track a face, connected to the bg input. Removing lens distortion from the footage before you pass it to FaceTracker might drastically improve the results, depending on the lens quality. It’s optional, but remember that FaceTracker assumes the image has no distortion at all.
And finally you can either connect a Camera (3D) node with the settings of the camera that was used to film the footage to the cam input, or enable the Estimate Focal Length checkbox on the main settings tab of FaceTracker. Estimation will happen each time you move or change the 3D model.
Now we need to analyze the footage. To do this, find the Analysis File input field on the main settings tab of FaceTracker and enter the path where FaceTracker should save the analysis file, or press the button with an arrow next to the field to choose a path using a file dialog. When you’ve specified the path, press the Analyze button next to the path field. A confirmation dialog with a selectable frame range will appear. If you don’t want to track the model across the whole footage, you can specify the frame range here; it will reduce the analysis time. When the frame range is set according to your needs, press the OK button. Analysis will take some time.
If you have analyzed this video file before, you can reuse the old analysis file: just specify its path in the path field.
Everything is ready for tracking. Connect the FaceTracker node to a Viewer, choose a frame to start with and press the Center Geo button on the second FaceTracker toolbar. You will see the 3D model you created earlier with FaceBuilder in the middle of the frame.
By default the model appears with a textured wireframe, which is usually the most convenient mode. But in case you find that kind of wireframe difficult to work with on a specific frame, you can switch off the Textured Wireframe checkbox on the main settings tab of FaceTracker, and the wireframe will become bright green. You can also find other useful checkboxes such as Adaptive Opacity and Back-face Culling nearby. Use them to customize the appearance of the wireframe in the Viewer.
Tracking starts with matching the initial position of the 3D model to the picture; at KeenTools we call it pinning. It’s better to start pinning with the parts of the face whose positions you can clearly see in the picture: corners of the eyes and lips, nose, ears or chin. Click a corner of an eye or any other point on the mesh and drag it to the corresponding point on the picture. You will see a small red square appear: it’s a pin, and it is now pinned to the picture. When you add a second pin and start dragging it, the 3D model will be scaled to the needed size. The second pin also lets you rotate the model in 2D space if needed. A third pin allows rotation of the model in 3D space. All further pins will modify the 3D model itself.
Note that now you’re not pinning a static 3D model as you did with FaceBuilder, for example. This time, using pins, you can open and close the eyes and mouth to varying degrees, change the facial expression, etc.
When you’re satisfied with the initial position of the 3D model, you can finally launch automatic tracking. It can be done frame by frame, or continuously to the last or first frame of the footage. On the left side of the first FaceTracker toolbar you can find four buttons: Track To Start, Track Previous, Track Next and Track To End.
Track To Start and Track To End launch continuous tracking to the first and last frames respectively, starting from the current frame.
Track Previous and Track Next launch tracking on the previous and the next frame respectively.
While tracking is in progress you can see its results (almost) in real time in the Viewer. If you see that something strange is happening (such as wrong expressions, or the subject’s head getting lost), you can abort the tracking. And here we come to the refining process.
In an ideal situation you get clean and precise tracking. But we don’t live in an ideal world, so at times you need to refine the results of automatic tracking. It’s not difficult with FaceTracker. First, find a frame where the tracking results become noticeably wrong. Then, using the existing pins or adding new ones, correct the position of the 3D model to match the picture. You will notice that a keyframe is created. When the model fits the picture, press the Refine button on the first toolbar of FaceTracker; it will run through the frames between existing keyframes and refine the model position. Use this procedure repeatedly to refine the tracking results across all frames of the footage. Usually you don’t need to set a lot of manual keyframes, but it depends on the content of the source footage.
Sometimes you might want to remove a keyframe set before or after the current frame, or all keyframes before or after the current frame. You can use the keyframe and tracking data cleanup buttons in the middle of the first FaceTracker toolbar.
Clear Between Keyframes clears all tracking data between the keyframes closest to the current frame on the left and on the right, leaving the keyframes intact.
Clear Backwards removes data and keyframes before the current frame.
Clear All clears all data and all keyframes.
Clear Forwards clears tracking data and keyframes after the current frame. Note that even when you clear all keyframes and data, the pins you’ve set on the model are kept intact, while the model is reset to its neutral state.
In case you want to restore the neutral facial expression manually, use the Unmorph button: it will keep your pins where they are on the mesh, but reset the 3D model to its default state.
There’s also the Unpin button, which does the opposite: it removes all pins (note that pins are shared between all keyframes, so you remove them everywhere), but the 3D model is kept intact, retaining its latest state.
The Disable Pins button can help when you want to move the model without changing its shape or expression. Press the button and you’ll notice that all pins turn yellow, which means they are no longer pinned to the picture; by adding a new pin or dragging an existing one (which becomes enabled again) you can reposition the model without changing it.
The Spring Pins Back checkbox changes the behaviour of pins. When it’s on, FaceTracker will try to keep the pins on the mesh at any cost. When the model is not precise, or you can’t find any distinctive features of the model or the picture, you most likely want to keep this checkbox on and work iteratively, slightly modifying the model with a number of pins. But when you have a very precise model and can spot a number of very precise features, you might try switching the checkbox off and placing your pins exactly where they should be; FaceTracker will not try to drag them back to the model. The point of working with Spring Pins Back switched off is that you know precisely where certain features of the 3D model should be on the picture, so you know that when you set enough pins, the model will be placed precisely and the pins will end up back on the 3D model.
Tracking With Masks
There are two kinds of masks that you can use with FaceTracker to improve tracking results.
The first one should be familiar to you if you have ever used masks in Nuke. It helps you exclude regions of the frame that overlap the subject, so FaceTracker will not be confused. You create a mask of any kind, as you usually do, and connect it to the mask input of FaceTracker placed on the right side of the node shape, just as usual. Then, on the main settings tab of FaceTracker, in the Mask section choose mask alpha or mask alpha inverted. With mask alpha, FaceTracker will exclude the masked regions; with mask alpha inverted, the mask will be inverted before being used. In some special cases you might want to use the alpha channel of the source passed to the bg input of FaceTracker; then choose source alpha or source alpha inverted according to your needs.
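If it helps to picture the four Mask modes numerically, here is a small illustrative sketch. It is not FaceTracker’s actual code, and the convention that a weight of 1.0 means “excluded from tracking” is our assumption for the example:

```python
def exclusion_weights(mask_alpha, source_alpha, mode):
    """Select per-pixel exclusion weights for one of the four Mask modes.

    Illustrative sketch only: the convention that 1.0 means "exclude this
    pixel from tracking" is an assumption, not FaceTracker's documented
    internals.
    """
    if mode == "mask alpha":
        return list(mask_alpha)
    if mode == "mask alpha inverted":
        return [1.0 - a for a in mask_alpha]      # invert before use
    if mode == "source alpha":
        return list(source_alpha)                 # alpha from the bg input
    if mode == "source alpha inverted":
        return [1.0 - a for a in source_alpha]
    raise ValueError("unknown mask mode: %s" % mode)
```

So a mask that is opaque (alpha 1.0) over a boom microphone crossing the face would, in mask alpha mode, exclude exactly those pixels.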
The second type, the Surface Mask, is built into FaceTracker; the controls for it can be found on the right side of the first FaceTracker toolbar. Use it when you want to exclude some surfaces of the 3D model from tracking. To mark a surface for exclusion, press the brush button and then mark the surfaces of the model with the brush in the Viewer. You can set up the radius of the brush and switch off the mask temporarily using the ignore checkbox. The Clear button resets the selection.
Another way of improving tracking quality is smoothing the results. It can help avoid jitter of motion, transformations and camera movement. You can find the controls for it on the Smoothing tab. Note that changes to smoothing have to be made prior to the tracking process. If you’ve already done the tracking and want to add smoothing, you can press the Refine button and the tracking results will be updated. Note that the refine process happens between the two closest keyframes.
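FaceTracker’s internal smoothing algorithm isn’t documented here, but the effect is easy to picture: a jittery per-frame curve (say, one rotation channel of the tracked head) is replaced by a locally averaged one. A minimal moving-average sketch, purely for illustration:

```python
def smooth(values, window=5):
    """Smooth a per-frame curve with a symmetric moving average.

    Illustration of what smoothing does to a jittery tracking curve;
    it is not FaceTracker's actual algorithm. The real controls live
    on FaceTracker's Smoothing tab.
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)                 # clamp window at the edges
        hi = min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

Running it over an alternating curve like `[0, 10, 0, 10, 0]` shrinks the frame-to-frame jumps, which is exactly the jitter reduction you see after refining with smoothing enabled.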
The output of the FaceTracker node is the morphed and translated 3D model you passed to its input. Usually you just use it by connecting the FaceTracker output to the geo input of the next node in the chain.
You can also export the geometry transformations as a TransformGeo node. To do this, select TransformGeo in the drop-down of the Export section on the FaceTracker Results tab, then press the Export button. If you switch on Link Output, the exported TransformGeo node will be linked to the FaceTracker node, and all changes applied to the FaceTracker node after exporting will be transferred to the exported TransformGeo node. If you don’t need this kind of behaviour, uncheck Link Output before exporting.
In case you need the estimated camera settings, you can export them as a Camera (3D) node. Select Camera in the Export section of the Results tab in FaceTracker’s settings. If you leave the Link Output checkbox on, the exported Camera (3D) node will receive updates whenever camera settings change inside FaceTracker. Note that there’s no point in exporting camera settings if you have a Camera (3D) node connected to the input of FaceTracker: just use that camera.
You might also want to use the position of the camera across the frames where you’ve pinned the 3D model. In that case you can export Camera Positions. The exported Camera (3D) node will then contain all the transformations at the keyframes where you’ve pinned the 3D model.
On some special occasions you might want to export the 3D model you’ve built, translated in 3D space so that the default camera in its default position would see it fitted into the frame where it was built. In that case you can either use the Output Transformed Geometry checkbox and the output of the FaceTracker node, or export the transformations to a TransformGeo node and then modify the output of FaceTracker with the transformations stored in the exported node. For example, this might be useful when you create textures.
When you have a number of keyframes across which a model rotates by more than 360°, we recommend using the Unbreak Rotation button. The algorithm behind it will go through all the keyframes you have and make the rotation continuous. This means that if your model was at 355° at keyframe #24 and at 10° at keyframe #25, pressing the button will add 360° at keyframe #25, so the model will be rotated at 370°. It works backwards as well: 355° after 10° will become -5°.
Considering that the Unbreak Rotation algorithm always picks the rotation representation closest to the previous keyframe (within ±180°), you have to be careful if your model rotates forwards and backwards between keyframes: in that case the Unbreak Rotation button will most likely make things worse.
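The behaviour described above can be sketched in a few lines. This is a reconstruction of the idea, not KeenTools’ actual implementation: each keyframe angle is shifted by whole turns so that the step from the previous keyframe stays within ±180°.

```python
def unbreak_rotation(angles):
    """Make a sequence of keyframe angles (in degrees) continuous.

    Sketch of the idea behind Unbreak Rotation (not the actual KeenTools
    code): shift each angle by multiples of 360 so that every step from
    the previous keyframe stays within +/-180 degrees.
    """
    if not angles:
        return []
    out = [angles[0]]
    for angle in angles[1:]:
        while angle - out[-1] > 180:   # jumped forwards over the wrap point
            angle -= 360
        while angle - out[-1] < -180:  # jumped backwards over the wrap point
            angle += 360
        out.append(angle)
    return out
```

With the example from the text, `unbreak_rotation([355, 10])` yields `[355, 370]`, and `unbreak_rotation([10, 355])` yields `[10, -5]`. The sketch also shows why back-and-forth rotation is risky: a genuine jump larger than 180° between two keyframes gets folded back into the ±180° range, producing the wrong turn direction.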