FaceBuilder is a Nuke node that helps you create 3D models of human faces and heads based on a few photos or video frames.
To create a 3D model you need a few portrait pictures taken from different angles; you then pin the 3D model to the head in each of them. The resulting model can be used, for example, for rigid tracking with GeoTracker or for facial tracking with FaceTracker.
Setup
The first thing you need is a set of pictures of the subject with a neutral facial expression (in the ideal case), shot with the same camera and the same Focal Length. It can be a number of photographs or a video file. You can try creating a 3D model from just one picture, but we recommend having at least three: a frontal one and two pictures from different sides with visible ears. The more pictures from different angles (including top and bottom) you have, the better the 3D model you can create. To load pictures as a sequence into Nuke, use numbers in their names (e.g. photo-1.jpg), then create a Read (Image) node and load the pictures as a sequence.
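If your source pictures have arbitrary names, a small script can map them onto the numbered pattern that Nuke recognises as a single sequence. This is just an illustrative sketch; the `sequence_names` helper is our own naming, not part of FaceBuilder or Nuke:

```python
def sequence_names(paths, prefix="photo", ext=".jpg"):
    """Map arbitrary picture file names to numbered names
    (photo-1.jpg, photo-2.jpg, ...) so a Read node can load
    them as one sequence (photo-%d.jpg in Nuke's notation).
    Returns a dict: original name -> sequence name."""
    return {p: f"{prefix}-{i}{ext}"
            for i, p in enumerate(sorted(paths), start=1)}
```

For example, `sequence_names(["front.jpg", "left.jpg", "right.jpg"])` numbers the files in sorted order, producing photo-1.jpg through photo-3.jpg; you would then rename (or copy) the files accordingly before loading them.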
If you’re going to build a model from footage, pick the frames that show the person’s head and face from the most varied angles with the most neutral facial expression, and use a Read (Image) node to import the footage into Nuke.
Starting from version 2.0.0, you can use photographs or video frames with non-neutral facial expressions. Keep in mind, though, that the closer the expressions are to neutral, the better the model; neutral expressions are still the ideal case.
To make the 3D model precise, it’s also recommended to set up a Camera (3D) node according to the settings of the camera that was used to photograph the subject. If you don’t know the camera settings, you can switch on Focal Length estimation in the camera settings of FaceBuilder (the main tab); later you’ll be able to export the estimated camera settings as a Camera (3D) node. Estimation happens each time you change the 3D model.
Using focal length estimation with non-neutral facial expressions can lead to inaccurate or even weird results, so consider using one or the other, not both.
It’s also very important to undistort the images you use for building a model, because FaceBuilder assumes the images you give it are free of lens distortion.
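To make the idea of undistortion concrete, here is a minimal sketch of a one-parameter radial distortion model and its inversion by fixed-point iteration. This is an illustration of the concept only; it is not the distortion model FaceBuilder or Nuke’s lens tools actually use, and the parameter k1 is our own notation:

```python
def undistort_point(xd, yd, k1, iters=10):
    """Invert the one-parameter radial model
        x_d = x_u * (1 + k1 * r^2),  r^2 = x_u^2 + y_u^2,
    by fixed-point iteration. Coordinates are normalized and
    centered on the principal point. Illustrative only."""
    xu, yu = xd, yd  # initial guess: undistorted == distorted
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2
        xu, yu = xd / f, yd / f
    return xu, yu
```

With k1 = 0 the function is the identity; for small nonzero k1 the iteration converges quickly, so distorting a point and then undistorting it returns the original coordinates.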
To start building your 3D model, connect the Read (Image) and Camera (3D) nodes to FaceBuilder’s bg and cam inputs respectively, then connect the FaceBuilder node to a Viewer. Find the picture you want to start with (we recommend starting with a 3/4 or front view) and press the Center Geo button on FaceBuilder’s toolbar above the Viewer. You will see a generic human head 3D model placed in the middle of the frame. Now you can start pinning this 3D model to your subject’s head and face.
The tex input can receive a texture that will be applied to the 3D model created in FaceBuilder.
Pinning a model
It’s better to start pinning with the parts of the face whose positions you can clearly see in the picture: corners of the eyes and lips, the nose, ears or chin. Click a corner of an eye (or any other point on the mesh) and drag it to the corresponding point on the picture. A small red square appears: that’s a pin, and the mesh is now pinned to that point in the frame. When you add a second pin and start dragging it, the 3D model is scaled to the needed size; the second pin also lets you rotate the model in 2D if needed. A third pin allows rotating the model in 3D space.

After you have pinned the most obvious points, you can start adjusting minor shape differences: cheeks, forehead, brows, etc. When you’re finished with the first frame, switch to the next one and repeat the pinning process. The quality of the model grows with each pinned frame. You can check how it looks by switching the Viewer to 3D mode.
It’s useful to know that the shape of the 3D model is shared between all frames where you pin it, while transformations (rotation, position) are stored as keyframes for each frame. We will discuss exporting them later.
While shaping the model, you might need to tweak the mesh appearance. The Lit Wireframe, Adaptive Opacity, Back-face Culling and Textured Wireframe checkboxes on the main settings tab of FaceBuilder let you customize it to your needs. Switching off Textured Wireframe returns the wireframe to its default bright green color. Back-face Culling removes the “invisible” parts of the wireframe from the view. Adaptive Opacity makes the wireframe semi-transparent, so its shape is easier to distinguish. And finally, Lit Wireframe adds some lighting to the wireframe, making its shape slightly easier to perceive.
Sometimes the 3D model feels a bit too rigid, so you can’t pin it properly. In that case, try switching off Auto Rigidity on the main settings tab of FaceBuilder and change the value manually: 0 means minimum rigidity, 1 is the maximum. This can be helpful, for example, when you’re finished with the general shape and want to tweak small features.
Exporting
Usually you only need the geometry, without the camera settings or the position of the 3D model in 3D space relative to the input footage or pictures. In that case, simply connect the output of the FaceBuilder node to the geo input of any other node where you need the 3D model.
In case you don’t need some parts of the resulting 3D model, you can switch them off on the main settings tab of FaceBuilder using the Ears, Eyes, Face, Head Back, Jaw, Mouth and Neck checkboxes.
In case you need the estimated camera settings, you can export them as a Camera (3D) node: select Camera in the Export section of the Results tab of FaceBuilder settings. If you leave the Link Output checkbox on, the exported Camera (3D) node will receive updates whenever the camera settings change inside FaceBuilder. Note that there’s no point in exporting camera settings if you already have a Camera (3D) node connected to the input of FaceBuilder; just use that camera.
You might also want to use the camera position in the frames where you’ve pinned the 3D model. In that case you can export Camera Positions; the exported Camera (3D) node will then contain all the transformations as keyframes at the frames where you’ve pinned the model.
For some special occasions you might want to export the 3D model translated in 3D space so that the default camera, in its default position, sees it fitted into the frame where it was built. To do that, either enable the Output Transformed Geometry checkbox and use the output of the FaceBuilder node, or export the transformations to a TransformGeo node and apply them to the FaceBuilder output. This can be useful, for example, when you create textures.
When you have a number of keyframes across which the model rotates by more than 360°, we recommend using the Unbreak Rotation button. The algorithm behind it goes through all of your keyframes and makes the rotation continuous. For example, if your model was at 355° at keyframe #24 and at 10° at keyframe #25, pressing the button adds 360° to keyframe #25, so the model ends up rotated at 370°. It works backwards as well: 355° coming after 10° becomes -5°.
Since the Unbreak Rotation algorithm adjusts each rotation to the closest period (within 180°), be careful if your model rotates forwards and backwards between keyframes: in that case the Unbreak Rotation button will most likely make things worse.
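The behaviour described above can be sketched as a small pure function over a list of per-keyframe angles. This is our own illustration of the idea, not KeenTools’ actual implementation:

```python
def unbreak_rotation(angles):
    """Make a sequence of rotation angles (in degrees) continuous:
    for each keyframe, shift the angle by a multiple of 360 degrees
    so it lands within 180 degrees of the previous keyframe, i.e.
    the motion takes the shortest path. A sketch of the idea behind
    the Unbreak Rotation button."""
    out = list(angles[:1])
    for a in angles[1:]:
        prev = out[-1]
        # pick the representative of `a` (mod 360) closest to `prev`
        k = round((prev - a) / 360.0)
        out.append(a + 360.0 * k)
    return out
```

For the example above, `[355, 10]` becomes `[355, 370]`, and `[10, 355]` becomes `[10, -5]`. The caveat also shows up here: a genuine back-and-forth jump larger than 180° between keyframes would be “unwrapped” too, which is exactly why the button can make things worse in that case.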
Exporting FaceTracker
Starting from version 2.0.0, it’s possible to export a FaceTracker node with keyframes created in FaceBuilder, so you can start tracking right away.
This feature is only available when facial expressions support is turned on in FaceBuilder. Obviously, it only makes sense if you’ve created the model using the footage you’re going to track; otherwise, follow the old way: create a new FaceTracker node and connect it to the FaceBuilder output.
Links
Download the KeenTools package here: keentools.io/downloads
Follow us: Facebook, Twitter, Instagram, YouTube