FaceBuilder is an add-on for Blender for 3D modelling of human faces and heads based on photos. With FaceBuilder you don’t need to be an experienced 3D modeller to create a quality 3D model with clean topology. You start by getting a few photos of a person from different angles and then fit the model to each of them to build a head or face model. The 3D models can later be used for sculpting, animation, tracking or anything else in Blender, or exported to a file and imported into any other 3D software.
The complete installation process is covered in this video.
It all starts with downloading a .ZIP file with the add-on from our website. When the zip file is stored locally, you need to open the Preferences window in Blender and go to the Add-ons section (Edit > Preferences > Add-ons). In the right upper corner of this window you’ll see the
Install… button. Click it and choose the downloaded .ZIP file — the add-on will be installed.
The process is a bit different for Mac users. When you press the download button on our website, you get a .DMG file which you need to mount: double-click it in Finder or open it from the browser’s downloads list (Safari often mounts DMG files automatically). The ZIP file needed for the installation is inside the mounted volume. We package it this way because on many Macs a bare ZIP file was being unpacked automatically after downloading, which broke the installation.
The next step would be to find “KeenTools FaceBuilder” in the list of add-ons (in the Mesh category) and turn it on using a checkbox in the left upper corner.
Unfortunately, we cannot ship our core library within the add-on package due to Blender license restrictions, but we made it as easy as possible to install it independently. There are two ways of doing that: online and offline. Both are accessible on the preferences pane of our add-on (Edit > Preferences > Add-ons > KeenTools FaceBuilder).
If the machine where you want to install our add-on is connected to the Internet, then you can try automatic online installation — just press the
Install online button and our add-on will download the core library file, install it to the add-on directory and tell you that now you can use the add-on.
You can also download the core library manually from our website and specify the path to the downloaded file using the
Install from disk button. Our add-on will install the downloaded core library file and tell you that you can now use the add-on. Please keep in mind that the versions of the add-on and the core library should match.
You control FaceBuilder in Blender using panels in the right sidebar of the 3D viewport. You can either bring them to the screen by pressing the
N key, or by clicking the tiny little triangle in the right upper corner of the viewport.
FaceBuilder has eight panels. On the main top panel, you can create, delete and select FaceBuilder objects. We don’t recommend having more than one FaceBuilder object in a scene because it can lead to confusion.
Using seven additional panels you can control the chosen FaceBuilder object in various ways.
Camera panel lets you modify the camera settings.
Views panel lets you load, remove and replace photographs, switch to Pin mode for different photographs, and turn on facial expressions (more on them later). It also holds most of the buttons you need once you switch to Pin mode.
Model panel gives you control over parameters of the 3D model, such as its responsiveness to pins for shape and expressions (Rigidity), the visibility of various face parts (nose, ears, neck, etc.), geometry scale and resolution (high, middle or low-poly). Note that while it’s OK to change the resolution of an already modelled head, it’s better to decide which resolution you need before you start shaping it: every time you change the topology you lose a small, usually unnoticeable, amount of precision.
Next, we have the
Wireframe panel where you can tweak the appearance of pins and of the wireframe in Pin mode.
Texture panel gives you access to the experimental feature of automatic texture generation.
And finally, the
Blendshapes panel has controls for the FACS blendshape functionality. Here you can also import a CSV file with pre-recorded animation and export the animated model.
Before you create your first FaceBuilder head, it’s better to remove all unneeded objects from the scene, especially the ones placed in its center: a newly created head will appear in the center as well, and the objects would interfere with each other. For example, by default Blender starts with a Cube, a Camera and a Light in the scene. You can either select them with your mouse and delete them from the context menu (right mouse button), or just use the shortcut sequence A → X → D (select everything → delete → confirm).
You start working with FaceBuilder by creating a head 3D object if one isn’t created already. To do this, open the sidebar, find FaceBuilder in it, and click the
Create a new head button placed on the main FaceBuilder panel.
Once you’ve initialised the model, you can load the photographs of the person. You can do this on the
Views panel by clicking the
Add images button. It’s possible to load multiple image files at once.
The distance to the person in the photos is not important, but it’s advisable to have the head take up most of the frame while keeping distortions to a minimum (e.g. don’t use wide-angle lenses close to the subject). It’s also important to know that FaceBuilder expects the pictures to have no lens distortion at all, so while it’s not a deal-breaker, you’ll get much better results if you undistort the photos before using them with FaceBuilder (lens distortion in Blender).
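As background on what “undistorting” means here, the sketch below applies a simple radial (Brown–Conrady style) distortion model. The coefficients k1 and k2 are hypothetical placeholders; real values would come from calibrating your specific lens with a dedicated tool.

```python
# Illustrative radial (Brown-Conrady style) distortion model, radial terms only.
# k1 and k2 are hypothetical placeholders; real values come from calibrating
# the specific lens with a calibration tool.

def undistort_point(x, y, k1=-0.05, k2=0.01):
    """Move a distorted, normalised image point (x, y) towards its
    undistorted position by inverting the radial model with a few
    fixed-point iterations (enough for mild distortion)."""
    xu, yu = x, y
    for _ in range(5):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / factor, y / factor
    return xu, yu

# The image centre is unaffected by purely radial distortion:
print(undistort_point(0.0, 0.0))  # (0.0, 0.0)
```

In practice you would undistort whole images with a dedicated tool rather than individual points; the snippet only illustrates the underlying maths.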
Whenever you load a new photo, a new
View is created. A View combines a 3D camera and the picture you’ve loaded, so every picture loaded into FaceBuilder has its own camera in the scene.
Camera settings & EXIF
FaceBuilder can automatically set up the focal length for each photo. If EXIF data is present, it’ll be used and you get very precise results. If there’s no EXIF data, focal length estimation is turned on and used in a smart way, taking the dimensions of every photo into account. Still, if you need to set the focal length manually, the manual control option remains available.
To set up the camera manually you need to know the 35mm equivalent focal length used for each photo. You need to open the
Camera panel, and enter the value in the
Focal Length field.
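If you only know the lens’s real focal length and the sensor dimensions, the 35mm equivalent can be derived from the diagonal crop factor. This is a standard photography formula, not anything specific to FaceBuilder:

```python
import math

def focal_length_35mm_equiv(focal_mm, sensor_width_mm, sensor_height_mm):
    """Convert a real focal length to its 35mm equivalent using the
    diagonal crop factor. The full-frame diagonal is about 43.27 mm."""
    full_frame_diagonal = math.hypot(36.0, 24.0)   # 36 x 24 mm frame
    sensor_diagonal = math.hypot(sensor_width_mm, sensor_height_mm)
    crop_factor = full_frame_diagonal / sensor_diagonal
    return focal_mm * crop_factor

# A full-frame sensor has a crop factor of 1, so the value is unchanged:
print(focal_length_35mm_equiv(50.0, 36.0, 24.0))  # 50.0
```

For small phone sensors the crop factor is large, which is why a ~4 mm phone lens behaves like a moderate wide-angle in 35mm terms.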
When you’ve loaded the photos, it’s time to decide whether you plan to use facial expressions. If the photographed person doesn’t have a neutral facial expression in any of the photos, you need to turn on facial expression support.
When photos are loaded, it’s time for what we call “Pinning”: click one of the buttons with an image file name (a View) on the
Views panel — you will switch the Pin mode on. A couple of new buttons will appear on the
Views panel, in the viewport you’ll see the photo you loaded and the mesh of the FaceBuilder 3D model. Now you can start pinning the mesh to the photo.
It can be done manually or using automatic face alignment. For the second option, press the
Align Face button on the
Views panel. Then a couple of neural networks will find a face in the photo and set up all the required pins to match its position and shape. If the facial expression is not neutral, turn on the
Allow facial expressions checkbox before pressing the
Align Face button — then the expression will be matched as well. Repeat this action for every photo you loaded. If there is more than one face in a photo, the add-on will let you choose which face you want to pin. The alignment results are not always 100% accurate at the moment, so in most cases you’ll need to adjust the result by creating new pins or moving existing ones.
You create pins by clicking anywhere on the mesh — the red square dots that appear over it are what we call “pins”. You don’t need to create many of them at once; instead, create them one by one for distinguishable parts of the head (or face) and drag them to the corresponding positions on the photo. To remove a pin, right-click it. Don’t forget that you can undo and redo most actions, but for pinning you often need to do it twice for it to take effect — unfortunately, that’s how Blender works.
If you prefer manual pinning, you can do that just as before. But we really recommend starting with automatic alignment since it reduces pinning time dramatically!
For manual pinning, we recommend starting with a 3/4 view, because it gives you more information about the head: the front and the side views at the same time. But for automatic alignment, it’s usually better to start with the frontal view.
The first three pins change the position and scale of the mesh; from the fourth pin on, you also change its shape and expression. We recommend starting with the corners of the eyes, mouth, ears, nose and chin, and then switching to another view to repeat the same “draft” pinning. Once you have pinned four or five views (e.g. two 3/4 views, the frontal and two side views), you can return to the views you pinned earlier and refine the model’s position and shape on them, creating new pins when necessary. Then you can pin more views and repeat the refinement process until you’re satisfied with the quality of the model.
If the model feels too stiff, use the
Shape rigidity and the
Expression rigidity settings (
Model panel) for changing how much pins affect the shape and the expression of the model.
You can delete all pins by pressing the
Remove all pins button on the
Views panel at any time. All red dots will disappear and the mesh on this view will be reset to the default shape, but its position will remain the same.
Usually you need up to seven views: the frontal one, two 3/4 views, two side views, one half-bottom view and one half-top view, but you’re free to add more if you feel you need them. Fewer views also work, with the obvious downside of losing the details you cannot see.
A couple of notes on taking photos. You can use any type of photos, including ones with non-neutral facial expressions. But it’s important to understand that if you combine non-neutral facial expressions with focal length estimation, the FaceBuilder algorithms get too many “degrees of freedom” for their guesswork: not only does the computation become slower, but the precision of the model suffers too. That’s why, if you’re after quality, you should shoot the person knowing the camera settings and taking care of the person’s appearance.
In an ideal case, you can set up a number of cameras around the person, as if you were building a photogrammetry rig, and take all the photos in one moment. But usually, asking the person to sit or stand relaxed and still for 15–30 seconds while you take all the required photos is more than enough. It’s also worth knowing that if you ask the person to turn their head, the shape of the head near the neck gets distorted by tensed muscles, so it’s better to walk around with a camera while the person stays completely still. The last thing to keep in mind: if you’re planning to grab a texture from the photos, you need to set up proper uniform lighting, otherwise the texture brightness and colours will differ from area to area. Usually it’s enough to step outside into a wide-open space with no direct sunlight — overcast weather works best. It’s also better to use manual White Balance on the camera, otherwise the colours will likely differ between shots.
The texture extraction algorithm built into FaceBuilder for Blender is still at an early experimental stage. We decided to include it to give you a simple way of getting something good enough to start from.
It works using the views where you’ve pinned the model, projecting each pixel of the UV map onto the photos according to the model position.
Before launching the texture creation process, you can set the texture resolution and the desired UV map at the top of the
Texture panel. FaceBuilder has four different UV maps:
Butterfly is aimed at reducing distortions with as few seams as possible. Our
Legacy UV has even fewer distortions, but many more seams.
Maxface gives you as much resolution for the face as possible.
Spherical is a slightly modified version of the popular ‘cylindrical’ UV with better handling of the top part.
After pressing the
Create Texture button you can choose the views you want to use for texture grabbing (the views that don’t have a pinned model will be ignored automatically), pressing
OK in the dialog window starts the process of grabbing and stitching the texture, which takes a lot of processing power and some time, so you need to be a little patient.
Once the process is finished, you’ll see the texture applied to the object if you left the corresponding checkbox checked; if you didn’t, you’ll only see a message in Blender’s status bar telling you that the texture was created — you can then apply the automatically created material for the texture using the
Apply texture button.
You can also export and delete the texture using the buttons on this panel. Deleting may be useful when you want to transfer the project file without the texture (which changes the file size from kilobytes to megabytes).
In the Advanced section of the Texture panel you can tweak the texture grabbing algorithm. The most important settings are
Angle strictness and
Expand edges. The first one,
Angle strictness, determines how the angle of view affects a pixel’s colour when it’s grabbed. The possible values are 0–100. At 0, every pixel gets the average of the colours taken from all pinned views where that pixel is visible. At 100, only the views that look at the pixel at exactly 90° are used, so the colour becomes more accurate, but you lose information for all the pixels which no view sees at 90°. Usually the best values are between 10 and 20.
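As an illustration of the idea, here is a sketch of angle-weighted blending in general — not FaceBuilder’s actual implementation, and the mapping of the 0–100 slider to a weighting exponent is a made-up assumption:

```python
import math

def blend_pixel(samples, strictness):
    """Illustrative sketch (not the actual FaceBuilder code) of
    angle-weighted colour blending. `samples` is a list of
    (colour, angle_degrees) pairs, where the angle is between the view
    direction and the surface normal: 0 degrees means looking straight
    at the surface. Higher strictness (0-100) concentrates the weight
    on head-on views."""
    exponent = strictness / 10.0  # hypothetical mapping of the 0-100 slider
    weights = [max(0.0, math.cos(math.radians(a))) ** exponent
               for _, a in samples]
    total = sum(weights)
    if total == 0:
        return None  # no acceptable view sees this pixel
    return sum(c * w for (c, _), w in zip(samples, weights)) / total

# strictness 0 -> plain average; high strictness favours the head-on view.
samples = [(100.0, 0.0), (200.0, 60.0)]
print(blend_pixel(samples, 0))    # 150.0 (equal weights)
print(blend_pixel(samples, 100))  # close to 100.0
```

The trade-off the text describes falls out directly: a high exponent drives the weight of oblique views towards zero, so grazing-angle pixels stop contributing at all.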
Expand edges setting expands the texture edges using the colour of the edge pixels. The value sets the expansion as a percentage of the output texture height. Using it may help to hide seams on the applied texture.
Then we also have three super-experimental functions:
Equalize brightness — which tries to level the brightness of a pixel across different views,
Equalize color — which does the same for colour, and
Autofill — which tries to fill gaps in the texture intelligently. The first two may help when parts of the face are differently lit or coloured on different photographs, leaving shadow and colour patches on the texture. Although these functions sometimes work well, it’s still much better to shoot a uniformly lit face from many angles if you plan to use the photos for texture grabbing.
When the head model is ready, you can animate it with 51 built-in FACS-blendshapes. Press the
Create button on the Blendshapes panel to generate shape keys. You’ll see more buttons to control your blendshapes.
Delete removes the shape keys and unlinks the animation.
Reset value resets the values of the shape keys to 0 in the current frame. Note that it doesn’t alter the animation and doesn’t create a keyframe — you need to do that manually if you want to save this state in the current frame.
You can always animate blendshapes manually by setting their values for every keyframe. But we also made it possible to import pre-recorded animation as a CSV file in the Live Link Face format.
To control blendshapes manually, use the
Shape keys tab on the
Object Data Properties panel. The shape key editor in the
Animate panel gives you control over keyframes.
You can also import pre-recorded facial animation using the
Load CSV button on the
Blendshapes panel. The FaceBuilder head will be animated with the blendshape coefficients found in the CSV file. Currently, only the format of the Live Link Face app is supported. The app works only on iOS devices equipped with the TrueDepth camera, such as iPhone X and newer models.
Note that the animation is loaded starting from the current keyframe, which means you can load multiple files one after another to form a continuous sequence.
The format of the file is pretty simple, so you can compose it yourself with your own software. You can also export facial animation in the form of a CSV file with ARKit-compatible FACS blendshape coefficients from our FaceTracker node for Nuke.
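For illustration, a minimal reader for such a file might look like the sketch below. The exact header layout (a Timecode column, a BlendShapeCount column, then ARKit blendshape names) is an assumption based on typical Live Link Face exports, so check a real export for the precise column set:

```python
import csv
import io

# Minimal sketch of reading per-frame blendshape coefficients from a
# Live Link Face style CSV. The header layout below is an assumption
# for illustration, not a format specification.
sample = """Timecode,BlendShapeCount,JawOpen,EyeBlinkLeft
00:00:00:00,2,0.10,0.00
00:00:00:01,2,0.35,0.80
"""

def read_blendshape_frames(text):
    """Return a list of (timecode, {blendshape_name: value}) tuples."""
    reader = csv.DictReader(io.StringIO(text))
    frames = []
    for row in reader:
        coeffs = {name: float(value) for name, value in row.items()
                  if name not in ("Timecode", "BlendShapeCount")}
        frames.append((row["Timecode"], coeffs))
    return frames

frames = read_blendshape_frames(sample)
print(frames[1][1]["EyeBlinkLeft"])  # 0.8
```

Composing such a file yourself is just the reverse: write a header with your blendshape names and one row of coefficients per frame.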
Project management, Saving & Exporting
You can easily save the project in the middle of the face-building process, then load it and continue the work from the point where you left it.
We recommend keeping the project file in the same folder as the photos used in it, or keeping the photos in a directory placed next to the project file. Keep in mind that Blender uses relative file paths in projects, so storing the files on different hard drives isn’t a future-proof idea.
FaceBuilder doesn’t include the photographs in project files, so when you transfer a project to another computer (or person), you need to transfer not only the project file but the photographs as well — at least if the recipient is going to keep changing the shape of the head using the photos.
The created texture, on the other hand, is stored inside the project file, which means it travels along with the project file and also makes the file quite heavy. If you don’t want or need to transfer the texture, you can delete it using the
Delete button on the
Texture panel. You can also export it using the
Export button and then delete it from the project.
To export the geometry you need to select it in the 3D viewport and then go to the File > Export menu where you can choose the file type and save the model. We recommend using Wavefront (.obj) or Alembic (.abc) formats, because other ones do not work consistently in Blender. But you’re free to try and choose any other format depending on your workflow, there are plenty of them, including the ones people often use for 3D printing.
If you’re going to use the head for facial tracking with our FaceTracker, you need to keep the topology intact — FaceTracker relies on the vertex order — so please don’t use any automatic optimisations during export. In this case you can use only the Wavefront and Alembic formats, because Blender modifies the geometry with other formats and there’s currently no way to prevent it.
Using the Wavefront (.obj) and Collada (.dae) formats, you can export the texture along with the model. If you choose Wavefront, the material and texture files (.mtl and .png) will be saved next to the model file, while Collada embeds everything in a single file.
For exporting a geometry with all blendshapes and animation, use the
Export as FBX button on the
Blendshapes panel. All settings in the export window will be already configured for importing into Unreal Engine or Unity game engines.