How an Indie Studio Attempted Facial Performance

Jen Re
Well Told Entertainment
5 min read · Nov 2, 2023


Howdy! My name is Jen Re, and I’m the Animation Director at Well Told Entertainment. Today I’ll be diving into the facial animation pipeline I put together for our latest game, The Foglands. Specifically, we’re going to focus on one of our main characters, Ursa, and how we crafted her emotive and reactive performance.

The main goals in putting the pipeline together were:

  • To reduce animation scope by using a mocap foundation. We had a tiny but mighty animation team of four (Jen Re, Aharonit Elior, Quoc Nguyen, and Ashley Chung) who created over 800 handmade character animations for the game’s cast. The facial animation itself, roughly 350 additional animations, was analyzed, retargeted, and cleaned up by a team of one: moi!
  • To find ways of having more creative control over the capture-generated performances so that they matched our stylized characters and animation. We strove to add that personal touch that only comes from an animator’s eye.

THE CAPTURE

Now, as I mentioned, Well Told is a tiny studio. We did not have the manpower to hand animate all the facial animation, nor did we have the funds for mocap stages or expensive equipment. Fortunately, Faceware had some more affordable headsets and software options.

The aim was to capture and use only the lip sync and some minor brow data. Eyes and emotive brow posing would be implemented in a different way. Utilizing Faceware’s Indie Cam, we were able to capture facial performance while recording our VO for Ursa. And with a custom, very scrappy lighting setup in a small meeting room of our office, the studio was able to secure some great capture.

ANALYZING AND RETARGETING THE CAPTURE

There were two pieces of software we relied on from Faceware to get our initial results — Analyzer and Retargeter.

Analyzer was where we brought in our capture. We essentially created a data set for Analyzer to learn the face of our actress, so that it could then use that data to analyze the rest of the footage. And Retargeter, a plugin for Autodesk’s Maya, was where we did the setup of our rigged character and created a pose library for Retargeter to pull from.

I utilized Faceware’s batch scripts for their Analyzer and Retargeter tools so that I could do a lot of the work up front and let the scripts do the rest. I also spent a lot of time making a custom script that utilized tools from Animbot’s API to batch-edit curve values across all the animations while the data was being mapped onto Ursa.

For instance, if I wanted Ursa’s jaw to hinge a little to one side so she favors that side of her face, I could blanket add ‘x’ amount of rotational value to her jaw control in each Maya file. I could also use custom scripts to apply certain curve filters across each performance!
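
To give a rough idea of what that kind of batch pass can look like, here’s a minimal sketch using plain maya.cmds rather than our actual setup. The folder path, the jaw control name, and the offset value are placeholders, and the Animbot tools the real script leaned on aren’t shown here:

```python
import maya.cmds as cmds

SCENE_DIR = "D:/foglands/ursa_faces"   # placeholder folder of retargeted shots
JAW_CTRL = "jaw_CTRL"                  # placeholder jaw control name
JAW_OFFSET = 2.5                       # 'x' degrees of extra hinge to one side

# Open every Maya ASCII scene in the folder and run the same edit on each.
for scene in (cmds.getFileList(folder=SCENE_DIR, filespec="*.ma") or []):
    cmds.file(SCENE_DIR + "/" + scene, open=True, force=True)

    if not cmds.objExists(JAW_CTRL):
        continue

    # Shift every key on the jaw's rotateZ curve by a flat amount (wide time
    # range so all keys are hit), so Ursa favors one side of her face.
    cmds.keyframe(JAW_CTRL, attribute="rotateZ", edit=True, relative=True,
                  valueChange=JAW_OFFSET, time=(-10000, 10000))

    # One example of a batch curve-filter pass: run Maya's Euler filter over
    # the control's rotation curves to clean up flips from retargeting.
    rot_curves = cmds.keyframe(JAW_CTRL, query=True, name=True,
                               attribute=["rotateX", "rotateY", "rotateZ"]) or []
    if rot_curves:
        cmds.filterCurve(*rot_curves)

    cmds.file(save=True, type="mayaAscii")
```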

Analyzer Mapping Result
Cleaned Up Retargeter Data in Maya — Mouth and Brow Translation

BLINKS

Blinks are one of the major components of facial performance that bring a character to life. What people don’t always realize is that humans blink much more frequently than an animated character needs to. Blinks are meant to be placed with intent, so I wanted to make sure I had some level of control over their frequency.

What we did was create a standard blink animation in Maya for the character, so I could get the timing, the squash and stretch in the lids and brows, and the overall feel just right. That blink was then dropped into a system created by our engineers that set blinks to trigger every ‘x’ seconds. And if we hand-placed a blink notify anywhere in one of her animations, the system would note that a blink was placed and reset its timer for the next one. This gave her a more naturalistic look!

Isolated blink that was fed into our blink system.
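
For illustration, here’s a logic-only sketch of that blink scheduler in plain Python, not the actual implementation our engineers built in Unreal. The interval value and all the names are placeholders:

```python
BLINK_INTERVAL = 4.0  # placeholder: trigger an automatic blink every 'x' seconds


class BlinkScheduler:
    """Fires the hand-authored blink on a timer, unless a hand-placed
    blink notify in the current animation already covered it."""

    def __init__(self, interval=BLINK_INTERVAL):
        self.interval = interval
        self.time_since_blink = 0.0

    def tick(self, delta_seconds):
        """Called every frame; returns True when the auto-blink should play."""
        self.time_since_blink += delta_seconds
        if self.time_since_blink >= self.interval:
            self.time_since_blink = 0.0
            return True
        return False

    def on_blink_notify(self):
        """A hand-placed blink notify counts as a blink, so the timer
        resets instead of double-blinking right after it."""
        self.time_since_blink = 0.0


# Usage sketch:
#   each frame -> if scheduler.tick(dt): play_blink_animation()
#   when a blink notify fires in an animation -> scheduler.on_blink_notify()
```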

EMOTE LAYERS

The last piece of the system was emotive layering. This is where the brows and lids come into play. What we did was create 7–8 different one-frame facial poses that could be additively layered over any facial performance. If she had a line that was on the gloomier side, we’d add a notify to layer on her ‘sad’ pose, where her brows and mouth corners would slightly droop. And if she was supposed to be upbeat, we’d add a ‘happy’ notify where her brows and mouth corners would perk up.
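
To sketch the additive idea (not the actual Unreal setup), here’s roughly how a one-frame emote pose can be layered over a base pose as simple per-control offsets. Every pose name, control name, and value below is a placeholder:

```python
# Placeholder one-frame emote poses, stored as additive offsets per facial control.
EMOTE_POSES = {
    "sad": {
        "brow_inner_L.ty": -0.3, "brow_inner_R.ty": -0.3,
        "mouth_corner_L.ty": -0.2, "mouth_corner_R.ty": -0.2,
    },
    "happy": {
        "brow_inner_L.ty": 0.2, "brow_inner_R.ty": 0.2,
        "mouth_corner_L.ty": 0.4, "mouth_corner_R.ty": 0.4,
    },
}


def apply_emote(base_pose, emote_name, weight=1.0):
    """Layer a one-frame emote pose additively over the base facial pose.

    base_pose: {control_attribute: value} sampled from the retargeted performance.
    weight:    0..1, so a notify can blend an emote in or out mid-line.
    """
    result = dict(base_pose)
    for attr, offset in EMOTE_POSES.get(emote_name, {}).items():
        result[attr] = result.get(attr, 0.0) + offset * weight
    return result


# e.g. during a gloomy line, a 'sad' notify would drive something like:
# pose_this_frame = apply_emote(sampled_pose, "sad", weight=1.0)
```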

Here are some examples of the same base performance I made during a very early test capture, with varying emotive poses layered on top:

Annoyed Emote Layer
Happy Emote Layer
Sad Emote Layer

This gave us a huge range of possibilities for the different line reads, and let us easily change her emote mid-line using Unreal notifies. It also helped extend our body animation library while giving us more control in stylizing her facial performance!

And with all those layers, we have our final facial performance! Here’s an example of one of the conversations in the game:

Thank you so much for stopping by! We hope you enjoy our game and interacting with the characters we’ve loved creating! : )
