Unreal Filmmaking: More Human than Metahuman

Anthony Koithra
Locodrome
5 min read · Dec 20, 2022

Anyone with even a passing familiarity with animation has likely heard of the ‘Uncanny Valley’ — the creepy feeling that some people experience in response to almost-lifelike animated characters. Rogue Squadron, my first short, had no facial animation and very little custom body animation. This was by design — it’s a big topic and one I didn’t want to get into as I learned the basics. My next short has a human main character and lots of close-ups and custom animation. The ‘appeal’ of my character is going to be critical.

The Unreal Metahuman Creator interface — the range of options for customization is really impressive

Preparing a good baseline human model with topology that deforms nicely and has realistic proportions is actually pretty hard to do — which is why there are lots of starter base meshes available for a range of body types and levels of stylization. Unreal's Metahumans take that several steps further: an enormous range of ethnicities, features, skin types, eye colors, and hair (including minute details like peach fuzz on facial skin), all mapped onto a standardized body and facial skeleton. You can even remix the various presets to get closer to the look you want for a character. It's an incredibly powerful tool for (very) quickly building a close-to-photorealistic character and placing it in your scene.

Working with a Metahuman face involves a ton of really detailed controls

This is a very subjective evaluation, but for me a lot of the content created with Metahumans — close-up stuff in particular — is still in ‘uncanny’ territory. The materials are fantastic — so much so that still images often look fine. Most of the creepiness is rooted in the face — and it’s usually the facial animation that really breaks the illusion. The lack of natural movement all over the face combined with a very realistic model results in immediate creepiness.

Using the LiveLink app to stream facial performance into Unreal Engine in real-time

There are lots of very impressive and increasingly accessible facial capture tools out there, but they don't capture a lot of the subtle facial movement that would lift a performance out of the Valley. Manually keyframing the 164 facial controls on the Metahuman control rig is a daunting prospect too — so some kind of capture is essential. But another way to improve appeal is to reduce the level of realism in the character: by making it more stylized, we automatically lower our brain's expectations of realism in the animation.

An art and science unto itself — designing and sculpting appealing stylization into a character

So why use Metahumans at all? For a solo animator like me, learning to rig and set up a custom face mesh would take a completely impractical amount of time. Until faces become as easy to rig as bodies (which itself only got easy recently, with tools like AccuRig), I'm going to have to rely on pre-built rigs — plus the Metahuman rig works really well for animating inside Unreal, vs. roundtripping to Maya or Blender.

Stylization (within limits) is possible with Mesh-to-Metahuman

The Mesh-to-Metahuman workflow that Unreal introduced this year lets you start from a base head shape you like and turn it into a Metahuman head — excellent for stylization and achieving a distinctive look. It's a really well-built tool with an impressive amount of flexibility, but it tends to round off sharp, stylized face planes and edges in order to make the character more, well, human-looking.

There’s more than facial shapes that go into Arcane characters’ appeal, but there is character and story built into each of these very distinctive designs

A distinctive facial silhouette isn't just useful for appeal — it's a character and storytelling tool. A well-designed character's shape and movement tell you things about who they are and how they fit into the story. 'Stylized' is a frequently abused term, but every animated character I've ever loved has a shape that is decidedly non-realistic. I'm not making animated films simply because a live-action concept would be too expensive to shoot. So my problem statement is this: how do I (A) keep a distinctive and stylized shape while (B) not building a bunch of rigs from scratch?

The work that Sava’s team is doing is really impressive — this process video was instructive for me

For a while I avoided working with Metahumans because, in my experiments with the Mesh-to-Metahuman workflow, the characters came out looking too realistic — and therefore a little creepy. But then I saw a proof-of-concept video from Sava Zivkovic that showed a Metahuman model with a really distinctive look. A little more exploration revealed that his team was using a single corrective morph to achieve that look. This is a rapidly evolving area — clearly lots of people much smarter than me are wrestling with the same problem. So down the rabbit hole I went.

Among my early results — using a corrective facial morph to push a Mesh-to-Metahuman model beyond the usual limits of the workflow while retaining the Control Rig

Long story short, I’ve been able to achieve some pretty striking results — and I’ll describe the process in more detail in the next diary entry. It’s complicated — like, really complicated — but I was able to learn a lot about topology and node-based tools and texturing in the process. I am still relatively early in my experiments but this is a very promising start.

As always, if you want to follow my progress more closely, I’m posting dailies pretty regularly to @locodrome on Instagram.


Anthony Koithra
Locodrome

Filmmaker. Strategic Advisor. Former MD & Partner at BCG Digital Ventures.