Metahumans 101

ft. UE 5.3, Live Link Face (iOS), Quixel Bridge, Metahumans, The Metahuman Animator, Sequencer, Take Recorder, & Movie Render Queue

deacōnline
Nov 15, 2023 · 12 min read

Introduction

I made a thing! Wanna see how I did it?

Follow this tutorial and master the basics of metahumans.

Or at least learn a trick or two.

Theme

When planning this shot, I had a few things I wanted to experiment with: the MetaHuman Animator, multiple characters, my own mocap, high-quality renders, and so on.

The first question on my mind was how to tie this to some kind of theme and storytelling device.

I’d been into Alan Watts’ interpretation of Monism for a while, and I’d been listening to a lot of Aurora, so I thought I’d lean into the concept of “one-ness”.

So the basic set up for this shot is this:

Many individuals experiencing the same thing at the same time in different universes. Entangled across time and space, these ostensibly disparate people realize that they are, in fact, one being.

Ideally, you’d have things planned out properly before the fact, but this concept really took shape late in the project. Things kind of clicked at a certain point when my imagination and my capabilities came to an agreement.

Strategy

I liked the look of the metahumans (MH) as soon as they were announced. I mean look at this…

And this…

just a quick screenshot while in the editor…
  • The materials & textures are astounding.
  • The hair looks incredible.
  • Lighting & camera effects are a breeze.

…And it’s all (kind of) free.

An in-editor (UE5.3) shot of ‘Trey’ using my own facial mocap with Live Link

So I dived in. Then I got stuck. Then I got un-stuck.

Let me share some lessons I learned during my UE animation fellowship (October/November 2023).

The Engine

I don’t know about you, but I found that my metahuman projects on Unreal Engine v5.0, v5.1, and v5.2 crashed regularly (more often than usual).

UE 5.3, however, is different.
It’s less crash-y (albeit more freeze-y).

5.3, the version I’ve been waiting for

It also contains the new metahuman animator, which is pretty neat.
More about that a little later.

Getting Started

With the help of a few metahuman template projects & plugins, we can get started relatively quickly. Check these puppies out:

Starting with a leg up

The MH Lighting project is a great start.

It has a handful of nicely lit environments that’ll serve as a handy sandbox for setting up our character(s) and their animation(s).

I’m blown away at how well these levels light our metahumans

NB: Lighting can make or break our metahumans, so be careful.

…be careful not to over-light your MHs. notice how much shadow is used here. Imperfection = realism

Characters

The Quixel Bridge makes downloading and importing our characters really simple.

There are some real gems here

Just download, then import.

Note: The highest level-of-detail (LOD) versions of our MHs are not highly optimized. This means they’ll hammer our frame rate in real-time applications, but they’re great for cinematics rendered with the Movie Render Queue (MRQ).

Once imported, drag the MH blueprint into the level, and let the shaders compile.

Maria, looking ahead stoically, for now

Now, let’s animate!

Performance Capture

One important thing to consider before starting our mocap journey…

Metahumans use two skeletons with two separate animation tracks:
a.) Facial animations
b.) Body animations

After much anxiety and frustration, I strongly recommend recording our head and neck movements as part of our character’s body animation.

While this data can be captured in our facial mocap, translating it to the body skeleton afterwards is possible, but painful.

Simply put: arms, legs, spine, neck and head movements should be captured in the body mocap. The facial mocap should be reserved exclusively for expressions.

Like so:

https://www.youtube.com/watch?v=ee83B7RYSUc

With metahumans, uncanniness is the enemy, so capturing facial expressions and the bodily performance at the same time is a solid idea. It allows for performance capture that is honest and congruent. It may, however, require a specialized head rig like the one shown above.

Option 1: Live Link

To bring our characters to life, I use the Live Link iOS app.
(Sorry, my Android friends)

TL;DR: Big ups to pinkpocketTV for this simple tutorial:

To use Live Link, we’ll first need some plugins.
In the MH Lighting project, some of these are pre-installed.

Download the free Live Link Face app.
(Here’s a quick Live Link setup tutorial.)

To link the phone to the PC, create a “Capture Source” file.
(Further reading.)

Right-click > Metahuman Animator > Capture Source

setting up our Capture Source

Select “LiveLink Face Connection” from the dropdown.

Add your phone’s IP address to the source file and save.
(You can find the IP address in the Live Link Face app’s settings.)

specifying the Capture Source details

Select your MH blueprint, then check the Live Link boxes.

In the Live Link app, if we see a mesh/net on our face (see below), it means our face is being tracked correctly.

fully functional (ARKit-powered) facial mocap courtesy of pinkpocketTV

This can be tested in edit mode, so you don’t even need to hit ‘play’.

Maria, getting into the mocap vibe

To capture our facial performance, the ‘Take Recorder’ is pretty cool.

https://docs.unrealengine.com/5.0/en-US/take-recorder-in-unreal-engine/

Just add the elements you want to track, hit record and act.

The output is a sequence file, which can be fed as a ‘sub-sequence’ into your main level sequence. The rest is sequencing and rendering.
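If you’d rather wire the take into your main sequence with a script instead of dragging it in by hand, a rough Python (Editor Scripting) sketch looks something like this. The asset paths are placeholders, and the Subscenes API naming has shifted a little between engine versions, so treat it as a starting point rather than gospel:

```python
import unreal

# Placeholder asset paths -- swap in your own main sequence and recorded take.
MAIN_SEQ_PATH = "/Game/Cinematics/MainSequence"
TAKE_SEQ_PATH = "/Game/Cinematics/Takes/Scene_1_01"

main_seq = unreal.load_asset(MAIN_SEQ_PATH)   # LevelSequence
take_seq = unreal.load_asset(TAKE_SEQ_PATH)   # LevelSequence recorded by Take Recorder

# Add a Subscenes track to the main sequence and point a new section at the take.
# (Newer engine versions may expose this as add_track instead of add_master_track.)
sub_track = main_seq.add_master_track(unreal.MovieSceneSubTrack)
section = sub_track.add_section()
section.set_sequence(take_seq)

# Make the sub-section span the take's own playback range.
section.set_start_frame(take_seq.get_playback_start())
section.set_end_frame(take_seq.get_playback_end())
```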

That’s about it. That’s the “old” way to bring your MHs to life.

Option 2: The Metahuman Animator

The MH animator is wild.

It transforms our flat 2D video data into a 3D animation mapped to a metahuman’s facial skeleton.

Here’s a demo:

Introducing the MetaHuman Animator…

There are a few good tutorials on this out there.

We start by opening the “Capture Manager”…

… and we add a phone as an input (“MHCapture”) source.
(See the previous step on making an MHCaptureSource, if you missed it.)

Note to self: name your files properly!!!

If the Live Link app is running and can detect a face, a little green light will show that the app is linked and operational.

There are a few ways to process mocap data in the MH animator:

  1. Make a new video recording in the app on your phone
  2. Make a new video recording in the UE editor
  3. Load an old video recording

Once the recording has been added to the project, we’ll need to create a MetaHuman Identity so that the video footage can be turned into an animation mapped to a metahuman’s facial mesh.

We start this process by calibrating our MH Identity. We need to take a snapshot of the face from the front, the left, and the right.

Like so:

I’m such a NinjaTheory/Hellblade fan, so these demos get me uber psyched!

Front

Left

Right

And then add teeth and calibrate them…

If you get lost, this little yellow warning will guide you towards the next step…

Once our Metahuman Identity file is ready, we’re ready to create a Metahuman “performance”.

In the performance file, we simply select our footage, a metahuman identity, and a mesh for visualization. Be sure to trim the capture range (clip length) to keep file sizes as low and manageable as possible.

These performances can be exported as an animation or as a level sequence. The animations are great, but they can be tricky to stitch to a body; I suspect Epic will iron this process out at some point. It was the trickiest part of the whole project: heads get detached, animations don’t play, the editor crashes, blending is a pain, etc. There is light at the end of the tunnel, though.

As I mentioned at the beginning of this blog, I found that exporting a facial performance without head movement is a good idea.

Ideally, we want to capture the bodily and facial movements at the same time; tracking the body and its corresponding facial expressions simultaneously makes for a believable performance.

Like so:

Since I don’t own a head-rig yet, I was going to have to work around my limitations.

In the sequencer, I added the facial mocap to each of my characters.

The bodily animations were added to the metahuman blueprints separately — just be sure to disable post process animations on the bodies.
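If you have a handful of characters, unticking that box by hand gets old fast. Here’s a rough Python sketch of the same idea; it assumes the body skeletal mesh component is the one literally named “Body” (which is how the stock MH blueprints name it), so adjust to taste:

```python
import unreal

# For every selected MetaHuman actor, disable the post-process anim blueprint
# on its "Body" skeletal mesh component.
for actor in unreal.EditorLevelLibrary.get_selected_level_actors():
    for comp in actor.get_components_by_class(unreal.SkeletalMeshComponent):
        if comp.get_name() == "Body":  # assumption: stock MH component naming
            comp.set_disable_post_process_blueprint(True)
            unreal.log(f"Disabled post-process anim on {actor.get_actor_label()}")
```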

Don’t get me wrong, this part took a lot of patience. ;)

Here’s the official video tutorial from Unreal:
https://youtu.be/Nkb4DEoZ_NY?si=hbD1aUVP8DiDK6k6&t=1509

Favourites

This next tip may sound silly, but it saved me a lot of time.

While working on the project, I found that there were a handful of folders I kept returning to.

Namely _D (my own miscellaneous folder), Levels (levels), MetaHumans (characters), and MHSource_Ingested (mocap).

So I color-coded them and added them to my favourites section. (see below)

This meant that most of my important assets were just one click away.

Level Juggling

I have a strong inclination towards environment art, so I thought I’d take this opportunity to explore swapping levels.

Why? Because of one thing Unreal and Unity have in common.

An Unreal “level” is a single file, just like a Unity “scene” is a single file.

And we can’t have two people making changes to the same file at once; one set of changes will overwrite the other.

So to cooperate, we must split responsibilities: one person works on Level X, and another works on Level Y. We can split the contents of these levels however we like, so one person can work on lighting while another works on character animation, for example.

This is not exclusively a metahuman tip, but I found it really handy and won’t ever be turning back. I’m super glad I worked this out.

In this project, basically, I use different levels to handle different jobs.

The “Persistent Level” (aka “Level 0”) contains my camera, post-process volume, fog, music, and so on. All the gizmos that are required by every shot.

Each additional level contains a character and their unique lighting setup.

Note: I found that naming the levels after the character made it easier to track what’s what. It also made me more empathetic to my characters in a funny way. I loved them more by naming them, rather than numbering them… Crazy, I know.

Just be sure to change the “Streaming Method” for different renders.

Selecting “Always Loaded” helped to render each level one by one.
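For what it’s worth, adding the character sub-levels (and picking their streaming method) can be scripted too. A minimal sketch, with placeholder level paths standing in for my actual character levels:

```python
import unreal

# Placeholder sub-level paths -- one level per character.
character_levels = [
    "/Game/Levels/Maria",
    "/Game/Levels/Trey",
]

world = unreal.EditorLevelLibrary.get_editor_world()

for level_path in character_levels:
    # "Always Loaded" streaming, so each level renders reliably one by one.
    unreal.EditorLevelUtils.add_level_to_world(
        world, level_path, unreal.LevelStreamingAlwaysLoaded
    )
```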

Sequencer & Movie Render Queue

Once I was ready to render out some video, it was time to hit the sequencer and the MRQ.

My sequencer is pretty simple for this project:

A single camera cut with a fade in and fade out sets the tone.

All characters have the same facial mocap animation & idle body animation. The tricky bit here is that each character is a different height. I manually scooched each character up to roughly match one another.
(This is far from a perfect fix to this problem.)
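If you’d rather nudge them with a script than by eye, a tiny sketch like this (with made-up per-character Z offsets) does the same scooching:

```python
import unreal

# Made-up vertical offsets in cm, keyed by actor label -- tune these by eye.
height_offsets = {"Maria": 4.0, "Trey": -2.5}

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    offset = height_offsets.get(actor.get_actor_label())
    if offset is not None:
        actor.add_actor_world_offset(unreal.Vector(0.0, 0.0, offset), False, False)
```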

The cinematography is also super simple. I’ve attached the main camera to an empty “Spinner” actor. The spinner rotates 180 degrees and the camera just hangs on.

The camera also uses the “Actor to Track” feature, which means it always points itself at an empty “LookAtMe” actor. I’ve also instructed the camera to focus on that object.
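For reference, the same look-at and focus setup can be expressed in Python on the CineCameraActor. The property names below are from memory, so double-check them against the Python API docs for your engine version:

```python
import unreal

# Assumes the camera is labelled "MainCam" and the target "LookAtMe".
actors = {a.get_actor_label(): a for a in unreal.EditorLevelLibrary.get_all_level_actors()}
camera = actors["MainCam"]   # a CineCameraActor
target = actors["LookAtMe"]  # the empty actor we aim at

# Point the camera at the target (the "Actor to Track" feature)...
tracking = camera.get_editor_property("lookat_tracking_settings")
tracking.set_editor_property("enable_look_at_tracking", True)
tracking.set_editor_property("actor_to_track", target)
camera.set_editor_property("lookat_tracking_settings", tracking)

# ...and pull focus onto the same target.
cam_comp = camera.get_cine_camera_component()
focus = cam_comp.get_editor_property("focus_settings")
focus.set_editor_property("focus_method", unreal.CameraFocusMethod.TRACKING)
tracking_focus = focus.get_editor_property("tracking_focus_settings")
tracking_focus.set_editor_property("actor_to_track", target)
focus.set_editor_property("tracking_focus_settings", tracking_focus)
cam_comp.set_editor_property("focus_settings", focus)
```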

The Movie Render Queue is another tool I wanted to practice for this project.

Some tricks & tips:

  • The golden rule: Rendering always takes twice as long as you expect it to, even if you take this rule into account.
  • Render at 24 fps for a cool cinematic, filmic feel
  • I use the Apple ProRes plugin to render to a single .mov file
  • Apple ProRes 422 seemed the least crashy for me
  • I use a great Temporal Super Resolution (TSR) anti-aliasing trick that I learned from William Faucher.
    (Spatial Sample Count = 1, Temporal Sample Count = 8; see the sketch after this list)
  • Console variables go deep. I used a combination of hints from William Faucher and RonanMohanArt on Reddit to get my variables right.
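To make those settings concrete, here’s a rough Python sketch of an MRQ job set up the way I describe above. The asset paths are placeholders, and the setting classes come from the stock MRQ and Apple ProRes plugins, so verify the names against your engine version:

```python
import unreal

# Placeholder paths -- point these at your own sequence and map.
SEQUENCE_PATH = "/Game/Cinematics/MainSequence"
MAP_PATH = "/Game/Levels/Persistent"

subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath(SEQUENCE_PATH)
job.map = unreal.SoftObjectPath(MAP_PATH)

config = job.get_configuration()

# 1920 x 1080 output at 24 fps.
output = config.find_or_add_setting_by_class(unreal.MoviePipelineOutputSetting)
output.output_resolution = unreal.IntPoint(1920, 1080)
output.use_custom_frame_rate = True
output.output_frame_rate = unreal.FrameRate(24, 1)

# The TSR anti-aliasing trick: 1 spatial sample, 8 temporal samples.
aa = config.find_or_add_setting_by_class(unreal.MoviePipelineAntiAliasingSetting)
aa.spatial_sample_count = 1
aa.temporal_sample_count = 8

# Deferred render pass plus Apple ProRes output (codec is picked in the setting's details).
config.find_or_add_setting_by_class(unreal.MoviePipelineDeferredPassBase)
config.find_or_add_setting_by_class(unreal.MoviePipelineAppleProResOutput)

# Kick off the render locally, in-editor.
subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)
```

Running that from the editor’s Python console queues the job and renders it locally, much like hitting “Render (Local)” in the MRQ window.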

Editing

Once I had all my renders done, it was a matter of slapping it all together in DaVinci Resolve.

Our group had established a music track early on in the process, so I knew I had to edit my video accordingly. Each beat drop would switch my characters. Simple.

I sliced the audio track on each beat drop, and started splicing in my video renders. (Slicing the audio track helped my video tracks click into place neatly.)

Since each render uses the same render settings and camera shot, I just needed to edit the start and end point of each video.

In the end, I actually omitted a few of my characters.
They just didn’t make the cut.

My render settings were really simple. H.265 in MP4 format at 1920 x 1080 HD. Easy-peasy.

(I would have preferred a narrower, more cinematic aspect ratio, but this resolution requirement was established early on, most likely to maximize real estate when being viewed on a smartphone.)

Some Other Cool Stuff

Amazing new metahuman clothing: UE5 MetaTailor App — YouTube

Excited for Hellblade 2: Metahuman Demo | State of Unreal 2023 — YouTube

Gotta get one of these: Headrig — Rokoko ROW

Also on my to-do list: Turn Yourself Into a 3D Puppet — ft. @sotomonte

Conclusion

And there we have it. Fellowship done and dusted.
(I’ll add the full video if I can get permission to do so.)

Although I’m pretty exhausted after this fellowship (it’s been such a busy year), I realized a few things about myself that really struck me:

  • I get distracted easily. I found it hard to focus during long training sessions. I guess I’ve always been a doodler, so this isn’t news to me. It’s just funny how it cropped up again.
  • I spend a lot of time planning, and little time executing. I think this is my way of carefully ensuring that I don’t end up in a rut. It does mean I need to factor extra time for clean-up at the end of any given project.
  • Deadlines supercharge my motivation. When my focus sets in, it’s hard to perturb me. I’m like a pitbull. I’ll be entering a lot of challenges next year to hone my skills and improve my turnaround time.

Thanks!

Finally, I owe a big thanks to everyone who took the time to share their expertise with me over the past few weeks. Brian Pohl, David Garcia, Amaresh Beuria, Cameron Kostopoulos, Anandh Ramesh, Remano De Beer, Sam Roe, and the rest of the crew. Onwards and upwards.
