Motion Style Transfer For 3D Character Animation

Overview of the paper “Unpaired Motion Style Transfer from Video to Animation” by K. Aberman et al.

Chintan Trivedi
deepgamingai

--

Image style transfer is a popular technique in computer vision where we take the content from one image and the style from a completely different image and produce a combined output. Today, I want to share an AI that also does style transfer, but not for images: it performs style transfer for the motion of 3D characters.

At first, it wasn’t immediately clear to me what Content vs Style meant in the context of Motion, but the difference is pretty simple. Consider these three doodles, all of them showing the animation of a guy walking.

Content v/s Style in Walking Motion. [source]

The guy in the middle clearly looks happier than the other two, while the last guy is doing a depressed, sad walk. The “happy” and “sad” parts of the motion are what we call the style of motion, while the walking animation itself is the content of motion.

Motion Style Transfer

Now that we understand the meaning of the two, let’s take a look at today’s paper. It is titled “Unpaired Motion Style Transfer from Video to Animation” and was published at this year’s SIGGRAPH conference by researchers from the Beijing Film Academy.

Motion Style Transfer. [source]

They present a framework to transfer the style of motion from a video to a 3D character while preserving the content of our character’s source motion. This makes it very easy to programmatically add a specific personality to our animated character with almost no manual work required.

GAN Framework

They use a Style Encoder to extract the motion style from the video and a separate Content Encoder to extract the motion content from the character’s 3D body joints; the two codes are then combined and fed into the Generator.
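To make that data flow concrete, here is a minimal PyTorch sketch of the layout. This is my own simplified illustration, not the authors’ code: the channel sizes, module names, and the AdaIN-style modulation in the generator are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's exact architecture): a temporal
# content code from 3D joint data, a global style code from 2D video
# poses, combined in a generator that outputs stylized motion.

class ContentEncoder(nn.Module):
    def __init__(self, in_channels=64, code_channels=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, code_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(code_channels, code_channels, kernel_size=3, padding=1),
        )

    def forward(self, motion_3d):           # (batch, joints*channels, frames)
        return self.net(motion_3d)           # temporal content code

class StyleEncoder(nn.Module):
    def __init__(self, in_channels=34, style_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time -> fixed-size code
        )
        self.fc = nn.Linear(128, style_dim)

    def forward(self, poses_2d):              # (batch, joints*2, frames)
        return self.fc(self.conv(poses_2d).squeeze(-1))  # global style code

class Generator(nn.Module):
    def __init__(self, code_channels=128, style_dim=64, out_channels=64):
        super().__init__()
        # The style code modulates the decoder, e.g. via AdaIN-style
        # per-channel scale and shift.
        self.affine = nn.Linear(style_dim, 2 * code_channels)
        self.decode = nn.Conv1d(code_channels, out_channels,
                                kernel_size=3, padding=1)

    def forward(self, content_code, style_code):
        scale, shift = self.affine(style_code).chunk(2, dim=-1)
        h = content_code * scale.unsqueeze(-1) + shift.unsqueeze(-1)
        return self.decode(h)                 # stylized output motion
```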

In addition to the adversarial loss of the GAN framework, they also use a triplet loss and a content-consistency loss, which help ensure that the generated output motion is smooth and natural-looking, with temporal consistency.
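A rough sketch of how those three terms could be combined during training follows. The loss weights and exact formulations here are my assumptions, not the paper’s published values.

```python
import torch
import torch.nn.functional as F

def total_loss(d_fake_logits, anchor, positive, negative,
               reconstructed, original, w_adv=1.0, w_trip=1.0, w_con=1.0):
    # Adversarial loss: the generator tries to make the discriminator
    # classify its stylized motion as real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

    # Triplet loss on style codes: clips with the same style pull
    # together, clips with different styles push apart.
    trip = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)

    # Content-consistency loss: re-stylizing a motion with its own
    # style should reconstruct the original motion.
    con = F.l1_loss(reconstructed, original)

    return w_adv * adv + w_trip * trip + w_con * con
```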

Style Interpolation

This method also allows us to interpolate styles through latent-space manipulation, and the results are fantastic, as you can see below.

Style Interpolation from “Proud” to “Depressed”. [source]
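Because styles live in a latent space, blending two of them amounts to a linear walk between their codes. A hypothetical sketch, reusing the generator from the earlier illustration:

```python
import torch

def interpolate_styles(content_code, style_a, style_b, generator, steps=5):
    """Generate motions whose style blends from style_a to style_b."""
    outputs = []
    for t in torch.linspace(0.0, 1.0, steps):
        # Linearly blend the two style codes, then decode each blend
        # against the same content code.
        style_mix = (1.0 - t) * style_a + t * style_b
        outputs.append(generator(content_code, style_mix))
    return outputs
```

With `steps=5`, the first output is fully in style A, the last fully in style B, and the middle three are gradual blends, which is what produces the smooth “Proud” to “Depressed” transition shown above.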

In the last two to three years, we have seen tremendous advancements in automatically creating animations with very little time and effort. This work makes it just that much easier to do so with the help of deep learning, and I can’t wait to see what’s next!

Thank you for reading. If you liked this article, you may follow more of my work on Medium, GitHub, or subscribe to my YouTube channel.
