Realistic Clothing Animations With AI

Overview of the paper “TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style” by Patel et al.

Chintan Trivedi
deepgamingai

--

Automating the design and development of 3D animations with Machine Learning has multiple benefits. It lets us create such animations with very little artistic skill and reduces the process to a single click of a button. In this article, I want to cover another paper that is almost tailor-made for this line of research, because it automates yet another sub-task involved in creating these 3D animations: clothing animation.


The paper is titled “TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style” and was published by researchers at the Max Planck Institute in Germany. It introduces a method to predict the deformations and wrinkles of clothing as the person wearing it moves, without any physics-based calculations, making it thousands of times more computationally efficient than the methods in use today.

This is claimed to be one of the first approaches to combine three driving factors of 3D clothing animation: the pose of the body, the shape of the body wearing the clothes, and the style of the garment itself. The authors use a neural network model, named TailorNet, to produce animations with highly detailed geometry that captures both low-frequency deformations, such as overall draping, and high-frequency details, such as wrinkles and folds.
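To make that idea a bit more concrete, here is a minimal sketch in PyTorch of a network that maps pose, shape, and style parameters to per-vertex garment offsets, split into a smooth low-frequency term and a high-frequency wrinkle term. The dimensions, layer sizes, and architecture here are purely illustrative assumptions, not the authors' actual implementation:

```python
import torch
import torch.nn as nn

class GarmentDeformationNet(nn.Module):
    """Toy sketch: predict per-vertex garment offsets from pose, shape and style.
    All dimensions and layers are illustrative, not TailorNet's actual design."""

    def __init__(self, pose_dim=72, shape_dim=10, style_dim=4, num_verts=7000):
        super().__init__()
        in_dim = pose_dim + shape_dim + style_dim
        out_dim = num_verts * 3  # one 3D offset per garment vertex

        # Smooth, low-frequency component (overall draping of the garment).
        self.low_freq = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim),
        )
        # High-frequency component (fine wrinkles and folds).
        self.high_freq = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, out_dim),
        )
        self.num_verts = num_verts

    def forward(self, pose, shape, style):
        x = torch.cat([pose, shape, style], dim=-1)
        offsets = self.low_freq(x) + self.high_freq(x)
        # The offsets would be added to a template garment mesh downstream.
        return offsets.view(-1, self.num_verts, 3)

# Example: predict one garment frame from random parameters.
net = GarmentDeformationNet()
pose = torch.randn(1, 72)   # pose parameters (e.g. SMPL-style joint rotations)
shape = torch.randn(1, 10)  # body shape parameters
style = torch.randn(1, 4)   # garment style parameters
print(net(pose, shape, style).shape)  # torch.Size([1, 7000, 3])
```

In the actual paper, the high-frequency part is handled more carefully, using a mixture of narrower, specialized predictors so that fine wrinkles are not smoothed away, but the basic mapping from pose, shape, and style to mesh offsets is the same idea.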


The model outputs highly realistic animations across different poses and body shapes, so virtual character animations can be rendered without running a full physics-based simulation. Because it is so much simpler and faster, it is well suited to game development.

Now imagine combining this technique with the previous two papers from this series. We can generate full-body pose and lip-movement animations from speech input alone, and then add clothing to our virtual character. Once we add textures to these pipelines, we can render an entire scene of a virtual game character talking without having to design anything by hand, all at the click of a button. The future is going to be really incredible, isn’t it?

Thank you for reading. If you liked this article, you can follow more of my work on Medium and GitHub, or subscribe to my YouTube channel.
