Shake Your Booty: AI Deepfakes Dance Moves From a Single Picture

Do you have two left feet? Do you avoid the dance floor out of fear of embarrassment? If you’ve ever secretly wished you could move your body like Joaquín Cortés — well, at least in a video — a new AI-powered human image synthesis framework called Liquid Warping GAN can give you a leg up. The method, proposed in a new paper from ShanghaiTech University and Tencent AI Lab that has been accepted by ICCV 2019, requires only a single photo and a video clip of the target dance.

Current human image synthesis approaches struggle with challenges such as clothing in varied styles, colours and textures; large spatial and geometric changes of the human body; and handling multiple source inputs.

Liquid Warping GAN addresses these challenges with body mesh recovery, flow composition and a GAN module with Liquid Warping Block (LWB). Unlike previous human image synthesis methods, Liquid Warping GAN can not only model joint locations and rotations but also characterize a personalized body shape from a single picture and video clip input.
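The core idea behind the Liquid Warping Block is to warp features of the source image into the target pose using a dense flow field, rather than copying raw pixels. Below is a minimal sketch of such flow-based feature warping via bilinear sampling; the function name, shapes and NumPy implementation are illustrative assumptions, not the authors' code, which operates on learned feature maps inside the GAN.

```python
import numpy as np

def warp_features(src_feat, flow):
    """Warp a source feature map by a dense flow field via bilinear sampling.

    Illustrative sketch of flow-based feature warping (as used, in spirit,
    by a Liquid Warping Block); not the paper's implementation.

    src_feat: (H, W, C) source feature map
    flow:     (H, W, 2) per-pixel (dx, dy) offsets into the source
    """
    H, W, _ = src_feat.shape
    # Regular pixel grid of the target image
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Locations to sample in the source feature map, clipped to bounds
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners and fractional weights for bilinear interpolation
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = (sx - x0)[..., None], (sy - y0)[..., None]
    # Blend the four neighbouring feature vectors
    return (src_feat[y0, x0] * (1 - wx) * (1 - wy)
            + src_feat[y0, x1] * wx * (1 - wy)
            + src_feat[y1, x0] * (1 - wx) * wy
            + src_feat[y1, x1] * wx * wy)
```

A zero flow field returns the source features unchanged; in the full pipeline the flow comes from correspondences between the recovered source and target body meshes, so the same mechanism also supports appearance transfer and novel view synthesis.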

Liquid Warping GAN’s human motion imitation, appearance transfer and novel view synthesis involve (left to right) a source image, a reference condition such as an image or novel camera view, and the synthesized results.

To evaluate Liquid Warping GAN’s performance, the researchers had 30 subjects with diverse body shapes, heights, genders and clothing perform random movements to build a new dataset called Impersonator (iPER), comprising 206 video sequences and 241,564 frames. Trained on the iPER dataset, Liquid Warping GAN outperformed existing motion imitation methods such as PG2, DSC and SHUP.

In August, a team from UC Berkeley published similar research in their paper Everybody Dance Now. They used a video-to-video translation approach with pose as an intermediate representation, and also released an open-source dataset of videos for training and motion transfer.

The paper Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis is on arXiv. The related PyTorch implementation can be found on GitHub.

Author: Yuqing Li | Editor: Michael Sarazen




