Shake Your Booty: AI Deepfakes Dance Moves From a Single Picture

Synced
Oct 14, 2019 · 3 min read

Do you have two left feet? Do you avoid the dance floor out of fear of embarrassment? If you’ve ever secretly wished you could move your body like Joaquín Cortés — well, at least in a video — a new AI-powered human image synthesis framework called Liquid Warping GAN can give you a leg up. The method, proposed in a new paper from ShanghaiTech University and Tencent AI Lab that has been accepted by ICCV 2019, requires only a single photo of the source person and a video clip of the target dance.

Current human image synthesis approaches struggle with several challenges: identifying clothing across different styles, colours and textures; handling the large spatial and geometric changes of the human body; and supporting multiple source inputs.

Liquid Warping GAN addresses these challenges with three modules: body mesh recovery, flow composition, and a GAN module with a Liquid Warping Block (LWB). Unlike previous human image synthesis methods, Liquid Warping GAN can model not only joint locations and rotations but also a personalized body shape, from just a single picture and a video clip as input.
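A rough sketch of that three-stage pipeline might look like the following PyTorch outline. Note that the class and module names here (LiquidWarpingPipeline, mesh_recovery, flow_composition, generator) are hypothetical, chosen purely to illustrate the stages described in the paper; they are not taken from the authors’ actual repository.

```python
import torch.nn as nn

class LiquidWarpingPipeline(nn.Module):
    """Illustrative sketch of the three stages described in the paper."""

    def __init__(self, mesh_recovery, flow_composition, generator):
        super().__init__()
        self.mesh_recovery = mesh_recovery        # stage 1: 3D body mesh (pose + shape) estimator
        self.flow_composition = flow_composition  # stage 2: warping flow between source and reference meshes
        self.generator = generator                # stage 3: GAN generator with Liquid Warping Blocks

    def forward(self, source_img, reference):
        # 1. Body mesh recovery: estimate a 3D mesh for both inputs,
        #    disentangling joint rotations from the personalized body shape.
        src_mesh = self.mesh_recovery(source_img)
        ref_mesh = self.mesh_recovery(reference)
        # 2. Flow composition: derive the transformation flow that maps
        #    source pixels and features onto the reference pose or viewpoint.
        flow = self.flow_composition(src_mesh, ref_mesh)
        # 3. Generation: the Liquid Warping Block injects warped source
        #    features into the generator, helping preserve identity,
        #    clothing and texture details in the synthesized frame.
        return self.generator(source_img, flow)
```

For appearance transfer or novel view synthesis, the reference would be a second image or a new camera view rather than a dance-video frame, but the three stages remain the same.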

The paper’s illustrative figure shows Liquid Warping GAN’s human motion imitation, appearance transfer and novel view synthesis, each involving (left to right) a source image, a reference condition such as an image or a novel camera view, and the synthesized results.

To evaluate Liquid Warping GAN’s performance, the researchers had 30 subjects with diverse body shapes, heights, genders and clothing perform random movements, building a new dataset called Impersonator (iPER) with 206 video sequences and 241,564 frames. Trained on the iPER dataset, Liquid Warping GAN outperformed existing motion imitation methods such as PG2, DSC and SHUP.

In August, a team from UC Berkeley published similar research in their paper Everybody Dance Now. They used a video-to-video translation approach with pose as an intermediate representation, and also released an open-source dataset of videos for training and motion transfer.

The paper Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis is on arXiv. The related PyTorch implementation can be found on GitHub.


Author: Yuqing Li | Editor: Michael Sarazen



SyncedReview

We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.


Written by

Synced

AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global

