Upgrade your DeepFakes with LivePortrait
Full article: 2407.03168 (arxiv.org)
Citation:
@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}
In the age of smartphones and digital media, people frequently capture portraits to preserve cherished memories. Devices like iPhones have introduced features such as Live Photos, which animate static images by recording the 1.5 seconds of video before and after the picture is taken. While this functionality relies on short video recordings, recent advances in Generative Adversarial Networks (GANs) and Diffusion Models have revolutionized portrait animation, allowing static images to be brought to life without requiring specialized hardware.
Methodology
1. FaceVid2Vid Framework
The FaceVid2Vid framework animates a static portrait by extracting motion features from a driving video sequence. The source image is first mapped into a 3D appearance feature volume. Animation is then driven by transforming the 3D keypoints of both the source and driving frames, followed by warping the source…
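The pipeline above can be sketched in a few lines of NumPy. This is a minimal, illustrative mock-up, not the actual FaceVid2Vid or LivePortrait implementation: the function names, feature shapes, and the keypoint detector are assumptions, and the learned dense-flow warp is approximated by a crude global shift.

```python
import numpy as np

# Hypothetical sketch of a FaceVid2Vid-style animation step.
# All names and shapes here are illustrative assumptions, not the real model.

def appearance_feature_volume(source_img: np.ndarray) -> np.ndarray:
    """Map an HxWx3 source image into a coarse 3D feature volume (D, H, W, C).
    Stand-in for the learned 3D appearance encoder: tile the image along depth."""
    h, w, _ = source_img.shape
    return source_img.reshape(1, h, w, 3).repeat(4, axis=0)  # D=4 depth planes

def detect_keypoints(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Stand-in for the 3D keypoint detector: returns k (x, y, z) points.
    Seeded from the image so the same image yields the same keypoints."""
    rng = np.random.default_rng(abs(int(img.sum())) % (2**32))
    return rng.uniform(-1, 1, size=(k, 3))

def warp_volume(feat: np.ndarray, kp_src: np.ndarray, kp_drv: np.ndarray) -> np.ndarray:
    """Approximate the keypoint-driven warp with a global integer shift.
    (The real model predicts a dense flow field from source/driving keypoint pairs.)"""
    shift = (kp_drv - kp_src).mean(axis=0)          # average keypoint displacement
    dz, dy = int(round(shift[2])), int(round(shift[1]))
    return np.roll(feat, shift=(dz, dy), axis=(0, 1))

def animate_frame(source_img: np.ndarray, driving_img: np.ndarray) -> np.ndarray:
    feat = appearance_feature_volume(source_img)    # 1. 3D appearance volume
    kp_s = detect_keypoints(source_img)             # 2. source 3D keypoints
    kp_d = detect_keypoints(driving_img)            # 3. driving 3D keypoints
    warped = warp_volume(feat, kp_s, kp_d)          # 4. keypoint-driven warp
    # A generator network (omitted) would decode the warped volume to an image;
    # here we simply collapse the depth axis.
    return warped.mean(axis=0)

src = np.zeros((8, 8, 3))
drv = np.ones((8, 8, 3))
out = animate_frame(src, drv)
print(out.shape)  # (8, 8, 3)
```

The point of the sketch is the data flow: appearance and motion are disentangled, so one source volume can be warped by keypoints extracted from every frame of the driving video in turn.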