Creating Personalized Photo-Realistic Talking Head Models

Christopher Dossman
AI³ | Theory, Practice, Business
2 min read · May 29, 2019


Highly realistic human head images can now be synthesized by training convolutional neural networks (CNNs). However, creating a personalized talking head model typically requires training on a large dataset of images of a single person. Ideally, such personalized talking head models would instead be learned from only a few image views of a person, or even a single image.

Few-Shot Adversarial Learning of Realistic Neural Talking Head Models

Researchers have now presented a framework for meta-learning of adversarial generative models that can train highly realistic virtual talking heads in the form of deep generator networks. Once meta-learned, the framework needs only a handful of photographs to create a model of a new person.

Meta-learning architecture

A talking head model trained on 32 images achieves perfect realism and personalization scores. The new algorithm modifies the parameters of both the generator and the discriminator in a person-specific way, so that training is quick and can be done with only a few images.
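The person-specific adaptation can be sketched in miniature: the meta-learned embedder averages per-frame embeddings into a person embedding, and a learned projection matrix maps that embedding to person-specific generator parameters. In the numpy sketch below, the dimensions, random weights, and linear embedder are all toy stand-ins for the paper's convolutional networks, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: frame features, embedding, generator params.
FEAT_DIM, EMB_DIM, PARAM_DIM = 64, 16, 32

def embed_frames(frames, W):
    """Embed each frame, then average into one person embedding."""
    per_frame = frames @ W              # (K, EMB_DIM)
    return per_frame.mean(axis=0)       # (EMB_DIM,)

# Toy stand-ins for the meta-learned embedder weights W and the
# projection P that predicts person-specific generator parameters.
W = rng.standard_normal((FEAT_DIM, EMB_DIM)) * 0.1
P = rng.standard_normal((PARAM_DIM, EMB_DIM)) * 0.1

# K = 8 frames of one person (random features here for illustration).
frames = rng.standard_normal((8, FEAT_DIM))
e_hat = embed_frames(frames, W)         # few-shot person embedding
psi_hat = P @ e_hat                     # person-specific generator params

print(e_hat.shape, psi_hat.shape)       # (16,) (32,)
```

The key design idea this illustrates is that most generator weights are shared across people; only a compact, person-specific parameter vector is predicted from the few available frames.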

Potential Uses and Effects

The proposed framework is able to learn highly realistic and personalized talking head models. It performs lengthy meta-learning on large video datasets, and is therefore able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high-capacity generators and discriminators.
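The adversarial fine-tuning stage can be illustrated with a toy loop: alternate discriminator and generator updates on the handful of available frames. The linear "generator" and logistic "discriminator" below are deliberately simplistic stand-ins for the paper's ConvNets, just to show the alternating-update structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy few-shot adversarial fine-tuning on K = 4 "images" (1-D vectors).
DIM, K, LR, STEPS = 8, 4, 0.05, 200

real = rng.standard_normal((K, DIM)) + 2.0   # few real frames of one person
theta = np.zeros(DIM)                        # toy generator parameters
w = rng.standard_normal(DIM) * 0.01          # toy discriminator weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(STEPS):
    fake = theta + rng.standard_normal((K, DIM)) * 0.1   # generator samples
    # Discriminator step: push real scores toward 1, fake toward 0
    # (gradient of the logistic loss, by hand for this linear model).
    d_real, d_fake = sigmoid(real @ w), sigmoid(fake @ w)
    grad_w = ((d_real - 1)[:, None] * real + d_fake[:, None] * fake).mean(0)
    w -= LR * grad_w
    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    fake = theta + rng.standard_normal((K, DIM)) * 0.1
    d_fake = sigmoid(fake @ w)
    grad_theta = ((d_fake - 1)[:, None] * w[None, :]).mean(0)
    theta -= LR * grad_theta

# After fine-tuning, the generator drifts toward the real data region.
print(np.round(theta.mean(), 2))
```

Even in this sketch, both players are updated person-specifically during fine-tuning, which is the property the paper exploits to make adaptation fast with very few images.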

The approach can be used to achieve state-of-the-art results in systems that synthesize plausible video sequences of speech expressions, with applications in human-computer interaction, natural language processing, and automated dialogue systems.

It also has practical applications for telepresence, including video conferencing and multi-player games, as well as the special effects industry.

Read more: https://arxiv.org/pdf/1905.08233.pdf

Thanks for reading. Please comment, share and remember to subscribe to our weekly newsletter for the most recent and interesting research papers! You can also follow me on Twitter and LinkedIn. Remember to 👏 if you enjoyed this article. Cheers
