Neural Style-Preserving Visual Dubbing

Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt

Research output: Contribution to journal › Article › peer-review

Abstract

Dubbing is a technique for translating video content from one language to another. However, state-of-the-art visual dubbing techniques directly copy facial expressions from source to target actors without considering identity-specific idiosyncrasies such as a unique type of smile. We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages. At the heart of our approach is the concept of motion style, in particular for facial expressions, i.e., the person-specific expression change that is yet another essential factor beyond visual accuracy in face editing applications. Our method is based on a recurrent generative adversarial network that captures the spatiotemporal co-activation of facial expressions, and enables generating and modifying the facial expressions of the target actor while preserving their style. We train our model with unsynchronized source and target videos in an unsupervised manner using cycle-consistency and mouth expression losses, and synthesize photorealistic video frames using a layered neural face renderer. Our approach generates temporally coherent results, and handles dynamic backgrounds. Our results show that our dubbing approach maintains the idiosyncratic style of the target actor better than previous approaches, even for widely differing source and target actors.
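The abstract describes training the expression-mapping network without synchronized data, using cycle-consistency and mouth expression losses over unpaired source and target sequences. The following is a minimal, hypothetical PyTorch sketch of how such losses could be wired up; the GRU-based ExpressionMapper, the blendshape dimensionality, and the mouth-coefficient index set are assumptions for illustration, not the authors' implementation, and the adversarial (discriminator) term of the recurrent GAN is omitted.

```python
# Hypothetical sketch (not the paper's code): cycle-consistency and
# mouth-expression losses over per-frame facial expression coefficients.
import torch
import torch.nn as nn

class ExpressionMapper(nn.Module):
    """Assumed recurrent mapper between source-style and target-style expression sequences."""
    def __init__(self, dim=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):          # x: (batch, time, dim) expression coefficients
        h, _ = self.rnn(x)
        return self.out(h)         # mapped sequence, same shape as x

def cycle_loss(G_s2t, G_t2s, src_expr):
    """L1 cycle consistency: mapping source -> target -> source should recover the input."""
    return (G_t2s(G_s2t(src_expr)) - src_expr).abs().mean()

def mouth_loss(G_s2t, src_expr, mouth_idx):
    """Keep mouth-related coefficients close to the source so lip motion
    still matches the dubbed audio (mouth_idx is a placeholder index set)."""
    mapped = G_s2t(src_expr)
    return (mapped[..., mouth_idx] - src_expr[..., mouth_idx]).abs().mean()

if __name__ == "__main__":
    G_s2t, G_t2s = ExpressionMapper(), ExpressionMapper()
    x = torch.randn(2, 30, 64)          # 2 clips, 30 frames, 64 coefficients
    mouth_idx = torch.arange(40, 64)    # placeholder: assume last 24 are mouth-related
    loss = cycle_loss(G_s2t, G_t2s, x) + mouth_loss(G_s2t, x, mouth_idx)
    loss.backward()
    print(float(loss))
```

In the paper these objectives are used to train on unsynchronized source and target videos in an unsupervised manner; the resulting style-preserved expression sequence is then rendered photorealistically by a layered neural face renderer.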
Original language: English
Article number: 178
Number of pages: 13
Journal: ACM Transactions on Graphics
Volume: 38
Issue number: 6
Early online date: 6 Sept 2019
DOIs
Publication status: Published - 17 Nov 2019
Event: SIGGRAPH Asia 2019 - Brisbane, Australia
Duration: 17 Nov 2019 - 20 Nov 2019
https://sa2019.siggraph.org/
