Abstract
Dubbing is a technique for translating video content from one language to another. However, state-of-the-art visual dubbing techniques directly copy facial expressions from source to target actors without considering identity-specific idiosyncrasies such as a unique type of smile. We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages. At the heart of our approach is the concept of motion style, in particular for facial expressions, i.e., the person-specific way expressions change over time, which is an essential factor beyond visual accuracy in face-editing applications. Our method is based on a recurrent generative adversarial network that captures the spatiotemporal co-activation of facial expressions, and enables generating and modifying the facial expressions of the target actor while preserving their style. We train our model on unsynchronized source and target videos in an unsupervised manner using cycle-consistency and mouth expression losses, and synthesize photorealistic video frames using a layered neural face renderer. Our approach generates temporally coherent results and handles dynamic backgrounds. Our results show that our dubbing approach maintains the idiosyncratic style of the target actor better than previous approaches, even for widely differing source and target actors.
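The unsupervised training objective mentioned above can be loosely sketched as follows. The paper trains recurrent adversarial generators over facial expression parameters; this minimal NumPy sketch instead uses hypothetical linear translators `G` (source style → target style) and `F` (the reverse) purely to illustrate the cycle-consistency and mouth-expression terms — all function names, shapes, and indices here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_translator(matrix):
    """Stand-in for a learned style translator: maps a batch of
    expression-parameter vectors (rows) through a linear map."""
    return lambda expr: expr @ matrix

def cycle_consistency_loss(G, F, source_expressions):
    """L1 cycle loss: translating source -> target style and back
    should reconstruct the original expression parameters."""
    reconstructed = F(G(source_expressions))
    return np.abs(source_expressions - reconstructed).mean()

def mouth_expression_loss(G, source_expressions, mouth_indices):
    """Keep mouth-related expression coefficients close to the source,
    so the dubbed lip motion still follows the spoken audio while the
    remaining coefficients are free to adopt the target's style."""
    translated = G(source_expressions)
    return np.abs(
        source_expressions[:, mouth_indices]
        - translated[:, mouth_indices]
    ).mean()
```

With `F` the exact inverse of `G`, the cycle term vanishes; during training, minimizing both terms jointly (alongside adversarial losses, omitted here) pushes the generators toward style-preserving, mouth-accurate translations.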
Original language | English
---|---
Article number | 178
Number of pages | 13
Journal | ACM Transactions on Graphics
Volume | 38
Issue number | 6
Early online date | 6 Sept 2019
DOIs | 
Publication status | Published - 17 Nov 2019
Event | SIGGRAPH Asia 2019, Brisbane, Australia, 17 Nov 2019 → 20 Nov 2019, https://sa2019.siggraph.org/
Projects (2 finished)
- Fellowship - Towards Immersive 360° VR Video with Motion Parallax
  Richardt, C. (PI)
  Engineering and Physical Sciences Research Council
  25/06/18 → 24/12/21
  Project: Research council
- Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA)
  Cosker, D. (PI), Bilzon, J. (CoI), Campbell, N. (CoI), Cazzola, D. (CoI), Colyer, S. (CoI), Fincham Haines, T. (CoI), Hall, P. (CoI), Kim, K. I. (CoI), Lutteroth, C. (CoI), McGuigan, P. (CoI), O'Neill, E. (CoI), Richardt, C. (CoI), Salo, A. (CoI), Seminati, E. (CoI), Tabor, A. (CoI) & Yang, Y. (CoI)
  Engineering and Physical Sciences Research Council
  1/09/15 → 28/02/21
  Project: Research council