Abstract
In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or on videos of specific people seen during training. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, leaving significant parts of the video out of sync with the new audio. We identify the key reasons for this failure and resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluation on these challenging benchmarks shows that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as that of real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model, and we publicly release the code, models, and evaluation benchmarks on our website.
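The core technical idea named in the abstract is supervising the generator with a pre-trained lip-sync discriminator. As a rough illustration of that idea (a minimal sketch, not the authors' released implementation), the snippet below assumes two hypothetical encoder towers, `audio_encoder` and `video_encoder`, belonging to a frozen SyncNet-style expert: the expert embeds a mel-spectrogram window and the generated lip frames, and the generator is penalised with a binary cross-entropy term whenever the two embeddings disagree.

```python
import torch
import torch.nn.functional as F

def expert_sync_loss(audio_encoder, video_encoder, mel_chunks, gen_frames):
    """Sketch of an expert lip-sync loss, assuming a frozen, pre-trained
    SyncNet-style expert with hypothetical `audio_encoder` and
    `video_encoder` towers (names invented for illustration).

    mel_chunks : (B, ...) mel-spectrogram windows for each sample
    gen_frames : (B, ...) the generator's output lip-region frames
    """
    # Embed audio and generated video, then L2-normalise so the dot
    # product below is a cosine similarity.
    a = F.normalize(audio_encoder(mel_chunks), p=2, dim=1)  # (B, D)
    v = F.normalize(video_encoder(gen_frames), p=2, dim=1)  # (B, D)

    # Cosine similarity, clamped away from zero so log() is defined;
    # values near 1 mean the expert judges audio and lips to be in sync.
    p_sync = torch.clamp((a * v).sum(dim=1), min=1e-7)

    # Binary cross-entropy against the "in sync" label (all ones):
    # the generator is pushed to produce frames the expert accepts.
    return -torch.log(p_sync).mean()
```

Because the expert is kept frozen during generator training, it acts as a fixed judge of synchronization rather than a co-adapting adversary, which is the design choice the abstract credits for accurate lip-sync on unconstrained videos.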
| Original language | English |
| --- | --- |
| Title of host publication | 28th ACM International Conference on Multimedia (ACM MM) |
| Place of Publication | Seattle, USA |
| Publisher | Association for Computing Machinery |
| Pages | 484–492 |
| Number of pages | 9 |
| DOIs | |
| Publication status | Published - 12 Oct 2020 |
Profiles
- Vinay Namboodiri
  - Department of Computer Science - Senior Lecturer
  - Visual Computing
  - Bath Institute for the Augmented Human

Person: Research & Teaching, Affiliate staff