Towards Accurate Lip-to-Speech Synthesis in-the-Wild

Sindhu Hegde, Rudrabha Mukhopadhyay, C. V. Jawahar, Vinay Namboodiri

Research output: Chapter in a published conference proceeding


Abstract

In this paper, we introduce a novel approach to the task of synthesizing speech from silent videos of any in-the-wild speaker solely based on lip movements. The traditional approach of directly generating speech from lip videos struggles to learn a robust language model from speech alone, resulting in unsatisfactory outcomes. To overcome this issue, we propose incorporating noisy text supervision using a state-of-the-art lip-to-text network that instills language information into our model. The noisy text is generated using a pre-trained lip-to-text model, enabling our approach to work without text annotations during inference. We design a visual text-to-speech network that utilizes the visual stream to generate accurate speech that is in sync with the silent input video. We perform extensive experiments and ablation studies, demonstrating our approach's superiority over the current state-of-the-art methods on various benchmark datasets. Further, we demonstrate an important practical application of our method in assistive technology by generating speech for an ALS patient who has lost their voice but can still make mouth movements. Our demo video, code, and additional details can be found at http://cvit.iiit.ac.in/research/projects/cvit-projects/ms-l2s-itw.
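
The abstract describes a two-stage pipeline: a pre-trained lip-to-text model first produces noisy transcripts from the silent video, and a visual text-to-speech network then fuses those transcripts with the visual stream so that the generated speech stays time-aligned with the lip movements. The PyTorch snippet below is a minimal illustrative sketch of that idea only; all module names, layer sizes, and the attention-based fusion strategy are our assumptions for exposition, not the authors' actual architecture.

```python
# Hypothetical sketch: (1) noisy text tokens come from a pre-trained
# lip-to-text model (stubbed here as random ids); (2) a visual TTS module
# fuses them with per-frame visual features to predict mel-spectrogram
# frames, one per video frame. Sizes and fusion are illustrative guesses.
import torch
import torch.nn as nn

class LipEncoder(nn.Module):
    """Encodes lip-region frames (B, T, 3, H, W) into per-frame features."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool away H, W; keep time
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, video):                     # (B, T, 3, H, W)
        x = video.permute(0, 2, 1, 3, 4)          # (B, 3, T, H, W)
        x = self.conv(x).squeeze(-1).squeeze(-1)  # (B, 32, T)
        return self.proj(x.transpose(1, 2))       # (B, T, dim)

class VisualTTS(nn.Module):
    """Fuses noisy text tokens with visual features to predict mel frames."""
    def __init__(self, vocab=1000, dim=256, n_mels=80):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, dim)
        self.fuse = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.to_mel = nn.Linear(dim, n_mels)

    def forward(self, visual_feats, noisy_tokens):
        txt = self.text_emb(noisy_tokens)             # (B, L, dim)
        # Video frames act as attention queries over the text, so the
        # output length follows the video, not the transcript.
        fused, _ = self.fuse(visual_feats, txt, txt)  # (B, T, dim)
        hidden, _ = self.decoder(fused)
        return self.to_mel(hidden)                    # (B, T, n_mels)

# Toy run: one clip of 25 frames (96x96 lip crops), 12 noisy text tokens.
video = torch.randn(1, 25, 3, 96, 96)
noisy_tokens = torch.randint(0, 1000, (1, 12))  # stand-in for lip-to-text output
feats = LipEncoder()(video)
mel = VisualTTS()(feats, noisy_tokens)
print(mel.shape)  # torch.Size([1, 25, 80]): one mel frame per video frame
```

Tying the output length to the video rather than the transcript is one plausible way to keep the generated mel-spectrogram in sync with the input lip movements; the paper's actual synchronization mechanism may differ.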

Original language: English
Title of host publication: MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
Place of publication: New York, U.S.A.
Publisher: Association for Computing Machinery
Pages: 5523-5531
Number of pages: 9
ISBN (Electronic): 9798400701085
Publication status: Published - 29 Oct 2023
Event: 31st ACM International Conference on Multimedia, MM 2023 - Ottawa, Canada
Duration: 29 Oct 2023 - 3 Nov 2023

Conference

Conference: 31st ACM International Conference on Multimedia, MM 2023
Country/Territory: Canada
City: Ottawa
Period: 29/10/23 - 3/11/23

Funding

Acknowledgement: This work is supported by MeitY, Government of India

Funders: MeitY, Government of India

Keywords

• assistive technology
• lip-reading
• lip-to-speech
• speech generation

ASJC Scopus subject areas

• Artificial Intelligence
• Computer Graphics and Computer-Aided Design
• Human-Computer Interaction
• Software
