Abstract
In this paper, we present an audio-visual model to perform speech super-resolution at large scale-factors (8× and 16×). Previous works attempted to solve this problem using only the audio modality as input, and were thus limited to low scale-factors of 2× and 4×. In contrast, we propose to incorporate both visual and auditory signals to super-resolve speech of sampling rates as low as 1kHz. In such challenging situations, the visual features assist in learning the content and improve the quality of the generated speech. Further, we demonstrate the applicability of our approach to arbitrary speech signals where the visual stream is not accessible. Our "pseudo-visual network" precisely synthesizes the visual stream solely from the low-resolution speech input. Extensive experiments illustrate our method's remarkable results and benefits over state-of-the-art audio-only speech super-resolution approaches. Our project website can be found at http://cvit.iiit.ac.in/research/projects/cvit-projects/audio-visual-speech-super-resolution.
| Original language | English |
|---|---|
| Publication status | Published - 25 Nov 2021 |
| Event | 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online |
| Duration | 22 Nov 2021 → 25 Nov 2021 |
Conference
| Conference | 32nd British Machine Vision Conference, BMVC 2021 |
|---|---|
| City | Virtual, Online |
| Period | 22/11/21 → 25/11/21 |
Bibliographical note
Publisher Copyright: © 2021. The copyright of this document resides with its authors.
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Vision and Pattern Recognition