TY - GEN
T1 - Extreme-scale Talking-Face Video Upsampling with Audio-Visual Priors
AU - Hegde, Sindhu B.
AU - Mukhopadhyay, Rudrabha
AU - Namboodiri, Vinay P.
AU - Jawahar, C. V.
PY - 2022/10/10
Y1 - 2022/10/10
N2 - In this paper, we explore an interesting question of what can be obtained from an 8x8 pixel video sequence. Surprisingly, it turns out to be quite a lot. We show that when we process this 8x8 video with the right set of audio and image priors, we can obtain a full-length, 256x256 video. We achieve this 32x scaling of an extremely low-resolution input using our novel audio-visual upsampling network. The audio prior helps to recover the elemental facial details and precise lip shapes, and a single high-resolution target identity image prior provides us with rich appearance details. Our approach is an end-to-end multi-stage framework. The first stage produces a coarse intermediate output video that can then be used to animate a single target identity image and generate realistic, accurate, and high-quality outputs. Our approach is simple and performs exceedingly well (an 8x improvement in FID score) compared to previous super-resolution methods. We also extend our model to talking-face video compression, and show that we obtain a 3.5x improvement in terms of bits/pixel over the previous state-of-the-art. The results from our network are thoroughly analyzed through extensive ablation experiments (in the paper and supplementary material). We also provide the demo video along with code and models on our project page: http://cvit.iiit.ac.in/research/projects/cvit-projects/talking-face-video-upsampling.
KW - audio-visual learning
KW - talking-face videos
KW - video compression
KW - video super-resolution
KW - video upsampling
UR - http://www.scopus.com/inward/record.url?scp=85151164774&partnerID=8YFLogxK
U2 - 10.1145/3503161.3548080
DO - 10.1145/3503161.3548080
M3 - Chapter in a published conference proceeding
AN - SCOPUS:85151164774
T3 - MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
SP - 6511
EP - 6520
BT - MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
PB - Association for Computing Machinery
T2 - 30th ACM International Conference on Multimedia, MM 2022
Y2 - 10 October 2022 through 14 October 2022
ER -