Abstract
For a long time, stereoscopic 3D videos were usually encoded and shown in the anaglyph format. This format combines the two stereo views into a single color image by splitting the color spectrum and assigning each view to one half of it, for example red for the left view and cyan (blue+green) for the right view. Glasses with matching color filters then separate the color channels again to provide the appropriate view to each eye. This simplicity made anaglyph stereo a popular choice for showing stereoscopic content, as it works with existing screens, projectors and print media. However, modern stereo displays and projectors natively support two full-color views, and avoid the viewing discomfort associated with anaglyph videos.
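As an illustration of this encoding, composing a red/cyan anaglyph from a stereo pair is a simple per-channel selection. The following minimal NumPy sketch (function name and 8-bit RGB conventions are our own, not from the paper) shows the idea:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine a stereo pair into a red/cyan anaglyph.

    Takes the red channel from the left view and the green+blue
    (cyan) channels from the right view. Both inputs are assumed
    to be HxWx3 RGB arrays of the same shape.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]    # red   <- left view
    anaglyph[..., 1] = right[..., 1]   # green <- right view
    anaglyph[..., 2] = right[..., 2]   # blue  <- right view
    return anaglyph
```

De-anaglyph is the inverse problem: the red channel of the right view and the green and blue channels of the left view are discarded by this encoding and must be reconstructed from the single combined image.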
Our work investigates how to convert existing anaglyph videos to the full-color stereo format used by modern displays. Anaglyph videos contain only half the color information of full-color videos, and the missing color channels need to be reconstructed from the existing ones in a plausible and temporally coherent fashion. Joulin and Kang [2013] propose an approach that works well for images, but their extension to video is limited by its heavy computational cost. Other techniques support only single images and, when applied to each frame of a video, generally produce flickering results.
In our approach, we put the temporal coherence of the stereo results front and center by expressing Joulin and Kang’s approach within the practical temporal consistency framework of Lang et al. [2012]. As a result, our approach is both efficient and temporally coherent. In addition, it computes temporally coherent optical flow and disparity maps that can be used for various post-processing tasks.
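The abstract does not spell out the filtering itself, but the general idea behind this kind of temporal consistency can be sketched as propagating the previous frame's result along optical flow and blending it with the current per-frame estimate. The hypothetical sketch below (OpenCV-based, with an illustrative blend weight) conveys only that general idea; it is not the formulation of Lang et al. [2012] or of this work:

```python
import numpy as np
import cv2

def temporally_filter(prev_result: np.ndarray,
                      cur_estimate: np.ndarray,
                      flow: np.ndarray,
                      alpha: float = 0.7) -> np.ndarray:
    """Blend the flow-warped previous result with the current estimate.

    flow: backward optical flow (current -> previous), HxWx2 float32.
    prev_result and cur_estimate must have the same shape and dtype.
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # For each pixel in the current frame, sample the previous result
    # at the location the flow points to.
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped_prev = cv2.remap(prev_result, map_x, map_y,
                            interpolation=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REPLICATE)
    # alpha weights the temporal history; (1 - alpha) trusts the new frame.
    return cv2.addWeighted(warped_prev, alpha, cur_estimate, 1.0 - alpha, 0.0)
```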
Original language | English |
---|---|
Pages | 75 |
Number of pages | 1 |
Publication status | Published - Aug 2014 |
Event | ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014, Vancouver, Canada, 10 Aug 2014 → 14 Aug 2014 |
Conference

Conference | ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 |
---|---|
Country/Territory | Canada |
City | Vancouver |
Period | 10/08/14 → 14/08/14 |