Temporally Coherent Video De-Anaglyph

Joan Sol Roo, Christian Richardt

Research output: Contribution to conference › Poster

1 Citation (Scopus)

Abstract

For a long time, stereoscopic 3D videos were usually encoded and shown in the anaglyph format. This format combines the two stereo views into a single color image by splitting its color spectrum and assigning each view to one half of the spectrum, for example red for the left and cyan (blue+green) for the right view. Glasses with matching color filters then separate the color channels again to provide the appropriate view to each eye. This simplicity made anaglyph stereo a popular choice for showing stereoscopic content, as it works with existing screens, projectors and print media. However, modern stereo displays and projectors natively support two full-color views, and avoid the viewing discomfort associated with anaglyph videos.
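
The channel split described above is simple to state concretely. The following is a minimal NumPy sketch (not the authors' code) of how a red-cyan anaglyph combines a stereo pair: the red channel is taken from the left view, and the green and blue (cyan) channels from the right view. The function name is illustrative.

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph image.

    `left` and `right` are H x W x 3 RGB arrays. The red channel comes
    from the left view; green and blue (together: cyan) come from the
    right view, matching the channel assignment described in the text.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]   # red   <- left view
    anaglyph[..., 1] = right[..., 1]  # green <- right view
    anaglyph[..., 2] = right[..., 2]  # blue  <- right view
    return anaglyph
```

De-anaglyph inverts this step: given only the anaglyph, the right view's red channel and the left view's green and blue channels must be reconstructed, which is the ill-posed part of the problem.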

Our work investigates how to convert existing anaglyph videos to the full-color stereo format used by modern displays. Anaglyph videos contain only half the color information of full-color videos, and the missing color channels need to be reconstructed from the existing ones in a plausible and temporally coherent fashion. Joulin and Kang [2013] propose an approach that works well for images, but its extension to video is limited by its heavy computational cost. Other techniques support only single images and, when applied to each frame of a video independently, generally produce flickering results.

In our approach, we put the temporal coherence of the stereo results front and center by expressing Joulin and Kang’s approach within the practical temporal consistency framework of Lang et al. [2012]. As a result, our approach is both efficient and temporally coherent. In addition, it computes temporally coherent optical flow and disparity maps that can be used for various post-processing tasks.
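
The core temporal-consistency idea can be sketched as blending each per-frame reconstruction with the previous frame's output warped along optical flow. This is a simplified illustration in the spirit of Lang et al.'s framework, not the authors' implementation; the nearest-neighbor warp, the function names, and the blending weight `alpha` are all illustrative (a real system would use bilinear interpolation and occlusion handling).

```python
import numpy as np

def warp_by_flow(prev, flow):
    """Warp the previous frame's result along an optical-flow field.

    `flow[..., 0]` and `flow[..., 1]` are per-pixel x and y offsets.
    Nearest-neighbor lookup for brevity.
    """
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev[src_y, src_x]

def temporally_filter(per_frame_results, flows, alpha=0.3):
    """Blend each per-frame result with the flow-warped previous output.

    Larger `alpha` trusts the current frame more; smaller `alpha`
    enforces more temporal smoothing.
    """
    out = [per_frame_results[0]]
    for cur, flow in zip(per_frame_results[1:], flows):
        warped = warp_by_flow(out[-1], flow)
        out.append(alpha * cur + (1 - alpha) * warped)
    return out
```

With zero flow and `alpha = 0.5`, each output is simply the average of the current per-frame result and the previous output, which damps frame-to-frame flicker.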
Original language: English
Pages: 75
Number of pages: 1
DOI: 10.1145/2614106.2614125
Publication status: Published - Aug 2014
Event: ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014 - Vancouver, Canada
Duration: 10 Aug 2014 - 14 Aug 2014



Cite this

Roo, J. S., & Richardt, C. (2014). Temporally Coherent Video De-Anaglyph. 75. Poster session presented at ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2014, Vancouver, Canada. https://doi.org/10.1145/2614106.2614125


Links

http://richardt.name/publications/video-deanaglyph/
http://richardt.name/video-deanaglyph/VideoDeAnaglyph-abstract.pdf