Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras

Christian Richardt, Hyeongwoo Kim, Levi Valgaerts, Christian Theobalt

Research output: Chapter in Book/Report/Conference proceeding › Chapter

  • 6 Citations

Abstract

We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras with varying camera settings.
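The abstract outlines a three-stage pipeline: dense DAISY matching between cameras and over time, occlusion detection with Laplacian completion of the correspondence fields, and a variational refinement. For orientation, below is a minimal Python sketch of the first two stages built from off-the-shelf parts (scikit-image's daisy descriptor, SciPy's k-d tree). It is not the authors' implementation: the helper names (dense_daisy, match_nn, occlusion_mask, indices_to_flow, laplacian_fill), the grid step, and the homogeneous diffusion used for hole-filling are illustrative assumptions; the paper's completion is edge-preserving, and its variational refinement (generically, minimising an energy of the form E(w) = ∫ Ψ(data(w)) + λ Ψ(|∇w|²) dx, whose exact terms are given in the paper) is omitted entirely.

import numpy as np
from scipy.spatial import cKDTree
from skimage.feature import daisy

def dense_daisy(gray, step=4):
    # DAISY descriptors on a regular grid; returns (rows, cols, dim).
    # step=4 is an illustrative choice; the paper computes per-pixel fields.
    return daisy(gray, step=step, radius=15, rings=3,
                 histograms=8, orientations=8)

def match_nn(desc_a, desc_b):
    # Nearest-neighbour matching of grid descriptors via a k-d tree.
    d = desc_a.shape[-1]
    a = desc_a.reshape(-1, d)
    b = desc_b.reshape(-1, d)
    _, idx = cKDTree(b).query(a)  # for each cell of A, its best cell in B
    return idx

def occlusion_mask(idx_ab, idx_ba):
    # Forward-backward consistency: a match that does not map back to its
    # source cell is flagged as occluded (one simple cue; the paper's
    # occlusion detection is more involved).
    return idx_ba[idx_ab] != np.arange(idx_ab.shape[0])

def indices_to_flow(idx, shape):
    # Convert matched linear indices into 2-D displacement vectors,
    # assuming both descriptor grids have the same shape.
    rows, cols = shape
    src_y, src_x = np.divmod(np.arange(rows * cols), cols)
    dst_y, dst_x = np.divmod(idx, cols)
    flow = np.stack([dst_x - src_x, dst_y - src_y], axis=1).astype(float)
    return flow.reshape(rows, cols, 2)

def laplacian_fill(flow, occluded, iters=500):
    # Replace flagged flow vectors by iterated neighbourhood averaging,
    # i.e. solve a homogeneous Laplace equation over the occluded region
    # with the visible pixels as boundary conditions. np.roll wraps at
    # the image borders, which is acceptable only for a sketch.
    f = flow.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[occluded] = avg[occluded]
    return f

Given two grayscale frames a and b, desc_a = dense_daisy(a) and desc_b = dense_daisy(b) yield grid descriptors; matching in both directions gives a discrete correspondence field and its occlusion mask (reshaped to the grid), which indices_to_flow and laplacian_fill turn into a completed displacement field ready for variational refinement.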
Language: English
Title of host publication: Proceedings of the 2016 Fourth International Conference on 3D Vision
Subtitle of host publication: 25–28 October 2016, Stanford, California, USA
Place of Publication: Los Alamitos, CA, USA
Publisher: IEEE
Pages: 276-285
Number of pages: 10
ISBN (Print): 978-1-5090-5407-7
DOIs: 10.1109/3DV.2016.36
Status: Published - 25 Oct 2016
Event: International Conference on 3D Vision - Stanford University, Palo Alto, United States
Duration: 25 Oct 2016 - 28 Oct 2016
http://3dv.stanford.edu/

Conference

Conference: International Conference on 3D Vision
Abbreviated title: 3DV
Country: United States
City: Palo Alto
Period: 25/10/16 - 28/10/16
Internet address: http://3dv.stanford.edu/

Cite this

Richardt, C., Kim, H., Valgaerts, L., & Theobalt, C. (2016). Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. In Proceedings of the 2016 Fourth International Conference on 3D Vision: 25–28 October 2016, Stanford, California, USA (pp. 276-285). Los Alamitos, CA, USA: IEEE. https://doi.org/10.1109/3DV.2016.36

Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. / Richardt, Christian; Kim, Hyeongwoo; Valgaerts, Levi; Theobalt, Christian.

Proceedings of the 2016 Fourth International Conference on 3D Vision: 25–28 October 2016, Stanford, California, USA. Los Alamitos, CA, USA: IEEE, 2016. p. 276-285.

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Richardt, C, Kim, H, Valgaerts, L & Theobalt, C 2016, Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. in Proceedings of the 2016 Fourth International Conference on 3D Vision: 25–28 October 2016, Stanford, California, USA. IEEE, Los Alamitos, CA, USA, pp. 276-285, International Conference on 3D Vision, Palo Alto, United States, 25/10/16. https://doi.org/10.1109/3DV.2016.36
Richardt C, Kim H, Valgaerts L, Theobalt C. Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. In Proceedings of the 2016 Fourth International Conference on 3D Vision: 25–28 October 2016, Stanford, California, USA. Los Alamitos, CA, USA: IEEE. 2016. p. 276-285. https://doi.org/10.1109/3DV.2016.36
Richardt, Christian ; Kim, Hyeongwoo ; Valgaerts, Levi ; Theobalt, Christian. / Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. Proceedings of the 2016 Fourth International Conference on 3D Vision: 25–28 October 2016, Stanford, California, USA. Los Alamitos, CA, USA: IEEE, 2016. pp. 276-285
@inbook{02d08ba098bd40889dbb9c7ebef4b7d0,
title = "Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras",
abstract = "We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios.We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.",
author = "Christian Richardt and Hyeongwoo Kim and Levi Valgaerts and Christian Theobalt",
year = "2016",
month = "10",
day = "25",
doi = "10.1109/3DV.2016.36",
language = "English",
isbn = "978-1-5090-5407-7",
pages = "276--285",
booktitle = "Proceedings of the 2016 Fourth International Conference on 3D Vision",
publisher = "IEEE",
address = "USA United States",

}

TY - CHAP

T1 - Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras

AU - Richardt, Christian

AU - Kim, Hyeongwoo

AU - Valgaerts, Levi

AU - Theobalt, Christian

PY - 2016/10/25

Y1 - 2016/10/25

N2 - We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras with varying camera settings.

AB - We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras with varying camera settings.

UR - http://richardt.name/publications/wide-baseline-scene-flow/

U2 - 10.1109/3DV.2016.36

DO - 10.1109/3DV.2016.36

M3 - Chapter

SN - 978-1-5090-5407-7

SP - 276

EP - 285

BT - Proceedings of the 2016 Fourth International Conference on 3D Vision

PB - IEEE

CY - Los Alamitos, CA, USA

ER -