360MonoDepth: High-Resolution 360° Monocular Depth Estimation

Manuel Rey-Area, Mingze Yuan, Christian Richardt

Research output: Chapter in a published conference proceeding


Abstract

360° cameras can capture complete environments in a single shot, which makes 360° imagery alluring in many computer vision tasks. However, monocular depth estimation remains a challenge for 360° data, particularly for high resolutions like 2K (2048×1024) and beyond that are important for novel-view synthesis and virtual reality applications. Current CNN-based methods do not support such high resolutions due to limited GPU memory. In this work, we propose a flexible framework for monocular depth estimation from high-resolution 360° images using tangent images. We project the 360° input image onto a set of tangent planes that produce perspective views, which are suitable for the latest, most accurate state-of-the-art perspective monocular depth estimators. To achieve globally consistent disparity estimates, we recombine the individual depth estimates using deformable multi-scale alignment followed by gradient-domain blending. The result is a dense, high-resolution 360° depth map with a high level of detail, also for outdoor scenes, which are not supported by existing methods.
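To illustrate the tangent-image idea described in the abstract, the sketch below resamples one perspective view from an equirectangular 360° image using the inverse gnomonic projection. This is only a minimal illustration, not the authors' implementation: the function name `tangent_image`, the field-of-view and resolution parameters, and the sampling details are assumptions, and the paper's actual tangent-plane layout and resampling pipeline may differ.

```python
# Minimal sketch (assumed names and parameters, not the authors' code):
# sample one perspective "tangent image" from an equirectangular panorama
# via the inverse gnomonic projection.
import numpy as np
from scipy.ndimage import map_coordinates

def tangent_image(erp, lon0, lat0, fov_deg=80.0, size=512):
    """Sample a size x size perspective view tangent to the sphere at (lon0, lat0)."""
    h, w = erp.shape[:2]
    half = np.tan(np.radians(fov_deg) / 2.0)
    # Regular pixel grid on the tangent plane, in normalized plane coordinates.
    x, y = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(-half, half, size))
    rho = np.sqrt(x**2 + y**2)
    c = np.arctan(rho)
    rho = np.where(rho == 0, 1e-12, rho)  # avoid division by zero at the plane centre
    # Inverse gnomonic projection: plane point -> latitude/longitude on the sphere.
    lat = np.arcsin(np.cos(c) * np.sin(lat0) + y * np.sin(c) * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(x * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c) - y * np.sin(lat0) * np.sin(c))
    # Latitude/longitude -> equirectangular pixel coordinates, then bilinear sampling.
    u = (lon / (2 * np.pi) + 0.5) % 1.0 * (w - 1)
    v = (0.5 - lat / np.pi) * (h - 1)
    channels = [map_coordinates(erp[..., ch], [v, u], order=1, mode='wrap')
                for ch in range(erp.shape[2])]
    return np.stack(channels, axis=-1)
```

In the framework described above, each such perspective view would be passed to an off-the-shelf perspective monocular depth estimator, and the per-view estimates would then be aligned and blended back into a single equirectangular disparity map; the alignment and gradient-domain blending steps are not shown here.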
Original language: English
Title of host publication: Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: IEEE
ISBN (Electronic): 978-1-6654-6946-3
DOIs
Publication status: E-pub ahead of print - 27 Sep 2022

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: IEEE
ISSN (Print): 1063-6919
ISSN (Electronic): 2575-7075
