Video Depth-From-Defocus

Hyeongwoo Kim, Christian Richardt, Christian Theobalt

Research output: Contribution to conference › Paper

2 Citations (Scopus)
27 Downloads (Pure)

Abstract

Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
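For intuition about the defocus cue the method exploits, the sketch below (Python, with hypothetical helper names and lens parameters, not taken from the paper) uses the thin-lens circle-of-confusion model: an object at depth d, seen through a lens of focal length f and aperture diameter A focused at distance d_f, is blurred by a disc of diameter roughly A * f/(d_f - f) * |d - d_f|/d. Sweeping the focus plane over time therefore gives each depth a characteristic blur profile, which even a naive per-pixel search can invert; the paper's actual algorithm instead solves a space-time-coherent optimization for depth, the all-in-focus video, and the per-frame focus distance.

```python
import numpy as np

# Minimal sketch of the defocus-as-depth-signal idea, assuming a thin-lens
# camera model. All parameter values and function names are illustrative
# assumptions, not the paper's implementation.

FOCAL_LENGTH_MM = 50.0  # hypothetical lens focal length f (mm)
F_NUMBER = 1.8          # hypothetical f-number N; aperture diameter A = f / N


def circle_of_confusion(depth_mm, focus_dist_mm,
                        focal_length_mm=FOCAL_LENGTH_MM, f_number=F_NUMBER):
    """Thin-lens circle-of-confusion diameter on the sensor (mm).

    An object at depth_mm is blurred by roughly this disc diameter when the
    lens is focused at focus_dist_mm; the blur grows as the object moves away
    from the focus plane, which is the signal depth-from-defocus exploits.
    """
    aperture = focal_length_mm / f_number
    return (aperture
            * focal_length_mm / (focus_dist_mm - focal_length_mm)
            * np.abs(depth_mm - focus_dist_mm) / depth_mm)


def depth_from_blur_profile(measured_blur, focus_ramp_mm, candidate_depths_mm):
    """Naive per-pixel depth estimate from a focus ramp.

    Given the blur diameters observed at one pixel across the frames of a
    focus sweep, return the candidate depth whose predicted blur profile
    matches the observations best in the least-squares sense.
    """
    errors = [np.sum((circle_of_confusion(d, focus_ramp_mm) - measured_blur) ** 2)
              for d in candidate_depths_mm]
    return candidate_depths_mm[int(np.argmin(errors))]


if __name__ == "__main__":
    # Focus plane swept back and forth between 0.5 m and 5 m over 30 frames,
    # mimicking a manual focus-ring sweep during recording.
    ramp = np.concatenate([np.linspace(500.0, 5000.0, 15),
                           np.linspace(5000.0, 500.0, 15)])
    true_depth = 1800.0                      # ground-truth depth of this pixel (mm)
    observed = circle_of_confusion(true_depth, ramp)
    candidates = np.linspace(500.0, 5000.0, 400)
    print("estimated depth (mm):", depth_from_blur_profile(observed, ramp, candidates))
```

In real footage the blur diameters are of course not observed directly; recovering them requires the kind of joint deblurring and space-time-coherent depth estimation the paper describes.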
Original language: English
Pages: 370-379
Number of pages: 10
DOI: 10.1109/3DV.2016.46
Publication status: Published - 25 Oct 2016
Event: International Conference on 3D Vision - Stanford University, Palo Alto, United States
Duration: 25 Oct 2016 - 28 Oct 2016
http://3dv.stanford.edu/

Conference

Conference: International Conference on 3D Vision
Abbreviated title: 3DV
Country: United States
City: Palo Alto
Period: 25/10/16 - 28/10/16
Internet address: http://3dv.stanford.edu/

Fingerprint

Cameras
Video cameras
Processing
Computational methods
Lenses
Optics
Imaging techniques

Cite this

Kim, H., Richardt, C., & Theobalt, C. (2016). Video Depth-From-Defocus. 370-379. Paper presented at International Conference on 3D Vision, Palo Alto, United States. https://doi.org/10.1109/3DV.2016.46

@conference{365bd5e0c96048b985332a0a90731d95,
title = "Video Depth-From-Defocus",
abstract = "Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.",
author = "Hyeongwoo Kim and Christian Richardt and Christian Theobalt",
year = "2016",
month = "10",
day = "25",
doi = "10.1109/3DV.2016.46",
language = "English",
pages = "370--379",
note = "International Conference on 3D Vision, 3DV ; Conference date: 25-10-2016 Through 28-10-2016",
url = "http://3dv.stanford.edu/",

}

TY - CONF
T1 - Video Depth-From-Defocus
AU - Kim, Hyeongwoo
AU - Richardt, Christian
AU - Theobalt, Christian
PY - 2016/10/25
Y1 - 2016/10/25
N2 - Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
AB - Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
UR - http://richardt.name/publications/video-depth-from-defocus/
U2 - 10.1109/3DV.2016.46
DO - 10.1109/3DV.2016.46
M3 - Paper
SP - 370
EP - 379
ER -