Optical flow estimation is a difficult task given real-world video footage with camera and object motion blur. In this paper, we combine a 3D pose-and-position tracker with an RGB sensor, allowing us to capture video footage together with 3D camera motion. We show that the additional camera motion information can be embedded into a hybrid optical flow framework by interleaving an iterative blind deconvolution and a warping-based minimization scheme. Such a hybrid framework significantly improves the accuracy of optical flow estimation in scenes with strong blur. Our approach yields improved overall performance against three state-of-the-art baseline methods applied to our proposed ground truth sequences, as well as in several other real-world cases.
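The abstract's interleaving idea, alternating a deconvolution step with a flow-estimation step, can be illustrated with a deliberately simplified sketch. The code below is not the paper's method: it substitutes non-blind Richardson-Lucy deconvolution for the blind variant, and a global translational flow estimate (FFT phase correlation) for the warping-based minimization; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def convolve2d(img, kernel):
    # Circular 'same' convolution via FFT: embed the kernel in an
    # image-sized array with its center moved to (0, 0).
    kh, kw = kernel.shape
    pad = np.zeros_like(img)
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(blurred, kernel, iters=20):
    # Non-blind Richardson-Lucy stand-in for the blind deconvolution step.
    estimate = np.full_like(blurred, 0.5)
    flipped = kernel[::-1, ::-1]  # correlation = convolution with flipped kernel
    for _ in range(iters):
        denom = convolve2d(estimate, kernel) + 1e-8
        estimate = estimate * convolve2d(blurred / denom, flipped)
    return estimate

def translational_flow(f1, f2):
    # Global-translation stand-in for dense flow: FFT phase correlation.
    cross = np.conj(np.fft.fft2(f1)) * np.fft.fft2(f2)
    corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-8)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = f1.shape
    # Unwrap circular shifts into signed displacements.
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

def interleaved_estimate(b1, b2, kernel, rounds=3):
    # Alternate: deblur both frames, then re-estimate flow on the
    # sharpened frames. In the paper, the camera-motion prior would
    # additionally constrain the kernel; here the kernel is fixed.
    for _ in range(rounds):
        s1 = richardson_lucy(b1, kernel)
        s2 = richardson_lucy(b2, kernel)
        flow = translational_flow(s1, s2)
    return flow
```

A usage sketch: blur two frames related by a known shift with a 3x3 box kernel, then recover the shift with `interleaved_estimate`. The point of the alternation is that flow estimated on deblurred frames is more reliable than flow estimated on the blurred input directly.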
Publication status: Published - 30 Jan 2013
Event: IEEE Winter Conference on Applications of Computer Vision, United Kingdom
Duration: 30 Jul 2013 → …