Learn to Model Blurry Motion via Directional Similarity and Filtering

Wenbin Li, Da Chen, Zhihan Lv, Yan Yan, Darren Cosker

Research output: Contribution to journal › Article

2 Citations (Scopus)
25 Downloads (Pure)

Abstract

It is difficult to recover the motion field from real-world footage that contains a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) with a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes an angle and distance similarity matrix between blur and camera motion, which enhances the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which makes it possible to model and solve both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches.
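The abstract is high level, so the code below is a minimal, hypothetical PyTorch sketch of what a learnable directional filtering layer of this kind could look like: a fixed bank of oriented linear blur kernels whose responses are blended by a learned angle/distance similarity between each kernel and an estimated camera-motion vector. None of the names (DirectionalFilteringLayer, line_kernel, alpha, beta) or design details come from the paper; they are illustrative assumptions only.

    # Hypothetical sketch, not the authors' released code.
    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def line_kernel(angle, length, size=15):
        """Normalized linear motion-blur kernel at the given orientation and length."""
        k = torch.zeros(size, size)
        c = size // 2
        for t in torch.linspace(-length / 2.0, length / 2.0, steps=size):
            x = int(round(c + t.item() * math.cos(angle)))
            y = int(round(c + t.item() * math.sin(angle)))
            if 0 <= x < size and 0 <= y < size:
                k[y, x] = 1.0
        return k / k.sum().clamp(min=1.0)

    class DirectionalFilteringLayer(nn.Module):
        """Blends a fixed bank of oriented blur kernels using a learned
        angle/distance similarity to an estimated camera-motion vector."""

        def __init__(self, n_angles=8, lengths=(3.0, 7.0, 11.0), ksize=15):
            super().__init__()
            angles = [i * math.pi / n_angles for i in range(n_angles)]
            bank = torch.stack([line_kernel(a, l, ksize) for a in angles for l in lengths])
            self.register_buffer("bank", bank.unsqueeze(1))  # (D, 1, k, k)
            self.register_buffer("angles", torch.tensor([a for a in angles for _ in lengths]))
            self.register_buffer("lengths", torch.tensor([l for _ in angles for l in lengths]))
            # Learnable temperatures weighting the angle and distance similarity terms.
            self.alpha = nn.Parameter(torch.tensor(1.0))
            self.beta = nn.Parameter(torch.tensor(0.1))

        def forward(self, img, motion):
            # img: (B, 1, H, W) grayscale frame; motion: (B, 2) mean camera motion (u, v).
            u, v = motion[:, 0], motion[:, 1]
            m_angle = torch.atan2(v, u).unsqueeze(1)                           # (B, 1)
            m_len = torch.sqrt(u * u + v * v).unsqueeze(1)                     # (B, 1)
            # Angle and distance similarity between each blur kernel and the camera motion.
            sim_angle = torch.cos(2.0 * (self.angles.unsqueeze(0) - m_angle))  # (B, D)
            sim_len = -(self.lengths.unsqueeze(0) - m_len).abs()               # (B, D)
            w = F.softmax(self.alpha * sim_angle + self.beta * sim_len, dim=1) # (B, D)
            # Apply every directional filter, then blend responses by similarity weight.
            resp = F.conv2d(img, self.bank, padding=self.bank.shape[-1] // 2)  # (B, D, H, W)
            return (w.unsqueeze(-1).unsqueeze(-1) * resp).sum(dim=1, keepdim=True)

In the paper's hybrid scheme such a layer would presumably feed blur-aware features into an alternating loop that refines the deblurred frames and the optical flow estimate in turn; the exact interleaving is described in the article itself.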

Original language: English
Pages (from-to): 327-338
Number of pages: 12
Journal: Pattern Recognition
Volume: 75
Early online date: 22 Apr 2017
DOIs: 10.1016/j.patcog.2017.04.020
Publication status: Published - 1 Mar 2018

Keywords

  • Convolutional Neural Network (CNN)
  • Directional filtering
  • Optical flow
  • Video/image deblurring

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Cite this

Learn to Model Blurry Motion via Directional Similarity and Filtering. / Li, Wenbin; Chen, Da; Lv, Zhihan; Yan, Yan; Cosker, Darren.

In: Pattern Recognition, Vol. 75, 01.03.2018, p. 327-338.

Research output: Contribution to journal › Article

@article{1717d96a8bbb4d76b65790dd63cdd18a,
title = "Learn to Model Blurry Motion via Directional Similarity and Filtering",
abstract = "It is difficult to recover the motion field from a real-world footage given a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework by interleaving a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first conduct a CNN architecture using a novel learnable directional filtering layer. Such layer encodes the angle and distance similarity matrix between blur and camera motion, which is able to enhance the blur features of the camera-shake footages. The proposed CNNs are then integrated into an iterative optical flow framework, which enable the capability of modeling and solving both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against the state-of-the-art approaches.",
keywords = "Convolutional Neural Network (CNN), Directional filtering, Optical flow, Video/image deblurring",
author = "Wenbin Li and Da Chen and Lv. Zhihan and Yan Yan and Darren Cosker",
year = "2018",
month = "3",
day = "1",
doi = "10.1016/j.patcog.2017.04.020",
language = "English",
volume = "75",
pages = "327--338",
journal = "Pattern Recognition",
issn = "0031-3203",
publisher = "Elsevier",

}

TY - JOUR

T1 - Learn to Model Blurry Motion via Directional Similarity and Filtering

AU - Li, Wenbin

AU - Chen, Da

AU - Lv, Zhihan

AU - Yan, Yan

AU - Cosker, Darren

PY - 2018/3/1

Y1 - 2018/3/1

N2 - It is difficult to recover the motion field from real-world footage that contains a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) with a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes an angle and distance similarity matrix between blur and camera motion, which enhances the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which makes it possible to model and solve both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches.

AB - It is difficult to recover the motion field from real-world footage that contains a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) with a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes an angle and distance similarity matrix between blur and camera motion, which enhances the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which makes it possible to model and solve both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches.

KW - Convolutional Neural Network (CNN)

KW - Directional filtering

KW - Optical flow

KW - Video/image deblurring

UR - http://www.scopus.com/inward/record.url?scp=85018303198&partnerID=8YFLogxK

U2 - 10.1016/j.patcog.2017.04.020

DO - 10.1016/j.patcog.2017.04.020

M3 - Article

VL - 75

SP - 327

EP - 338

JO - Pattern Recognition

JF - Pattern Recognition

SN - 0031-3203

ER -