Learn to Model Blurry Motion via Directional Similarity and Filtering

Wenbin Li, Da Chen, Lv. Zhihan, Yan Yan, Darren Cosker

Research output: Contribution to journal › Article › peer-review

3 Citations (SciVal)
84 Downloads (Pure)


Recovering the motion field from real-world footage is difficult when camera shake is mixed with other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) with a traditional optical flow energy. We first construct a CNN architecture with a novel learnable directional filtering layer. This layer encodes an angle and distance similarity matrix between blur and camera motion, which enhances the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, enabling blind deconvolution and optical flow estimation to be modeled and solved simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches.
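To make the directional-similarity idea concrete, the following is a minimal sketch (not the paper's actual layer) of how a similarity weight between a blur direction and a bank of oriented filters might be computed. The angle-periodicity trick, the Gaussian length falloff, and the bandwidth value are illustrative assumptions, not details from the article.

```python
import numpy as np

def directional_similarity(blur_angle, blur_length, filter_angles, filter_lengths,
                           length_sigma=5.0):
    """Illustrative angle-and-distance similarity between a blur direction
    and a bank of oriented filters (hypothetical, for exposition only)."""
    # Orientation is pi-periodic, so compare doubled angles; result in [0, 1].
    ang_sim = 0.5 * np.cos(2.0 * (filter_angles - blur_angle)) + 0.5
    # Gaussian falloff on the mismatch between blur and filter lengths.
    dist_sim = np.exp(-((filter_lengths - blur_length) ** 2)
                      / (2.0 * length_sigma ** 2))
    return ang_sim * dist_sim

# Example: a bank of six orientations, all with the blur's length (9 px).
angles = np.deg2rad(np.arange(0, 180, 30))
lengths = np.full_like(angles, 9.0)
weights = directional_similarity(np.deg2rad(30.0), 9.0, angles, lengths)
```

In this sketch the filter whose orientation matches the blur direction receives the largest weight, which mimics how a similarity matrix could emphasize blur-aligned filter responses; in the paper these weights are learned end-to-end rather than fixed.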

Original language: English
Pages (from-to): 327-338
Number of pages: 12
Journal: Pattern Recognition
Early online date: 22 Apr 2017
Publication status: Published - 1 Mar 2018


Keywords

  • Convolutional Neural Network (CNN)
  • Directional filtering
  • Optical flow
  • Video/image deblurring

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence


