Depiction invariant object matching

Anupriya Balikai, Peter M. Hall

Research output: Contribution to conference › Paper › peer-review

1 Citation (SciVal)

Abstract

We are interested in matching objects in photographs, paintings, sketches and so on; after all, humans have a remarkable ability to recognise objects in images, no matter how they are depicted. We conduct experiments in matching, and conclude that the key to robustness lies in object description. The existing literature consists of numerous feature descriptors that rely heavily on photometric properties such as colour and illumination to describe objects. Although these methods achieve high rates of accuracy in applications such as detection and retrieval of photographs, they fail to generalise to datasets consisting of mixed depictions. Here, we propose a more general approach for describing objects that is invariant to depictive style. We use structure at a global level, combined with simple non-photometric descriptors at a local level. There is no need for any prior learning. Our descriptor achieves results on par with the existing state of the art when applied to object matching on a standard dataset consisting of photographs alone, and outperforms the state of the art when applied to depiction-invariant object matching.
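To make the idea of pairing global structure with local non-photometric cues concrete, the sketch below shows one toy way such a matcher could be assembled: edge-orientation histograms (which ignore colour and absolute brightness) are computed per grid cell, and their spatial concatenation acts as a coarse global structure signature. This is an illustrative assumption, not the descriptor proposed in the paper; the grid size, Canny thresholds and cosine-similarity scoring are all choices made here for demonstration only.

```python
# Illustrative sketch only: a toy depiction-insensitive matcher combining a
# coarse global structure signature with local edge-orientation histograms.
# NOT the authors' method; all parameters here are assumptions.
import cv2
import numpy as np

def local_orientation_histograms(gray, grid=4, bins=8):
    """Per-cell histograms of edge orientations (a non-photometric local cue)."""
    edges = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    angles = np.arctan2(gy, gx)              # gradient orientation at each pixel
    h, w = gray.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            mask = edges[cell] > 0            # keep orientations on edge pixels only
            hist, _ = np.histogram(angles[cell][mask], bins=bins,
                                   range=(-np.pi, np.pi))
            feats.append(hist.astype(np.float64))
    return feats

def global_structure_signature(feats):
    """Concatenate cell histograms so their spatial layout encodes global structure."""
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def match_score(img_a, img_b, size=256):
    """Cosine similarity between the two signatures; higher means a better match."""
    sigs = []
    for img in (img_a, img_b):
        gray = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (size, size))
        sigs.append(global_structure_signature(local_orientation_histograms(gray)))
    return float(np.dot(sigs[0], sigs[1]))

# Usage (hypothetical file names):
# photo = cv2.imread("horse_photo.jpg")
# sketch = cv2.imread("horse_sketch.png")
# print(match_score(photo, sketch))
```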

Original language: English
Publication status: Published - 1 Jan 2012
Event: 2012 23rd British Machine Vision Conference, BMVC 2012 - Guildford, Surrey, United Kingdom
Duration: 3 Sept 2012 - 7 Sept 2012

Conference

Conference: 2012 23rd British Machine Vision Conference, BMVC 2012
Country/Territory: United Kingdom
City: Guildford, Surrey
Period: 3/09/12 - 7/09/12

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
