Depiction invariant object matching

Anupriya Balikai, Peter M. Hall

Research output: Contribution to conference › Paper

1 Citation (Scopus)

Abstract

We are interested in matching objects in photographs, paintings, sketches and so on; after all, humans have a remarkable ability to recognise objects in images, no matter how they are depicted. We conduct experiments in matching, and conclude that the key to robustness lies in object description. The existing literature offers numerous feature descriptors that rely heavily on photometric properties such as colour and illumination to describe objects. Although these methods achieve high accuracy in applications such as detection and retrieval of photographs, they fail to generalise to datasets consisting of mixed depictions. Here, we propose a more general approach for describing objects invariant to depictive style. We use structure at a global level, combined with simple non-photometric descriptors at a local level, and no prior learning is required. Our descriptor achieves results on par with the existing state of the art when applied to object matching on a standard dataset consisting of photographs alone, and it outperforms the state of the art when applied to depiction-invariant object matching.
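The abstract contrasts photometric local descriptors with simple non-photometric ones but gives no implementation details here. Purely as an illustrative sketch, not the authors' method (the toy images and the histogram descriptor below are hypothetical), an edge-orientation histogram shows how a local descriptor that ignores colour and absolute intensity can agree across depictive styles where a photometric one would not:

```python
import numpy as np

def edge_orientation_histogram(img, bins=8):
    """Local description from edge orientations only: no colour and no
    absolute intensity, so the descriptor reflects shape rather than
    how the object happens to be rendered."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Fold orientations into [0, pi): invariant to contrast polarity.
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang[mag > 1e-6], bins=bins, range=(0.0, np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

# The "same object" (a vertical boundary) in two depictive styles:
# smooth photographic shading vs. a hard-edged, line-art rendering.
photo = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
sketch = (photo > 0.5).astype(float)

h_photo = edge_orientation_histogram(photo)
h_sketch = edge_orientation_histogram(sketch)
# Both histograms put all their mass in the same orientation bin,
# even though the two images differ strongly in photometry.
```

A raw intensity histogram over the same two images would differ sharply, which is the failure mode on mixed-depiction data that the abstract describes; the paper's actual descriptor additionally combines such local measurements with structure at a global level.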

Original language: English
DOI: 10.5244/C.26.56
Publication status: Published - 1 Jan 2012
Event: 2012 23rd British Machine Vision Conference, BMVC 2012 - Guildford, Surrey, United Kingdom
Duration: 3 Sep 2012 - 7 Sep 2012

Conference

Conference: 2012 23rd British Machine Vision Conference, BMVC 2012
Country: United Kingdom
City: Guildford, Surrey
Period: 3/09/12 - 7/09/12

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

Cite this

Balikai, A., & Hall, P. M. (2012). Depiction invariant object matching. Paper presented at 2012 23rd British Machine Vision Conference, BMVC 2012, Guildford, Surrey, United Kingdom. https://doi.org/10.5244/C.26.56

@conference{ef993613f4ec4195a6d2901c34216161,
title = "Depiction invariant object matching",
author = "Anupriya Balikai and Hall, {Peter M.}",
year = "2012",
month = "1",
day = "1",
doi = "10.5244/C.26.56",
language = "English",
note = "2012 23rd British Machine Vision Conference, BMVC 2012 ; Conference date: 03-09-2012 Through 07-09-2012",

}
