Gaze movement inference for user adapted image annotation and retrieval

S.N.H. Mirza, E. Izquierdo, M. Proulx

Research output: Chapter or section in a book/report/conference proceeding › Other chapter contribution

4 Citations (SciVal)


In media personalisation, the media provider needs to receive feedback from its users in order to adapt the media content used for interaction. At the current stage this feedback is limited to mouse clicks and keyboard entries. This report explores possible solutions for including a user's gaze movements as a form of feedback for media personalisation and adaptation. Features are extracted from the gaze trajectory of users while they search an image database for a Target Concept (TC). These features are used to measure a user's visual attention to every image that appears on the screen, called the user interest level (UIL). Because different people react differently to the same content, a new adapted processing interface is developed automatically for every new user. On average, our interface could detect 10% of the images belonging to the TC class with no error, and it could identify 40% of them with only 20% error. We show in this paper that gaze movement is a reliable form of feedback for measuring a user's interest in images, which helps to personalise image annotation and retrieval.
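The abstract does not specify which gaze features or scoring rule the paper uses. As a toy illustration of the general idea only, the sketch below estimates a per-image user interest level (UIL) from fixation durations and flags images whose UIL passes a threshold as likely Target Concept (TC) matches; the choice of feature (total fixation time), the normalisation, and the threshold value are all assumptions made for this example, not the paper's method.

```python
# Hypothetical sketch (not the paper's actual feature set or classifier):
# estimate a per-image "user interest level" (UIL) from gaze fixations,
# then flag images whose UIL exceeds a threshold as likely TC matches.
from collections import defaultdict


def user_interest_levels(fixations):
    """fixations: iterable of (image_id, fixation_duration_ms) pairs.

    Returns a dict mapping image_id -> UIL in [0, 1], where UIL is the
    image's total fixation time normalised by the maximum total over
    all images shown on screen (an assumed, illustrative definition).
    """
    totals = defaultdict(float)
    for image_id, duration_ms in fixations:
        totals[image_id] += duration_ms
    peak = max(totals.values(), default=0.0)
    if peak == 0.0:
        return {image_id: 0.0 for image_id in totals}
    return {image_id: t / peak for image_id, t in totals.items()}


def likely_targets(fixations, threshold=0.6):
    """Image ids whose UIL passes a (hypothetical) interest threshold."""
    uil = user_interest_levels(fixations)
    return sorted(i for i, level in uil.items() if level >= threshold)
```

For example, a user who fixates 1200 ms in total on one image and 120 ms on another would yield UILs of 1.0 and 0.1, so only the first image would be flagged at a 0.6 threshold. The paper's per-user adaptation would correspond to tuning such a threshold (and the features) separately for each user.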
Original language: English
Title of host publication: MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops - ACM Workshop on Social Behavioural Networked Media Access 2011, SBNMA'11
Number of pages: 6
Publication status: Published - 2011
Event: MM '11 ACM Multimedia Conference - Scottsdale, AZ, United States
Duration: 28 Nov 2011 - 1 Dec 2011


Conference: MM '11 ACM Multimedia Conference
Country/Territory: United States
City: Scottsdale, AZ

