SEMBED: Semantic Embedding of Egocentric Action Videos

Michael Wray, Davide Moltisanti, Walterio Mayol-Cuevas, Dima Damen

Research output: Chapter in a published conference proceeding


Abstract

We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using an unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.
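The abstract only outlines the method, so below is a minimal, hypothetical sketch of the nearest-neighbour flavour of the idea: a query video's visual neighbours vote for labels, and each vote is spread over semantically related verbs before being normalised into a distribution. Everything here — the `cosine` and `label_distribution` names, the toy `SEM_SIM` table, and the random vectors standing in for motion/appearance descriptors — is an illustrative assumption, not the paper's implementation, which builds an explicit semantic-visual graph over the annotated videos.

```python
import numpy as np
from collections import defaultdict

# Toy semantic similarity between verb labels (hypothetical values;
# SEMBED captures such relationships from the free annotations themselves).
SEM_SIM = {
    ("take", "take"): 1.0, ("grab", "grab"): 1.0, ("open", "open"): 1.0,
    ("take", "grab"): 0.8, ("grab", "take"): 0.8,
    ("take", "open"): 0.1, ("open", "take"): 0.1,
    ("grab", "open"): 0.1, ("open", "grab"): 0.1,
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def label_distribution(query, feats, labels, k=3):
    """Estimate a probability distribution over labels for a query video.

    Each of the k visually nearest annotated videos votes with its
    similarity; the vote is spread across semantically related labels.
    """
    sims = np.array([cosine(query, f) for f in feats])
    neighbours = np.argsort(sims)[::-1][:k]  # indices of k most similar videos
    scores = defaultdict(float)
    for i in neighbours:
        for lab in set(labels):
            scores[lab] += max(sims[i], 0.0) * SEM_SIM.get((labels[i], lab), 0.0)
    z = sum(scores.values()) or 1.0  # guard against an all-zero vote
    return {lab: s / z for lab, s in scores.items()}

# Demo with random stand-ins for motion/appearance features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 16))
labels = ["take", "grab", "take", "open", "grab", "open"]
query = feats[0] + 0.1 * rng.normal(size=16)  # a video visually close to a "take"
print(label_distribution(query, feats, labels))
```

The point of the sketch is the output type: a distribution over semantically related verbs rather than a single hard class, mirroring how SEMBED embraces ambiguous free-form annotations instead of forcing one label per video.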
Original language: English
Title of host publication: European Conference on Computer Vision - Workshops
Pages: 532-545
DOIs:
Publication status: Published - 18 Sept 2016

