Encoding actions via quantized vocabulary of averaged silhouettes

Liang Wang, Christopher Leckie

Research output: Chapter or section in a book/report/conference proceeding


Abstract

Human action recognition from video clips has received increasing attention in recent years. This paper proposes a simple yet effective method for the problem of action recognition. The method aims to encode human actions using the quantized vocabulary of averaged silhouettes that are derived from space-time windowed shapes and implicitly capture local temporal motion as well as global body shape. Experimental results on the publicly available Weizmann dataset have demonstrated that, despite its simplicity, our method is effective for recognizing actions, and is comparable to other state-of-the-art methods.
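The pipeline described in the abstract (averaging silhouettes over space-time windows, quantizing them into a visual vocabulary, and encoding each clip against that vocabulary) can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the authors' implementation: the window size, step, vocabulary size, k-means quantizer, and histogram encoding are all generic choices used here only to make the idea concrete.

```python
import numpy as np
from sklearn.cluster import KMeans

def averaged_silhouettes(silhouettes, window=10, step=5):
    """Average binary silhouette frames over sliding space-time windows.

    silhouettes: array of shape (T, H, W) with values in {0, 1}.
    Returns an array of shape (num_windows, H * W), where each row is a
    grey-level averaged silhouette capturing local motion and body shape.
    """
    feats = []
    for start in range(0, len(silhouettes) - window + 1, step):
        avg = silhouettes[start:start + window].mean(axis=0)
        feats.append(avg.ravel())
    return np.asarray(feats)

def build_vocabulary(all_feats, k=50, seed=0):
    """Quantize averaged-silhouette descriptors into a k-word vocabulary via k-means."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(np.vstack(all_feats))

def encode_clip(feats, vocab):
    """Represent one clip as a normalized histogram of visual-word assignments."""
    words = vocab.predict(feats)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```

Under this sketch, a clip's histogram could then be classified with any standard classifier (e.g., nearest neighbour between histograms); the paper itself should be consulted for the actual descriptor, vocabulary size, and classification scheme used.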
Original language: English
Title of host publication: Proceedings - 2010 20th International Conference on Pattern Recognition, ICPR 2010
Publisher: IEEE
Pages: 3657-3660
Number of pages: 4
ISBN (Electronic): 978-1-4244-7541-4
ISBN (Print): 978-1-4244-7542-1
DOIs
Publication status: Published - Aug 2010
Event: 2010 20th International Conference on Pattern Recognition, ICPR 2010, August 23, 2010 - August 26, 2010 - Istanbul, Turkey
Duration: 1 Aug 2010 → …

Publication series

Name: Proceedings - International Conference on Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers

Conference

Conference: 2010 20th International Conference on Pattern Recognition, ICPR 2010, August 23, 2010 - August 26, 2010
Country/Territory: Turkey
City: Istanbul
Period: 1/08/10 → …
