Human action recognition from video clips has received increasing attention in recent years. This paper proposes a simple yet effective method for action recognition. The method encodes human actions using a quantized vocabulary of averaged silhouettes, which are derived from space-time windowed shapes and implicitly capture both local temporal motion and global body shape. Experimental results on the publicly available Weizmann dataset demonstrate that, despite its simplicity, our method is effective for recognizing actions and is comparable to other state-of-the-art methods.
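The averaged-silhouette representation described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the paper's actual implementation: the function names, the temporal window length, and the nearest-codeword quantization step (e.g. against a k-means vocabulary) are all assumptions introduced here for clarity.

```python
import numpy as np

def averaged_silhouettes(silhouettes, window):
    """Average binary silhouette masks over sliding temporal windows.

    silhouettes: (T, H, W) array of 0/1 foreground masks, one per frame.
    Returns a (T - window + 1, H, W) array; each averaged silhouette
    blurs motion within the window (local temporal motion) while
    preserving the overall body outline (global shape).
    """
    T = silhouettes.shape[0]
    return np.stack([silhouettes[t:t + window].mean(axis=0)
                     for t in range(T - window + 1)])

def quantize(features, vocabulary):
    """Assign each flattened averaged silhouette to its nearest codeword.

    vocabulary: (K, H*W) array of codewords, e.g. learned by k-means
    (an assumed vocabulary-building step, for illustration only).
    Returns an (N,) array of codeword indices.
    """
    feats = features.reshape(len(features), -1)                  # (N, H*W)
    dists = ((feats[:, None, :] - vocabulary[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

A clip is then summarized as a histogram over these codeword indices, which a standard classifier can compare across action classes.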
|Name|Proceedings - International Conference on Pattern Recognition|
|Publisher|Institute of Electrical and Electronics Engineers|
|Conference|2010 20th International Conference on Pattern Recognition (ICPR 2010), August 23-26, 2010|
|Period|1/08/10 → …|