View Invariance for Human Action Recognition

Authors: Vasu Parameswaran, Rama Chellappa

Affiliation: (1) Center for Automation Research, University of Maryland, College Park, MD 20742-3275, USA; (2) Present address: Siemens Corporate Research, Princeton, NJ

Abstract: This paper presents an approach for viewpoint-invariant human action recognition, an area that has so far received scant attention relative to the overall body of work in human action recognition. It has been established previously that no invariants exist for 3D to 2D projection. However, there exists a wealth of techniques in 2D invariance that can be used to advantage in the 3D to 2D projection setting. We exploit these techniques and model actions in terms of view-invariant canonical body poses and trajectories in 2D invariance space, leading to a simple and effective way to represent and recognize human actions from a general viewpoint. We first evaluate the approach theoretically and show why a straightforward application of the 2D invariance idea will not work. We describe strategies designed to overcome the inherent problems in the straightforward approach and outline the recognition algorithm. We then present results on 2D projections of publicly available human motion capture data as well as on manually segmented real image sequences. In addition to robustness to viewpoint change, the approach is robust enough to handle different people, minor variabilities in a given action, and the speed of action (and hence frame rate), while encoding sufficient distinction among actions.

Note: This work was done when the author was a graduate student in the Department of Computer Science and was partially supported by NSF Grant ECS-02-5475. The author is currently with Siemens Corporate Research, Princeton, NJ. Dr. Chellappa is with the Department of Electrical and Computer Engineering.
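The abstract builds on classical 2D projective invariance. The paper's own representation is not specified here, but a minimal illustration of the underlying idea is the cross-ratio of four collinear points, which is unchanged by any 2D homography (e.g., a viewpoint change of a planar configuration). The sketch below uses an arbitrary, hypothetical homography `H`; it is not the authors' algorithm.

```python
# Illustrative sketch only (not the paper's method): the cross-ratio of four
# collinear points is a classical 2D projective invariant.

def apply_homography(H, p):
    """Map a 2D point p through a 3x3 homography H via homogeneous coordinates."""
    x, y = p
    w = [sum(H[i][j] * v for j, v in enumerate((x, y, 1.0))) for i in range(3)]
    return (w[0] / w[2], w[1] / w[2])

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear points, computed from pairwise distances."""
    return (dist(p1, p3) * dist(p2, p4)) / (dist(p1, p4) * dist(p2, p3))

# Four collinear points p = (t, 2t + 1) for t = 0, 1, 2, 4.
pts = [(t, 2.0 * t + 1.0) for t in (0.0, 1.0, 2.0, 4.0)]

# An arbitrary nonsingular homography, standing in for a viewpoint change.
H = [[1.1, 0.2, 0.3],
     [0.1, 0.9, -0.4],
     [0.01, 0.02, 1.0]]

before = cross_ratio(*pts)
after = cross_ratio(*[apply_homography(H, p) for p in pts])
print(abs(before - after) < 1e-9)  # True: the cross-ratio is preserved
```

Since full 3D-to-2D invariants do not exist, such 2D invariants can only be applied to suitably chosen (e.g., planar or canonical) pose configurations, which is the difficulty the paper's strategies address.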
| |
Keywords: human action recognition; 2D invariance; invariance space; trajectories
This document is indexed in SpringerLink and other databases.