Mixed graph convolution and residual transformation network for skeleton-based action recognition

Authors: Liu Shuhua, Bai Xiaoying, Fang Ming, Li Lanting, Hung Chih-Cheng

Affiliations: 1. School of Information Science and Technology, Northeast Normal University, Changchun, China; 2. Center for Machine Vision and Security Research, Kennesaw State University, Marietta, GA, USA

Abstract: Action recognition based on the human skeleton is an extremely challenging research problem, since the temporal information contained in the skeleton is more difficult to extract than the spatial information. Many researchers have applied graph convolutional networks to action recognition. In this study, a two-stream action recognition network called RNXt-GCN is proposed, built on the Spatial-Temporal Graph Convolutional Network (ST-GCN). The human skeleton is first converted into a spatial-temporal graph and a SkeleMotion image, which are fed into ST-GCN and ResNeXt, respectively, for spatial-temporal convolution; the resulting features are then fused. The proposed method models the temporal information of an action through its amplitude and direction, addressing the isolation of temporal information in ST-GCN. Experiments are performed comprehensively on four datasets: 1) UTD-MHAD, 2) Northwestern-UCLA, 3) NTU RGB+D 60, and 4) NTU RGB+D 120. The proposed model shows very competitive results compared with other models, and on the NTU RGB+D 120 dataset it outperforms state-of-the-art two-stream models.

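The abstract describes a two-stream design: a graph stream that convolves over the skeleton's joint graph, and an image stream (ResNeXt over a SkeleMotion image) whose outputs are fused with the graph stream's. As a rough illustration of the two underlying operations, the sketch below shows a single spatial graph-convolution step over a toy skeleton and a weighted late fusion of the two streams' class scores. The 5-joint skeleton, the fusion weight, and the use of score-level (rather than feature-level) fusion are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hypothetical 5-joint skeleton: edges of a toy kinematic tree (assumption).
edges = [(0, 1), (1, 2), (1, 3), (0, 4)]
n_joints = 5

# Symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2},
# the standard propagation rule for graph convolution over a skeleton graph.
A = np.eye(n_joints)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

def graph_conv(x, w):
    """One spatial graph-convolution step: aggregate neighbor features via
    A_hat, then apply a shared linear map and ReLU.
    x: (n_joints, c_in), w: (c_in, c_out)."""
    return np.maximum(A_hat @ x @ w, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse(scores_gcn, scores_resnext, alpha=0.5):
    """Late fusion of the two streams' class scores by a weighted sum of
    softmax probabilities (the fusion weight alpha is an assumption)."""
    return alpha * softmax(scores_gcn) + (1 - alpha) * softmax(scores_resnext)

rng = np.random.default_rng(0)
x = rng.standard_normal((n_joints, 3))   # per-joint 3-D coordinates
w = rng.standard_normal((3, 4))
feat = graph_conv(x, w)                  # (5, 4) spatial features
fused = fuse(np.array([1.0, 2.0, 0.5]), np.array([0.8, 2.5, 0.1]))
```

In the actual model the graph stream additionally convolves along the temporal axis and the image stream is a full ResNeXt; this sketch only isolates the neighbor-aggregation and fusion ideas.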
This article is indexed in SpringerLink and other databases.
|