Similar Documents
20 similar documents found (search took 31 ms)
3.
Action recognition based solely on video data is known to be very sensitive to background activity, and it also lacks the ability to discriminate complex 3D motion. With the development of commercial depth cameras, skeleton-based action recognition is becoming increasingly popular. However, the skeleton-based approach remains very challenging because of the large variation in human actions and temporal dynamics. In this paper, we propose a hierarchical model for action recognition. To handle confusing motions, a motion-based grouping method is proposed that efficiently assigns each video a group label; for each group, a pre-trained classifier is then used for frame labeling. Unlike previous methods, we adopt a bottom-up approach that first performs action recognition for each frame. The final action label is obtained by fusing the classifications of its frames, with the contribution of each frame adaptively adjusted based on its local properties. To achieve online real-time performance and suppress noise, a bag-of-words model is used to represent the classification features. The proposed method is evaluated on two challenging datasets captured by a Kinect. Experiments show that our method can robustly recognize actions in real time.
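The bag-of-words representation mentioned in this abstract can be sketched as follows: each per-frame descriptor is quantized to its nearest codeword and the video is summarized as a normalized codeword histogram. This is a minimal illustration with toy NumPy arrays; the descriptors, codebook size, and random data below are assumptions for demonstration, not the paper's actual features.

```python
import numpy as np

def bag_of_words(frame_features, codebook):
    # Assign each frame descriptor to its nearest codeword (L2 distance)
    dists = np.linalg.norm(frame_features[:, None, :] - codebook[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    # Histogram of codeword counts, normalized to a distribution over the codebook
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy example: 5 frame descriptors of dimension 4, codebook of 3 words
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))
codebook = rng.normal(size=(3, 4))
h = bag_of_words(feats, codebook)
```

A classifier (e.g. a linear SVM) would then operate on `h` rather than on raw frames, which is what makes the representation cheap enough for online use.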

4.
Action recognition over large numbers of categories in unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered backgrounds, occlusions, bad illumination conditions, and the poor quality of web videos cause the majority of state-of-the-art action recognition approaches to fail. The increased number of categories and the inclusion of easily confused actions add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset of videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important cue for action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach performs well on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.
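The early/late fusion combination this abstract refers to can be sketched in a few lines: early fusion concatenates per-feature descriptors before classification, while late fusion combines the per-feature classifier scores. The feature names and score values below are illustrative placeholders, not the paper's actual pipeline.

```python
import numpy as np

def early_fusion(feature_list):
    # Early fusion: concatenate per-feature descriptors into one long vector,
    # classified by a single model
    return np.concatenate(feature_list)

def late_fusion(score_list, weights=None):
    # Late fusion: weighted average of per-feature classifier scores
    scores = np.stack(score_list)
    if weights is None:
        weights = np.full(len(score_list), 1.0 / len(score_list))
    return np.average(scores, axis=0, weights=weights)

motion = np.array([0.2, 0.5, 0.3])  # hypothetical class scores from motion features
scene = np.array([0.1, 0.7, 0.2])   # hypothetical class scores from scene context
fused = late_fusion([motion, scene])
```

Late fusion keeps one classifier per feature channel, which scales more gracefully to many categories than retraining on a single very high-dimensional early-fused vector.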

5.
Objective: Human action recognition is an important research topic in computer vision. Recognizing human actions in videos of natural environments is difficult owing to complex backgrounds, camera shake, and other factors. To address these problems, we propose a human action recognition algorithm based on salient robust trajectories. Method: The algorithm uses dense optical flow to track salient feature points across multiple spatial scales, and describes the salient trajectories using histograms of oriented gradients (HOG), histograms of optical flow (HOF), and motion boundary histograms (MBH). To effectively eliminate the influence of camera motion, a camera-motion estimation technique based on adaptive background segmentation is used to enhance the robustness of the salient trajectories. For each feature type, a Fisher Vector model then encodes each video as a Fisher vector, and a linear support vector machine classifies the videos. Results: On four public datasets, the salient-trajectory algorithm outperforms the dense-trajectory algorithm by 1% on average. With camera-motion removal added, the salient robust trajectory algorithm outperforms the salient-trajectory algorithm by a further 2% on average. On the four datasets (Hollywood2, YouTube, Olympic Sports, and UCF50), the salient robust trajectory algorithm achieves 65.8%, 91.6%, 93.6%, and 92.1%, exceeding the previous best results by 1.5%, 2.6%, 2.5%, and 0.9%, respectively. Conclusion: The experimental results show that the algorithm effectively recognizes human actions in videos of natural environments and has low time complexity.
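The Fisher Vector encoding used in this abstract can be sketched for its most common component, the gradient with respect to the GMM means under a diagonal-covariance model. This is a simplified, illustrative implementation under stated assumptions (no gradient terms for weights or variances, no power/L2 normalization); the variable names are mine, not the paper's.

```python
import numpy as np

def fisher_vector_means(X, priors, means, sigmas):
    # X: (N, D) local descriptors; priors: (K,); means, sigmas: (K, D)
    # Posterior responsibilities of each Gaussian for each descriptor
    diff = X[:, None, :] - means[None, :, :]                     # (N, K, D)
    log_p = -0.5 * np.sum((diff / sigmas[None]) ** 2, axis=2)    # (N, K)
    log_p += np.log(priors)[None, :] - np.sum(np.log(sigmas), axis=1)[None, :]
    log_p -= log_p.max(axis=1, keepdims=True)                    # numerical stability
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Normalized gradient with respect to the means, averaged over descriptors
    N = X.shape[0]
    fv = (gamma[:, :, None] * (diff / sigmas[None])).sum(axis=0)  # (K, D)
    fv /= N * np.sqrt(priors)[:, None]
    return fv.ravel()                                             # length K * D

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
fv = fisher_vector_means(X, np.array([0.5, 0.5]),
                         rng.normal(size=(2, 3)), np.ones((2, 3)))
```

In practice each feature channel (HOG, HOF, MBH) gets its own GMM and Fisher vector, and the vectors feed a linear SVM as the abstract describes.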

11.
Matching actions in presence of camera motion (total citations: 1; self-citations: 0; citations by others: 1)
When the camera viewing an action is moving, the motion observed in the video contains not only the motion of the actor but also the motion of the camera. At each time instant, in addition to the camera motion, a different view of the action is observed. In this paper, we propose a novel method to perform action recognition in the presence of camera motion. The proposed method is based on the epipolar geometry between any two views. However, instead of relating two static views using the standard fundamental matrix, we model the motions of independently moving cameras in the equations governing the epipolar geometry and derive a new relation, which we refer to as the “temporal fundamental matrix.” Using the temporal fundamental matrix, a matching score between two actions is computed by evaluating the quality of the recovered geometry. We demonstrate the versatility of the proposed approach for action recognition in a number of challenging sequences.
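The geometric machinery behind this abstract starts from the standard epipolar constraint x₂ᵀ F x₁ = 0, whose residuals measure how well correspondences fit a fundamental matrix F; the paper's "temporal fundamental matrix" generalizes this to moving cameras, which is not reproduced here. The following is a minimal sketch of the static constraint only, with an assumed toy F for a pure horizontal translation.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    # Homogenize 2D points and evaluate x2^T F x1 for each correspondence;
    # residuals near zero mean the pair is consistent with the geometry F
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])
    x2 = np.hstack([pts2, ones])
    return np.einsum('ij,jk,ik->i', x2, F, x1)

# Toy F for pure translation along x (F = [t]_x with t = (1, 0, 0)):
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
pts1 = np.array([[1.0, 2.0], [3.0, 4.0]])
pts2 = pts1 + np.array([0.5, 0.0])   # points shift horizontally only
res = epipolar_residuals(F, pts1, pts2)
```

A matching score in the spirit of the paper would aggregate such residuals over tracked body points across two action sequences.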

14.
Recognizing human actions accurately in less computational time is an important aspect of practical usage. This paper presents an efficient framework for recognizing actions with an RGB-D camera. The novel action patterns in the framework are extracted by computing the position offsets of 3D skeletal body joints locally within the temporal extent of the video. Action recognition is then performed by assembling these offset vectors using a bag-of-words framework, while also accounting for the spatial independence of body joints. We conducted extensive experiments on two benchmark datasets, the UCF dataset and the MSRC-12 dataset, to demonstrate the effectiveness of the proposed framework. Experimental results suggest that the proposed framework 1) extracts action patterns very quickly and is simple to implement; and 2) achieves comparable or better recognition accuracy than state-of-the-art approaches.
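The position-offset pattern described above is cheap to compute: for a sequence of skeleton poses, the descriptor of each frame is simply the per-joint displacement over a small temporal step. A minimal sketch, with an assumed `(T, J, 3)` array layout for T frames of J joints:

```python
import numpy as np

def joint_offsets(skeleton_seq, step=1):
    # skeleton_seq: (T, J, 3) array of 3D joint positions over T frames.
    # The position offset of each joint over `step` frames, flattened per frame,
    # serves as the local motion pattern.
    offsets = skeleton_seq[step:] - skeleton_seq[:-step]
    return offsets.reshape(len(offsets), -1)   # (T - step, J * 3)

# Toy sequence: 2 frames, 3 joints, linearly drifting coordinates
seq = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
off = joint_offsets(seq)
```

These offset vectors would then be quantized into a bag-of-words histogram (per joint, to preserve spatial independence) before classification.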

19.
Current state-of-the-art action classification methods aggregate space–time features globally, from the entire video clip under consideration. However, the features extracted may in part be due to irrelevant scene context, or to movements shared amongst multiple action classes. This motivates learning with local discriminative parts, which can help localise which parts of the video are significant. Exploiting spatio-temporal structure in the video should also improve results, just as deformable part models have proven highly successful in object recognition. However, whereas objects have clear boundaries, which means we can easily define a ground truth for initialisation, 3D space–time actions are inherently ambiguous and expensive to annotate in large datasets. It is therefore desirable to adapt pictorial star models to action datasets without location annotation, and to features invariant to changes in pose, such as bag-of-features and Fisher vectors, rather than low-level HoG. Thus, we propose local deformable spatial bag-of-features, in which local discriminative regions are split into a fixed grid of parts that are allowed to deform in both space and time at test time. In our experimental evaluation we demonstrate that by using local space–time action parts in a weakly supervised setting, we are able to achieve state-of-the-art classification performance, whilst being able to localise actions even in the most challenging video datasets.
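The deformation idea in star-model part scoring can be sketched as follows: each part searches a neighbourhood for the placement that maximises its appearance score minus a quadratic penalty for drifting from its anchor. This is a generic illustrative sketch of that mechanism, not the paper's exact formulation; the score map, anchor, and `alpha` penalty weight are assumptions.

```python
import numpy as np

def best_part_placement(score_map, anchor, alpha=0.1):
    # For one part: maximise appearance score minus a quadratic
    # deformation cost from the anchor cell (star-model style)
    h, w = score_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    deform = alpha * ((ys - anchor[0]) ** 2 + (xs - anchor[1]) ** 2)
    total = score_map - deform
    idx = np.unravel_index(total.argmax(), total.shape)
    return idx, total[idx]

# Toy appearance scores: one strong response away from the anchor
score = np.zeros((4, 4))
score[2, 2] = 5.0
idx, val = best_part_placement(score, anchor=(0, 0), alpha=0.1)
```

At test time each grid part deforms independently in both space and time; the whole-model score is the sum of the per-part maxima.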

20.
An improved human action recognition algorithm based on hybrid features (total citations: 1; self-citations: 0; citations by others: 1)
The choice of motion features directly affects the performance of human action recognition methods. A single feature is affected differently by human appearance, the environment, camera settings, and other factors; its applicable scope varies, and its recognition performance is limited. Building on a study of human action representation and recognition, and taking full account of the strengths and weaknesses of different features, this paper proposes a hybrid feature that combines global silhouette features with local optical-flow features, and applies it to human action recognition. Experimental results show that the algorithm achieves excellent recognition results, reaching a 100% correct recognition rate on the actions in the Weizmann dataset.
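A global-plus-local hybrid descriptor of the kind this abstract describes can be sketched by concatenating a coarse silhouette occupancy grid with a histogram of optical-flow orientations. The grid size, bin count, and array layouts below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def hybrid_feature(silhouette, flow):
    # silhouette: (H, W) binary mask; flow: (H, W, 2) optical-flow field.
    # H and W are assumed divisible by 4 for the coarse grid.
    H, W = silhouette.shape
    # Global shape cue: silhouette occupancy on a coarse 4x4 grid
    grid = silhouette.reshape(4, H // 4, 4, W // 4).mean(axis=(1, 3)).ravel()
    # Local motion cue: histogram of flow orientations (8 bins)
    angles = np.arctan2(flow[..., 1], flow[..., 0]).ravel()
    hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    hist = hist / max(hist.sum(), 1)
    # Hybrid descriptor: concatenation of global and local cues
    return np.concatenate([grid, hist])

# Toy frame: a square silhouette and uniform rightward flow
sil = np.zeros((8, 8))
sil[2:6, 2:6] = 1.0
flow = np.zeros((8, 8, 2))
flow[..., 0] = 1.0
f = hybrid_feature(sil, flow)
```

Concatenation is the simplest way to let a single classifier weigh the global shape cue against the local motion cue.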
