Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
2.
Action recognition across a large number of categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, differing viewpoints, large inter-class variation, cluttered backgrounds, occlusion, bad illumination, and the poor quality of web videos cause the majority of state-of-the-art action recognition approaches to fail. The increased number of categories and the inclusion of easily confused actions add to the difficulty. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50-action) dataset of web videos. We apply a combination of early and late fusion to multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature for action recognition on very large datasets. The proposed method requires no video stabilization, person detection, or tracking and pruning of features. Our approach performs well on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, an extension of the UCF YouTube Action (UCF11) dataset with 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.
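The early-and-late fusion scheme described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature dimensions, the random data, the equal late-fusion weights, and the scikit-learn SVC classifier are all assumptions; only the fusion mechanics follow the abstract.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for pre-extracted per-video descriptors (illustrative only).
labels = np.repeat(np.arange(50), 10)        # 50 action categories, 10 videos each
n = len(labels)
motion_feats = np.random.rand(n, 128)        # e.g., motion descriptors
scene_feats = np.random.rand(n, 64)          # e.g., scene-context descriptors from key frames

# Early fusion: concatenate feature vectors, train a single classifier.
early = np.hstack([motion_feats, scene_feats])
clf_early = SVC(probability=True).fit(early, labels)

# Late fusion: one classifier per feature type, average the class probabilities.
clf_m = SVC(probability=True).fit(motion_feats, labels)
clf_s = SVC(probability=True).fit(scene_feats, labels)
probs = (clf_m.predict_proba(motion_feats) + clf_s.predict_proba(scene_feats)) / 2
pred = clf_m.classes_[probs.argmax(axis=1)]  # fused prediction per video
```

The two schemes are complementary: early fusion lets the classifier model cross-feature correlations, while late fusion keeps each feature's classifier simple and robust.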

3.
4.

Human action recognition from silhouette images has wide applications in computer vision, human-computer interaction, and intelligent surveillance. It is a challenging task due to the complexity of natural human actions. In this paper, a human action recognition method is proposed based on distance transform and entropy features of human silhouettes. In the first stage, background subtraction is performed with a correlation-coefficient-based frame-difference technique to extract silhouette images. In the second stage, distance transform based features and entropy features are extracted from the silhouette images; these provide shape and local-variation information, respectively. The features are fed to neural networks to recognize various human actions. The proposed method is tested on three datasets, namely Weizmann, KTH, and UCF50, obtaining accuracies of 92.5%, 91.4%, and 80%, respectively. The experimental results show that the proposed method is comparable to other state-of-the-art human action recognition methods.

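A minimal sketch of the two feature families named above, assuming a binary silhouette image as input. The 4x4 grid pooling, bin count, and per-cell statistics are illustrative choices, not necessarily the authors' exact layout.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def silhouette_features(silhouette, grid=(4, 4), bins=16):
    """Distance-transform and entropy features from a binary silhouette."""
    # Each foreground pixel gets its Euclidean distance to the nearest background pixel.
    dist = distance_transform_edt(silhouette)
    h, w = silhouette.shape
    gh, gw = h // grid[0], w // grid[1]
    dt_feats, ent_feats = [], []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = dist[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            dt_feats.append(cell.mean())                # shape cue
            hist, _ = np.histogram(cell, bins=bins)
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            ent_feats.append(-(p * np.log2(p)).sum())   # local-variation cue (Shannon entropy)
    return np.array(dt_feats + ent_feats)

# Toy 64x64 silhouette: a filled rectangle as "foreground".
sil = np.zeros((64, 64), dtype=bool)
sil[16:48, 24:40] = True
print(silhouette_features(sil).shape)   # (32,) = 16 cells x 2 feature types
```

The resulting vector would then be fed to a neural network classifier, as the abstract describes.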

5.
6.
This paper presents a novel and efficient framework for human action recognition based on modeling the motion of human body parts. Intuitively, a collective understanding of body-part movements can lead to a better understanding and representation of any human action. We propose a generative representation of body-part motion to learn and classify human actions. The proposed representation combines the advantages of both local and global representations, encoding the relevant motion information while remaining robust to local appearance changes. Our work is motivated by the pictorial structures model and the framework of sparse representations for recognition. Body-part movements are represented efficiently through quantization in the polar space, and the key discrimination within each action is encoded by sparse representation-based classification. The framework is evaluated on both the KTH and UCF Sports action datasets, and the results are compared against several state-of-the-art methods.
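The quantization-in-polar-space step can be sketched as follows, under the assumption that each body part contributes one 2-D displacement vector per frame; the bin counts and radius cap are illustrative.

```python
import numpy as np

def polar_histogram(displacements, n_angle=8, n_radius=3, r_max=20.0):
    """Quantize 2-D body-part displacements into an angle x radius histogram."""
    dx, dy = displacements[:, 0], displacements[:, 1]
    theta = np.arctan2(dy, dx)                       # direction of motion, [-pi, pi]
    r = np.minimum(np.hypot(dx, dy), r_max - 1e-9)   # magnitude, clipped to last bin
    a_bin = ((theta + np.pi) / (2 * np.pi) * n_angle).astype(int) % n_angle
    r_bin = (r / r_max * n_radius).astype(int)
    hist = np.zeros((n_angle, n_radius))
    np.add.at(hist, (a_bin, r_bin), 1)
    return hist.ravel() / max(len(displacements), 1)  # normalized descriptor

# Toy example: per-frame displacements of one body part over 100 frames.
disp = np.random.randn(100, 2) * 5
print(polar_histogram(disp).shape)   # (24,) = 8 angle bins x 3 radius bins
```

Concatenating such histograms over body parts gives a compact motion descriptor that a sparse-representation classifier can then discriminate.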

7.
8.

Deep learning models have attained great success in an extensive range of computer vision applications, including image and video classification. However, the complex architecture of the most recently developed networks imposes memory and computational resource limitations, especially for human action recognition applications. Unsupervised deep convolutional neural networks such as PCANet can alleviate these limitations and significantly reduce the computational complexity of the whole recognition system. In this work, instead of using a 3D convolutional neural network architecture to learn temporal features of video actions, the unsupervised convolutional PCANet model is extended into PCANet-TOP, which effectively learns spatiotemporal features from Three Orthogonal Planes (TOP). For each video sequence, spatial frames (XY) and temporal planes (XT and YT) are used to train three different PCANet models. The learned features are then fused, after reducing their dimensionality with whitening PCA, to obtain a spatiotemporal feature representation of the action video. Finally, a Support Vector Machine (SVM) classifier performs the action classification. The proposed method is evaluated on four well-known benchmark datasets, namely the Weizmann, KTH, UCF Sports, and YouTube action datasets. The recognition results show that PCANet-TOP provides discriminative and complementary features from the three orthogonal planes and achieves promising results comparable to state-of-the-art methods.

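The Three Orthogonal Planes idea reduces to plain array slicing when a video is stored as a (T, H, W) array; the sketch below shows only the slicing, not PCANet training or the whitening-PCA fusion.

```python
import numpy as np

def three_orthogonal_planes(video):
    """Slice a (T, H, W) grayscale video into XY, XT, and YT plane stacks."""
    xy = video                        # spatial frames: one H x W image per time step
    xt = video.transpose(1, 0, 2)     # for each row y: a T x W temporal slice
    yt = video.transpose(2, 0, 1)     # for each column x: a T x H temporal slice
    return xy, xt, yt

video = np.random.rand(30, 120, 160)  # toy clip: 30 frames of 120 x 160 pixels
xy, xt, yt = three_orthogonal_planes(video)
print(xy.shape, xt.shape, yt.shape)   # (30, 120, 160) (120, 30, 160) (160, 30, 120)
# One PCANet model would then be trained per plane stack, and the learned
# features fused after whitening PCA, as the abstract describes.
```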

9.
10.
11.
A variety of recognition architectures based on deep convolutional neural networks have been devised for labeling videos of human motion with action labels. However, most works so far cannot properly handle the temporal dynamics encoded in multiple contiguous frames, which is what distinguishes action recognition from other recognition tasks. This paper develops a temporal extension of convolutional neural networks that exploits motion-dependent features for recognizing human action in video. Our approach differs from other recent attempts in that it uses multiplicative interactions between convolutional outputs to describe motion information across contiguous frames. Interestingly, a representation of image content emerges while motion patterns are being extracted, so the model effectively incorporates both when analyzing video. Theoretical analysis further shows that motion- and content-dependent features arise simultaneously from the developed architecture, whereas previous works mostly treat the two separately. Our architecture is trained and evaluated on the standard KTH and UCF101 action benchmarks, where it matches the state of the art and has distinct advantages over previous attempts to use deep convolutional architectures for action recognition.
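A toy numpy sketch of the multiplicative-interaction idea: elementwise products of convolutional responses from contiguous frames respond strongly where the same pattern persists across frames. The filter bank, sizes, and random data are illustrative; the real model learns its filters end to end.

```python
import numpy as np
from scipy.signal import convolve2d

def motion_interaction(frame_t, frame_t1, filters):
    """Multiply conv responses of two contiguous frames, filter by filter."""
    maps = []
    for f in filters:
        r_t = convolve2d(frame_t, f, mode='valid')
        r_t1 = convolve2d(frame_t1, f, mode='valid')
        maps.append(r_t * r_t1)   # large where both frames respond to the filter
    return np.stack(maps)

filters = [np.random.randn(5, 5) for _ in range(8)]  # stand-in filter bank
f0, f1 = np.random.rand(64, 64), np.random.rand(64, 64)
print(motion_interaction(f0, f1, filters).shape)      # (8, 60, 60)
```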

12.
Human action recognition is a challenging computer vision task, and many efforts have been made to improve performance. Most previous work has concentrated on hand-crafted features or spatial-temporal features learned from multiple contiguous frames. In this paper, we present a dual-channel model that decouples spatial and temporal feature extraction. More specifically, we capture complementary static form information from single frames and dynamic motion information from multi-frame differences in two separate channels. In both channels we use two stacked classical subspace networks to learn hierarchical representations, which are subsequently fused for action recognition. Our model is trained and evaluated on three typical benchmarks: the KTH, UCF, and Hollywood2 datasets. The experimental results show that our approach achieves performance comparable to state-of-the-art methods. In addition, feature analysis and control experiments demonstrate the effectiveness of the proposed approach for feature extraction and thereby action recognition.
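The two input streams of the dual-channel model can be sketched directly; this shows only how the static and dynamic inputs are formed, not the stacked subspace networks that process them.

```python
import numpy as np

def dual_channel_inputs(video):
    """Static channel: a single frame. Dynamic channel: frame differences."""
    static = video[len(video) // 2]            # form information from one frame
    dynamic = np.abs(np.diff(video, axis=0))   # motion information across frames
    return static, dynamic

video = np.random.rand(16, 80, 80)             # toy clip: 16 frames of 80 x 80
static, dynamic = dual_channel_inputs(video)
print(static.shape, dynamic.shape)             # (80, 80) (15, 80, 80)
```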

13.
Because global motion features are difficult to extract accurately, this paper represents human actions with local spatio-temporal features. To address the large quantization error of hard assignment in the traditional bag-of-words model, a soft-assignment method inspired by fuzzy clustering is proposed. Visual words are extracted from videos with an interest-point detection algorithm and clustered with the K-means algorithm to build a codebook. When computing the classification features, the distance from each visual word to every codeword in the codebook is computed first; from these distances, the probability that the visual word belongs to each codeword is derived; finally, the frequency of each codeword in every video is accumulated. The proposed human action recognition algorithm is validated on the Weizmann and KTH datasets, where its recognition rate exceeds that of the traditional bag-of-words algorithm by 8% and 9%, respectively, showing that the proposed algorithm recognizes human actions more effectively.
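The soft-assignment step can be sketched as follows; the exponential membership function and its beta parameter are one common fuzzy weighting, offered here as an illustrative assumption rather than the paper's exact formula.

```python
import numpy as np
from sklearn.cluster import KMeans

def soft_bow(descriptors, codebook, beta=1.0):
    """Soft-assignment bag-of-words: each descriptor votes for every codeword
    with a weight that decays with its distance to that codeword."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-beta * d2)                    # fuzzy membership: closer => larger
    w /= w.sum(axis=1, keepdims=True)         # memberships sum to 1 per descriptor
    return w.sum(axis=0) / len(descriptors)   # per-video codeword frequencies

# Toy example: 500 local descriptors, a 64-word codebook from K-means.
desc = np.random.rand(500, 32)
codebook = KMeans(n_clusters=64, n_init=10).fit(desc).cluster_centers_
print(soft_bow(desc, codebook).shape)         # (64,)
```

With hard assignment each descriptor would increment exactly one codeword; the soft version spreads that vote over nearby codewords, which is what reduces the quantization error.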

14.
15.
Computational neuroscience studies have examined the human visual system through functional magnetic resonance imaging (fMRI) and identified a model in which the mammalian brain pursues two independent pathways for recognizing biological movement. The dorsal stream analyzes motion information by applying optical flow, which captures fast features, while the ventral stream analyzes form information with slow features. The proposed approach suggests that motion perception in the human visual system arises from the interaction of fast and slow features. Form features are obtained by applying the active basis model (ABM) with incremental slow feature analysis (IncSFA): episodic observation is required to extract the slowest features, whereas the fast features update the motion information in every frame. Applying IncSFA makes it possible to abstract human actions and use action prototypes. The fast features come from the optical flow branch, and final recognition combines the optical flow and ABM-IncSFA information through a kernel extreme learning machine. Applying IncSFA in the ventral stream and involving both slow and fast features in the recognition mechanism are the major contributions of this research. Results on two benchmark human action datasets (KTH and Weizmann) highlight the promising performance of this approach.
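The slow-feature idea at the heart of the ventral-stream branch can be shown in simplified batch form (the paper uses the incremental variant, IncSFA; this batch sketch only illustrates what "slow" means).

```python
import numpy as np

def batch_sfa(x, n_components=2):
    """Batch slow feature analysis: whiten the signal, then keep the directions
    whose temporal derivative has the smallest variance (the slowest features)."""
    x = x - x.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (eigvec / np.sqrt(eigval + 1e-8))   # whitened signal
    dz = np.diff(z, axis=0)                     # temporal derivative
    dval, dvec = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ dvec[:, :n_components]           # slowest directions first

# Toy signal: a slow sine hidden inside faster mixtures.
t = np.linspace(0, 2 * np.pi, 500)
src = np.stack([np.sin(t), np.sin(11 * t), np.sin(23 * t)], axis=1)
mixed = src @ np.random.randn(3, 3)
slow = batch_sfa(mixed, n_components=1)   # recovers the slow sine up to sign/scale
print(slow.shape)                         # (500, 1)
```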

16.
17.
To obtain action categories and motion information from video efficiently and accurately while reducing computational complexity, this paper proposes a video action recognition algorithm that fuses feature propagation with temporal segment networks. First, the video is divided into three short segments and a key frame is extracted from each, enabling modeling of long videos. Then an improved temporal segment network (P-TSN) is designed, comprising a feature-propagation appearance stream and a FlowNet motion stream; RGB key frames, RGB non-key frames, and optical flow maps serve as inputs for extracting the video's appearance and motion information. Finally, the BN-Inception descriptors of the improved temporal segment network are combined by average weighted fusion and fed into a Softmax layer for action recognition. The algorithm achieves recognition accuracies of 94.6% on UCF101 and 69.4% on HMDB51, showing that it effectively captures spatial appearance and temporal motion information and improves the accuracy of video action recognition.
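The segment-and-key-frame scheme and the average weighted fusion step can be sketched as follows; the equal stream weights and score shapes are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def sample_key_frames(n_frames, n_segments=3):
    """Split a video into equal segments and pick the middle frame of each."""
    bounds = np.linspace(0, n_frames, n_segments + 1).astype(int)
    return [(lo + hi) // 2 for lo, hi in zip(bounds[:-1], bounds[1:])]

def fuse_streams(appearance_scores, motion_scores, w_app=0.5, w_mot=0.5):
    """Average weighted fusion of per-stream class scores, then softmax."""
    s = w_app * appearance_scores + w_mot * motion_scores
    e = np.exp(s - s.max())
    return e / e.sum()

print(sample_key_frames(90))             # e.g., [15, 45, 75]
app = np.random.rand(101)                # toy scores for 101 UCF101 classes
mot = np.random.rand(101)
print(fuse_streams(app, mot).argmax())   # fused action prediction
```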

18.
19.
To represent the spatio-temporal characteristics of human actions effectively, skeleton features are passed through a Hough transform to build an action representation. Specifically, OpenPose is used to obtain human skeleton keypoints in each video frame; skeletal joints are then constructed and mapped into Hough space, converting joint trajectories into point traces; finally, the fused Fisher vector (FV) encodings of the angle and trajectory features serve as input to a linear SVM classifier. Experiments on the classic public datasets KTH, Weizmann, KARD, and Drone-Action show that the Hough transform increases feature robustness and improves human action recognition performance.
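The mapping of one skeletal limb (the line through two joints) to a point in Hough space can be sketched with the standard rho-theta parameterization; the joint coordinates here are toy values.

```python
import numpy as np

def limb_to_hough(joint_a, joint_b):
    """Map the line through two 2-D joints to Hough parameters (rho, theta)."""
    (x1, y1), (x2, y2) = joint_a, joint_b
    theta = np.arctan2(x2 - x1, -(y2 - y1))        # direction of the line's normal
    rho = x1 * np.cos(theta) + y1 * np.sin(theta)  # signed distance from the origin
    return rho, theta

# Toy example: an upper arm from shoulder (10, 20) to elbow (14, 32).
print(limb_to_hough((10, 20), (14, 32)))
```

Tracking each limb's (rho, theta) point across frames turns a joint trajectory into the point trace that the Fisher vector encoding then summarizes.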

20.
In recent years, bag-of-words (BoW) video representations have achieved promising results in human action recognition. By vector-quantizing local spatio-temporal (ST) features, the BoW representation brings simplicity and efficiency, but also limitations. First, the discretization of the feature space inevitably introduces ambiguity and information loss into the video representation. Second, there is no universal codebook for BoW representation: the codebook must be rebuilt whenever the video corpus changes. To tackle these issues, this paper explores a localized, continuous, and probabilistic video representation. Specifically, the proposed representation encodes the visual and motion information of an ensemble of local ST features of a video into a distribution estimated by a generative probabilistic model. Furthermore, the probabilistic representation naturally gives rise to an information-theoretic distance metric between videos, making it readily applicable to most discriminative classifiers, such as nearest-neighbor schemes and kernel-based classifiers. Experiments on two datasets, KTH and UCF Sports, show that the proposed approach delivers promising results.
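A minimal sketch of the mechanics: model each video's bag of local ST features with a single Gaussian and compare videos with a symmetric KL divergence. The single-Gaussian choice is a simplifying assumption; the paper's generative model may be richer.

```python
import numpy as np

def fit_gaussian(features):
    """Estimate mean and (regularized) covariance of a video's ST feature set."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def sym_kl(g1, g2):
    """Symmetric KL divergence between two Gaussians: an information-theoretic
    distance usable in nearest-neighbor or kernel-based classifiers."""
    def kl(a, b):
        (mu_a, cov_a), (mu_b, cov_b) = a, b
        d = len(mu_a)
        inv_b = np.linalg.inv(cov_b)
        diff = mu_b - mu_a
        return 0.5 * (np.trace(inv_b @ cov_a) + diff @ inv_b @ diff - d
                      + np.log(np.linalg.det(cov_b) / np.linalg.det(cov_a)))
    return kl(g1, g2) + kl(g2, g1)

# Toy example: two videos, each a bag of 300 local ST descriptors.
v1 = fit_gaussian(np.random.rand(300, 10))
v2 = fit_gaussian(np.random.rand(300, 10) + 0.3)
print(sym_kl(v1, v2))   # grows as the two feature distributions diverge
```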
