Similar documents (20 results found)
1.
2.
3.
Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges such as camera motion, different viewpoints, large inter-class variations, cluttered backgrounds, occlusions, poor illumination, and the low quality of web videos cause the majority of state-of-the-art action recognition approaches to fail. The increased number of categories and the inclusion of easily confused actions add to the difficulty. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50-action) dataset of web videos. We apply a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important cue for action recognition on very large datasets. The proposed method requires no video stabilization, person detection, or tracking and pruning of features. Our approach performs well on a large number of action categories; it has been tested on the UCF50 dataset (50 action categories), an extension of the UCF YouTube Action (UCF11) dataset (11 action categories). We also tested our approach on the KTH and HMDB51 datasets for comparison.
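The abstract's combination of early fusion (concatenating descriptors) and late fusion (combining per-feature classifier scores) can be sketched as follows; the function names, toy scores, and equal weights are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def early_fusion(features):
    """Early fusion: concatenate per-descriptor vectors before classification."""
    return np.concatenate(features)

def late_fusion(score_list, weights=None):
    """Late fusion: weighted average of per-feature classifier scores."""
    scores = np.stack(score_list)                  # (n_classifiers, n_classes)
    if weights is None:
        weights = np.full(len(score_list), 1.0 / len(score_list))
    return weights @ scores                        # weighted mean over classifiers

# toy 3-class example: motion-feature scores + scene-context scores
motion_scores = np.array([0.2, 0.6, 0.2])
scene_scores = np.array([0.6, 0.3, 0.1])
fused = late_fusion([motion_scores, scene_scores])
predicted_class = int(np.argmax(fused))
```

In practice both fusions are often combined: some descriptors are concatenated early, and the remaining classifier scores are averaged late, as the abstract indicates.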

4.
5.
Action recognition based on spatio-temporal attention LSTM
Existing action recognition methods that model the structure of the whole video sequence suffer from abundant spatio-temporal background clutter, which weakens the discriminative power of the action representation and causes action categories to be misjudged. To address this, a spatio-temporal attention long short-term memory (LSTM) network model based on two-stream features is proposed. First, a two-stream spatio-temporal attention module is defined, in which spatial attention suppresses spatial background clutter and temporal attention suppresses low-information video frames. Second, for the two streams... (abstract truncated in the source)
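A minimal sketch of the two-stage attention the abstract describes, assuming soft attention implemented with learned projections and softmax weights; the shapes and weight names are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_attention(feats, w_spatial, w_temporal):
    """feats: (T, H*W, C) per-frame spatial features.
    Spatial attention down-weights cluttered positions within a frame,
    temporal attention down-weights uninformative frames."""
    s = softmax(feats @ w_spatial, axis=1)          # (T, H*W, 1) spatial weights
    frame_desc = (feats * s).sum(axis=1)            # (T, C) attended frame vectors
    t = softmax(frame_desc @ w_temporal, axis=0)    # (T, 1) temporal weights
    return (frame_desc * t).sum(axis=0)             # (C,) video-level descriptor

rng = np.random.default_rng(0)
T, HW, C = 4, 6, 8
video = spatio_temporal_attention(rng.normal(size=(T, HW, C)),
                                  rng.normal(size=(C, 1)),
                                  rng.normal(size=(C, 1)))
```

In the paper this module sits inside an LSTM over two-stream features; here it is isolated to show the weighting mechanism only.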

6.
Action recognition using 3D DAISY descriptor

7.
To obtain the action category and motion information in a video efficiently and accurately while reducing computational complexity, a video action recognition algorithm that fuses feature propagation with a temporal segment network is proposed. The video is first divided into three short segments and a key frame is extracted from each, enabling long-duration videos to be modelled. An improved temporal segment network (P-TSN) is then designed, comprising a feature-propagation appearance stream and a FlowNet motion stream, which take RGB key frames, RGB non-key frames, and optical-flow images as inputs to extract the video's appearance and motion information. Finally, the BN-Inception descriptors of the improved temporal segment network are fused by weighted averaging and fed into a Softmax layer for action recognition. The algorithm achieves recognition accuracies of 94.6% on UCF101 and 69.4% on HMDB51, showing that it effectively captures spatial appearance and temporal motion information and improves video action recognition accuracy.
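The first step, splitting a video into three segments and picking one key frame per segment, can be sketched as below; taking the centre frame of each segment is an assumption, since the abstract does not state how key frames are selected.

```python
def sample_keyframes(num_frames, num_segments=3):
    """Split the frame index range into equal segments and take the
    centre frame of each segment as that segment's key frame."""
    seg_len = num_frames / num_segments
    return [int(seg_len * i + seg_len / 2) for i in range(num_segments)]

keys = sample_keyframes(90)   # a 90-frame clip: one key frame per third
```

The remaining frames in each segment would be treated as non-key frames and handled by the feature-propagation stream.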

8.
To address spatio-temporal modelling in video action recognition, a temporally enhanced action recognition method based on fused spatio-temporal features is proposed within a deep learning framework. A sparse temporal sampling strategy is first applied to the input video to accommodate varying video lengths and reduce the cost of video-level temporal modelling. In the recognition stage, temporal differences between adjacent feature maps are computed and used to enhance motion information at the feature level. Finally, a combination of residual and temporal-enhancement structures improves the network's overall spatio-temporal modelling capability. Experiments show that the algorithm achieves high accuracy on the UCF101 and HMDB51 datasets and, in a real industrial operation recognition scenario, delivers strong recognition performance with a comparatively small network.
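The adjacent-feature-map difference with a residual connection can be sketched roughly as follows; the per-frame feature layout and the scaling factor `alpha` are illustrative assumptions.

```python
import numpy as np

def temporal_enhance(feats, alpha=0.5):
    """feats: (T, C) per-frame features. Compute differences between
    adjacent feature maps as a motion cue and add them back onto the
    features through a residual-style connection."""
    diff = np.zeros_like(feats)
    diff[1:] = feats[1:] - feats[:-1]     # adjacent-frame differences
    return feats + alpha * diff           # residual + temporal enhancement

x = np.array([[1.0, 2.0], [2.0, 2.0], [4.0, 1.0]])
y = temporal_enhance(x)
```

Channels that change between frames are amplified, while static background channels pass through almost unchanged, which matches the motivation in the abstract.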

9.
Action recognition via sparse-coding-based spatio-temporal pyramid matching
For action recognition in complex scenes, an action recognition method based on sparse-coding spatio-temporal pyramid matching is proposed. Sparse coding is used to learn a more discriminative codebook and to compute sparse representations of local cuboids; actions are then classified via spatio-temporal pyramid matching with max pooling. Evaluated on the public KTH and YouTube datasets, the method improves the recognition rate by roughly 2%-7% over K-means-based spatio-temporal pyramid matching and performs well on complex videos.
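Max-pooling sparse codes over the cells of a spatio-temporal grid, the core of the matching step, might look like this; the grid size, normalised cuboid positions, and random inputs are assumptions for illustration only.

```python
import numpy as np

def stpm_max_pool(codes, positions, grid=(2, 2, 2)):
    """codes: (N, K) sparse codes of N cuboids; positions: (N, 3)
    normalised (x, y, t) coordinates in [0, 1). Max-pool the codes
    inside each cell of an x-y-t grid and concatenate the pooled
    vectors into one video-level representation."""
    gx, gy, gt = grid
    cells = np.minimum((positions * grid).astype(int), np.array(grid) - 1)
    idx = (cells[:, 0] * gy + cells[:, 1]) * gt + cells[:, 2]
    pooled = np.zeros((gx * gy * gt, codes.shape[1]))
    for i, c in zip(idx, codes):
        pooled[i] = np.maximum(pooled[i], c)   # max pooling per cell
    return pooled.ravel()

rng = np.random.default_rng(1)
vec = stpm_max_pool(np.abs(rng.normal(size=(50, 16))), rng.random((50, 3)))
```

A full pyramid would repeat this at several grid resolutions and concatenate all levels; one level is shown here for brevity.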

10.
11.
12.
王萍  庞文浩 《计算机应用》2019,39(7):2081-2086
To address the low recognition accuracy of the original two-stream (spatial-temporal) convolutional neural network (CNN) model on long, complex videos, an action recognition method based on a segment-wise two-stream CNN is proposed. The video is first divided into multiple equal-length, non-overlapping segments; from each segment, a frame image representing static features and a stacked optical-flow image representing motion features are randomly sampled. These two kinds of images are fed into the spatial and temporal CNNs for feature extraction, and the segment features are fused within each stream to obtain spatial and temporal class-prediction features. Finally, the predictions of the two streams are ensembled to produce the video-level recognition result. Experiments examine several data augmentation methods and transfer learning schemes to mitigate overfitting caused by insufficient training samples, and analyse the effects of the number of segments, the pre-trained network, the segment-feature fusion scheme, and the two-stream ensemble strategy on recognition performance. The proposed model reaches 91.80% accuracy on UCF101, 3.8 percentage points above the original two-stream model, and also improves accuracy on HMDB51 to 61.39%, indicating that it better learns and represents human action features in long, complex videos.
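The segment-level fusion and two-stream ensemble the abstract describes can be sketched as follows, assuming simple averaging within each stream and a weighted sum across streams; the paper evaluates several fusion and ensemble variants, so these particular choices are illustrative.

```python
import numpy as np

def video_prediction(spatial_segs, temporal_segs, w_spatial=1.0, w_temporal=1.5):
    """Each argument: (n_segments, n_classes) per-segment class scores of
    one stream. Segment scores are averaged within each stream, then the
    two streams are combined by a weighted sum."""
    spatial = np.mean(spatial_segs, axis=0)     # segment fusion, spatial stream
    temporal = np.mean(temporal_segs, axis=0)   # segment fusion, temporal stream
    return w_spatial * spatial + w_temporal * temporal

# toy 2-class example with three segments per stream
s = np.array([[0.7, 0.3], [0.5, 0.5], [0.6, 0.4]])
t = np.array([[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]])
scores = video_prediction(s, t)
label = int(np.argmax(scores))
```

Weighting the temporal stream more heavily than the spatial stream is a common convention in two-stream work, but the exact weights here are assumptions.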

13.
Due to promising applications including video surveillance, video annotation, and interactive gaming, human action recognition from videos has attracted much research interest. Although various methods have been proposed, many challenges remain, such as illumination conditions, viewpoint, camera motion, and cluttered backgrounds. Extracting a discriminative representation is one of the main ways to handle these challenges. In this paper, we propose a novel action recognition method that simultaneously learns a middle-level representation and a classifier by jointly training a multinomial logistic regression (MLR) model and a discriminative dictionary. In the proposed method, the sparse code of the low-level representation, serving as the latent variable of the MLR, captures the structure of the low-level feature and is thus more discriminative. Meanwhile, the training of the dictionary and the MLR model is integrated into one objective function that takes category information into account. By optimizing this objective function, we learn a discriminative dictionary modulated by the MLR and an MLR model driven by sparse coding. The proposed method is evaluated on the YouTube action dataset and the HMDB51 dataset. Experimental results demonstrate that our method is comparable with mainstream methods.
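A rough sketch of the two ingredients, sparse coding as the latent representation and an MLR classifier on top. Here they are shown as separate steps, with ISTA and an identity dictionary chosen only to make the result hand-checkable, whereas the paper trains dictionary and classifier jointly under one objective.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, steps=50):
    """ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = z - D.T @ (D @ z - x) / L                          # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

def mlr_predict(z, W):
    """Multinomial logistic regression on the sparse code (the latent variable)."""
    s = W @ z
    e = np.exp(s - s.max())
    return e / e.sum()

# identity dictionary only to make the sparse code easy to verify by hand
D = np.eye(12)
x = np.zeros(12); x[3] = 2.0
z = sparse_code(x, D)                        # lasso shrinks the single active atom
rng = np.random.default_rng(0)
probs = mlr_predict(z, rng.normal(size=(5, 12)))  # random weights, illustration only
```

In the joint formulation, the classification loss would also flow back into the dictionary update, which is what makes the learned codebook discriminative.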

14.
15.
Deep learning has achieved good results in human action recognition, but the appearance and motion information of people in videos still needs to be exploited more fully. To use both the spatial and temporal information in a video, a spatio-temporal two-stream model for human action recognition is proposed. Two convolutional neural networks first extract spatial and temporal features from the video clip respectively; the two networks are then fused to extract mid-level spatio-temporal features; finally, these mid-level features are fed into a 3D convolutional neural network to recognise the actions in the video. Experiments on the UCF101 and HMDB51 datasets show that the proposed two-stream 3D CNN model recognises human actions in video effectively.
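Fusing the two streams' feature maps channel-wise and passing the result through a 3D convolution might look like the following; the naive single-kernel convolution is only for shape bookkeeping, not the paper's network.

```python
import numpy as np

def fuse_streams(spatial, temporal):
    """Channel-wise concatenation of the two streams' feature maps,
    giving a (T, H, W, 2C) volume for a subsequent 3D convolution."""
    return np.concatenate([spatial, temporal], axis=-1)

def conv3d_single(vol, kernel):
    """One valid 3D convolution with a single output channel; real
    models use optimised conv3d layers, this just shows the shapes."""
    T, H, W, C = vol.shape
    kt, kh, kw, _ = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(vol[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(3)
fused = fuse_streams(rng.normal(size=(8, 6, 6, 4)), rng.normal(size=(8, 6, 6, 4)))
feat = conv3d_single(fused, rng.normal(size=(3, 3, 3, 8)))
```

The 3D kernel spans time as well as space, which is what lets the final stage model motion jointly with appearance in the fused volume.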

16.
17.
18.
Inspired by the visual perception mechanism of the human brain, an attention-based temporally grouped deep network for action recognition is proposed within a deep learning framework. To compensate for the inadequacy of local temporal information in describing long, complex actions, a grouped sparse sampling strategy over the video enables video-level temporal modelling at lower cost. In the recognition stage, channel attention mapping is introduced to further exploit global feature information, capture classification points of interest, and recalibrate the channel features, improving the network's representational power. Experiments show that the algorithm achieves high recognition accuracy on the UCF101 and HMDB51 datasets.
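The channel recalibration step reads like squeeze-and-excitation style attention, which can be sketched as follows; the bottleneck ratio and weight shapes are assumptions.

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """SE-style recalibration: per-channel global average pool ('squeeze'),
    a two-layer bottleneck, a sigmoid gate, then channel-wise rescaling.
    feats: (H, W, C)."""
    squeeze = feats.mean(axis=(0, 1))                 # (C,) global context
    hidden = np.maximum(w1 @ squeeze, 0.0)            # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # (C,) weights in (0, 1)
    return feats * gate                               # recalibrated channels

rng = np.random.default_rng(1)
C, r = 8, 2                                           # channels, reduction ratio
feats = rng.normal(size=(5, 5, C))
out = channel_attention(feats,
                        rng.normal(size=(C // r, C)),
                        rng.normal(size=(C, C // r)))
```

Because the gate is computed from the global average of each channel, informative channels can be emphasised using context from the whole feature map, matching the "global feature information" wording in the abstract.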

19.
Traditional ways of acquiring action information for human action recognition require cumbersome steps and various assumptions. Exploiting the strong performance of convolutional neural networks (CNNs) on images and video, a human action recognition method based on low-rank action information (LAI) and a multi-scale convolutional neural network (MCNN) is proposed. The action video is first divided into segments, and low-rank learning is performed on each segment to extract its LAI; the segment LAIs are then concatenated along the time axis to obtain the LAI of the whole video, effectively capturing the action information while avoiding cumbersome extraction steps and assumptions. Next, an MCNN model is designed for the characteristics of LAI: multi-scale convolution kernels capture LAI action features under different receptive fields, and the convolutional, pooling, and fully connected layers are designed to further refine the features and output the action category. The method is validated on the KTH and HMDB51 benchmarks, with three sets of comparison experiments. It achieves recognition rates of 97.33% and 72.05% on the two databases, at least 0.67 and 1.15 percentage points higher than the TFT and deep temporal embedding network (DTEN) methods, respectively. The method can further promote the application of action recognition in security, human-computer interaction, and other fields.
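The low-rank learning step can be approximated with a truncated SVD for illustration, since the abstract does not give the exact low-rank formulation:

```python
import numpy as np

def low_rank_info(frames, rank=1):
    """Approximate low-rank learning with a truncated SVD: stack the
    vectorised frames of a segment as rows and keep the top-`rank`
    component as that segment's low-rank action information (LAI)."""
    M = frames.reshape(frames.shape[0], -1)          # (T, H*W)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-`rank` approximation

# toy segment whose frame matrix is exactly rank 1, so the LAI recovers it
frames = np.outer(np.arange(1, 5), np.ones(6)).reshape(4, 2, 3)
lai = low_rank_info(frames)
```

Per the abstract, the per-segment LAIs would then be concatenated along the time axis and fed to the multi-scale CNN.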

20.
Slow Feature Analysis (SFA) extracts slowly varying features from a quickly varying input signal. It has been successfully applied to modeling the visual receptive fields of cortical neurons. Ample experimental results in neuroscience suggest that the temporal slowness principle is a general learning principle in visual perception. In this paper, we introduce the SFA framework to the problem of human action recognition by incorporating discriminative information into SFA learning and considering the spatial relationship of body parts. In particular, we consider four SFA learning strategies, including the original unsupervised SFA (U-SFA), the supervised SFA (S-SFA), the discriminative SFA (D-SFA), and the spatial discriminative SFA (SD-SFA), to extract slow feature functions from a large number of training cuboids obtained by random sampling within motion boundaries. Afterward, to represent action sequences, the squared first-order temporal derivatives are accumulated over all transformed cuboids into one feature vector, termed the Accumulated Squared Derivative (ASD) feature. The ASD feature encodes the statistical distribution of slow features in an action sequence. Finally, a linear support vector machine (SVM) is trained to classify actions represented by ASD features. We conduct extensive experiments, including two sets of control experiments, two sets of large-scale experiments on the KTH and Weizmann databases, and two sets of experiments on the CASIA and UT-interaction databases, to demonstrate the effectiveness of SFA for human action recognition. Experimental results suggest that the SFA-based approach (1) extracts useful motion patterns and improves recognition performance, (2) requires fewer intermediate processing steps while achieving comparable or even better performance, and (3) has good potential for recognizing complex multi-person activities.
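The ASD feature, which accumulates squared first-order temporal derivatives of the slow-feature outputs, is concrete enough to sketch directly; the toy input below is illustrative.

```python
import numpy as np

def asd_feature(slow_feats):
    """Accumulated Squared Derivative: sum the squared first-order
    temporal derivatives of each slow-feature function over time.
    slow_feats: (T, J) outputs of J slow feature functions."""
    d = np.diff(slow_feats, axis=0)    # first-order temporal derivatives
    return (d ** 2).sum(axis=0)        # (J,) accumulated per function

# two slow-feature functions over three time steps
z = np.array([[0.0, 1.0], [1.0, 1.0], [3.0, 0.0]])
asd = asd_feature(z)
```

Truly slow features produce small entries, so the ASD vector summarises how much each learned function varies over the sequence, which is the statistic the linear SVM then classifies.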
