Similar Articles
20 similar articles found (search time: 62 ms)
1.
In this paper, a novel generalized framework of activity representation and recognition based on a ‘string of feature graphs (SFG)’ model is introduced. The proposed framework represents a visual activity as a string of feature graphs, where the string elements are initially matched using a graph-based spectral technique, followed by a dynamic programming scheme for matching the complete strings. The framework is motivated by the success of time sequence analysis approaches in speech recognition, but modified in order to capture the spatio-temporal properties of individual actions, the interactions between objects, and the speed of activity execution. This framework can be adapted to various spatio-temporal motion features, and we give details on using STIP features and track features. Furthermore, we show how this SFG model can be embedded within a switched dynamical system (SDS) that is able to automatically choose the most efficient features for a particular video segment. This allows us to analyze a variety of activities in natural videos in a computationally efficient manner. Experimental results on the basic SFG model as well as its integration with the SDS are shown on some of the most challenging multi-object datasets available to the activity analysis community.
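A minimal Python sketch (not the authors' implementation) of the two-stage matching idea: graphs are compared with a simple spectral distance over adjacency-matrix eigenvalues, and the two strings are aligned by an edit-distance-style dynamic program. The graph representation, the number of eigenvalues `k`, and the gap cost are all illustrative assumptions.

```python
# Sketch: matching two "strings of feature graphs" with dynamic programming,
# using a spectral distance between graphs as the element cost.
import numpy as np

def spectral_distance(A1, A2, k=5):
    """Distance between two (symmetric) adjacency matrices via top eigenvalues."""
    e1 = np.sort(np.linalg.eigvalsh(A1))[::-1][:k]
    e2 = np.sort(np.linalg.eigvalsh(A2))[::-1][:k]
    n = max(len(e1), len(e2))
    e1 = np.pad(e1, (0, n - len(e1)))
    e2 = np.pad(e2, (0, n - len(e2)))
    return np.linalg.norm(e1 - e2)

def match_strings(S1, S2, gap=1.0):
    """Edit-distance-style DP alignment of two lists of adjacency matrices."""
    m, n = len(S1), len(S2)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = np.arange(m + 1) * gap
    D[0, :] = np.arange(n + 1) * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = spectral_distance(S1[i - 1], S2[j - 1])
            D[i, j] = min(D[i - 1, j - 1] + c,   # match / substitute
                          D[i - 1, j] + gap,     # gap in S2
                          D[i, j - 1] + gap)     # gap in S1
    return D[m, n]
```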

2.
Action recognition based on a spatio-temporal attention LSTM   Cited by: 1 (self-citations: 0, external: 1)
Existing action recognition methods that model the overall sequence structure of a video contain a large amount of cluttered spatio-temporal background information, which weakens the discriminative power of the action representation and leads to misclassification. To address this, a two-stream spatio-temporal attention LSTM model is proposed. First, a two-stream spatio-temporal attention module is defined, in which spatial attention suppresses spatial background clutter and temporal attention suppresses low-information video frames. Second, for the two streams…
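A hedged sketch of the temporal-attention component described above (the spatial attention and LSTM backbone are omitted): each frame feature receives a learned score, and the softmax-weighted sum suppresses low-information frames. Dimensions are illustrative.

```python
# Sketch: temporal attention that down-weights uninformative frames.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                                    # feats: (B, T, D)
        w = torch.softmax(self.score(feats).squeeze(-1), dim=1)  # (B, T) frame weights
        return (feats * w.unsqueeze(-1)).sum(dim=1)              # (B, D) weighted summary

x = torch.randn(4, 16, 256)
summary = TemporalAttention()(x)
```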

3.
4.
Traditional human action recognition algorithms cannot fully exploit the spatio-temporal information of human actions in video, and their recognition accuracy is low. A new 3D dense convolutional network method for human action recognition is proposed. Taking a two-stream network as the basic framework, the spatial stream uses a 3D dense network with an added attention mechanism to extract appearance features of the actions, while the temporal stream extracts motion features from the optical flow of consecutive frames; the final recognition result is obtained after fusing the spatio-temporal features and the classification layers. To extract features more accurately and to model the interaction between the spatial and temporal streams, cross-stream connections are added between the two streams to fuse features at the convolutional layers. Experimental results on the UCF101 and HMDB51 datasets show recognition accuracies of 94.52% and 69.64% respectively, indicating that the model fully exploits the spatio-temporal information in video and extracts the key motion information.
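Assuming a PyTorch setting, the toy block below illustrates only the cross-stream connection idea: a 1x1x1 convolution carries temporal-stream features into the spatial stream at the feature level. Channel counts and kernel sizes are invented for illustration, not the paper's architecture.

```python
# Sketch: cross-stream feature fusion between two 3D-conv streams.
import torch
import torch.nn as nn

class CrossStreamBlock(nn.Module):
    def __init__(self, c_in=16, c_out=32):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=3, padding=1)
        self.temporal = nn.Conv3d(c_in, c_out, kernel_size=3, padding=1)
        # 1x1x1 conv carries temporal-stream features into the spatial stream
        self.cross = nn.Conv3d(c_out, c_out, kernel_size=1)

    def forward(self, rgb_feat, flow_feat):
        s = torch.relu(self.spatial(rgb_feat))
        t = torch.relu(self.temporal(flow_feat))
        s = s + self.cross(t)   # cross-stream fusion at the feature level
        return s, t

rgb = torch.randn(1, 16, 8, 56, 56)   # (batch, channels, frames, H, W)
flow = torch.randn(1, 16, 8, 56, 56)
s, t = CrossStreamBlock()(rgb, flow)
```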

5.
A view-invariant feature extraction method based on multi-view non-negative matrix factorization (NMF) is proposed for fusing multi-view information and recognizing human actions. Spatio-temporal descriptors extracted from each video frame effectively capture the motion and shape information in the scene. To handle changes in viewing angle, spatio-temporal matrices built from these descriptors are constructed for each view, and a multi-view NMF objective is formulated to obtain a consensus matrix that fuses information across views. Human actions are then classified using the maximum correlation coefficient of the consensus matrix. The method is validated on the WVU and i3Dpose datasets and compared with other methods; the results demonstrate its effectiveness for action recognition.
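An illustrative NumPy sketch of multi-view NMF with a shared consensus factor: each view's matrix X_v is factored as W_v @ H with H common across views, using the standard multiplicative updates. This is not the paper's exact objective, and the correlation-based classification step is not reproduced.

```python
# Sketch: multi-view NMF with a consensus matrix H shared by all views.
import numpy as np

def multiview_nmf(Xs, rank=10, iters=200, eps=1e-9):
    """Xs: list of non-negative (m_v, n) matrices, one per view (same n)."""
    n = Xs[0].shape[1]
    rng = np.random.default_rng(0)
    Ws = [rng.random((X.shape[0], rank)) for X in Xs]
    H = rng.random((rank, n))                     # consensus matrix
    for _ in range(iters):
        for v, X in enumerate(Xs):                # per-view basis update
            W = Ws[v]
            Ws[v] = W * (X @ H.T) / (W @ H @ H.T + eps)
        num = sum(W.T @ X for W, X in zip(Ws, Xs))
        den = sum(W.T @ W @ H for W in Ws) + eps
        H = H * num / den                         # consensus update over all views
    return Ws, H

Ws, H = multiview_nmf([np.random.rand(50, 30), np.random.rand(40, 30)])
```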

6.
Objective: In action recognition, properly exploiting spatio-temporal modeling and inter-channel correlations is crucial for capturing rich motion information. Although graph convolutional networks have made steady progress in skeleton-based action recognition, previous attention mechanisms applied to them have not yielded clear gains in classification accuracy. Given the importance of jointly modeling spatio-temporal interaction and channel dependence, a multi-dimensional feature fusion attention mechanism (M2FA) is proposed. Method: Unlike widely used designs such as the convolutional block attention module (CBAM) and the two-stream adaptive graph convolutional network (2s-AGCN), M2FA explicitly obtains comprehensive dependency information through a feature fusion module embedded in the attention framework. For a given feature map, M2FA applies global average pooling along the spatial, temporal, and channel dimensions to infer a feature descriptor for each dimension. The feature map is filtered by the fused multi-dimensional descriptors to refine adaptive features, and is processed through a global feature branch that compresses global dynamic information and a local branch that uses only pointwise convolution layers…
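A small sketch, under the assumption of (B, C, T, V) skeleton features, of the descriptor step the truncated passage describes: global average pooling along the channel, temporal, and spatial (joint) dimensions. The fusion itself is reduced here to a broadcast sum and a sigmoid gate, which is an assumption, not M2FA's actual fusion module.

```python
# Sketch: multi-dimensional descriptors by global average pooling.
import torch

def m2fa_descriptors(x):
    """x: (B, C, T, V) graph-conv features (V = skeleton joints)."""
    d_ch = x.mean(dim=(2, 3), keepdim=True)   # (B, C, 1, 1) channel descriptor
    d_t = x.mean(dim=(1, 3), keepdim=True)    # (B, 1, T, 1) temporal descriptor
    d_sp = x.mean(dim=(1, 2), keepdim=True)   # (B, 1, 1, V) spatial descriptor
    return torch.sigmoid(d_ch + d_t + d_sp)   # fused gate (illustrative fusion)

x = torch.randn(2, 64, 20, 25)
out = x * m2fa_descriptors(x)                 # filter features with the gate
```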

7.
To address spatio-temporal modeling in video action recognition, a temporally enhanced action recognition method based on fused spatio-temporal features is proposed within a deep learning framework. First, a sparse temporal sampling strategy is applied to the input video to accommodate varying video lengths and reduce the cost of video-level temporal modeling. In the recognition stage, temporal differences between adjacent feature maps are computed, and the difference results are used to enhance motion information at the feature level. Finally, a combination of residual structures and the temporal enhancement structure improves the network's overall spatio-temporal modeling capacity. Experiments show that the method achieves high accuracy on the UCF101 and HMDB51 datasets, and attains good recognition performance with a small network in a real industrial-operation action recognition scenario.
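A minimal sketch of the feature-level temporal-difference enhancement: differences between adjacent frame feature maps approximate motion and are added back with a scaling factor. The (B, T, C, H, W) layout and the weight `alpha` are assumptions.

```python
# Sketch: enhance frame features with adjacent-frame differences.
import torch

def temporal_enhance(feats, alpha=0.5):
    """feats: (B, T, C, H, W) frame-level features; returns same shape."""
    diff = feats[:, 1:] - feats[:, :-1]                         # adjacent differences
    diff = torch.cat([diff, torch.zeros_like(feats[:, :1])], dim=1)
    return feats + alpha * diff                                 # motion-enhanced features

x = torch.randn(2, 8, 16, 14, 14)
y = temporal_enhance(x)
```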

8.
Video recordings of earthmoving construction operations provide understandable data that can be used for benchmarking and analyzing their performance. These recordings further help project managers take corrective actions on performance deviations and in turn improve operational efficiency. Despite these benefits, manual stopwatch studies of previously recorded videos can be labor-intensive, may suffer from observer bias, and are impractical after a substantial period of observation. This paper presents a new computer vision based algorithm for recognizing single actions of earthmoving construction equipment. This is a particularly challenging task, as equipment can be partially occluded in site video streams and comes in a wide variety of sizes and appearances. The scale and pose of the equipment actions can also vary significantly with camera configuration. In the proposed method, a video is initially represented as a collection of spatio-temporal visual features by extracting space-time interest points and describing each feature with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of the spatio-temporal features and action categories using a multi-class Support Vector Machine (SVM) classifier. This strategy handles noisy feature points arising from typical dynamic backgrounds. Given a video sequence captured from a fixed camera, the multi-class SVM classifier recognizes and localizes equipment actions. For the purpose of evaluation, a new video dataset is introduced which contains 859 sequences of excavator and truck actions. This dataset contains large variations of equipment pose and scale, and has varied backgrounds and levels of occlusion. The experimental results, with average accuracies of 86.33% and 98.33%, show that our supervised method outperforms previous algorithms for excavator and truck action recognition. The results hold promise for the applicability of the proposed method to construction activity analysis.
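A sketch of the classification stage only, assuming each video has already been reduced to a fixed-length descriptor (e.g., a histogram over HOG-described space-time interest points). The features here are synthetic stand-ins, and the SVM hyperparameters are illustrative.

```python
# Sketch: multi-class SVM over per-video descriptors (feature extraction omitted).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((859, 128))               # one descriptor per video sequence
y = rng.integers(0, 3, size=859)         # e.g., dig / haul / dump labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=10.0, decision_function_shape="ovr")   # multi-class SVM
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```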

9.
10.
This paper presents a novel approach for action recognition, localization and video matching based on a hierarchical codebook model of local spatio-temporal video volumes. Given a single example of an activity as a query video, the proposed method finds similar videos to the query in a target video dataset. The method is based on the bag of video words (BOV) representation and does not require prior knowledge about actions, background subtraction, motion estimation or tracking. It is also robust to spatial and temporal scale changes, as well as some deformations. The hierarchical algorithm codes a video as a compact set of spatio-temporal volumes, while considering their spatio-temporal compositions in order to account for spatial and temporal contextual information. This hierarchy is achieved by first constructing a codebook of spatio-temporal video volumes. Then a large contextual volume containing many spatio-temporal volumes (ensemble of volumes) is considered. These ensembles are used to construct a probabilistic model of video volumes and their spatio-temporal compositions. The algorithm was applied to three action recognition video datasets of different complexity (KTH, Weizmann, and MSR II) and the results were superior to other approaches, especially in the case of a single training example and cross-dataset action recognition.
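A sketch of the basic BOV coding step that the hierarchy builds on: local spatio-temporal descriptors are clustered into a codebook, and a video becomes a normalized histogram of word assignments. The descriptor dimensionality and codebook size are assumptions; the probabilistic compositional model is omitted.

```python
# Sketch: bag-of-video-words coding from local spatio-temporal descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.random((5000, 64))          # local 3D-volume descriptors (synthetic)
codebook = KMeans(n_clusters=100, n_init=10, random_state=0).fit(descriptors)

def bov_histogram(video_descriptors, codebook):
    """Code one video as a normalized histogram of visual-word assignments."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

h = bov_histogram(rng.random((300, 64)), codebook)
```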

11.
Action recognition is a central problem in many practical applications, such as video annotation, video surveillance and human-computer interaction. Most action recognition approaches are based on localized spatio-temporal features that can vary significantly when the viewpoint changes, so their performance drops rapidly when the viewpoints of the training and testing data differ. In this paper, we propose a transfer learning framework for view-invariant action recognition by sharing image-stitching features among different views. Experimental results on the multi-view IXMAS action recognition dataset demonstrate that our method produces remarkably good results and outperforms baseline methods.

12.
石祥滨, 李怡颖, 刘芳, 代钦. 《计算机应用研究》, 2021, 38(4): 1235-1239, 1276
Two-stream video action recognition methods tend to ignore the interdependence between feature channels and carry a large amount of redundant spatio-temporal information in their features. To address these problems, an end-to-end action recognition model based on a two-stream spatio-temporal attention mechanism, T-STAM, is proposed, which makes full use of the key spatio-temporal information in video. First, a channel attention mechanism is introduced into the two-stream base network; channel information is calibrated by modeling the dependencies between feature channels, improving the expressiveness of the features. Second, a CNN-based temporal attention model is proposed that learns an attention score for each frame with few parameters, focusing on frames with pronounced motion. A multi-spatial attention model is also proposed that computes attention scores for each position in each frame from different perspectives, extracting multiple motion-salient regions; the spatio-temporal features are then fused to further strengthen the video representation. Finally, the fused features are fed into a classification network, and the outputs of the two streams are fused with different weights to obtain the recognition result. Experimental results on HMDB51 and UCF101 show that T-STAM can effectively recognize actions in video.
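A hedged sketch of the channel-attention idea in squeeze-and-excitation style; the actual T-STAM channel, temporal, and multi-spatial modules are more elaborate than this.

```python
# Sketch: channel attention that recalibrates feature channels.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: reweight channels

feat = torch.randn(2, 64, 28, 28)
out = ChannelAttention(64)(feat)
```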

13.
Motion, the aspect of video that changes across temporal sequences, is crucial to visual understanding. Powerful video representation and extraction models are typically able to focus on motion features in challenging dynamic environments and thereby support more complex video understanding tasks. However, previous approaches discriminate mainly based on similar features in the spatial or temporal domain, ignoring the interdependence of consecutive video frames. In this paper, we propose the motion-sensitive self-supervised collaborative network, a video representation learning framework that exploits a pretext task to assist feature comparison and strengthen the spatiotemporal discrimination power of the model. Specifically, we first propose the motion-aware module, which extracts consecutive motion features from the spatial regions by frame differencing. The global-local contrastive module is then introduced, with context and enhanced video snippets defined as appropriate positive samples for a broader feature similarity comparison. Finally, we introduce the snippet operation prediction module, which further assists contrastive learning to obtain more reliable global semantics by sensing changes in continuous frame features. Experimental results demonstrate that our work can effectively extract robust motion features and achieve competitive performance compared with other state-of-the-art self-supervised methods on downstream action recognition and video retrieval tasks.
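A minimal InfoNCE-style sketch of the global-local contrastive comparison: matched snippet embeddings form positives on the diagonal of a similarity matrix, and the rest of the batch serves as negatives. Temperature and embedding sizes are illustrative, not the paper's settings.

```python
# Sketch: InfoNCE loss over matched snippet embeddings.
import torch
import torch.nn.functional as F

def info_nce(z, z_pos, tau=0.1):
    """z, z_pos: (B, D) embeddings of matched (anchor, positive) snippets."""
    z = F.normalize(z, dim=1)
    z_pos = F.normalize(z_pos, dim=1)
    logits = z @ z_pos.t() / tau               # (B, B) similarity matrix
    labels = torch.arange(z.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```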

14.
Skeleton-based action recognition has recently attracted wide attention. Because graph convolutional networks can better model the internal dependencies of irregular data, ST-GCN (spatial temporal graph convolutional network) has become the framework of choice in this field. However, most improvements based on ST-GCN ignore the geometric features embedded in skeleton sequences. This paper uses the geometric features of skeleton joints as a feature complement to the ST-GCN framework; these features are view-invariant and can be obtained without additional learned parameters. Further, by modeling the joint geometric features with a spatio-temporal graph convolutional network and applying early feature fusion, a spatio-temporal graph convolutional framework fusing geometric features is constructed. Experimental results show that, compared with action recognition models such as ST-GCN, 2s-AGCN and SGN, the proposed framework achieves higher accuracy on both the NTU-RGB+D and NTU-RGB+D 120 datasets.
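An illustrative computation of parameter-free geometric features from a skeleton sequence: bone vectors, bone lengths, and inter-bone angles, which are invariant to viewpoint up to rigid motion. The five-joint chain here is a toy layout, not the NTU skeleton.

```python
# Sketch: simple geometric features (bone lengths, joint angles) from skeletons.
import numpy as np

bones = [(0, 1), (1, 2), (2, 3), (3, 4)]        # (parent, child) joint pairs (toy)

def geometric_features(joints):
    """joints: (T, J, 3) array of 3D joint positions over T frames."""
    vecs = np.stack([joints[:, c] - joints[:, p] for p, c in bones], axis=1)
    lengths = np.linalg.norm(vecs, axis=-1)      # (T, B) bone lengths
    u, v = vecs[:, :-1], vecs[:, 1:]             # consecutive bone pairs
    cos = (u * v).sum(-1) / (np.linalg.norm(u, axis=-1)
                             * np.linalg.norm(v, axis=-1) + 1e-9)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))  # (T, B-1) inter-bone angles
    return lengths, angles

seq = np.random.default_rng(0).random((32, 5, 3))
lengths, angles = geometric_features(seq)
```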

15.
Existing human action recognition methods require fixed-length video inputs and under-utilize spatio-temporal information. To address this, a deep neural network model combining a spatio-temporal pyramid with attention mechanisms is proposed: a 3D-CNN containing a spatio-temporal pyramid is combined with an LSTM model equipped with spatio-temporal attention, enabling multi-scale processing of video segments and full use of the complex spatio-temporal information of actions. RGB images and optical flow fields serve as the spatial and temporal inputs, the motion and appearance features fused after pyramid pooling serve as the input of the fusion stream, and a decision-fusion strategy yields the final recognition result. Experiments on UCF101 and HMDB51 achieve recognition accuracies of 94.2% and 70.5% respectively, showing that the improved model attains high accuracy on video-based human action recognition.
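A toy sketch of spatio-temporal pyramid pooling, assuming PyTorch features of shape (B, C, T, H, W): adaptive pooling over several grid resolutions yields a fixed-length vector regardless of clip length. The grid levels are illustrative.

```python
# Sketch: spatio-temporal pyramid pooling to a fixed-length vector.
import torch
import torch.nn.functional as F

def st_pyramid_pool(x, levels=((1, 1, 1), (2, 2, 2))):
    """x: (B, C, T, H, W) -> (B, C * sum(t*h*w over levels))."""
    outs = [F.adaptive_avg_pool3d(x, lv).flatten(1) for lv in levels]
    return torch.cat(outs, dim=1)

feat = torch.randn(2, 64, 12, 14, 14)
vec = st_pyramid_pool(feat)            # shape: (2, 64*1 + 64*8) = (2, 576)
```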

16.
Deep learning approaches to action recognition have emphasized learning spatio-temporal features. Unlike previous works, we separate unified spatio-temporal feature learning into spatial feature learning and spatial/temporal feature pooling procedures. Using a temporal-slowness-regularized independent subspace analysis network, we learn invariant spatial features from sampled video cubes. To be robust to cluttered backgrounds, we incorporate a denoising criterion into our network. The local spatio-temporal features are obtained by pooling over the spatial and temporal dimensions. The key points are that we learn spatial features from video cubes and pool features from spatial feature sequences. We evaluate the learned local spatio-temporal features on three benchmark action datasets. Extensive experiments demonstrate the effectiveness of the novel feature learning architecture.
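A toy illustration of the temporal-slowness regularizer: penalizing feature change across consecutive frames encourages spatial features that vary slowly in time. The ISA network itself is not reproduced here.

```python
# Sketch: temporal-slowness penalty over consecutive frame features.
import torch

def slowness_penalty(feats):
    """feats: (B, T, D) features of consecutive video frames."""
    return (feats[:, 1:] - feats[:, :-1]).pow(2).mean()

penalty = slowness_penalty(torch.randn(4, 10, 64))
```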

17.
Deep-learning video super-resolution methods mainly focus on the spatio-temporal relations within and between video frames, but previous methods suffer from inaccurate motion estimation and insufficient feature fusion when aligning and fusing frame features. To address these problems, a video super-resolution model based on an attention fusion network (AFN) is built using the back-projection principle combined with multiple attention mechanisms and fusion strategies. First, in the feature extraction stage, a back-projection structure obtains error feedback on motion information in order to handle the various motions between neighboring and reference frames. Then, temporal, spatial and channel attention fusion modules perform multi-dimensional feature mining and fusion. Finally, in the reconstruction stage, high-resolution video frames are reconstructed from the resulting high-dimensional features by convolution. By learning different weights for intra- and inter-frame features, the correlations between video frames are fully exploited, and an iterative network structure processes the extracted features progressively from coarse to fine. Experimental results on two public benchmark datasets show that AFN effectively handles videos containing various motions and occlusions, and outperforms mainstream methods substantially on quantitative metrics; for example, on the 4x reconstruction task, the peak signal-to-noise ratio (PSNR) of frames produced by AFN is 13.2% higher than that of the frame-recurrent video super-resolution network (FRVSR) on the Vid4 dataset, and 15.3% higher than that of the video super-resolution network with dynamic upsampling filters (VSR-DUF) on the SPMCS dataset.
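For reference, a small implementation of the PSNR metric quoted above, assuming floating-point frames in [0, 1]:

```python
# Sketch: peak signal-to-noise ratio between a reference and estimated frame.
import numpy as np

def psnr(ref, est, peak=1.0):
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```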

18.
谢飞, 龚声蓉, 刘纯平, 季怡. 《计算机科学》, 2015, 42(11): 293-298
Visual-word-based human action recognition improves accuracy by incorporating mid-level semantic information into the features. However, mutual interference between foreground and background during visual-word extraction weakens the expressiveness of the visual words. A visual-word generation method combining local and global features is proposed. The method first detects the foreground human region with a saliency map, applies the proposed dynamic threshold matrix to detect spatio-temporal interest points in the human region under different thresholds, and computes surrounding 3D-SIFT features to describe local information. On this basis, a histogram-of-optical-flow feature describes the global motion information of the action. Local and global features are then fused into visual words by spectral clustering. Experiments show that, compared with popular local-feature visual-word generation methods, the proposed method improves the recognition rate over the average recognition rate by 6.4% on the simple-background KTH dataset and by 6.5% on the complex-background UCF dataset.
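A sketch of the global-motion descriptor only: a histogram of optical-flow orientations, weighted by flow magnitude, over a frame pair, using OpenCV's Farneback flow. The 3D-SIFT local descriptor, dynamic threshold matrix, and spectral-clustering fusion are omitted.

```python
# Sketch: histogram of optical flow (HOF) as a global motion descriptor.
import cv2
import numpy as np

def hof(prev_gray, next_gray, bins=8):
    """prev_gray, next_gray: 8-bit single-channel frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])      # angle in radians
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(0)
f0 = (rng.random((64, 64)) * 255).astype(np.uint8)
f1 = np.roll(f0, 2, axis=1)                  # simulate horizontal motion
print(hof(f0, f1))
```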

19.
This work presents a theory and methodology for simultaneous detection of local spatial and temporal scales in video data. The underlying idea is that if we process video data by spatio-temporal receptive fields at multiple spatial and temporal scales, we would like to generate hypotheses about the spatial extent and the temporal duration of the underlying spatio-temporal image structures that gave rise to the feature responses. For two types of spatio-temporal scale-space representations, (i) a non-causal Gaussian spatio-temporal scale space for offline analysis of pre-recorded video sequences and (ii) a time-causal and time-recursive spatio-temporal scale space for online analysis of real-time video streams, we express sufficient conditions for spatio-temporal feature detectors in terms of spatio-temporal receptive fields to deliver scale-covariant and scale-invariant feature responses. We present an in-depth theoretical analysis of the scale selection properties of eight types of spatio-temporal interest point detectors in terms of either: (i)–(ii) the spatial Laplacian applied to the first- and second-order temporal derivatives, (iii)–(iv) the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives, (v) the determinant of the spatio-temporal Hessian matrix, (vi) the spatio-temporal Laplacian and (vii)–(viii) the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix. It is shown that seven of these spatio-temporal feature detectors allow for provable scale covariance and scale invariance. Then, we describe a time-causal and time-recursive algorithm for detecting sparse spatio-temporal interest points from video streams and show that it leads to intuitively reasonable results. An experimental quantification of the accuracy of the spatio-temporal scale estimates and the amount of temporal delay obtained from these spatio-temporal interest point detectors is given, showing that: (i) the spatial and temporal scale selection properties predicted by the continuous theory are well preserved in the discrete implementation and (ii) the spatial Laplacian or the determinant of the spatial Hessian applied to the first- and second-order temporal derivatives leads to much shorter temporal delays in a time-causal implementation compared to the determinant of the spatio-temporal Hessian or the first- and second-order temporal derivatives of the determinant of the spatial Hessian matrix.
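A rough sketch in the spirit of detectors (i)-(ii) above, for the non-causal case: the scale-normalized spatial Laplacian applied to a first-order temporal derivative of a Gaussian-smoothed video volume. The normalization factors assume gamma = 1 and are illustrative; consult the paper for the exact normalization and detector definitions.

```python
# Sketch: scale-normalized spatial Laplacian of a temporal derivative.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def st_laplacian_response(video, sigma_s, sigma_t):
    """video: (T, H, W). Returns an (illustratively) normalized |Lap_xy(dL/dt)|."""
    # temporal smoothing combined with a first-order temporal derivative
    Lt = gaussian_filter(video, sigma=(sigma_t, 0, 0), order=(1, 0, 0))
    # spatial Laplacian at scale sigma_s, frame by frame
    lap = np.stack([gaussian_laplace(f, sigma=sigma_s) for f in Lt])
    return (sigma_s ** 2) * sigma_t * np.abs(lap)   # gamma = 1 normalization (assumed)

vid = np.random.default_rng(0).random((16, 64, 64))
resp = st_laplacian_response(vid, sigma_s=2.0, sigma_t=1.5)
```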

20.
Objective: Because of deficiencies in optical-flow estimation, noise interference, and the limitations of existing motion attention models, computed motion attention fails to accurately reflect motion saliency, which restricts further applications of motion saliency maps. To improve the accuracy of motion attention computation, a method based on spatio-temporal multi-scale analysis is proposed. Method: A motion attention model is built according to the mechanism by which visual motion attention arises from spatio-temporal motion contrast; temporal-scale filtering removes noise. Given the scale dependence of visual observation, the video frames are decomposed over multiple scales and motion attention is computed at several spatial scales; the results from the low, mid-low and original scales are fused according to the correlation coefficients of macroblock pixel values to obtain the final motion attention saliency map. Result: Tests on multiple video sequences show that the method reflects the motion saliency of video scenes more faithfully and effectively than comparable methods, greatly improving the accuracy of the motion saliency map. Conclusion: For diverse and complex video motion scenes, the proposed spatio-temporal multi-scale method clearly improves the accuracy of motion attention computation, laying a good foundation for further applications of visual motion attention.
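A toy sketch of the spatio-temporal multi-scale idea: a motion map obtained by temporal differencing is computed at several spatial scales and fused. The paper's correlation-coefficient weighting is replaced here by uniform averaging, and the smoothing parameters are assumptions.

```python
# Sketch: multi-scale motion attention from temporal differences.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def motion_attention(frames, scales=(1, 2, 4)):
    """frames: (T, H, W) grayscale video; returns (T-1, H, W) saliency map."""
    diff = np.abs(np.diff(frames, axis=0))            # spatio-temporal contrast
    maps = []
    for s in scales:
        low = zoom(diff, (1, 1 / s, 1 / s), order=1)  # coarser spatial scale
        low = gaussian_filter(low, sigma=(1, 1, 1))   # light smoothing (temporal-scale denoise)
        up = zoom(low, (1, diff.shape[1] / low.shape[1],
                        diff.shape[2] / low.shape[2]), order=1)
        maps.append(up[:, :diff.shape[1], :diff.shape[2]])
    return np.mean(maps, axis=0)                      # uniform fusion (assumed)

vid = np.random.default_rng(0).random((10, 64, 64))
saliency = motion_attention(vid)
```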
