Similar Documents
20 similar documents found (search time: 31 ms)
1.
Ji Xiaofei, Zuo Xinmeng. Journal of Computer Applications, 2016, 36(8): 2287-2291
To address the high computational complexity and low recognition accuracy common in two-person interaction recognition algorithms, a new recognition method based on statistical features over a key-frame feature library is proposed. First, global GIST and region-partitioned Histogram of Oriented Gradients (HOG) features are extracted from the preprocessed interaction videos. Then, k-means clustering is applied to the feature representations of all frames of each action class's training videos, yielding a set of key-frame features that approximately describe videos of that class and forming the key-frame feature library for each training action class. Meanwhile, using a similarity measure, the frequency with which each key frame in the library appears in an interaction video is counted, producing a statistical-histogram feature representation of the video. Finally, a trained histogram-intersection-kernel support vector machine (SVM) is applied to the video to be recognized, and decision-level weighted fusion yields the interaction recognition result. Results on standard benchmark datasets show that the method is simple and effective, achieving a correct recognition rate of 85% for interactions.
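The final classification step above relies on a histogram-intersection kernel. A minimal sketch of that kernel with scikit-learn's `SVC` follows; the key-frame frequency histograms here are synthetic stand-ins (the class profiles `c0`/`c1` are hypothetical, not from the paper):

```python
import numpy as np
from sklearn.svm import SVC

def hist_intersection_kernel(X, Y):
    """Histogram intersection: K(x, y) = sum_i min(x_i, y_i)."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

# Toy key-frame frequency histograms (rows sum to 1) for two interaction classes.
rng = np.random.default_rng(0)
def make_hists(center, n):
    h = np.abs(center + 0.05 * rng.standard_normal((n, center.size)))
    return h / h.sum(axis=1, keepdims=True)

c0 = np.array([0.6, 0.2, 0.1, 0.1])   # hypothetical class-0 key-frame profile
c1 = np.array([0.1, 0.1, 0.2, 0.6])   # hypothetical class-1 key-frame profile
X = np.vstack([make_hists(c0, 20), make_hists(c1, 20)])
y = np.array([0] * 20 + [1] * 20)

# SVC accepts a callable kernel returning the Gram matrix K(X, Y).
clf = SVC(kernel=hist_intersection_kernel).fit(X, y)
train_acc = clf.score(X, y)
```

Because the histograms are L1-normalized, the kernel value is bounded by 1 and peaks when two videos share the same key-frame usage.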

2.
Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.
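The late-fusion step in such pipelines is commonly a weighted average of per-channel class-probability vectors. A minimal sketch, with hypothetical softmax outputs for a scene-context channel and a motion channel (the weights and probabilities are illustrative, not from the paper):

```python
import numpy as np

def late_fusion(prob_maps, weights=None):
    """Weighted average of per-channel class-probability vectors."""
    P = np.asarray(prob_maps, dtype=float)          # (n_channels, n_classes)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize fusion weights
    return (w[:, None] * P).sum(axis=0)

# Hypothetical per-channel outputs for one clip over three action classes.
scene_probs  = [0.70, 0.20, 0.10]
motion_probs = [0.40, 0.50, 0.10]
fused = late_fusion([scene_probs, motion_probs], weights=[0.6, 0.4])
pred = int(np.argmax(fused))
```

Early fusion, by contrast, would concatenate the raw feature vectors before a single classifier; the two are often combined as the abstract describes.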

3.
For six facial expressions associated with fatigued driving, a system is proposed that extracts expression features via geometric normalization combined with Gabor filtering and classifies the driver's facial expressions with a support vector machine. First, the video frames are preprocessed with geometric normalization; then 48 optimal filters are constructed from the two-dimensional Gabor kernel function to extract 48 facial-expression feature points; finally, a support vector machine performs facial expression classification. Experimental results show that the SVM with a radial basis function kernel performs best.
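A bank of 48 Gabor filters can be built by varying orientation and wavelength. The sketch below uses the standard 2-D Gabor formulation in plain NumPy; the specific split of 8 orientations × 6 wavelengths and all parameter values are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def gabor_kernel(size, theta, sigma, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel (standard formulation)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr =  x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam + psi)

# Hypothetical bank: 8 orientations x 6 wavelengths = 48 filters.
bank = [gabor_kernel(31, theta=np.pi * k / 8, sigma=4.0, lam=lam)
        for k in range(8) for lam in (4, 6, 8, 10, 12, 14)]

def gabor_features(patch, bank):
    """One response magnitude per filter for a patch around a feature point."""
    return np.array([np.abs((patch * k).sum()) for k in bank])
```

Applying `gabor_features` to a patch around each normalized landmark yields a 48-dimensional response vector per point, which would then feed the SVM.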

4.
Machine based human action recognition has become very popular in the last decade. Automatic unattended surveillance systems, interactive video games, machine learning and robotics are only a few of the areas that involve human action recognition. This paper examines the capability of a known transform, the so-called Trace, for human action recognition and proposes two new feature extraction methods based on the specific transform. The first method extracts Trace transforms from binarized silhouettes, representing different stages of a single action period. A final history template composed from the above transforms represents the whole sequence, containing much of the valuable spatio-temporal information contained in a human action. The second involves Trace for the construction of a set of invariant features that represent the action sequence and can cope with variations that usually appear in video capture. The specific method takes advantage of the natural specifications of the Trace transform to produce noise-robust features that are invariant to translation, rotation, and scaling, and are effective, simple and fast to create. Classification experiments performed on two well known and challenging action datasets (KTH and Weizmann) using Radial Basis Function (RBF) Kernel SVM provided very competitive results, indicating the potential of the proposed techniques.

5.
Objective: To improve the accuracy of action recognition in video, an action recognition algorithm based on action segmentation and manifold metric learning is proposed. Method: First, an action segmentation method based on analyzing the extension of the subject's limbs segments the actions in the video, making the object of recognition concrete. Then, normalized global temporal and spatial features, optical-flow features, and per-frame local curl and divergence features are extracted from each action clip, and a 7×7 covariance-matrix descriptor is constructed to fuse the extracted features. Finally, manifold metric learning is used to find a better distance metric in a supervised manner and improve recognition and classification. Results: Segmentation statistics on the public Weizmann dataset show that the proposed segmentation method works well as preprocessing before recognition. A comparison on Weizmann before and after manifold metric learning shows that metric learning improves recognition by 2.8%. Average recognition rates on the Weizmann and KTH datasets are 95.6% and 92.3%, respectively; comparison with existing methods shows better recognition performance. Conclusion: Repeated experiments show that the action segmentation works well as preprocessing, and the covariance-matrix descriptor fuses multiple features effectively; adding optical flow together with curl and divergence information gives a more detailed description of the movement direction of each body part, effectively improving the descriptor's expressive power, and combining it with manifold metric learning clearly improves recognition accuracy.
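A region covariance descriptor of the kind described stacks d per-pixel feature channels and takes their d×d covariance. A minimal sketch with seven synthetic channels standing in for the paper's features (the channel contents here are random placeholders):

```python
import numpy as np

def covariance_descriptor(feature_maps):
    """Covariance of d per-pixel feature channels -> d x d descriptor."""
    F = np.stack([f.ravel() for f in feature_maps])   # (d, n_pixels)
    return np.cov(F)                                  # (d, d), symmetric PSD

# Hypothetical 7 channels for a clip region, e.g. spatial/temporal coordinates,
# optical-flow magnitude and angle, curl, and divergence (placeholders here).
rng = np.random.default_rng(0)
channels = [rng.standard_normal((32, 32)) for _ in range(7)]
C = covariance_descriptor(channels)
```

Because covariance matrices live on a Riemannian manifold rather than in Euclidean space, comparing such descriptors motivates the manifold metric learning step in the abstract.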

6.
Fitness-action recognition is the core of intelligent fitness systems. To improve the accuracy and speed of fitness-action recognition algorithms and reduce the influence of whole-body displacement on the recognition result, a fitness-action recognition method based on human-skeleton feature encoding is proposed. The method has three steps: first, a simplified human skeleton model is constructed, and human pose estimation extracts the coordinates of each joint in the model; second, the human-center projection method is used to extract the action feature region ...

7.
Wu Feng, Wang Ying. Journal of Computer Applications, 2017, 37(8): 2240-2243
Because the information-gain-based visual dictionary construction in the bag-of-words (BoW) model does not consider the effect of word frequency on action recognition, a dictionary construction method based on improved information gain is proposed to raise recognition accuracy. First, spatio-temporal interest points are extracted from human action videos with 3D Harris and clustered with K-means to build an initial visual dictionary. Then, intra-class word-frequency concentration and inter-class word-frequency dispersion are introduced to improve the information gain; the improved information gain of each word in the initial dictionary is computed, and the words with large improved information gain are selected to build the new visual dictionary. Finally, human actions are recognized with a support vector machine (SVM) using the dictionary built from improved information gain. Experiments on the KTH and Weizmann human action datasets show that, compared with traditional information gain, the dictionaries built with improved information gain raise recognition accuracy by 1.67% and 3.45%, respectively. The results show that the proposed method selects visual words with strong discriminative power for action recognition and improves recognition accuracy.
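For reference, the classic information gain that the paper improves upon scores each visual word by how much its presence reduces class entropy. The sketch below implements that baseline on a tiny synthetic presence matrix; the paper's frequency-based refinements (intra-class concentration, inter-class dispersion) are not reproduced here:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(presence, labels):
    """IG of each visual word over class labels.
    presence: (n_videos, n_words) bool; labels: (n_videos,) ints."""
    classes = np.unique(labels)
    h_c = entropy(np.array([(labels == c).mean() for c in classes]))
    ig = np.empty(presence.shape[1])
    for j in range(presence.shape[1]):
        on = presence[:, j]
        h_cond = 0.0
        for mask in (on, ~on):            # condition on word present / absent
            if mask.any():
                cond = np.array([(labels[mask] == c).mean() for c in classes])
                h_cond += mask.mean() * entropy(cond)
        ig[j] = h_c - h_cond
    return ig

# Toy data: word 0 perfectly predicts the class, word 1 is uninformative.
presence = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=bool)
labels = np.array([0, 0, 1, 1])
ig = information_gain(presence, labels)
top = np.argsort(ig)[::-1][:1]   # keep the most informative word(s)
```

Selecting the top-scoring words by this criterion yields the pruned dictionary that the SVM then uses for encoding.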

8.
Detecting suspicious behavior from high definition (HD) videos is always a complex and time-consuming process. To solve that problem, a fast suspicious behavior recognition method is proposed based on motion vectors. In this paper, the data format and decoding features of HD videos are analyzed. Then, the characteristics of suspicious activities and the ways of obtaining motion vectors directly from the video stream are described. Besides, the motion vectors are normalized by taking the reference frames into account. The feature vectors that capture the inter-frame and intra-frame information of the region of interest are extracted. A Gaussian radial basis function is employed as the kernel function of the support vector machine (SVM), realizing the detection and classification of suspicious behavior in HD videos. Finally, an extensive set of experiments is performed and this method is compared with some of the most recent approaches in the field using publicly available datasets as well as a new annotated human action dataset including actions performed in complex scenarios.
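The classification stage reduces to an RBF-kernel SVM over motion-vector statistics. A minimal sketch with synthetic per-clip features (mean motion-vector magnitude plus an 8-bin angle histogram — a hypothetical feature layout, not the paper's exact one):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-clip features: mean |MV| followed by an 8-bin MV-angle histogram.
rng = np.random.default_rng(42)
def clips(mean_mag, n):
    hist = rng.dirichlet(np.ones(8), size=n)
    mag = np.abs(rng.normal(mean_mag, 0.2, size=(n, 1)))
    return np.hstack([mag, hist])

X = np.vstack([clips(0.5, 30), clips(3.0, 30)])   # normal vs. "suspicious" motion
y = np.array([0] * 30 + [1] * 30)

# Gaussian RBF kernel, with features standardized first.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")).fit(X, y)
acc = clf.score(X, y)
```

Working directly from compressed-domain motion vectors, as the abstract notes, avoids full decoding and keeps the pipeline fast on HD streams.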

9.

Deep learning models have attained great success for an extensive range of computer vision applications including image and video classification. However, the complex architecture of the most recently developed networks imposes certain memory and computational resource limitations, especially for human action recognition applications. Unsupervised deep convolutional neural networks such as PCANet can alleviate these limitations and hence significantly reduce the computational complexity of the whole recognition system. In this work, instead of using a 3D convolutional neural network architecture to learn temporal features of video actions, the unsupervised convolutional PCANet model is extended into PCANet-TOP, which effectively learns spatiotemporal features from Three Orthogonal Planes (TOP). For each video sequence, spatial frames (XY) and temporal planes (XT and YT) are utilized to train three different PCANet models. Then, the learned features are fused after reducing their dimensionality using whitening PCA to obtain a spatiotemporal feature representation of the action video. Finally, a Support Vector Machine (SVM) classifier is applied for the action classification process. The proposed method is evaluated on four well-known benchmark datasets, namely, the Weizmann, KTH, UCF Sports, and YouTube action datasets. The recognition results show that the proposed PCANet-TOP provides discriminative and complementary features using three orthogonal planes and is able to achieve promising results comparable with state-of-the-art methods.
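The fusion step described above — whitening-PCA reduction of each plane's features followed by concatenation — can be sketched as follows. The per-plane feature blocks are random placeholders and the component count `k` is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_videos = 40
# Hypothetical per-plane PCANet feature blocks (XY, XT, YT) for each video.
planes = [rng.standard_normal((n_videos, 64)) for _ in range(3)]

# Whiten each plane's features independently, then concatenate into one
# spatiotemporal representation per video.
k = 10
whitened = [PCA(n_components=k, whiten=True).fit_transform(p) for p in planes]
fused = np.hstack(whitened)       # (n_videos, 3 * k)
```

Whitening equalizes the variance of the retained components, so no single plane's features dominate the concatenated vector fed to the SVM.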


10.
Dense-trajectory action recognition samples every frame of the full image densely, leading to high feature dimensionality, heavy computation, and the inclusion of irrelevant background information. A human action recognition method based on saliency detection and dense trajectories is proposed. First, multi-scale static saliency detection on video frames locates the action subject, and the result is linearly fused with dynamic saliency detection to obtain the subject's action region; the original algorithm is improved by extracting dense trajectories only within that region. Then, the Fisher Vector replaces the bag-of-words model for feature encoding, making the feature representation more expressive. Finally, a support vector machine performs human action recognition. Simulation experiments on the KTH and UCF Sports datasets show that the improved algorithm achieves higher recognition accuracy than the original.

11.
To improve the accuracy of human hand-motion pattern recognition, a pattern recognition method based on a support vector machine (SVM) optimized by the artificial fish swarm algorithm is proposed. After denoising the collected surface electromyography (sEMG) signals, the maxima of the wavelet coefficients are extracted as feature samples, and the extracted features are fed to an SVM for motion pattern recognition; meanwhile, the artificial fish swarm algorithm optimizes the SVM's penalty parameter and kernel parameter (AFSVM), avoiding blind parameter selection and improving the model's recognition accuracy. Simulation results on four motions (inversion, eversion, fist clenching, and fist opening) show that the method achieves a higher recognition rate than the traditional SVM.
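The essence of the tuning step is searching the SVM's (C, gamma) space against cross-validated accuracy. As a simple stand-in for the fish-swarm optimizer (which is not reproduced here), the sketch below uses random search over log-scaled ranges; the dataset is a synthetic placeholder for sEMG wavelet features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Toy stand-in for sEMG wavelet-maxima features of four hand motions.
X, y = make_classification(n_samples=120, n_features=8, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Random search over (C, gamma) -- a simple stand-in for fish-swarm optimisation.
rng = np.random.default_rng(0)
best = (-np.inf, None)
for _ in range(20):
    C, gamma = 10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-4, 1)
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    best = max(best, (score, (C, gamma)))
best_score, (best_C, best_gamma) = best
```

The fish-swarm algorithm replaces the random proposals with swarm behaviors (preying, swarming, following) over the same objective, typically converging in fewer evaluations.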

12.
Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words
We present a novel unsupervised learning method for human action categories. A video sequence is represented as a collection of spatial-temporal words by extracting space-time interest points. The algorithm automatically learns the probability distributions of the spatial-temporal words and the intermediate topics corresponding to human action categories. This is achieved by using latent topic models such as the probabilistic Latent Semantic Analysis (pLSA) model and Latent Dirichlet Allocation (LDA). Our approach can handle noisy feature points arising from dynamic backgrounds and moving cameras due to the application of the probabilistic models. Given a novel video sequence, the algorithm can categorize and localize the human action(s) contained in the video. We test our algorithm on three challenging datasets: the KTH human motion dataset, the Weizmann human action dataset, and a recent dataset of figure skating actions. Our results reflect the promise of such a simple approach. In addition, our algorithm can recognize and localize multiple actions in long and complex video sequences containing multiple motions.
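The topic-model step can be sketched with scikit-learn's LDA over a video-by-word count matrix (each row counting the spatial-temporal words quantized from one clip). The corpus below is synthetic: two "action topics" with mostly disjoint preferred code words:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Toy corpus: 50 visual words, two latent action topics with disjoint supports.
topic_a = rng.dirichlet(np.r_[np.ones(25) * 5.0, np.ones(25) * 0.1])
topic_b = rng.dirichlet(np.r_[np.ones(25) * 0.1, np.ones(25) * 5.0])
docs = np.vstack([rng.multinomial(200, t)
                  for t in [topic_a] * 30 + [topic_b] * 30])

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(docs)
doc_topic = lda.transform(docs)   # per-video distribution over action topics
```

Each video's dominant topic then serves as its (unsupervised) action category, and per-word topic assignments support localization within a clip.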

13.
Objective: Describing human actions is a key problem in action recognition. To fully exploit the training data and ensure highly descriptive features, a human action recognition method based on direction-weighted local spatio-temporal features is proposed. Method: First, the intensity-gradient features of the local spatio-temporal features are decomposed into three directions (XYZ), each describing the action separately. Standard visual-vocabulary codebooks are constructed directly for the three direction-wise descriptor sets of each action, and each action's standard three-direction word distribution is obtained from the training videos. Then, using each action's three-direction codebooks, the corresponding three-direction word distributions of a test video are computed, and recognition is performed with a weighted similarity measure against each action's standard three-direction word distributions. Results: In experiments on the Weizmann and KTH datasets, the average recognition rates reach 96.04% and 96.93%, respectively. Conclusion: Compared with other action recognition methods, the average recognition rate is clearly improved.

14.
Video recordings of earthmoving construction operations provide understandable data that can be used for benchmarking and analyzing their performance. These recordings further help project managers take corrective actions on performance deviations and in turn improve operational efficiency. Despite these benefits, manual stopwatch studies of previously recorded videos can be labor-intensive, may suffer from biases of the observers, and are impractical after a substantial period of observation. This paper presents a new computer vision based algorithm for recognizing single actions of earthmoving construction equipment. This is a particularly challenging task as equipment can be partially occluded in site video streams and usually comes in a wide variety of sizes and appearances. The scale and pose of the equipment actions can also significantly vary based on the camera configurations. In the proposed method, a video is initially represented as a collection of spatio-temporal visual features by extracting space-time interest points and describing each feature with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of the spatio-temporal features and action categories using a multi-class Support Vector Machine (SVM) classifier. This strategy handles noisy feature points arising from typical dynamic backgrounds. Given a video sequence captured from a fixed camera, the multi-class SVM classifier recognizes and localizes equipment actions. For the purpose of evaluation, a new video dataset is introduced which contains 859 sequences of excavator and truck actions. This dataset contains large variations of equipment pose and scale, and has varied backgrounds and levels of occlusion. The experimental results, with average accuracies of 86.33% and 98.33%, show that our supervised method outperforms previous algorithms for excavator and truck action recognition.
The results hold promise for the applicability of the proposed method to construction activity analysis.

15.
This paper proposes a boosting EigenActions algorithm for human action recognition. A spatio-temporal Information Saliency Map (ISM) is calculated from a video sequence by estimating the pixel density function. A continuous human action is segmented into a set of primitive periodic motion cycles from the information saliency curve. Each cycle of motion is represented by a Salient Action Unit (SAU), which is used to determine the EigenAction using principal component analysis. A human action classifier is developed using the multi-class Adaboost algorithm with a Bayesian hypothesis as the weak classifier. Given a human action video sequence, the proposed method effectively locates the SAUs in the video and recognizes the human actions by categorizing the SAUs. Two publicly available human action databases, namely KTH and Weizmann, are selected for evaluation. The average recognition accuracies are 81.5% and 98.3% for the KTH and Weizmann databases, respectively. Comparative results with two recent methods and robustness test results are also reported.
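Multi-class AdaBoost over a Bayesian weak classifier can be sketched with scikit-learn, using Gaussian naive Bayes as a stand-in for the paper's Bayesian hypothesis and synthetic blobs as a stand-in for per-cycle EigenAction coefficients:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_blobs

# Toy stand-in for per-SAU EigenAction coefficients of three action classes.
X, y = make_blobs(n_samples=150, centers=3, cluster_std=2.0, random_state=7)

# Multi-class AdaBoost with a Bayesian (Gaussian naive Bayes) weak classifier.
clf = AdaBoostClassifier(GaussianNB(), n_estimators=10, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

Boosting reweights the cycles the weak Bayesian classifier misclassifies, which is what lets a simple per-SAU hypothesis accumulate into a strong action classifier.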

16.
17.
Objective: To further improve the accuracy and time efficiency of action recognition in intelligent surveillance scenes, a human action recognition algorithm LC-YOLO (LSTM and CNN based on YOLO) is proposed, combining YOLO (you only look once: unified, real-time object detection) with an LSTM (long short-term memory) and a CNN (convolutional neural network). Method: Exploiting the real-time capability of YOLO object detection, specific actions in the surveillance video are first detected immediately, and deep features are extracted after obtaining the target's size, position, and related information; then, noise from irrelevant regions of the image is removed; finally, an LSTM models the temporal sequence and makes the final decision on the action sequence in the surveillance video. Results: Experiments on the public action recognition datasets KTH and MSR show an average recognition rate of 96.6% across actions and an average recognition time of 215 ms; the method performs well for action recognition in intelligent surveillance. Conclusion: An action recognition algorithm is proposed; experimental results show that it effectively improves the real-time performance and accuracy of action recognition, and it adapts well to intelligent surveillance settings with strict latency requirements and complex scenes, with broad application prospects.

18.
19.
Realistic Human Action Recognition Based on Accumulative Edge Images
To recognize human actions in realistic environments, this paper studies the problem of extracting features from unconstrained videos to represent human actions. First, a morphological gradient operation is applied to the unconstrained video to remove part of the background and obtain the human silhouette. Next, the edge features of the silhouette in every frame of a video segment are extracted and accumulated into a single image, called the Accumulative Edge Image (AEI). Then, grid-based Histograms of Oriented Gradients (HOG) are computed on the accumulative edge image to form a feature vector representing the human action, which is fed to a classifier for classification. Experimental results on the YouTube dataset show that the proposed method is more effective than other methods.
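The AEI-plus-grid-HOG pipeline can be sketched end to end: per-frame morphological gradients are summed into one image, then orientation histograms are pooled over a grid. The frames below are random placeholders, and the grid/bin sizes are assumptions, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

def accumulative_edge_image(frames, size=3):
    """Sum of per-frame morphological gradients (dilation - erosion)."""
    aei = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        aei += ndimage.grey_dilation(f, size=size) - ndimage.grey_erosion(f, size=size)
    return aei

def grid_hog(img, grid=(4, 4), bins=9):
    """Grid-based histogram of gradient orientations, L2-normalized."""
    gy, gx = np.gradient(img)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi   # unsigned angles
    H, W = img.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            sl = (slice(i * H // grid[0], (i + 1) * H // grid[0]),
                  slice(j * W // grid[1], (j + 1) * W // grid[1]))
            h, _ = np.histogram(ang[sl], bins=bins, range=(0, np.pi),
                                weights=mag[sl])
            feats.append(h)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-8)

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(5)]
feat = grid_hog(accumulative_edge_image(frames))
```

The resulting fixed-length vector summarizes where and in which orientations edge motion accumulated over the clip, which is what the downstream classifier consumes.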

20.
Objective: Skeleton-based action recognition has become a research hotspot because of its stronger robustness under illumination changes, dynamic viewpoints, and complex backgrounds. When recognizing similar human actions from skeleton/joint data, the small differences in joint features between actions and the lack of other image semantic information easily cause recognition confusion. To address this, a saliency image feature enhancement based center-connected graph convolutional network (SIFE-CGCN) model is proposed. Method: First, a skeleton center-connected topology is designed that establishes connections from every joint to the skeleton center, to capture the subtle differences in joint motion among similar actions. Second, a Gaussian-mixture background modeling algorithm compares each frame with a continuously updated background model to segment the dynamic image region and eliminate background interference, yielding a saliency image; feature maps are extracted with a pretrained VGG-Net (Visual Geometry Group network) and matched and classified by action semantics. Finally, a fusion algorithm is designed that uses the classification result to reinforce and correct the recognition result of the center-connected graph convolutional network, improving the ability to recognize similar actions. In addition, a skeleton-based action-similarity measure is proposed and a similar-action dataset is built. Results: ...
