Similar Literature (20 results)
1.
Deep learning has achieved good results in human action recognition, but the appearance and motion information of the people in a video is still not fully exploited. To exploit both the spatial and temporal information in video for recognizing human actions, a spatio-temporal two-stream model for video action recognition is proposed. The model first uses two convolutional neural networks to separately extract the spatial and temporal features of video action clips, then fuses the two networks to extract mid-level spatio-temporal features, and finally feeds the extracted mid-level features into a 3D convolutional neural network to recognize the actions in the video. Action recognition experiments were carried out on the UCF101 and HMDB51 datasets. The experimental results show that the proposed spatio-temporal two-stream 3D convolutional neural network model effectively recognizes human actions in video.
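A minimal PyTorch sketch of the two-stream-then-3D-CNN idea described above, assuming toy layer sizes, a 64×64 input, and an 8-frame clip; none of the layer configurations below come from the paper.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One 2D CNN stream (spatial: RGB frames, or temporal: stacked flow)."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
    def forward(self, x):          # x: (B*T, C, H, W)
        return self.features(x)

class TwoStream3D(nn.Module):
    def __init__(self, num_classes=101):
        super().__init__()
        self.spatial = StreamCNN(3)    # RGB stream
        self.temporal = StreamCNN(2)   # optical-flow stream (dx, dy)
        self.conv3d = nn.Sequential(   # 3D CNN over fused mid-level features
            nn.Conv3d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, rgb, flow):
        # rgb: (B, T, 3, H, W), flow: (B, T, 2, H, W)
        B, T = rgb.shape[:2]
        s = self.spatial(rgb.flatten(0, 1))     # (B*T, 64, h, w)
        t = self.temporal(flow.flatten(0, 1))
        fused = torch.cat([s, t], dim=1)        # channel-wise fusion -> 128
        fused = fused.view(B, T, 128, *fused.shape[2:]).permute(0, 2, 1, 3, 4)
        out = self.conv3d(fused).flatten(1)     # (B, 64)
        return self.fc(out)

model = TwoStream3D()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 2, 64, 64))
print(logits.shape)  # torch.Size([2, 101])
```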

2.
We consider developing a taxonomic shape-driven algorithm to solve the problem of human action recognition and develop a new feature extraction technique using hull convexity defects. To test and validate this approach, we use silhouettes of subjects performing ten actions from a video database commonly used by action recognition researchers. A morphological algorithm is used to filter noise from the silhouette. A convex hull is then created around the silhouette frame, from which convexity defects are used as the features for analysis. A complete feature consists of thirty individual values which represent the five largest convex hull defect areas. A consecutive sequence of these features forms a complete action. Action frame sequences are preprocessed to separate the data into two sets based on perspective planes and bilateral symmetry. Features are then normalized to create a final set of action sequences. We then formulate and investigate three methods to classify the ten actions from the database. Testing and training on the nine test subjects are performed using a leave-one-out methodology. Classification utilizes both PCA and minimally encoded neural networks. Performance evaluation results show that the Hull Convexity Defect Algorithm provides comparable results with less computational complexity. This research can lead to a real-time application that can be extended to distinguish more complex actions and multi-person interactions.
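A hedged OpenCV sketch of the hull-convexity-defect feature: denoise a binary silhouette morphologically, then keep descriptors of the five largest convex-hull defects (5 × 6 = 30 values, similar in spirit to the abstract's thirty-value feature). The per-defect layout and the area proxy below are assumptions, not the authors' exact encoding.

```python
import cv2
import numpy as np

def hull_defect_feature(silhouette, k=5):
    """silhouette: uint8 binary mask (0/255). Returns k defect descriptors."""
    # Morphological opening to filter silhouette noise.
    kernel = np.ones((5, 5), np.uint8)
    clean = cv2.morphologyEx(silhouette, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros((k, 6), dtype=np.float32)
    contour = max(contours, key=cv2.contourArea)

    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)
    feats = np.zeros((k, 6), dtype=np.float32)
    if defects is None:
        return feats

    # Sort defects by depth (a proxy for defect size) and keep the k largest.
    order = np.argsort(-defects[:, 0, 3])[:k]
    for row, i in enumerate(order):
        s, e, f, depth = defects[i, 0]
        feats[row, :2] = contour[s, 0]    # defect start point (x, y)
        feats[row, 2:4] = contour[e, 0]   # defect end point (x, y)
        feats[row, 4] = depth / 256.0     # fixed-point depth -> pixels
        feats[row, 5] = cv2.contourArea(contour[s:e + 1]) if e > s else 0.0
    return feats

mask = np.zeros((200, 200), np.uint8)
cv2.rectangle(mask, (50, 50), (150, 150), 255, -1)
cv2.circle(mask, (100, 50), 30, 0, -1)   # carve a notch -> convexity defect
print(hull_defect_feature(mask).shape)    # (5, 6)
```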

3.
Objective: Human action recognition from depth map sequences is an important research area in machine vision and artificial intelligence. Existing work suffers from excessive redundant information in depth map sequences and from the loss of temporal information in the generated feature maps. To address the redundancy problem, a key-frame algorithm is proposed that improves the computational efficiency of action recognition; to address the loss of temporal information, a new feature representation for depth map sequences, the depth spatial-temporal energy map (DSTEM), is proposed, which emphasizes the temporal ordering of action features. Method: The key-frame algorithm removes redundant frames from a depth map sequence according to the redundancy coefficient of the difference-image sequence, yielding a key-frame sequence sufficient to describe the action. The DSTEM algorithm builds an energy field from human shape and motion characteristics to obtain body energy information, then projects this energy information onto three orthogonal axes to obtain the DSTEM. Results: Experiments on the MSR_Action3D dataset show that the key-frame algorithm reduces redundancy and improves the efficiency of the tested algorithms by 20%–30%. The histogram of oriented gradient (HOG) features extracted from DSTEM not only achieve 95.54% recognition accuracy on a database containing only forward-order actions, but also maintain 82.14% accuracy on a database containing both forward- and reverse-order actions. Conclusion: The key-frame algorithm reduces redundant information in depth map sequences and speeds up feature map extraction; DSTEM not only retains the spatial action information emphasized by the energy field but also fully records the temporal information of the action, maintaining high recognition accuracy on temporally ordered action data.
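A small NumPy sketch of the key-frame selection step, assuming a hypothetical redundancy coefficient defined as the fraction of near-unchanged pixels in the difference image; the paper's exact definition and threshold are not reproduced here.

```python
import numpy as np

def select_key_frames(depth_seq, tau=0.02):
    """depth_seq: (T, H, W) array. Keep frames that differ enough from
    the previously kept key frame."""
    keys = [0]
    for t in range(1, len(depth_seq)):
        diff = np.abs(depth_seq[t].astype(np.float32)
                      - depth_seq[keys[-1]].astype(np.float32))
        # Stand-in redundancy coefficient: fraction of barely-changed pixels.
        redundancy = np.mean(diff < 1.0)
        if 1.0 - redundancy > tau:   # enough motion -> keep as key frame
            keys.append(t)
    return keys

seq = np.random.randint(0, 255, (30, 64, 64), dtype=np.uint8)
print(select_key_frames(seq))
```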

4.
Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words (cited 16 times: 0 self-citations, 16 by others)
We present a novel unsupervised learning method for human action categories. A video sequence is represented as a collection of spatial-temporal words by extracting space-time interest points. The algorithm automatically learns the probability distributions of the spatial-temporal words and the intermediate topics corresponding to human action categories. This is achieved by using latent topic models such as the probabilistic Latent Semantic Analysis (pLSA) model and Latent Dirichlet Allocation (LDA). Owing to the probabilistic models, our approach can handle noisy feature points arising from dynamic backgrounds and moving cameras. Given a novel video sequence, the algorithm can categorize and localize the human action(s) contained in the video. We test our algorithm on three challenging datasets: the KTH human motion dataset, the Weizmann human action dataset, and a recent dataset of figure skating actions. Our results reflect the promise of such a simple approach. In addition, our algorithm can recognize and localize multiple actions in long and complex video sequences containing multiple motions.
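A hedged scikit-learn sketch of the bag-of-spatial-temporal-words pipeline: quantize space-time interest-point descriptors into a codebook, then fit LDA so latent topics play the role of action categories. Descriptor extraction itself is stubbed with random data; codebook and topic sizes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(2000, 64))     # stand-in ST descriptors
videos = rng.integers(0, 50, size=2000)       # which video each came from

# Vector quantization: each descriptor becomes a "spatial-temporal word".
codebook = KMeans(n_clusters=100, n_init=10, random_state=0).fit(descriptors)
words = codebook.labels_

# Build a video x word count matrix (the "document-term" matrix).
counts = np.zeros((50, 100), dtype=int)
for v, w in zip(videos, words):
    counts[v, w] += 1

lda = LatentDirichletAllocation(n_components=6, random_state=0).fit(counts)
topic_mix = lda.transform(counts)             # per-video topic distribution
pred_action = topic_mix.argmax(axis=1)        # topic = putative action class
print(pred_action[:10])
```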

5.
6.
7.
This paper proposes a boosting EigenActions algorithm for human action recognition. A spatio-temporal Information Saliency Map (ISM) is calculated from a video sequence by estimating the pixel density function. A continuous human action is segmented into a set of primitive periodic motion cycles from the information saliency curve. Each cycle of motion is represented by a Salient Action Unit (SAU), which is used to determine the EigenAction using principal component analysis. A human action classifier is developed using a multi-class AdaBoost algorithm with a Bayesian hypothesis as the weak classifier. Given a human action video sequence, the proposed method effectively locates the SAUs in the video and recognizes the human actions by categorizing the SAUs. Two publicly available human action databases, namely KTH and Weizmann, are selected for evaluation. The average recognition accuracies are 81.5% and 98.3% for the KTH and Weizmann databases, respectively. Comparative results with two recent methods and robustness test results are also reported.
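A minimal sketch of the "EigenAction" step: PCA over vectorized salient action units (SAUs), followed by a multi-class AdaBoost classifier. The ISM segmentation that produces SAUs is stubbed with random data, and all shapes are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
saus = rng.normal(size=(120, 32 * 32))   # 120 SAUs, flattened 32x32 frames
labels = rng.integers(0, 6, size=120)    # 6 hypothetical action classes

# PCA projects SAUs onto "EigenActions"; AdaBoost does the classification.
clf = make_pipeline(PCA(n_components=20), AdaBoostClassifier(n_estimators=50))
clf.fit(saus, labels)
print(clf.predict(saus[:5]))
```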

8.
This work describes a computational approach for a typical machine-vision application, that of human action recognition from video streams. We present a method that has the following advantages: (a) no human intervention in pre-processing stages, (b) a reduced feature set, (c) modularity of the recognition system, and (d) control of the model's complexity at levels acceptable for real-time operation. The representation of each video frame and the feature extraction procedure are formulated in the lattice-theory context. The recognition system consists of two components: an ensemble of neural network predictors which correspond to the training video sequences, and one classifier, based on the PREMONN approach, capable of deciding at each time instant which known video source has potentially generated a new sequence of frames. An extensive experimental study on three well-known benchmarks validates the flexibility and robustness of the proposed approach.

9.
Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing the facial muscle actions that produce expressions. Virtually all existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle the temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into the facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in combination in the input face-profile video. A recognition rate of 87% is achieved.

10.
In this paper, we present a method for human action recognition from multi-view image sequences that uses combined motion and shape flow information with variability consideration. A combined local–global (CLG) optic flow is used to extract the motion flow feature, and invariant moments with flow deviations are used to extract the global shape flow feature from the image sequences. In our approach, human action is represented as a set of multidimensional CLG optic flow and shape flow feature vectors in the spatial–temporal action boundary. Actions are modeled by a set of multidimensional HMMs for multiple views using the combined features, which enforces robust view-invariant operation. We successfully recognize various daily human actions in both indoor and outdoor environments using a maximum-likelihood estimation approach. The results suggest the robustness of the proposed method with respect to multi-view action recognition, scale and phase variations, and invariant analysis of silhouettes.
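A sketch of the per-action HMM modeling using the hmmlearn package: one GaussianHMM per action class trained on feature sequences, with classification by maximum likelihood. The CLG flow and shape-flow feature extraction is stubbed with random sequences; the state count and feature dimension are assumptions.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)

def make_sequences(n_seq, mean):
    """Stand-in combined flow+shape feature sequences for one action."""
    seqs = [rng.normal(mean, 1.0, size=(30, 8)) for _ in range(n_seq)]
    return np.concatenate(seqs), [30] * n_seq

models = {}
for action, mean in [("walk", 0.0), ("wave", 3.0)]:
    X, lengths = make_sequences(10, mean)
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                        random_state=0).fit(X, lengths)
    models[action] = m

test = rng.normal(3.0, 1.0, size=(30, 8))     # looks like "wave"
scores = {a: m.score(test) for a, m in models.items()}
print(max(scores, key=scores.get))            # maximum-likelihood decision
```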

11.
Human action recognition, defined as the understanding of basic human actions from video streams, has a long history in the area of computer vision and pattern recognition because it can be used in various applications. We propose a novel human action recognition methodology that extracts human skeletal features and separates them into several body parts such as face, torso, and limbs to efficiently visualize and analyze the motion of the body parts. Our proposed human action recognition system consists of two steps: (i) automatic skeletal feature extraction and splitting by measuring the similarity between neighboring pixels in the space of diffusion tensor fields, and (ii) human action recognition using a multiple-kernel Support Vector Machine. Experimental results on a set of test databases show that our proposed method is very efficient and effective at recognizing actions using only a few parameters.
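A sketch of the multiple-kernel SVM idea: one kernel per body part (face, torso, limbs), combined as a weighted sum and fed to an SVM with a precomputed Gram matrix. The part features, dimensions, and weights are all assumptions for illustration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(3)
parts = {"face": 10, "torso": 20, "limbs": 40}     # feature dims per part
X = {p: rng.normal(size=(60, d)) for p, d in parts.items()}
y = rng.integers(0, 4, size=60)
weights = {"face": 0.2, "torso": 0.3, "limbs": 0.5}

# Weighted sum of per-part RBF kernels = one combined kernel.
K = sum(w * rbf_kernel(X[p], X[p]) for p, w in weights.items())
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K[:5]))    # rows of the train Gram matrix as "test" kernels
```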

12.
To address the high time complexity and low recognition accuracy of action recognition algorithms on conventional RGB video, an action recognition method based on depth images is proposed. The method first projects the depth images onto three orthogonal projection planes, then extracts Gabor features from each of the three projections, and finally uses these features to train an extreme learning machine classifier to complete action classification. Experiments on the public MSR Action3D dataset give average accuracies of 97.80%, 99.10%, and 88.35% in three experimental settings, and recognizing a single depth video takes less than 1 s. The experimental results show that the method can effectively recognize human actions in depth image sequences and largely meets the real-time requirements of depth sequence recognition.
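A NumPy sketch of the three-plane projection step described above: a depth frame (front view) is mapped to top and side views by binning depth values. The Gabor filtering and extreme-learning-machine training are omitted, and the bin count is an assumption.

```python
import numpy as np

def three_view_projections(depth, d_bins=64):
    """depth: (H, W) array of depth values in [0, 1)."""
    H, W = depth.shape
    d = np.clip((depth * d_bins).astype(int), 0, d_bins - 1)
    front = depth                                  # x-y plane (as given)
    top = np.zeros((d_bins, W))                    # x-z plane
    side = np.zeros((H, d_bins))                   # y-z plane
    for y in range(H):
        for x in range(W):
            top[d[y, x], x] += 1                   # project along y
            side[y, d[y, x]] += 1                  # project along x
    return front, top, side

depth = np.random.rand(48, 48)
f, t, s = three_view_projections(depth)
print(f.shape, t.shape, s.shape)   # (48, 48) (64, 48) (48, 64)
```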

13.
Objective: Human action recognition has extremely broad application prospects in video surveillance, ambient assisted living, human-computer interaction, intelligent driving, and other fields. Object occlusion, background shadows, illumination changes, viewpoint changes, multi-scale variation, and changes in clothing and appearance make video processing and analysis very difficult. To this end, forward and reversed time series are used to construct a tensor-based linear dynamical model, whose estimated parameters serve as action sequence descriptors, yielding a more complete observation matrix. Method: Human joints are first extracted from depth images to build forward and reversed skeleton sequences in tensor form. A tensor-based linear dynamical system and Tucker decomposition are then used to learn the parameter tuple (A_F, A_I, C), where C captures the spatial information of the skeleton and A_F and A_I describe the dynamics of the forward and reversed time series, respectively. An observation matrix is constructed from this parameter tuple, so that an action can be represented as a subspace of the observation matrix, corresponding to a point on a Grassmann manifold. Finally, action recognition is completed by dictionary learning and sparse coding on the Grassmann manifold. Results: On the MSR-Action 3D dataset, the algorithm outperforms the Eigenjoints algorithm by 13.55%, the local tangent bundle SVM (LTBSVM) algorithm by 2.79%, and the tensor-based linear dynamical system (tLDS) algorithm by 1%. On the UT-Kinect dataset, its recognition rate is 5.8% higher than LTBSVM and 1.3% higher than tLDS. Conclusion: Extensive experimental evaluation verifies that the tLDS model built from forward and reversed time series addresses the above problems well and improves the human action recognition rate.
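A sketch of the Grassmann-manifold comparison implied above: represent each action by an orthonormal basis of its observation subspace and compare subspaces through principal angles. The tensor LDS / Tucker parameter estimation is stubbed with random observation matrices; the subspace rank is an assumption.

```python
import numpy as np

def subspace_basis(obs, k=5):
    """obs: (d, n) observation matrix -> orthonormal basis of rank k."""
    q, _ = np.linalg.qr(obs)
    return q[:, :k]

def grassmann_distance(A, B):
    """Geodesic distance from principal angles between subspaces A and B."""
    sigma = np.clip(np.linalg.svd(A.T @ B, compute_uv=False), -1.0, 1.0)
    theta = np.arccos(sigma)        # principal angles
    return np.linalg.norm(theta)

rng = np.random.default_rng(6)
A = subspace_basis(rng.normal(size=(40, 12)))
B = subspace_basis(rng.normal(size=(40, 12)))
print(grassmann_distance(A, B))
```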

14.
Recognition of human actions is a very important task in many applications such as human-computer interaction, content-based video retrieval and indexing, intelligent video surveillance, gesture recognition, robot learning and control, etc. An efficient action recognition system using the Difference Intensity Distance Group Pattern (DIDGP) method and Support Vector Machine (SVM) classification is presented. Initially, a Region of Interest (ROI) representing the motion information is extracted from the difference frame. The extracted ROI is divided into two blocks, B1 and B2. The proposed DIDGP feature is applied to the maximum-intensity block of the ROI to discriminate each action in the video sequences. The feature vectors obtained from DIDGP are classified using SVMs with polynomial and RBF kernels. The proposed method has been evaluated on the KTH action dataset, which consists of actions such as walking, running, jogging, hand waving, clapping, and boxing, achieving an overall accuracy of 94.67% with the RBF kernel.
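A sketch of the pre-processing described above: take a difference frame, bound its motion region (ROI), split the ROI into two blocks, and keep the block with the higher total intensity. The DIDGP pattern itself is not reproduced here, and the threshold is an assumption.

```python
import numpy as np

def max_intensity_block(prev_frame, frame, thresh=25):
    """Return the higher-intensity half (B1 or B2) of the motion ROI."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None                                # no motion detected
    roi = diff[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    mid = roi.shape[0] // 2
    b1, b2 = roi[:mid], roi[mid:]                  # two horizontal halves
    return b1 if b1.sum() >= b2.sum() else b2

a = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
b = a.copy(); b[30:60, 40:80] = 255                # synthetic motion region
print(max_intensity_block(a, b).shape)
```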

15.
A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
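A sketch of the prototype idea: cluster joint shape-motion descriptors with k-means, precompute a prototype-to-prototype distance table, and compare two videos as label sequences via table lookups. The hierarchical clustering, tracking model, and dynamic sequence matching are omitted (a naive frame alignment stands in), and all sizes are assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
train_desc = rng.normal(size=(500, 16))        # stand-in shape-motion descriptors

km = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_desc)
lookup = cdist(km.cluster_centers_, km.cluster_centers_)  # 32 x 32 distance table

def sequence_distance(seq_a, seq_b):
    """seq_a/seq_b: per-frame descriptors, compared via prototype labels."""
    la, lb = km.predict(seq_a), km.predict(seq_b)
    n = min(len(la), len(lb))                  # naive alignment for brevity
    return lookup[la[:n], lb[:n]].mean()       # table lookup, no re-computation

print(sequence_distance(rng.normal(size=(20, 16)),
                        rng.normal(size=(25, 16))))
```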

16.
This paper presents a human action recognition framework based on the theory of nonlinear dynamical systems. The ultimate aim of our method is to recognize actions from multi-view video. We estimate and represent human motion by means of a virtual skeleton model providing the basis for a view-invariant representation of human actions. Actions are modeled as a set of weighted dynamical systems associated with different model variables. We apply time-delay embeddings to the time series resulting from the evolution of model variables over time to reconstruct phase portraits of appropriate dimensions. These phase portraits characterize the underlying dynamical systems. We propose a distance to compare trajectories within the reconstructed phase portraits. These distances are used to train SVM models for action recognition. Additionally, we propose an efficient method to learn a set of weights reflecting the discriminative power of a given model variable in a given action class. Our approach behaves well on noisy data, even in cases where action sequences last just a few frames. Experiments with marker-based and markerless motion capture data show the effectiveness of the proposed method. To the best of our knowledge, this contribution is the first to apply time-delay embeddings to data obtained from multi-view video.
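A minimal sketch of time-delay embedding as used above: a scalar model variable x(t) is mapped to vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)] that reconstruct a phase portrait. The embedding dimension m and delay tau below are illustrative assumptions; the paper selects appropriate dimensions per variable.

```python
import numpy as np

def delay_embed(x, m=3, tau=5):
    """x: 1-D series. Returns (len(x) - (m-1)*tau, m) phase-space points."""
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(m)], axis=1)

t = np.linspace(0, 8 * np.pi, 400)
portrait = delay_embed(np.sin(t))   # a sine embeds into an ellipse-like loop
print(portrait.shape)                # (390, 3)
```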

17.
Most actions involve the movement of only some of the joints, and existing methods do not distinguish joints that move strongly from joints that barely participate, which reduces action recognition accuracy to some extent. To address this problem, an adaptive joint-weight computation method is proposed, and the obtained joint weights are combined with dynamic time warping (DTW) for action recognition. First, the action sequence to be classified is segmented; within each segment, the joints with stronger motion are assigned higher weights while the remaining joints share the weight equally. Feature vectors are then extracted and the DTW distance between two action sequences is computed. Finally, a K-nearest-neighbor classifier performs action recognition. Experimental results show that the algorithm achieves high overall classification accuracy and also obtains good recognition results on rather similar actions.
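A hedged sketch of weighted-joint DTW: per-joint weights emphasize the joints that move most, and the classic DTW recurrence aligns two weighted joint sequences. The single global weighting rule below is a simple stand-in for the paper's segment-wise adaptive scheme.

```python
import numpy as np

def joint_weights(seq):
    """seq: (T, J, 3) joint positions. Weight joints by motion magnitude."""
    motion = np.linalg.norm(np.diff(seq, axis=0), axis=2).sum(axis=0)  # (J,)
    return motion / motion.sum()

def weighted_dtw(a, b):
    """a: (Ta, J, 3), b: (Tb, J, 3). Classic O(Ta*Tb) DTW recurrence."""
    w = joint_weights(a)
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.sum(w * np.linalg.norm(a[i - 1] - b[j - 1], axis=1))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

rng = np.random.default_rng(5)
print(weighted_dtw(rng.normal(size=(20, 15, 3)), rng.normal(size=(25, 15, 3))))
```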

18.
An improved human action recognition algorithm based on hybrid features (cited 1 time: 0 self-citations, 1 by others)
The choice of motion features directly affects the recognition performance of human action recognition methods. A single feature is affected differently by human appearance, environment, camera setup, and other factors, so its applicable range and recognition performance are limited. Building on the study of human action representation and recognition, and fully weighing the strengths and weaknesses of different features, a hybrid feature combining global silhouette features with local optical-flow features is proposed and applied to human action recognition. Experimental results show that the algorithm achieves good recognition results, reaching a 100% correct recognition rate on the actions in the Weizmann database.
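A sketch of such a hybrid feature: a global silhouette descriptor (here, coarse grid occupancy) concatenated with local optical-flow statistics (a magnitude-weighted Farneback flow direction histogram via OpenCV). Grid and bin sizes are assumptions, not the paper's configuration.

```python
import cv2
import numpy as np

def hybrid_feature(prev_gray, gray, silhouette, grid=8, bins=8):
    # Global part: fraction of silhouette pixels in each grid cell.
    H, W = silhouette.shape
    cells = silhouette.reshape(grid, H // grid, grid, W // grid)
    global_part = (cells > 0).mean(axis=(1, 3)).ravel()

    # Local part: histogram of flow directions weighted by magnitude.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    local_part, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                                 weights=mag)
    local_part = local_part / (local_part.sum() + 1e-8)
    return np.concatenate([global_part, local_part])

a = np.random.randint(0, 255, (64, 64), dtype=np.uint8)
b = np.roll(a, 2, axis=1)                       # simulate horizontal motion
sil = (b > 128).astype(np.uint8) * 255
print(hybrid_feature(a, b, sil).shape)          # (72,)
```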

19.
To make effective use of the three-dimensional information of human motion in action recognition, a new feature extraction and recognition method based on depth video sequences is proposed. The method first uses a motion energy model (MEM) to characterize human dynamics: the whole depth video sequence is projected onto three orthogonal Cartesian planes, the video of each projection plane is divided into sub-sequences of equal energy, and the depth motion map energy of each sub-sequence is computed to obtain the MEM. A local binary pattern (LBP) descriptor is then used to encode the MEM, further extracting the salient information of human motion. Finally, a norm-regularized collaborative representation classifier performs action classification. The method was tested on the MSRAction3D and MSRGesture3D databases, and the experimental results show that it achieves high recognition performance.
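A sketch of the encoding step: accumulate frame-to-frame motion energy of a depth sequence into a map, then encode it with a uniform LBP histogram via scikit-image. The equal-energy temporal split, the three-plane projection, and the collaborative-representation classifier are omitted; P and R are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def motion_energy_map(depth_seq):
    """depth_seq: (T, H, W). Accumulate absolute frame differences."""
    diffs = np.abs(np.diff(depth_seq.astype(np.float32), axis=0))
    return diffs.sum(axis=0)

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP gives P + 2 distinct codes -> (P + 2)-bin histogram."""
    img_u8 = (255 * image / (image.max() + 1e-8)).astype(np.uint8)
    lbp = local_binary_pattern(img_u8, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

seq = np.random.rand(20, 64, 64)
mem = motion_energy_map(seq)
print(lbp_histogram(mem))          # 10-bin descriptor of the energy map
```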

20.
Objective: Skeleton-based action recognition has become a research hotspot because of its strong robustness to illumination changes, dynamic viewpoints, and complex backgrounds. When recognizing similar human actions from skeleton/joint data, the small differences in joint features between actions and the lack of other image semantic information easily cause recognition confusion. To address this problem, a saliency image feature enhancement based center-connected graph convolutional network (SIFE-CGCN) model is proposed. Method: First, a skeleton center-connected topology is designed that links every joint to the skeleton center, so as to capture the subtle differences in joint motion between similar actions. Second, a Gaussian mixture background modeling algorithm compares each frame against a continuously updated background model to segment the dynamic image region, removing background interference to obtain a saliency image; feature maps are extracted with a pre-trained VGG-Net (Visual Geometry Group network) and matched for action semantic classification. Finally, a fusion algorithm uses the classification results to reinforce and correct the recognition results of the center-connected graph convolutional network, improving the ability to recognize similar actions. In addition, a skeleton-based action similarity measure is proposed and a similar-action dataset is built. Results: ...
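A sketch of the center-connected topology idea: start from a standard skeleton adjacency, add an edge from every joint to a virtual skeleton-center node, then apply the usual symmetric normalization used in GCNs. The 5-joint skeleton below is a toy assumption, not the paper's graph.

```python
import numpy as np

bones = [(0, 1), (1, 2), (1, 3), (1, 4)]        # toy skeleton edges
J = 5
A = np.zeros((J + 1, J + 1))                     # +1 node: skeleton center
for i, j in bones:
    A[i, j] = A[j, i] = 1.0
center = J
for i in range(J):                               # center-connected edges
    A[i, center] = A[center, i] = 1.0
A += np.eye(J + 1)                               # self-loops

D = np.diag(1.0 / np.sqrt(A.sum(axis=1)))        # symmetric normalization
A_hat = D @ A @ D                                # ready for X' = A_hat X W
print(A_hat.shape)
```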

