Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this paper, we fully investigate the concept of fundamental ratios, demonstrate their application and significance in view-invariant action recognition, and explore the importance of different body parts in action recognition. A moving plane observed by a fixed camera induces a fundamental matrix F between two frames, where the ratios among the elements in the upper left 2 × 2 submatrix are herein referred to as the fundamental ratios. We show that fundamental ratios are invariant to camera internal parameters and orientation, and hence can be used to identify similar motions of line segments from varying viewpoints. By representing the human body as a set of points, we decompose a body posture into a set of line segments. The similarity between two actions is therefore measured by the motion of their line segments and hence by the associated fundamental ratios. We further investigate to what extent each body part contributes to the recognition of different actions and propose a generic method for assigning weights to different body points. Experiments are performed on three categories of data: the controlled CMU MoCap dataset, the partially controlled IXMAS dataset, and the more challenging uncontrolled UCF-CIL dataset collected from the internet. Extensive experiments are reported on testing (i) view invariance, (ii) robustness to noisy localization of body points, (iii) the effect of assigning different weights to different body points, (iv) the effect of partial occlusion on recognition accuracy, and (v) how soon our method recognizes an action correctly from the starting point of the query video.
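
As a rough illustration of the geometry involved, the sketch below fits F from tracked body-point correspondences with OpenCV and extracts a scale-free vector from its upper-left 2 × 2 submatrix. The use of `cv2.findFundamentalMat` with the 8-point method, the normalization, and the sign-ambiguous distance are our assumptions, not the authors' derivation from a moving plane.

```python
import cv2
import numpy as np

def fundamental_ratios(pts_a, pts_b):
    """Estimate F between two frames from corresponding body points and
    return a scale-free vector built from the upper-left 2x2 submatrix.
    pts_a, pts_b: (N, 2) float arrays, N >= 8 (sketch only)."""
    F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_8POINT)
    sub = F[:2, :2].flatten()         # upper-left 2x2 submatrix of F
    return sub / np.linalg.norm(sub)  # keep only the ratios (drop scale)

def ratio_distance(r1, r2):
    """Compare two ratio vectors; F is defined up to sign, so test both."""
    return min(np.linalg.norm(r1 - r2), np.linalg.norm(r1 + r2))
```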

2.
3.
Learning a compact yet discriminative codebook is an important step in local feature-based action recognition. A common procedure involves two independent phases: reducing the dimensionality of local features and then performing clustering. Since the two phases are disconnected, dimensionality reduction does not necessarily capture the dimensions most helpful for codebook creation. Moreover, some dimensionality reduction techniques, such as principal component analysis, do not take class separability into account and thus may not help build an effective codebook. In this paper, we propose weighted adaptive metric learning (WAML), which integrates the two independent phases into a unified optimization framework. This framework enables the selection of the dimensions that are indispensable for building a discriminative codebook. The dimensionality reduction phase in WAML is optimized for class separability and adaptively adjusts the distance metric to improve the separability of the data. In addition, video word weighting is smoothly incorporated into WAML to generate video words accurately. Experimental results demonstrate that our approach builds a highly discriminative codebook and achieves results comparable to other state-of-the-art approaches.
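
For orientation, here is a minimal sketch of the disconnected two-phase pipeline that WAML is designed to replace, using LDA (which, unlike PCA, does consider class separability) followed by k-means. Function and parameter names are illustrative; this is the baseline, not the WAML joint optimization.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def build_codebook(local_feats, labels, n_words=200):
    """Two-phase baseline: class-aware dimensionality reduction (LDA),
    then an independent k-means codebook. WAML unifies these phases."""
    lda = LinearDiscriminantAnalysis()
    reduced = lda.fit_transform(local_feats, labels)
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(reduced)
    return lda, codebook  # quantize new features via codebook.predict(...)
```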

4.
5.
A key assumption of traditional machine learning approaches is that the test data are drawn from the same distribution as the training data. However, this assumption does not hold in many real-world scenarios. For example, in facial expression recognition, the appearance of an expression may vary significantly across people. As a result, previous work has shown that learning from adequate person-specific data can improve expression recognition performance over learning from generic data. However, person-specific data are typically very sparse in real-world applications due to the difficulties of data collection and labeling, and learning from sparse data may suffer from serious over-fitting. In this paper, we propose to learn a person-specific model through transfer learning. By transferring informative knowledge from other people, it allows us to learn an accurate model for a new subject with only a small amount of person-specific data. We conduct extensive experiments to compare different person-specific models for facial expression and action unit (AU) recognition, and show that transfer learning significantly improves recognition performance with a small amount of training data.
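
A minimal transfer-learning sketch of this idea, assuming a warm-started linear classifier rather than the paper's exact formulation: fit on generic data, then adapt with a few passes over the sparse person-specific samples.

```python
from sklearn.linear_model import SGDClassifier

def person_specific_model(X_generic, y_generic, X_person, y_person, classes):
    """Fit a generic expression classifier, then continue training on the
    few person-specific samples so the decision boundary adapts to the
    new subject without training from scratch on sparse data."""
    clf = SGDClassifier(loss="log_loss", alpha=1e-4)
    clf.partial_fit(X_generic, y_generic, classes=classes)
    for _ in range(5):                    # a few gentle adaptation passes
        clf.partial_fit(X_person, y_person)
    return clf
```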

6.
Ongoing human action recognition is a challenging problem with many applications, such as video surveillance, patient monitoring, and human–computer interaction. This paper presents a novel framework for recognizing streamed actions using Motion Capture (MoCap) data. Unlike the after-the-fact classification of completed activities, this work aims at early recognition of ongoing activities. The proposed method is time efficient, as it is based on histograms of action poses, extracted from MoCap data, that are computed using the Hausdorff distance. The histograms are then compared with the Bhattacharyya distance and warped by a dynamic time warping process to achieve their optimal alignment. This process, implemented by our dynamic programming-based solution, has the advantage of allowing some stretching flexibility to accommodate possible changes in action length. We show the success and effectiveness of our solution by testing it on large datasets and comparing it with several state-of-the-art methods. In particular, we achieve excellent recognition rates that outperform many well-known methods.
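
The two distances named in the abstract compose naturally; below is a minimal sketch (our own, not the paper's code) of the Bhattacharyya frame cost plugged into a dynamic-programming DTW alignment over pose-histogram sequences.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya distance between two normalized pose histograms."""
    bc = np.sum(np.sqrt(h1 * h2))
    return -np.log(max(bc, 1e-12))

def dtw(seq_a, seq_b):
    """DTW over sequences of pose histograms; the min over the three
    predecessors is what grants the stretching flexibility needed to
    accommodate changes in action length."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = bhattacharyya(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```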

7.
A survey on vision-based human action recognition
Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human–computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research.

8.
This paper presents a method for unsupervised learning and recognition of human actions in video. Lacking any supervision, there is nothing except the inherent biases of a given representation to guide the grouping of video clips along semantically meaningful partitions. Thus, in the first part of this paper, we compare two contemporary methods, Bag of Features (BOF) and Product Manifolds (PM), for clustering video clips of human facial expressions, hand gestures, and full-body actions, with the goal of better understanding how well these very different approaches to behavior recognition produce semantically relevant clusterings of the data.
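
A minimal sketch of the BOF side of this comparison, under our own simplifying assumptions (k-means both for the visual vocabulary and for the final unsupervised grouping of clips):

```python
import numpy as np
from sklearn.cluster import KMeans

def bof_cluster_clips(clip_descriptors, n_words=100, n_actions=6):
    """Bag-of-Features clustering: quantize all local descriptors into a
    shared vocabulary, represent each clip as a normalized word histogram,
    then group the clips without supervision."""
    vocab = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(clip_descriptors))
    hists = np.array([
        np.bincount(vocab.predict(d), minlength=n_words) / len(d)
        for d in clip_descriptors
    ])
    return KMeans(n_clusters=n_actions, n_init=10).fit_predict(hists)
```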

9.
Shape recognition using fractal geometry
G. Neil, K.M. Curtis, Pattern Recognition, 1997, 30(12): 1957–1969
Within this paper, fractal transformations are presented as a powerful new shape recognition technique. The motivation behind using fractal transformations is to develop a high-speed shape recognition technique that is scale invariant. A review is given of the most popular existing shape recognition techniques. There then follows a full mathematical analysis of the new technique, together with a proof of the authors' Fractal Invariance Theorem, the new theorem at the centre of the recognition technique. Through the mathematical analysis it becomes apparent that the fractal recognition technique possesses the remarkable property of being able to distinguish between similar objects. Details are then given of the practical implementation of the technique, together with an algorithm for making the technique rotationally invariant. The technique is then applied to a selection of real-world objects and compared with the popular moment invariants technique. This shows that the fractal technique is faster than moment invariants and requires less initial information to be effective. Finally, conclusions are drawn and further work is detailed.
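
For readers unfamiliar with fractal-geometry shape descriptors, the sketch below computes the classic box-counting fractal dimension of a binary shape. This is explicitly NOT the authors' fractal-transform technique or their Fractal Invariance Theorem, only a simple illustration of a scale-related fractal feature.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary shape: count the
    boxes of each size that contain foreground, then fit the slope of
    log(count) against log(1/size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(int(blocks.sum()), 1))  # guard against log(0)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```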

10.
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning algorithm – the bag-of-words model – gives a first estimate of the action class from video sequences by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimate yielded by the machine learning algorithm. This second stage draws on the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates from the machine learning techniques are significantly improved by the second stage, in which common-sense knowledge and reasoning capabilities are leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline.
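
A toy sketch of the two-stage idea, with the second stage reduced to hand-written plausibility rules over a scene context; the `context` dictionary and rule format are our illustrative assumptions, not the paper's reasoning system.

```python
def commonsense_rerank(class_probs, context, rules):
    """Re-score first-stage class probabilities with common-sense rules:
    actions implausible in the current context are penalized before the
    final decision is taken."""
    scores = dict(class_probs)
    for action, plausible in rules.items():
        if action in scores and not plausible(context):
            scores[action] *= 0.1  # demote implausible hypotheses
    return max(scores, key=scores.get)

# Example rule: "drink" is implausible when no object is near the hand.
rules = {"drink": lambda ctx: ctx.get("object_near_hand", False)}
best = commonsense_rerank({"drink": 0.5, "wave": 0.4},
                          {"object_near_hand": False}, rules)  # -> "wave"
```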

11.
Machine-based human action recognition has become very popular in the last decade. Automatic unattended surveillance systems, interactive video games, machine learning and robotics are only a few of the areas that involve human action recognition. This paper examines the capability of a known transform, the Trace transform, for human action recognition and proposes two new feature extraction methods based on this transform. The first method extracts Trace transforms from binarized silhouettes representing different stages of a single action period. A final history template composed from these transforms represents the whole sequence, containing much of the valuable spatio-temporal information of a human action. The second employs the Trace transform to construct a set of invariant features that represent the action sequence and can cope with variations that usually appear in video capture. This method takes advantage of the natural properties of the Trace transform to produce noise-robust features that are invariant to translation, rotation and scaling, and are effective, simple and fast to create. Classification experiments performed on two well-known and challenging action datasets (KTH and Weizmann) using a Radial Basis Function (RBF) kernel SVM provided very competitive results, indicating the potential of the proposed techniques.
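
A minimal sketch of a Trace-style descriptor: the Radon transform is the special case of the Trace transform whose trace functional is the line integral, so we trace the silhouette along lines and collapse the result with simple functionals. The particular functionals here are illustrative, not the paper's set.

```python
import numpy as np
from skimage.transform import radon

def trace_features(silhouette, angles=np.arange(0, 180, 5)):
    """Trace the silhouette along lines at each angle (Radon transform),
    take a per-angle functional, and sort to discard rotation ordering,
    yielding a compact rotation-tolerant descriptor."""
    sinogram = radon(silhouette.astype(float), theta=angles, circle=False)
    diametric = sinogram.max(axis=0)   # one value per projection angle
    circus = np.sort(diametric)        # sorting removes rotation ordering
    return circus / (np.linalg.norm(circus) + 1e-12)
```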

12.
Video recordings of earthmoving construction operations provide understandable data that can be used for benchmarking and analyzing their performance. These recordings further help project managers take corrective actions on performance deviations and in turn improve operational efficiency. Despite these benefits, manual stopwatch studies of previously recorded videos can be labor-intensive, may suffer from observer bias, and are impractical after substantial periods of observation. This paper presents a new computer vision-based algorithm for recognizing single actions of earthmoving construction equipment. This is a particularly challenging task, as equipment can be partially occluded in site video streams and usually comes in a wide variety of sizes and appearances. The scale and pose of equipment actions can also vary significantly with camera configuration. In the proposed method, a video is initially represented as a collection of spatio-temporal visual features by extracting space–time interest points and describing each feature with a Histogram of Oriented Gradients (HOG). The algorithm automatically learns the distributions of the spatio-temporal features and action categories using a multi-class Support Vector Machine (SVM) classifier. This strategy handles noisy feature points arising from typical dynamic backgrounds. Given a video sequence captured from a fixed camera, the multi-class SVM classifier recognizes and localizes equipment actions. For evaluation, a new video dataset is introduced which contains 859 sequences of excavator and truck actions. This dataset contains large variations of equipment pose and scale, and has varied backgrounds and levels of occlusion. Experimental results, with average accuracies of 86.33% and 98.33%, show that our supervised method outperforms previous algorithms for excavator and truck action recognition. The results hold promise for the applicability of the proposed method to construction activity analysis.
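
A minimal sketch of the supervised stage (HOG description plus a multi-class SVM); space-time interest-point detection is omitted, and the HOG/SVM hyperparameters are our assumptions. `frames` is assumed to be a list of grayscale patches around detected interest points.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def train_action_classifier(frames, labels):
    """Describe each grayscale patch with HOG and train a multi-class
    RBF-kernel SVM over the action categories."""
    X = np.array([hog(f, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for f in frames])
    return SVC(kernel="rbf", C=10.0, probability=True).fit(X, labels)
```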

13.
In this paper, we present a machine learning approach for subject-independent human action recognition using a depth camera, emphasizing the importance of depth in action recognition. The proposed approach uses the flow information of all three dimensions to classify an action. In our approach, we obtain the 2-D optical flow and use it along with the depth image to obtain the depth flow (Z motion vectors). The resulting flow captures the dynamics of the actions in space–time. Feature vectors are obtained by averaging the 3-D motion over a grid laid over the silhouette in a hierarchical fashion. These hierarchical fine-to-coarse windows capture the motion dynamics of the object at various scales. The extracted features are used to train a Meta-cognitive Radial Basis Function Network (McRBFN) that uses a Projection Based Learning (PBL) algorithm, henceforth referred to as PBL-McRBFN. PBL-McRBFN begins with zero hidden neurons and builds the network based on the best human learning strategy, namely self-regulated learning in a meta-cognitive environment. When a sample is used for learning, PBL-McRBFN uses sample overlapping conditions and a projection-based learning algorithm to estimate the parameters of the network. The performance of PBL-McRBFN is compared to that of Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers, with every person and action represented in the training and testing datasets. The performance study shows that PBL-McRBFN outperforms these classifiers in recognizing actions in 3-D. Further, a subject-independent study is conducted using a leave-one-subject-out strategy and its generalization performance is tested. This study shows that McRBFN is capable of generalizing actions accurately. The performance of the proposed approach is benchmarked on the Video Analytics Lab (VAL) dataset and the Berkeley Multi-modal Human Action Database (MHAD).
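
A sketch of the 3-D flow feature under our own assumptions: Farnebäck 2-D optical flow on the intensity frames, with the Z component taken as a simple per-pixel temporal depth difference (the paper's exact Z-motion estimation may differ).

```python
import cv2
import numpy as np

def depth_flow(prev_gray, next_gray, prev_depth, next_depth):
    """Stack 2-D optical flow with a depth-difference Z channel to get an
    (H, W, 3) field of x, y and depth motion per pixel."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    z = next_depth.astype(np.float32) - prev_depth.astype(np.float32)
    return np.dstack([flow, z])
```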

14.
Modelling video sequences by subspaces has recently shown promise for recognising human actions. Subspaces are able to accommodate the effects of various image variations and can capture the dynamic properties of actions. Subspaces form a non-Euclidean, curved Riemannian manifold known as a Grassmann manifold. Inference on manifolds is usually achieved by embedding them in higher-dimensional Euclidean spaces. In this paper, we instead propose to embed the Grassmann manifolds into reproducing kernel Hilbert spaces and then tackle the problem of discriminant analysis on such manifolds. To achieve efficient machinery, we propose graph-based local discriminant analysis that utilises within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, respectively. Experiments on the KTH, UCF Sports, and Ballet datasets show that the proposed approach obtains marked improvements in discrimination accuracy in comparison to several state-of-the-art methods, such as the kernel version of the affine hull image-set distance, tensor canonical correlation analysis, spatio-temporal words and the hierarchy of discriminative space-time neighbourhood features.
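
The standard way to realize such an embedding is the projection kernel on the Grassmann manifold; a minimal sketch (our own, assuming each clip is summarized by an orthonormal subspace basis obtained via SVD):

```python
import numpy as np

def subspace_basis(video_feats, dim=5):
    """Orthonormal basis of the subspace spanned by a clip's frame
    features; video_feats is (n_frames, feat_dim)."""
    u, _, _ = np.linalg.svd(video_feats.T, full_matrices=False)
    return u[:, :dim]

def projection_kernel(Ya, Yb):
    """Projection (Grassmann) kernel k(A, B) = ||Ya^T Yb||_F^2, a positive
    definite kernel that embeds Grassmann points into an RKHS, on which
    discriminant analysis can then be run."""
    return np.linalg.norm(Ya.T @ Yb, "fro") ** 2
```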

15.
We present an efficient hand recognition algorithm for an interactive image clipping system, which is useful in environments, such as public facilities and secure areas, where personal capture devices including mobile phones are not allowed. A user-friendly interface and an accurate image-capture function are required for an image clipping system. We build the system by combining a Microsoft Kinect, an HD webcam and a projector. The Kinect and webcam capture the motions of the user's hand, and the projector displays the user-selected area of the captured material. Hand recognition is composed of three steps: (i) the region occupied by the user's hand is extracted from an image, (ii) the fingertips of the extracted hand region are detected using the k-curvature algorithm, and (iii) the height of each fingertip is estimated using the depth image from the Kinect. The height of the fingertip indicates whether the user's finger has touched the surface of the target. The region delimited by the fingertip is clipped from the image and stored as the target image. The effectiveness of our hand recognition algorithm is demonstrated through a user test.
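
A minimal sketch of step (ii), the k-curvature test, under the assumption that the hand boundary is available as an ordered contour: a point is a fingertip candidate when the angle between the vectors to the points k steps behind and ahead of it is sharp.

```python
import numpy as np

def k_curvature_fingertips(contour, k=15, angle_thresh_deg=60):
    """Return indices of fingertip candidates on an (N, 2) ordered
    hand-contour array: points where the k-step back/forward vectors
    form an angle below the threshold."""
    tips = []
    n = len(contour)
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]
        b = contour[(i + k) % n] - contour[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_thresh_deg:
            tips.append(i)
    return tips
```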

16.
17.
This paper proposes a boosting EigenActions algorithm for human action recognition. A spatio-temporal Information Saliency Map (ISM) is calculated from a video sequence by estimating the pixel density function. A continuous human action is segmented into a set of primitive periodic motion cycles from the information saliency curve. Each cycle of motion is represented by a Salient Action Unit (SAU), which is used to determine the EigenActions using principal component analysis. A human action classifier is developed using a multi-class AdaBoost algorithm with a Bayesian hypothesis as the weak classifier. Given a human action video sequence, the proposed method effectively locates the SAUs in the video and recognizes the human actions by categorizing the SAUs. Two publicly available human action databases, KTH and Weizmann, are selected for evaluation. The average recognition accuracies are 81.5% and 98.3% for the KTH and Weizmann databases, respectively. Comparative results with two recent methods and robustness test results are also reported.
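
A sketch of the EigenAction step only, assuming each Salient Action Unit is available as an array that can be flattened to a vector; PCA then yields the low-dimensional representation that the boosting stage would consume.

```python
import numpy as np
from sklearn.decomposition import PCA

def eigen_actions(salient_action_units, n_components=20):
    """Flatten each SAU to a vector and apply PCA; the principal
    components play the role of EigenActions, and the projected
    coefficients feed the downstream AdaBoost classifier."""
    X = np.array([sau.ravel() for sau in salient_action_units])
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X)
```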

18.
We propose a motion recognition strategy that represents each video clip by a set of filtered images, each of which corresponds to a frame. Using a filtered-image classifier based on support vector machines, we classify a video clip by applying majority voting over the predicted labels of its filtered images; for online classification, we identify the most likely type of action at any moment by applying majority voting over the predicted labels of the filtered images within a sliding window. We also define a classification confidence and an associated threshold in both cases, which enable us to detect an unknown type of motion and, together with the proposed recognition strategy, make it possible to build a real-time motion recognition system that can not only classify in real time, but also learn new types of motions and recognize them in the future. The proposed strategy is demonstrated on real datasets.
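
A minimal sketch of the online part: sliding-window majority voting over per-frame labels, with the winning fraction used as the classification confidence so that low-confidence moments are reported as unknown. Window size and threshold are illustrative.

```python
from collections import Counter, deque

def online_vote(label_stream, window=25, conf_thresh=0.6):
    """Yield, per frame, the majority label over the last `window`
    predicted labels, or "unknown" when the winning fraction falls
    below the confidence threshold."""
    buf = deque(maxlen=window)
    for label in label_stream:
        buf.append(label)
        winner, count = Counter(buf).most_common(1)[0]
        yield winner if count / len(buf) >= conf_thresh else "unknown"
```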

19.
20.
In this paper, we describe a technique for representing and recognizing human motions using directional motion history images. A motion history image is a single image of human motion produced by superposing binarized successive motion frames so that older frames have smaller weights. It has, however, the difficulty that the latest motion overwrites older motions, resulting in an inexact motion representation and therefore incorrect recognition. To overcome this difficulty, we propose directional motion history images, which describe a motion with respect to four directions of movement, i.e. up, down, right and left, employing optical flow. The directional motion history images are thus a set of four motion history images defined on four optical flow images. Experimental results show that the proposed technique achieves better performance in the recognition of human motions than existing motion history images. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
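
A sketch of the directional update, assuming Farnebäck optical flow and a multiplicative decay for the "older frames have smaller weights" behaviour; the exact flow method and decay schedule are our assumptions.

```python
import cv2
import numpy as np

def update_directional_mhi(mhi, prev_gray, next_gray, decay=0.9, thresh=1.0):
    """Split optical flow into up/down/right/left channels and keep one
    decaying history image per direction, so the latest motion no longer
    overwrites older motion in the other directions.
    `mhi` is an (H, W, 4) float array, updated in place and returned."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    fx, fy = flow[..., 0], flow[..., 1]
    # image coords: negative fy is upward motion
    masks = [fy < -thresh, fy > thresh, fx > thresh, fx < -thresh]
    mhi *= decay                        # older frames get smaller weights
    for c, m in enumerate(masks):       # channels: up, down, right, left
        mhi[..., c][m] = 1.0            # stamp the latest motion
    return mhi
```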
