Similar Literature
20 similar documents found (search time: 15 ms)
1.
The high-dimensional pose state space is the main challenge in articulated human pose tracking, making pose analysis computationally expensive or even infeasible. In this paper, we propose a novel generative approach in the framework of evolutionary computation, which widens this bottleneck with an effective search strategy embedded in an extracted state subspace. First, we use ISOMAP to learn the low-dimensional latent space of pose states, with the aim of both reducing dimensionality and extracting prior knowledge of human motion. Then, we propose a manifold reconstruction method to establish smooth mappings between the latent space and the original space, which enables us to perform pose analysis in the latent space. For the search strategy, we adopt a recent evolutionary approach, the clonal selection algorithm (CSA), for pose optimization. We design a CSA-based method to estimate human pose from a static image, which can be used to initialize motion tracking. To make CSA suitable for motion tracking, we propose a sequential CSA (S-CSA) algorithm that incorporates temporal continuity information into the traditional CSA. From a Bayesian inference view, the sequential CSA algorithm is in essence a multilayer importance-sampling-based particle filter. Our methods are demonstrated on different motion types and image sequences. Experimental results show that the CSA-based pose estimation method achieves viewpoint-invariant 3D pose reconstruction, and the S-CSA-based motion tracking method achieves accurate and stable tracking of 3D human motion.
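The clonal selection algorithm at the core of this approach can be illustrated with a minimal numeric sketch. This is a generic CSA for function minimization, not the authors' pose-space implementation; the population size, cloning schedule, and mutation scale are illustrative assumptions:

```python
import numpy as np

def clonal_selection(cost_fn, dim, pop_size=20, n_gen=200,
                     bounds=(-5.0, 5.0), seed=0):
    """Minimize cost_fn with a basic clonal selection algorithm.

    Each generation: rank antibodies by affinity (low cost = high affinity),
    clone better-ranked antibodies more, hypermutate clones of worse-ranked
    antibodies more strongly, and keep the best survivors (with elitism).
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(n_gen):
        cost = np.apply_along_axis(cost_fn, 1, pop)
        pop = pop[np.argsort(cost)]          # best (lowest cost) first
        new_pop = [pop[0]]                   # elitism: keep current best
        for rank, parent in enumerate(pop):
            n_clones = max(1, pop_size // (rank + 1))        # more clones for better
            scale = 0.1 * (hi - lo) * (rank + 1) / pop_size  # mutate worse harder
            clones = parent + rng.normal(0.0, scale, size=(n_clones, dim))
            clone_cost = np.apply_along_axis(cost_fn, 1, clones)
            new_pop.append(clones[np.argmin(clone_cost)])    # best clone survives
        pop = np.clip(np.asarray(new_pop)[:pop_size], lo, hi)
    cost = np.apply_along_axis(cost_fn, 1, pop)
    return pop[np.argmin(cost)]
```

In the paper's setting the "antibody" would be a latent-space pose vector and the cost an image-likelihood term; here a simple sphere function stands in for that likelihood.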

2.
While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.
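The style of 3D error measure such benchmarks define can be sketched as a mean per-joint position error: the average Euclidean distance between predicted and ground-truth joint positions. This is a generic formulation, not necessarily the exact HumanEva metric:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error between two 3D poses.

    pred, gt: arrays of shape (n_joints, 3). Returns the average
    Euclidean distance over joints, in the same units as the input.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))
```

For a tracking sequence, this per-frame error is typically averaged over all frames to score an algorithm.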

3.
4.
5.
Many recent real-time markerless camera tracking systems assume the existence of a complete 3D model of the target scene. The system developed in the MATRIS project likewise assumes that a scene model is available. This can be a freeform surface model generated automatically from an image sequence using structure-from-motion techniques, or a textured CAD model built manually using commercial software. The offline model provides 3D anchors for the tracking: stable natural landmarks which are not updated and thus, by giving an absolute reference, prevent accumulating error (drift) in the camera registration. However, it is sometimes infeasible to model the entire target scene in advance, e.g. for parts that are not static, or when one wishes to employ existing CAD models that are incomplete. To allow camera movements beyond the parts of the environment modelled in advance, additional 3D information must be derived online. Therefore, a markerless camera tracking system for calibrated perspective cameras has been developed which employs 3D information about the target scene and complements this knowledge online by reconstructing 3D points. The proposed algorithm is robust and reduces drift, the most dominant problem of simultaneous localisation and mapping (SLAM), in real time through a combination of the following crucial points: (1) stable tracking of long-term features at the 2D level; (2) use of robust methods such as the well-known Random Sample Consensus (RANSAC) for all 3D estimation processes; (3) consistent propagation of errors and uncertainties; (4) careful feature selection and map management; (5) incorporation of epipolar constraints into the pose estimation. Validation results on the operation of the system on synthetic and real data are presented.
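RANSAC, named in point (2), can be illustrated with a minimal 2D line-fitting sketch. This is the generic textbook algorithm, not the system's actual 3D estimator; the threshold and iteration count are illustrative assumptions:

```python
import numpy as np

def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    """Fit a line y = a*x + b robustly with RANSAC.

    Repeatedly fit a minimal 2-point sample, count points within `thresh`
    of the model (inliers), keep the model with the most inliers, and
    finally refit by least squares on that inlier set.
    """
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:          # skip degenerate vertical samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b
```

The same consensus idea carries over to pose estimation: minimal samples of 2D-3D correspondences propose camera poses, and the pose explaining the most correspondences wins.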

6.
This paper proposes a real-time 3D head tracking method that can handle large rotations and translations. To achieve this goal, we incorporate the following three approaches into the particle filter. First, we adopt a 3D ellipsoidal head model to handle large head rotations more effectively, especially large rotations around the x-axis (pitch). Second, we adopt an online appearance model (OAM) that effectively adapts to both short-term and long-term changes in the appearance image. Third, we adopt an adaptive state transition model to track fast-moving 3D heads: the most plausible next state is estimated using a motion history model, and the particles are distributed near the estimated state. This enables real-time 3D head tracking by greatly reducing the required number of particles. Experimental results show that (1) the tracking accuracy of the 3D ellipsoidal head model is 15% better than that of the 3D cylindrical head model, (2) the OAM provides more stable tracking than a wandering model, and (3) the adaptive state transition model can track faster-moving heads than a zero-velocity model.
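The adaptive state transition idea — predicting the next state from motion history and spreading particles around the prediction rather than around the previous state — can be sketched as follows. This is a constant-velocity simplification of the paper's motion history model; the function names and Gaussian spread are hypothetical:

```python
import numpy as np

def predict_next_state(history):
    """Constant-velocity extrapolation from the last two states.

    history: sequence of state vectors (e.g. head position/rotation).
    With fewer than two states, falls back to the last state
    (equivalent to a zero-velocity model).
    """
    history = np.asarray(history, dtype=float)
    if len(history) < 2:
        return history[-1]
    velocity = history[-1] - history[-2]
    return history[-1] + velocity

def sample_particles(predicted_state, n_particles, spread, seed=0):
    """Distribute particles in a Gaussian around the predicted state."""
    rng = np.random.default_rng(seed)
    return predicted_state + rng.normal(
        0.0, spread, size=(n_particles, len(predicted_state)))
```

Because the particles are centred where the head is expected to be, far fewer of them are needed to cover the plausible states of a fast-moving target.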

7.
Immersive virtual environments with life-like interaction capabilities have very demanding requirements, including high-precision motion capture and high processing speed. These issues raise many challenges for computer vision-based motion estimation algorithms. In this study, we consider the problem of tracking the hand using multiple cameras and estimating its 3D global pose (i.e., the position and orientation of the palm). Our interest is in developing an accurate and robust algorithm to be employed in an immersive virtual training environment, called “Virtual GloveboX” (VGX) (Twombly et al. in J Syst Cybern Inf 2:30–34, 2005), which is currently under development at NASA Ames. In this context, we present a marker-based hand tracking and 3D global pose estimation algorithm that operates in a controlled multi-camera environment built to track the user’s hand inside VGX. The key idea of the proposed algorithm is to track the 3D position and orientation of an elliptical marker placed on the dorsal part of the hand using model-based tracking and active camera selection. It should be noted that the use of markers is well justified in the context of our application, since VGX naturally allows for the use of gloves without disrupting the fidelity of the interaction. Our experimental results and comparisons illustrate that the proposed approach is more accurate and robust than related approaches. A byproduct of our multi-camera ellipse tracking algorithm is that, with only minor modifications, the same algorithm can be used to automatically re-calibrate (i.e., fine-tune) the extrinsic parameters of a multi-camera system, leading to more accurate pose estimates.

8.
Objective: To meet the demand for real-time, accurate, and robust human motion analysis, and addressing the problems of feature extraction and motion modeling, this paper presents an exemplar-based learning method for human motion analysis. Method: On the basis of a constructed library of human pose exemplars, first, a motion detection method extracts the human silhouette from each video frame; second, a shape-context silhouette matching method retrieves a candidate pose set for each frame from the exemplar library; finally, human motion analysis is carried out through statistical modeling and transition-probability modeling. Result: In experiments on test videos of walking, running, and jumping, the silhouette-based shape-context representation and matching showed good expressive power; the proposed method achieved an average joint-angle error of about 5°, effectively improving the accuracy of motion analysis compared with other algorithms. Conclusion: The proposed exemplar-based learning method can effectively analyze human motion in monocular video, overcomes the depth ambiguity of the mapping, is robust to viewpoint changes, and offers good computational efficiency and accuracy.

9.
This paper proposes human motion models of multiple actions for 3D pose tracking. A training pose sequence for each action, such as walking or jogging, is recorded separately by a motion capture system and modeled independently. This independent modeling of action-specific motions allows us to 1) optimize each model in accordance with only its respective motion and 2) improve the scalability of the models. Unlike existing approaches with similar motion models (e.g. switching dynamical models), our pose tracking method uses the multiple models simultaneously to cope with ambiguous motions. For robust tracking with multiple models, particle filtering is employed so that particles are distributed across the models simultaneously. Efficient use of the particles is achieved by locating many particles in the model corresponding to the action currently observed. To transfer particles among the models in quick response to changes in the action, transition paths are synthesized between the different models in order to virtually prepare inter-action motions. Experimental results demonstrate that the proposed models improve pose tracking accuracy.
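The particle filtering machinery on which such multi-model trackers rest can be sketched for a toy 1D case. This is a generic sequential importance resampling (SIR) filter, not the paper's multi-model tracker; the random-walk dynamics and noise levels are illustrative assumptions:

```python
import numpy as np

def particle_filter(observations, n_particles=500, motion_std=0.5,
                    obs_std=1.0, seed=0):
    """Minimal SIR particle filter for a 1D random-walk state
    observed in Gaussian noise. Returns the per-step state estimates."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, motion_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)    # weight by likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))         # weighted-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)        # resample
        particles = particles[idx]
    return estimates
```

In the multi-model setting of the paper, each particle would additionally carry an action label, and resampling would naturally concentrate particles in the model matching the observed action.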

10.
A major challenge in applying Bayesian tracking methods for tracking 3D human body pose is the high dimensionality of the pose state space. It has been observed that the 3D human body pose parameters typically can be assumed to lie on a low-dimensional manifold embedded in the high-dimensional space. The goal of this work is to approximate the low-dimensional manifold so that a low-dimensional state vector can be obtained for efficient and effective Bayesian tracking. To achieve this goal, a globally coordinated mixture of factor analyzers is learned from motion capture data. Each factor analyzer in the mixture is a “locally linear dimensionality reducer” that approximates a part of the manifold. The global parametrization of the manifold is obtained by aligning these locally linear pieces in a global coordinate system. To enable automatic and optimal selection of the number of factor analyzers and the dimensionality of the manifold, a variational Bayesian formulation of the globally coordinated mixture of factor analyzers is proposed. The advantages of the proposed model are demonstrated in a multiple hypothesis tracker for tracking 3D human body pose. Quantitative comparisons on benchmark datasets show that the proposed method produces more accurate 3D pose estimates over time than those obtained from two previously proposed Bayesian tracking methods.

11.
12.
Latent semantic analysis (LSA) has been widely used in the fields of computer vision and pattern recognition. Most existing work based on LSA focuses on behavior recognition and motion classification. In visual surveillance applications, accurate tracking of moving people in surveillance scenes is regarded as a preliminary requirement for other tasks such as object recognition or segmentation. However, accurate tracking is extremely hard in challenging surveillance scenes where similarity or occlusion among multiple objects occurs. Conventional temporal Markov chain based tracking algorithms suffer from the ‘tracking error accumulation problem’: the accumulated errors can eventually cause the tracker to drift away from the target. To handle tracking drift, some authors have proposed combining detection with tracking as an effective solution. However, many critical issues remain unsettled in these detection-based tracking algorithms. In this paper, we propose a novel method for tracking moving people with detection based on (probabilistic) LSA. By employing a novel ‘twin-pipeline’ training framework to find the latent semantic topics of ‘moving people’, the proposed detector can effectively detect interest points on moving people in different indoor and outdoor environments with camera motion. Since the detected interest points on different body parts can be used to locate the position of moving people more accurately, combining the detection with incremental subspace learning based tracking resolves the problem of tracking drift during each target appearance update. In addition, owing to the time-independent processing mechanism of the detection, the proposed method can also handle the error accumulation problem: the detection calibrates the tracking errors at each state update of the tracking algorithm.
Extensive experiments in various surveillance environments using different benchmark datasets demonstrate the accuracy and robustness of the proposed tracking algorithm. Further, the experimental comparisons clearly show that the proposed tracking algorithm outperforms well-known tracking algorithms such as the ISL, AMS and WSL algorithms. Furthermore, the speed of the proposed method is also satisfactory for realistic surveillance applications.

13.
We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pairwise statistical distributions that are learned from motion capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from “bottom-up” visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.

14.
Minyoung Kim, Pattern Recognition, 2011, 44(10-11): 2325-2333
We introduce novel discriminative semi-supervised learning algorithms for dynamical systems and apply them to the problem of 3D human motion estimation. Our recent work on discriminative learning of dynamical systems has been shown to achieve superior performance over traditional generative learning approaches. However, one of the main issues in learning dynamical systems is gathering labeled output sequences, which are typically obtained from precise motion capture tools and are hence expensive. In this paper we utilize a large amount of unlabeled (input) video data to significantly improve the prediction performance of the dynamical systems. We suggest two discriminative semi-supervised learning approaches that extend well-known algorithms in static domains to sequential, real-valued multivariate output domains: (i) self-training, which we derive as coordinate-ascent optimization of a proper discriminative objective over both the model parameters and the unlabeled state sequences, and (ii) a minimum-entropy approach, which maximally reduces the model's uncertainty in state prediction for unlabeled data points. These approaches are shown to achieve significant improvement over traditional generative semi-supervised learning methods. We demonstrate the benefits of our approaches on 3D human motion estimation problems.

15.
16.
3D-Model-Based Hand Tracking in Augmented Reality Applications
This paper presents a stepwise iterative method based on a 3D model for estimating and tracking global and local hand motion. The hand position is approximated from the palm shape obtained by the ICP (Iterative Closest Point) algorithm and a factorization method. Incorporating natural hand-motion constraints, a sequential Monte Carlo algorithm is used to track finger motion. Finally, an iterative procedure alternating between pose estimation and finger-joint tracking yields an accurate articulated estimate. Experiments confirm that the method achieves good accuracy and robustness for natural hand gestures.

17.
Objective: In target tracking, motion information can predict the target's location; ignoring it, or modeling the motion in a way that deviates markedly from reality, can both lead to tracking failure. Since visual saliency rapidly directs attention to objects of interest, we introduce it into target tracking and propose a tracking algorithm based on spatio-temporal motion saliency. Method: First, following the hierarchical processing of motion information in the visual cortex, a bottom-up spatio-temporal motion saliency model is built: 3D spatio-temporal filters perform low-level encoding of the motion signal, and a max-pooling operator performs local encoding of motion features. Exploiting the temporal correlation between adjacent video frames, the saliency of the motion information is measured by differencing the spatio-temporal motion features, yielding a spatio-temporal motion saliency map. Second, within a particle filter framework, the saliency map is combined with a color histogram to measure the correlation between predicted and observed states, thereby determining the target state and achieving tracking. Result: Compared with other trackers, the proposed method improves metrics such as center-location error, precision, and success rate, and tracks the target stably under illumination change, background clutter, motion blur, partial occlusion, and deformation. Moreover, incorporating spatio-temporal motion saliency into other trackers improves their results, further verifying its effectiveness for tracking moving targets. Conclusion: Spatio-temporal motion saliency effectively measures the target's motion information, enhances salient moving regions, and suppresses distracting regions, thereby improving tracking performance.
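The saliency measurement via temporal differencing of motion features can be caricatured with a frame-differencing sketch. This is a drastic simplification of the paper's 3D spatio-temporal filtering and max-pooling model, intended only to show the shape of the computation:

```python
import numpy as np

def motion_saliency(frames):
    """Crude spatio-temporal motion saliency map.

    frames: array of shape (T, H, W), grayscale. Takes the absolute
    temporal difference between consecutive frames, pools over time
    with a max, and normalises the result to [0, 1].
    """
    frames = np.asarray(frames, dtype=float)
    diff = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) motion energy
    smap = diff.max(axis=0)                 # max-pool across time
    span = smap.max() - smap.min()
    if span == 0:                           # fully static sequence
        return np.zeros_like(smap)
    return (smap - smap.min()) / span
```

In the particle filter, such a map would weight each predicted state alongside the color-histogram likelihood, boosting regions where motion is salient and suppressing static clutter.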

18.
19.
20.
Changes in eyebrow configuration, in conjunction with other facial expressions and head gestures, are used to signal essential grammatical information in signed languages. This paper proposes an automatic recognition system for non-manual grammatical markers in American Sign Language (ASL) based on a multi-scale, spatio-temporal analysis of head pose and facial expressions. The analysis takes account of gestural components of these markers, such as raised or lowered eyebrows and different types of periodic head movements. To advance the state of the art in non-manual grammatical marker recognition, we propose a novel multi-scale learning approach that exploits spatio-temporal low-level and high-level facial features. Low-level features are based on information about facial geometry and appearance, as well as head pose, and are obtained through accurate 3D deformable model-based face tracking. High-level features are based on the identification of gestural events, of varying duration, that constitute the components of linguistic non-manual markers. Specifically, we recognize events such as raised and lowered eyebrows, head nods, and head shakes. We also partition these events into temporal phases. We separate the anticipatory transitional movement (the onset) from the linguistically significant portion of the event, and we further separate the core of the event from the transitional movement that occurs as the articulators return to the neutral position towards the end of the event (the offset). This partitioning is essential for the temporally accurate localization of the grammatical markers, which could not be achieved at this level of precision with previous computer vision methods. In addition, we analyze and use the motion patterns of these non-manual events. Those patterns, together with the information about the type of event and its temporal phases, are defined as the high-level features.
Using this multi-scale, spatio-temporal combination of low- and high-level features, we employ learning methods for accurate recognition of non-manual grammatical markers in ASL sentences.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号