Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
2.
Compression of Human Motion Capture Data Using Motion Pattern Indexing   Cited by: 1 (self-citations: 0, citations by others: 1)
In this work, a novel scheme is proposed to compress human motion capture data based on hierarchical structure construction and motion pattern indexing. For a given sequence of 3D motion capture data of the human body, the 3D markers are first organized into a hierarchy in which each node corresponds to a meaningful part of the body. The motion sequence corresponding to each body part is then coded separately. Based on the observation that there is a high degree of spatial and temporal correlation among the 3D marker positions, we identify motion patterns that form a database for each meaningful body part. A sequence of motion capture data can then be represented efficiently as a series of motion pattern indices. As a result, a higher compression ratio is achieved than with prior methods, especially for long sequences of motion capture data with repetitive motion styles. Another distinction of this work is that it provides flexible and intuitive global and local distortion control.
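As a rough illustration of the pattern-indexing idea described above, the sketch below clusters fixed-length windows of one body part's trajectory into a small pattern database and stores the sequence as indices into that database. The window length, codebook size, and flat frame layout are illustrative assumptions, not the paper's actual hierarchy or codec.

```python
# Minimal sketch of pattern-indexed compression for one body part.
import numpy as np
from sklearn.cluster import KMeans

def build_pattern_database(part_motion, window=16, n_patterns=64):
    """Cluster fixed-length windows of a body part's marker trajectory."""
    frames, dims = part_motion.shape
    n_win = frames // window
    windows = part_motion[:n_win * window].reshape(n_win, window * dims)
    km = KMeans(n_clusters=min(n_patterns, n_win), n_init=10, random_state=0)
    indices = km.fit_predict(windows)          # compressed representation
    return km.cluster_centers_, indices

def decode(centers, indices, dims):
    """Reconstruct an approximate trajectory from pattern indices."""
    return centers[indices].reshape(-1, dims)

# toy usage: 400 frames of a 5-marker part (15 coordinates per frame)
motion = np.cumsum(np.random.randn(400, 15) * 0.01, axis=0)
centers, idx = build_pattern_database(motion, window=16, n_patterns=32)
recon = decode(centers, idx, dims=15)
print("stored indices:", idx.shape, "reconstruction error:",
      np.abs(recon - motion[:recon.shape[0]]).mean())
```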

3.
Generating a visually appealing human motion sequence using low‐dimensional control signals is a major line of study in the motion research area in computer graphics. We propose a novel approach that allows us to reconstruct full body human locomotion using a single inertial sensing device, a smartphone. Smartphones are among the most widely used devices and incorporate inertial sensors such as an accelerometer and a gyroscope. To find a mapping between a full body pose and smartphone sensor data, we perform low dimensional embedding of full body motion capture data, based on a Gaussian Process Latent Variable Model. Our system ensures temporal coherence between the reconstructed poses by using a state decomposition model for automatic phase segmentation. Finally, application of the proposed nonlinear regression algorithm finds a proper mapping between the latent space and the sensor data. Our framework effectively reconstructs plausible 3D locomotion sequences. We compare the generated animation to ground truth data obtained using a commercial motion capture system.
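The pipeline above can be approximated with off-the-shelf pieces, as in the sketch below: PCA stands in for the paper's GPLVM embedding, and a Gaussian process regressor stands in for its nonlinear sensor-to-latent mapping. All array shapes and names are made-up assumptions for illustration only.

```python
# Rough stand-in for the pipeline: embed full-body poses into a low-dimensional
# space (PCA here, not the paper's GPLVM) and regress from IMU features to that
# space with a Gaussian process.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
poses = rng.normal(size=(500, 60))      # 500 mocap frames, 20 joints x 3
imu   = rng.normal(size=(500, 6))       # synced accelerometer + gyroscope

pca = PCA(n_components=3).fit(poses)    # low-dimensional pose embedding
latent = pca.transform(poses)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gpr.fit(imu, latent)                    # sensor features -> latent coordinates

new_imu = rng.normal(size=(10, 6))
reconstructed = pca.inverse_transform(gpr.predict(new_imu))
print(reconstructed.shape)              # (10, 60): one full-body pose per reading
```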

4.
黄建峰, 林奕城. Journal of Software (软件学报), 2000, 11(9): 1139-1150
A new method for generating facial animation is proposed: a motion capture system records the subtle movements of a real person's face, and the captured motion data is then used to drive a face model to produce the animation. First, 23 reflective markers are attached to the subject's face and captured with the Oxford Metrics VICON8 system. The resulting 3D motion data must be post-processed before use, so a method is proposed to remove head motion and to estimate the pivot of head rotation. After this processing, the remaining motion data represents changes in facial expression and can therefore be applied directly to the face model.

5.
黄建峰, 林奕成, 欧阳明. Journal of Software (软件学报), 2000, 11(9): 1141-1150
A new method for generating facial animation is proposed: a motion capture system records the subtle movements of a real person's face, and the captured motion data is then used to drive a face model to produce the animation. First, 23 reflective markers are attached to the subject's face and captured with the Oxford Metrics VICON8 system. The resulting 3D motion data must be post-processed before use, so a method is proposed to remove head motion and to estimate the pivot of head rotation. After this processing, the remaining motion data represents changes in facial expression and can therefore be applied directly to the face model. The system is implemented with a 2.5D face model, which combines the advantages of 2D and 3D models: it is simple, yet looks lively and natural under small rotations. During animation, a dedicated interpolation formula computes the displacements of non-feature points, and the face is divided into several regions to constrain the movement of the 3D points on the model, making the animation more natural. On a Pentium III 500 MHz machine with an OpenGL accelerator card, the system achieves an update rate of more than 30 frames per second.
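The head-motion-removal step can be illustrated with a generic rigid alignment (Kabsch/Procrustes), shown below; this is not the authors' exact pivot-estimation procedure, and it assumes a hypothetical subset of "rigid" markers (e.g. near the temples) that move only with the head, not with expression.

```python
# Generic rigid-alignment sketch for removing head motion from facial markers.
import numpy as np

def remove_head_motion(frame, reference, rigid_idx):
    """Align `frame` (N x 3 markers) to `reference` using the rigid markers."""
    P = frame[rigid_idx]            # rigid markers in the current frame
    Q = reference[rigid_idx]        # same markers in the neutral reference
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ D @ Vt                  # rotation taking current head pose -> reference
    t = Q.mean(0) - P.mean(0) @ R
    return frame @ R + t            # expression-only motion remains

ref = np.random.rand(23, 3)                         # neutral frame, 23 markers
rot = np.array([[0.9962, -0.0872, 0], [0.0872, 0.9962, 0], [0, 0, 1]])  # ~5 deg yaw
cur = ref @ rot + 0.05                              # rotated and translated head
stabilized = remove_head_motion(cur, ref, rigid_idx=[0, 1, 2, 3])
print(np.abs(stabilized - ref).max())               # ~0 for a purely rigid move
```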

6.
This work investigates a new and challenging problem: how to recognize, as early as possible, facial expressions captured by high-frame-rate 3D sensing, whereas most prior work focuses on improving the recognition rate of 2D facial expression recognition. Recognizing subtle facial expressions in their early stage is unfortunately very sensitive to noise, which cannot be ignored because of their low intensity. To overcome this problem, two novel feature enhancement methods, an adaptive wavelet spectral subtraction method and SVM-based linear discriminant analysis, are proposed to refine subtle facial expression features, with or without an estimated noise model. Experiments on a custom dataset built with a high-speed 3D motion capture system confirm that the two proposed methods outperform other feature refinement methods by enhancing the discriminability of subtle facial expression features, and consequently enable correct recognition at an earlier stage.
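The noise-model-based refinement can be pictured with a plain magnitude spectral subtraction on a 1-D feature trajectory, sketched below. The paper's method is an adaptive wavelet spectral subtraction; this FFT version is only a simplified stand-in, and the signals are synthetic.

```python
# Plain magnitude spectral subtraction on a 1-D feature trajectory.
import numpy as np

def spectral_subtract(signal, noise_sample):
    """Subtract an estimated noise magnitude spectrum, keep the signal phase."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    noise_mag = np.abs(np.fft.rfft(noise_sample, n))   # estimated noise model
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)    # floor at zero
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n)

t = np.linspace(0, 1, 256, endpoint=False)
clean = 0.2 * np.sin(2 * np.pi * 3 * t)                # subtle expression feature
noisy = clean + 0.05 * np.random.randn(256)
noise_only = 0.05 * np.random.randn(256)               # e.g. a neutral-face segment
refined = spectral_subtract(noisy, noise_only)
print(np.mean((noisy - clean) ** 2), np.mean((refined - clean) ** 2))
```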

7.
Objective: Action recognition based on 3D skeletons has long been an active topic in computer vision, with many results in surveillance, video games, robotics, human-computer interaction, and healthcare. Most existing algorithms choose a fixed joint as the coordinate center, which lowers the recognition rate. To address this low recognition accuracy, a human action recognition algorithm with an adaptive skeleton center is proposed. Method: The algorithm first obtains 3D skeleton sequences from a skeleton dataset and preprocesses them to obtain the raw coordinate matrix of each action. Features are then extracted from the raw coordinate matrix, the coordinate center is selected adaptively according to the variation of the feature values, and the raw coordinate matrix is re-normalized. Finally, dynamic time warping is applied to denoise the action coordinate matrix, a Fourier temporal pyramid representation is used to reduce temporal misalignment and noise, and a support vector machine classifies the action coordinate matrices. The algorithm is validated on the widely used UTKinect-Action and MSRAction3D datasets. Results: On UTKinect-Action, the recognition rate is 4.28% higher than the HO3D J2 algorithm and 3.48% higher than the CRF algorithm. On MSRAction3D, it is 9.57% higher than HOJ3D, 2.07% higher than Profile HMM, and 6.17% higher than Eigenjoints. Conclusion: The low recognition rate of existing algorithms is traced to their use of a fixed joint as the coordinate center; the proposed adaptive-skeleton-center algorithm is verified by simulation to effectively improve the accuracy of human action recognition.
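A minimal sketch of the adaptive re-centering step is given below: the skeleton sequence is re-expressed relative to an adaptively chosen joint. Here the joint with the least positional variance is picked, which is an illustrative stand-in for the paper's feature-based selection rule, and all sizes are made up.

```python
# Sketch of adaptive re-centering of a skeleton sequence.
import numpy as np

def adaptive_center_normalize(seq):
    """seq: (frames, joints, 3) array of joint positions."""
    motion_energy = seq.var(axis=0).sum(axis=1)        # per-joint positional variance
    center = int(np.argmin(motion_energy))             # most stable joint
    centered = seq - seq[:, center:center + 1, :]      # re-express relative to it
    scale = np.linalg.norm(centered, axis=2).max()     # normalize overall size
    return centered / max(scale, 1e-8), center

frames, joints = 60, 20
seq = np.random.rand(frames, joints, 3)
seq[:, 0] = 0.5 + 0.001 * np.random.rand(frames, 3)    # joint 0 barely moves
normalized, chosen = adaptive_center_normalize(seq)
print("chosen center joint:", chosen, "normalized shape:", normalized.shape)
```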

8.
We present a method for the efficient retrieval and browsing of immense amounts of realistic 3D human body motion capture data. The proposed method organizes motion capture data based on statistical K-means (SK-means), democratic decision making, unsupervised learning, and visual key frame extraction, achieving intuitive retrieval by browsing thumbnails of semantic key frames. Retrieval proceeds in three steps. First, basic type clusters are obtained by clustering the motion capture data with the novel SK-means algorithm, after which character matching is performed immediately. Second, the users' retrieval behavior is learned during the retrieval process and the successful retrieval rate of each data item is updated; the search results are then ranked by successful retrieval rate through democratic decision making to improve accuracy. Third, thumbnails with semantic generalization are generated using a novel key frame extraction algorithm based on visualized data analysis. Experiments demonstrate that the method can efficiently organize and retrieve enormous amounts of motion capture data.
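The cluster-then-rank loop can be sketched as below, with plain KMeans standing in for the paper's SK-means and a simple per-clip success counter standing in for its learned retrieval statistics. Descriptor sizes and the scoring rule are assumptions for illustration.

```python
# Toy retrieval sketch: cluster clip descriptors, then rank results inside the
# matched cluster by a learned success rate and by distance to the query.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(200, 32))        # one descriptor per mocap clip
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(descriptors)

hits = np.zeros(200)                            # successful retrievals per clip
views = np.ones(200)                            # times each clip was returned

def retrieve(query, top_k=5):
    cluster = km.predict(query[None, :])[0]
    members = np.where(km.labels_ == cluster)[0]
    rate = hits[members] / views[members]       # learned success rate
    dist = np.linalg.norm(descriptors[members] - query, axis=1)
    order = np.lexsort((dist, -rate))           # high success rate first, then near
    return members[order[:top_k]]

def feedback(clip_id, success):
    views[clip_id] += 1
    hits[clip_id] += bool(success)

results = retrieve(descriptors[3])
feedback(results[0], success=True)              # user accepted the first result
print(results, hits[results[0]])
```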

9.
The purpose of this research is a quantitative analysis of the movement patterns of dance, which cannot be analyzed with a motion capture system alone, using simultaneous measurement of body motion and biophysical information. Two kinds of the same leg movement are captured by simultaneous measurement, one performed with a given strength and one performed without strength, under controlled experimental conditions using optical motion capture and electromyography (EMG) equipment, in order to quantitatively analyze the characteristics of the leg movement. We also measured the motion of traditional Japanese dance with the constructed system. Leg movement in Japanese dance can be visualized by displaying a 3D CG character animation together with the motion data and EMG data. We expect this research to help dancers and dance researchers by providing information on dance movement that cannot be obtained with motion capture alone.

10.
陈忠泽, 黄国玉. Journal of Computer Applications (计算机应用), 2008, 28(5): 1251-1254
A method is proposed for estimating the 3D pose of a target in real time from its stereo images using an artificial neural network. The network's input vector consists of the coordinates of the target's feature points in synchronized stereo image frames, while the output vector represents the 3D pose of several key positions of the target (from which a 3D model of the target can be built). The output samples needed to train the neural network are obtained with the REACTOR motion capture system. Experiments show that the 3D pose estimation error of the algorithm is below 5%, so it can be applied effectively to real-time computer synthesis of 3D virtual targets.
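A hedged sketch of this stereo-to-pose regression follows: an MLP maps the feature-point coordinates of a synchronized stereo pair to 3D key-position poses. The layer sizes, point counts, and synthetic data are assumptions; in the paper the training targets come from the motion capture system.

```python
# Regress 3D key-position poses from stereo feature-point coordinates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_frames, n_points, n_keys = 1000, 12, 6
stereo_uv = rng.normal(size=(n_frames, n_points * 2 * 2))   # (u, v) in both views
poses_3d  = rng.normal(size=(n_frames, n_keys * 3))         # mocap-provided targets

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(stereo_uv[:800], poses_3d[:800])                    # train on 800 frames

pred = net.predict(stereo_uv[800:])                         # real-time estimation step
print(pred.shape)                                           # (200, 18)
```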

11.
12.
孙燮, 陈曦. Computer Science (计算机科学), 2016, 43(Z6): 187-190, 197
Sign language recognition belongs to the field of gesture recognition. Traditional data-glove-based methods cannot capture all the elements of sign language and cannot recognize signs that combine hand and body movements. Inertial measurement units (IMUs), being small and inexpensive, are increasingly used in motion capture projects. Drawing on robot kinematics, an IMU-based skeleton model for sign language recognition is proposed that conforms to human biological characteristics. The model is built by first selecting the bones and then calibrating their dimensions. Finally, an experimental method for calibrating the model dimensions is presented, which can be solved from the action data acquired by the IMUs.
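To show how IMU orientations and calibrated bone lengths combine into a skeleton, here is a small forward-kinematics sketch for an arm chain. The segment names, lengths, and the assumption that each IMU delivers a world-frame rotation are illustrative, not the paper's calibration procedure.

```python
# Forward-kinematics sketch for an IMU-driven arm chain.
import numpy as np

def rot_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chain_positions(rotations, bone_lengths, origin=np.zeros(3)):
    """World positions of each joint along a kinematic chain."""
    positions, p = [origin], origin.copy()
    for R, length in zip(rotations, bone_lengths):
        p = p + R @ np.array([length, 0.0, 0.0])   # bone rest direction: +x
        positions.append(p)
    return np.array(positions)

# upper arm, forearm, hand -- world orientations as delivered by three IMUs
rotations = [rot_z(10), rot_z(70), rot_z(70)]
bone_lengths = [0.30, 0.25, 0.08]                  # calibrated sizes in metres
print(chain_positions(rotations, bone_lengths))
```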

13.
We focus on the recognition of human actions in uncontrolled videos that may contain complex temporal structures. It is a difficult problem because of the large intra-class variations in viewpoint, video length, motion pattern, and so on. To address these difficulties, we propose a novel system that represents each action class by hidden temporal models. In this system, the crucial action event of each category is represented by a video segment that covers a fixed number of frames and can move temporally within the sequence. To capture temporal structure, the video segment is described by a temporal pyramid model. To capture large intra-class variations, multiple models are combined with an OR operation to represent alternative structures. The model index and the start frame of the segment are both treated as hidden variables. We implement a learning procedure based on the latent SVM method. The proposed approach is tested on two difficult benchmarks, the Olympic Sports and HMDB51 data sets, and the experimental results show that our system is comparable to state-of-the-art methods in the literature.
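The hidden-variable inference can be pictured with the minimal sketch below: slide a fixed-length window over the frame features, score each placement against every candidate model, and keep the best pair (model index, start frame). The dot-product score is a stand-in for the learned latent-SVM scoring function, and all sizes are invented.

```python
# Minimal illustration of inferring the hidden segment and model index.
import numpy as np

def best_segment(frame_feats, models, seg_len=16):
    """frame_feats: (T, D); models: (M, seg_len*D) learned templates."""
    T, D = frame_feats.shape
    best = (-np.inf, None, None)
    for start in range(T - seg_len + 1):           # hidden variable 1: start frame
        seg = frame_feats[start:start + seg_len].ravel()
        scores = models @ seg                      # hidden variable 2: model index
        m = int(np.argmax(scores))
        if scores[m] > best[0]:
            best = (float(scores[m]), m, start)
    return best                                    # (score, model, start)

rng = np.random.default_rng(3)
feats = rng.normal(size=(120, 8))                  # 120-frame clip, 8-D features
models = rng.normal(size=(4, 16 * 8))              # 4 alternative structures (OR-ed)
print(best_segment(feats, models))
```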

14.
It is well known that biological motion conveys a wealth of socially meaningful information. From even a brief exposure, biological motion cues enable the recognition of familiar people, and the inference of attributes such as gender, age, mental state, actions and intentions. In this paper we show that from the output of a video-based 3D human tracking algorithm we can infer physical attributes (e.g., gender and weight) and aspects of mental state (e.g., happiness or sadness). In particular, with 3D articulated tracking we avoid the need for view-based models, specific camera viewpoints, and constrained domains. The task is useful for man–machine communication, and it provides a natural benchmark for evaluating the performance of 3D pose tracking methods (vs. conventional Euclidean joint error metrics). We show results on a large corpus of motion capture data and on the output of a simple 3D pose tracker applied to videos of people walking.
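At its core, inferring an attribute from tracked poses is a classification over pose-derived features; the toy sketch below uses a linear classifier on hypothetical gait statistics. Features, labels, and sizes are all fabricated for illustration and do not reflect the paper's data.

```python
# Toy attribute inference from pose-derived gait statistics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# hypothetical per-sequence features: stride length, hip sway, cadence, ...
features = rng.normal(size=(300, 6))
labels = (features[:, 0] + 0.5 * features[:, 1] + 0.3 * rng.normal(size=300)) > 0

clf = LogisticRegression().fit(features[:200], labels[:200])
print("held-out accuracy:", clf.score(features[200:], labels[200:]))
```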

15.
罗坚, 唐琎, 毛芳, 赵鹏, 汪鹏. Chinese Journal of Sensors and Actuators (传感技术学报), 2015, 28(8): 1108-1114
To support real-time detection and recognition of abnormal behavior in the elderly and online active information push, the built-in three-axis accelerometer of a smart mobile terminal is used to collect human motion information. The signal is decomposed into Gabor atoms with the matching pursuit (MP) algorithm, and Wigner-Ville time-frequency analysis is used to study the motion in the joint time-frequency domain, combining temporal and 3D spatial action features. To handle the heavy computation of the time-frequency analysis and to support online training, classification, and information push for abnormal actions, a wearable abnormal-behavior detection system for the elderly is built from a mobile APP, a wireless network, and a cloud computing platform. Experimental results show that the method is practical and can be applied to daily monitoring and emergency assistance for the elderly.
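A small matching-pursuit sketch with a Gabor-like dictionary, applied to a synthetic 1-D accelerometer trace, is given below. The dictionary parameters and stopping rule are illustrative assumptions, not the paper's exact atom set or thresholds.

```python
# Matching pursuit over a Gabor-like dictionary for a 1-D acceleration signal.
import numpy as np

def gabor_dictionary(n, scales=(8, 16, 32), freqs=(1, 2, 4, 8)):
    t = np.arange(n)
    atoms = []
    for s in scales:
        for f in freqs:
            for center in range(0, n, s // 2):
                g = np.exp(-0.5 * ((t - center) / s) ** 2) * np.cos(2 * np.pi * f * t / n)
                atoms.append(g / (np.linalg.norm(g) + 1e-12))
    return np.array(atoms)                       # (n_atoms, n)

def matching_pursuit(signal, atoms, n_iter=10):
    residual, decomposition = signal.copy(), []
    for _ in range(n_iter):
        corr = atoms @ residual                  # inner product with every atom
        k = int(np.argmax(np.abs(corr)))
        decomposition.append((k, float(corr[k])))
        residual = residual - corr[k] * atoms[k]
    return decomposition, residual

t = np.linspace(0, 1, 128)
accel = np.sin(2 * np.pi * 4 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)  # burst-like move
atoms = gabor_dictionary(128)
decomp, res = matching_pursuit(accel, atoms, n_iter=5)
print(decomp[0], "residual energy:", float(res @ res))
```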

16.
In this paper, we propose a gait recognition algorithm that fuses motion and static spatio-temporal templates of silhouette image sequences: motion silhouette contour templates (MSCTs) and static silhouette templates (SSTs). MSCTs and SSTs capture the motion and static characteristics of gait and are computed directly from the silhouette sequence. The performance of the proposed algorithm is evaluated experimentally on the SOTON and USF data sets and compared with other published results on these two data sets. Experimental results show that the proposed templates are efficient for human identification in indoor and outdoor environments. The proposed algorithm achieves a recognition rate of around 85% on the SOTON data set and around 80% on the intrinsic-difference group (probes A-C) of the USF data set.
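A simplified construction of the two templates from a binary silhouette sequence is sketched below: an averaged silhouette approximates the static template and an averaged frame difference approximates the motion template. The exact MSCT/SST definitions follow the paper; this is only a rough stand-in on fake data.

```python
# Rough construction of static and motion gait templates from silhouettes.
import numpy as np

def gait_templates(silhouettes):
    """silhouettes: (T, H, W) binary array covering one gait cycle."""
    sil = silhouettes.astype(float)
    sst = sil.mean(axis=0)                              # static silhouette template
    motion = np.abs(np.diff(sil, axis=0))               # pixels that changed
    msct = motion.mean(axis=0)                          # motion template (approx.)
    return msct, sst

def template_distance(a, b):
    """Euclidean distance between flattened (msct, sst) template pairs."""
    return float(np.linalg.norm(np.concatenate([a[0].ravel(), a[1].ravel()]) -
                                np.concatenate([b[0].ravel(), b[1].ravel()])))

seq = (np.random.rand(30, 64, 44) > 0.7).astype(np.uint8)   # stand-in silhouettes
msct, sst = gait_templates(seq)
print(msct.shape, sst.shape)                            # (64, 44) each
```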

17.
Gait is a biometric trait that can be recognized in remote monitoring. In this article, a gait recognition method based on the joint distribution of motion angles is proposed. A new feature, the motion angles of the lower limbs, is defined and extracted from either a 2D video database or a 3D motion capture database, and the corresponding angles of the right and left legs are combined to compute joint distribution spectra. Based on the joint distribution of these angles, a feature histogram is built for each individual. In the distance measurement stage, three types of distance vector are defined and used to measure the similarity between histograms, and a classifier is then built to perform the classification. Experiments carried out on the CASIA Gait Database and the CMU motion capture database show that the method achieves good recognition performance.
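The joint-distribution feature can be sketched as a 2D histogram over corresponding right-leg and left-leg angles, compared with a simple L1 histogram distance and nearest-neighbour matching. Bin counts, angle ranges, and the synthetic gallery are assumptions; the paper defines three distance vectors, of which only one is illustrated.

```python
# Joint right/left leg angle histogram with L1 nearest-neighbour matching.
import numpy as np

def angle_histogram(right_angles, left_angles, bins=18, rng=(-60.0, 120.0)):
    h, _, _ = np.histogram2d(right_angles, left_angles, bins=bins, range=[rng, rng])
    return h.ravel() / max(h.sum(), 1.0)        # normalized joint distribution

def l1_distance(h1, h2):
    return float(np.abs(h1 - h2).sum())

rng_ = np.random.default_rng(5)
gallery = {sid: angle_histogram(rng_.normal(30, 15, 200) + 3 * sid,
                                rng_.normal(25, 15, 200) - 3 * sid)
           for sid in range(5)}                 # one histogram per enrolled subject

probe = angle_histogram(rng_.normal(33, 15, 200), rng_.normal(22, 15, 200))
best = min(gallery, key=lambda sid: l1_distance(gallery[sid], probe))
print("probe identified as subject", best)      # nearest histogram wins
```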

18.
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and the blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed into a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
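The per-region PCA-as-blendshape idea can be shown in a few lines, as below: a truncated PCA of one region's mocap frames gives the basis, so editing the per-frame PCA coefficients edits the captured motion. Region size, component count, and the 1.5x scaling are illustrative assumptions.

```python
# Truncated PCA of one facial region's mocap frames used as blendshapes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
region_frames = rng.normal(size=(800, 30))      # e.g. mouth region: 10 markers x 3

pca = PCA(n_components=5).fit(region_frames)    # truncated basis = blendshapes
weights = pca.transform(region_frames)          # per-frame blendshape weights

edited = weights.copy()
edited[:, 0] *= 1.5                             # exaggerate the dominant mode
edited_frames = pca.inverse_transform(edited)   # edited motion capture sequence
print(edited_frames.shape, pca.explained_variance_ratio_.round(3))
```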

19.
We propose a facial motion tracking and expression recognition system based on video data. Using a 3D deformable facial model, an online statistical model (OSM) and a cylinder head model (CHM) are combined to track 3D facial motion within a particle filtering framework. For facial expression recognition, two algorithms are developed: one fast and efficient, the other robust and precise. In the first, facial animation and facial expression are retrieved sequentially; once the facial animation is obtained, the facial expression is recognized using static facial expression knowledge learned from anatomical analysis. In the second, facial animation and facial expression are retrieved simultaneously to increase reliability and robustness under noisy input data, and the facial expression is recognized by fusing static and dynamic facial expression knowledge, the latter learned by training a multi-class expressional Markov process on a video database. The experiments show that facial motion tracking with OSM+CHM is more robust to pose than with OSM alone, and that the facial expression score of the robust and precise algorithm is higher than those of other state-of-the-art facial expression recognition methods.
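To make the particle filtering framework concrete, here is a bare-bones filter over a single simulated head-pose parameter, showing the propagate / weight / resample loop such a tracker runs each frame. The real system tracks full 3D facial motion with OSM and CHM observation models; every number here is invented.

```python
# Bare-bones particle filter over one simulated head-pose parameter.
import numpy as np

rng = np.random.default_rng(7)
n_particles, n_frames = 300, 50
true_pose = np.cumsum(rng.normal(0, 0.05, n_frames))        # simulated head yaw
observations = true_pose + rng.normal(0, 0.1, n_frames)     # noisy measurements

particles = rng.normal(0, 0.2, n_particles)
estimates = []
for z in observations:
    particles += rng.normal(0, 0.05, n_particles)           # motion model
    weights = np.exp(-0.5 * ((z - particles) / 0.1) ** 2)   # observation likelihood
    weights /= weights.sum()
    estimates.append(float(weights @ particles))            # posterior mean
    idx = rng.choice(n_particles, n_particles, p=weights)   # resample
    particles = particles[idx]

print("mean abs tracking error:", np.mean(np.abs(np.array(estimates) - true_pose)))
```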

20.
In recent years, the convergence of computer vision and computer graphics has put forth a new field of research that focuses on the reconstruction of real-world scenes from video streams. To make immersive 3D video a reality, the whole pipeline, from scene acquisition through 3D video reconstruction to real-time rendering, needs to be researched. In this paper, we describe the latest advancements of our system to record, reconstruct and render free-viewpoint videos of human actors. We apply a silhouette-based non-intrusive motion capture algorithm that uses a 3D human body model to estimate the actor's parameters of motion from multi-view video streams. A renderer plays back the acquired motion sequence in real time from any arbitrary perspective. Photo-realistic physical appearance of the moving actor is obtained by generating time-varying multi-view textures from video. This work shows how the motion capture sub-system can be enhanced by incorporating texture information from the input video streams into the tracking process. 3D motion fields are reconstructed from optical flow and used in combination with silhouette matching to estimate pose parameters. We demonstrate that a high visual quality can be achieved with the proposed approach and validate the enhancements provided by the motion field step.
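The silhouette-matching term can be illustrated with a tiny overlap score between an observed camera silhouette and the silhouette a pose hypothesis would produce; the tracker maximizes such a score over pose parameters (combined, in the paper, with the optical-flow term). The masks below are synthetic placeholders rather than rendered model silhouettes.

```python
# Tiny silhouette-overlap score used as a pose-fitting objective.
import numpy as np

def silhouette_overlap(observed, rendered):
    """XOR-based similarity: 1.0 means identical binary masks."""
    return 1.0 - np.logical_xor(observed, rendered).mean()

obs = np.zeros((120, 160), dtype=bool)
obs[30:90, 60:100] = True                      # observed actor silhouette
hypo_good = np.zeros_like(obs); hypo_good[32:92, 58:98] = True
hypo_bad  = np.zeros_like(obs); hypo_bad[30:90, 100:140] = True
print(silhouette_overlap(obs, hypo_good), silhouette_overlap(obs, hypo_bad))
```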

