Similar Documents
20 similar documents found (search time: 15 ms)
1.
Animation of 3D models plays an important role in digital design and its applications and is attracting growing research attention, but faithfully reproducing ethnic dance performances through 3D digitization remains a highly challenging problem. This paper digitizes dance performances for display by capturing dance movements with motion capture technology. The approach is as follows: human motion data are first acquired with motion capture equipment; character modeling, skeleton rigging, skinning, and weight adjustment are then performed in Maya; the 3D model is combined with the motion capture data in MotionBuilder; and the result is a virtual-human performance of the real dance movements. The paper builds a virtual stage for ethnic dance performance and, taking dances of 13 ethnic groups as the digitized content, promotes the application of motion-capture-driven dance performance.

2.
The purpose of this research is a quantitative analysis of dance movement patterns, which cannot be analyzed with a motion capture system alone, using simultaneous measurement of body motion and biophysical information. In a basic experiment using optical motion capture and electromyography (EMG) equipment, two versions of the same leg movement are captured simultaneously: one performed with deliberate muscle strength and one without, in order to quantitatively characterize the leg movement. We also measured traditional Japanese dance motion using the constructed system. Leg movements of Japanese dance can be visualized by displaying a 3D CG character animation together with the motion data and EMG data. We expect this research to help dancers and dance researchers by providing new information on dance movement that motion capture alone cannot reveal.
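Simultaneous measurement implies aligning the EMG samples with the mocap frame timeline; a minimal resampling sketch, with sampling rates and names purely illustrative rather than taken from the paper:

```python
import numpy as np

def align_emg_to_mocap(emg, emg_rate, n_frames, mocap_rate):
    """Resample an EMG signal onto the mocap frame timeline by linear interpolation."""
    emg_t = np.arange(len(emg)) / emg_rate       # EMG sample timestamps (s)
    frame_t = np.arange(n_frames) / mocap_rate   # mocap frame timestamps (s)
    return np.interp(frame_t, emg_t, emg)

# Example: 1 s of 1000 Hz EMG aligned to 120 mocap frames at 120 Hz
emg = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
aligned = align_emg_to_mocap(emg, 1000, 120, 120)
```

After alignment each mocap frame carries one EMG value, so muscle activity can be shown alongside the 3D CG character frame by frame.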

3.
Research on Virtual Human Animation Based on Motion Capture Data   (Cited by: 2; self-citations: 0; others: 2)
With the growing demand for computer animation in 3D games and related industries, manually adjusting virtual-human motion in 3D animation software is no longer adequate for modern animation production. Motion capture directly records the movement of a real subject and uses the data to generate computer animation; it is efficient, produces highly realistic animation, and has therefore been widely adopted. This paper presents a method for generating animation from motion capture data: a 3D skeleton model is built and driven by the captured data, producing skeletal motion and hence animation. Because the method can make full use of the large body of existing motion capture data, it has broad application prospects.
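Driving a skeleton from captured data ultimately reduces to forward kinematics over the joint hierarchy; a minimal planar-chain sketch (the joint layout, angles, and bone lengths are illustrative, not the paper's model):

```python
import numpy as np

def rot2d(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def forward_kinematics(angles, lengths):
    """World positions of each joint in a planar chain driven by captured angles."""
    pos = np.zeros(2)
    world_angle = 0.0
    out = [pos.copy()]
    for theta, length in zip(angles, lengths):
        world_angle += theta                               # accumulate parent rotations
        pos = pos + rot2d(world_angle) @ np.array([length, 0.0])
        out.append(pos.copy())
    return np.array(out)

# A 3-bone chain posed by one captured frame of joint angles (radians)
frame = [np.pi / 2, -np.pi / 2, 0.0]
joints = forward_kinematics(frame, [1.0, 1.0, 0.5])
```

Feeding one such angle vector per captured frame moves the skeleton through the recorded motion.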

4.
Noh is a genre of Japanese traditional theater, a kind of musical drama. Similar to other dance forms, Noh dance (shimai) can also be divided into small, discrete units of motion (shosa). Therefore, given a set of motion clips of motion units (shosa), Noh dance animation can be synthesized by composing them in a sequence based on the Noh dance notation (katatsuke). However, it is difficult for researchers and learners of Noh dance to utilize existing animation systems to create such animations. The purpose of this research is to develop an easy-to-use authoring system for Noh dance animation. In this paper, we introduce the design, implementation, and evaluation of our system. To solve the problems of existing animation systems, we employ our smart motion synthesis technique to compose motion units automatically. We improved the motion synthesis method by enhancing the algorithms for detecting body orientation and constraints between the foot and ground to handle Noh dance motions correctly. We classify motion units as either pattern units, which are specific forms of motion represented as short motion clips, or locomotion units, generated on the fly to denote movement towards a specific position or direction. To handle locomotion-type motion units, we implemented a module to generate walking motion based on a given path. We created several Noh dance animations using this system, which was evaluated through a series of experiments. We also conducted a user test to determine the usefulness of our system for learners of Noh dance.

5.
Motion capture is a technique for digitally recording the movements of real entities, usually humans. Originally developed as an analysis tool in biomechanics research, it has grown increasingly important as a source of motion data for computer animation and is widely used in both cinema and video games. Hand motion capture and tracking in particular has received much attention because of its critical role in the design of new human-computer interaction methods and in gesture analysis; capturing human hand motion remains one of the main difficulties. This paper gives an overview of ongoing research, "HandPuppet3D", carried out in collaboration with an animation studio, which employs computer vision techniques to develop a prototype desktop system and associated animation process that allow an animator to control 3D character animation through hand gestures. The eventual goal of the project is to support existing practice by providing a softer, more intuitive user interface for the animator that improves the productivity of the animation workflow and the quality of the resulting animations. To help achieve this goal, the focus has been placed on a prototype camera-based desktop gesture capture system that captures hand gestures and interprets them in order to generate and control the animation of 3D character models. Methods are discussed for motion tracking and capture in 3D animation, and for hand motion tracking and capture in particular. HandPuppet3D aims to enable gesture capture, interpretation of the captured gestures, and control of the target 3D animation software; this involves the development and testing of a motion analysis system built from recently developed algorithms. We review current software and research methods available in this area and describe our current work.

6.
Research on Digital Preservation of Lingnan Dance Based on Motion Capture Technology   (Cited by: 1; self-citations: 0; others: 1)
This paper analyzes the problems facing the preservation of Lingnan dance and proposes a digital preservation method based on motion capture technology. Compared with traditional video recording, viewers can browse the performance from any angle and at any distance. By building a 3D motion database of Lingnan dance together with a digital virtual exhibition, the artistic essence of each Lingnan dance can be fully and faithfully reproduced, providing an accurate digital platform for future teaching, computer-assisted choreography, animation production, and related research. This is of far-reaching significance for the preservation, inheritance, and development of Lingnan dance.

7.
To address the high cost and long production time of traditional 3D character animation, this article introduces an efficient production method based on Kinect motion capture. A Kinect motion-sensing camera captures a live performer's movements, generates keyframe data for the skeleton joints, and exports a BVH motion file; importing the BVH data into C4D then drives the character model to complete the animation. Applying this method in 3D character animation teaching helps improve students' interest and learning efficiency.
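A BVH file stores a skeleton HIERARCHY followed by a MOTION section of per-frame channel values; a minimal sketch that reads only the MOTION section (hierarchy parsing omitted, sample data invented for illustration):

```python
def parse_bvh_motion(text):
    """Parse the MOTION section of a BVH file into (frame_time, frames).

    frames is a list of per-frame channel value lists; which channel maps to
    which joint is defined by the HIERARCHY section, not handled here.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    i = lines.index("MOTION")
    n_frames = int(lines[i + 1].split()[-1])       # "Frames: N"
    frame_time = float(lines[i + 2].split()[-1])   # "Frame Time: t"
    frames = [[float(v) for v in ln.split()]
              for ln in lines[i + 3 : i + 3 + n_frames]]
    return frame_time, frames

sample = """MOTION
Frames: 2
Frame Time: 0.0333333
0.0 90.0 0.0 10.0 0.0 0.0
0.0 91.0 0.0 11.0 0.0 0.0
"""
ft, frames = parse_bvh_motion(sample)
```

Each parsed frame is one pose of the skeleton; replaying them at `frame_time` intervals reproduces the captured motion.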

8.
Video-Based Human Animation   (Cited by: 14; self-citations: 0; others: 14)
Existing motion-capture-based animation generally suffers from high cost and from the performer's movement being constrained by the capture equipment. This paper proposes a new video-based human animation technique: human motion is first captured from video, then edited and retargeted to an animated character to produce clips that meet the animator's requirements. The key techniques of motion capture, motion editing, and motion retargeting are studied in depth, and a two-camera video-based human animation system (VBHA) is developed on top of them. Experimental results show the feasibility of capturing motion from widely available, low-cost video and generating realistic animation after motion editing and retargeting.

9.
The creation of a stylistic animation through the use of high‐level controls has always been a design goal for computer animation software. In this paper, we propose a procedural animation system, called rhythmic character animation playacting (RhyCAP), which allows a designer to interactively direct animated characters by adjusting rhythmic parameters such as tempo, exaggeration, and timing. The motions thus generated reflect the intention of the director and also adapt to environmental obstacle constraints. We use a sequence of martial‐art steps in the performance of a Chinese lion dance to illustrate the effectiveness of the system. The animation is generated by composition of common motion elements, concisely represented in an action graph. We have implemented an animation control program that allows Chinese lion dance to be choreographed interactively. This authoring tool also serves as a useful means for preserving this part of world cultural heritage. Copyright © 2006 John Wiley & Sons, Ltd.

10.
In this paper, we introduce a method that endows a given animation signal with slow-in and slow-out effects by using a bilateral filter scheme. By modifying the equation of the bilateral filter, the method applies reparameterization to the original animation trajectory. This holds extreme poses in the original animation trajectory for a long time, in such a way that there is no distortion or loss of the original information in the animation path. Our method can successfully enhance the slow-in and slow-out effects for several different types of animation data: keyframe and hand-drawn trajectory animation, motion capture data, and physically-based animation by using a rigid body simulation system.
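The paper modifies the bilateral filter equation to reparameterize the animation curve; as a baseline, a plain 1D bilateral filter already shows the key property of smoothing small wiggles while preserving extreme poses (all parameters here are illustrative):

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_t=2.0, sigma_v=0.5):
    """Edge-preserving smoothing of a 1D animation curve.

    Each output sample is a weighted mean over a window, where weights combine
    closeness in time (sigma_t) and closeness in value (sigma_v), so samples on
    the far side of a pose transition contribute almost nothing.
    """
    out = np.empty(len(signal), dtype=float)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        t = np.arange(lo, hi)
        w = (np.exp(-((t - i) ** 2) / (2 * sigma_t ** 2))
             * np.exp(-((signal[t] - signal[i]) ** 2) / (2 * sigma_v ** 2)))
        out[i] = np.sum(w * signal[t]) / np.sum(w)
    return out

curve = np.array([0.0, 0.1, 0.0, 5.0, 5.1, 5.0])  # a step between two held poses
smoothed = bilateral_filter_1d(curve)
```

A Gaussian filter would blur the step toward its midpoint; the value-dependent weight keeps both held poses intact, which is exactly the property the slow-in/slow-out scheme builds on.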

11.
Crowd Animation Authoring Based on Multiple Autonomous Agents   (Cited by: 7; self-citations: 2; others: 7)
Crowd animation has long been a challenging research direction in computer animation. This paper proposes a crowd animation authoring framework based on multiple autonomous agents: each character in the crowd is an autonomous agent that perceives environmental information, forms intentions, plans behaviors, and finally executes those behaviors and realizes its intentions through motion produced by its motion system. Unlike traditional character-motion generation, a basic motion library is first built with a motion capture system, and motion editing techniques then process these basic motions into the final character motions. With this technique, an animator only needs to "film" the crowd's movement to author a crowd animation, greatly improving production efficiency.

12.
Human motion data produced by a motion capture system consist of marker positions over a motion sequence and are used to drive a human model to produce realistic animation. Based on a synthesis and analysis of recent literature on human motion data reconstruction, this paper first formulates the reconstruction problem and then summarizes and analyzes research on the two problems that are hard to avoid during reconstruction: noise, and missing feature points (the markers in virtual space). It then discusses research on developing optical prototype capture systems for acquiring human motion data and reviews work on driving human geometric models with motion data. Finally, some directions for future research are suggested.
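A common baseline for the missing-marker problem summarized above is to fill occlusion gaps by interpolating between the nearest visible frames; a minimal per-coordinate sketch (real reconstruction methods are more sophisticated):

```python
import numpy as np

def fill_marker_gaps(track):
    """Fill missing marker samples (NaN rows) in an (n_frames, 3) position track
    by linear interpolation between the nearest visible frames, per coordinate."""
    track = np.asarray(track, dtype=float).copy()
    frames = np.arange(len(track))
    for axis in range(track.shape[1]):
        col = track[:, axis]
        missing = np.isnan(col)
        if missing.any() and not missing.all():
            col[missing] = np.interp(frames[missing], frames[~missing], col[~missing])
    return track

nan = float("nan")
track = np.array([[0.0, 0.0, 0.0],
                  [nan, nan, nan],   # marker occluded for one frame
                  [2.0, 4.0, 0.0]])
filled = fill_marker_gaps(track)
```

Linear interpolation handles short gaps; longer gaps usually need model-based methods (e.g., exploiting inter-marker rigidity), which is where the surveyed literature comes in.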

13.
Motion capture is a research hotspot in computer vision and human motion analysis, with broad application prospects in computer animation and related fields. Building on a survey of progress in vision-based human motion capture, this paper analyzes motion tracking and capture methods and their technical difficulties, and proposes a new method, workflow, and system design framework for extracting human motion information from video and reproducing human motion trajectories.

14.
This paper presents a novel data‐driven expressive speech animation synthesis system with phoneme‐level controls. This system is based on a pre‐recorded facial motion capture database, where an actress was directed to recite a pre‐designed corpus with four facial expressions (neutral, happiness, anger and sadness). Given new phoneme‐aligned expressive speech and its emotion modifiers as inputs, a constrained dynamic programming algorithm is used to search for best‐matched captured motion clips from the processed facial motion database by minimizing a cost function. Users optionally specify 'hard constraints' (motion‐node constraints for expressing phoneme utterances) and 'soft constraints' (emotion modifiers) to guide this search process. We also introduce a phoneme–Isomap interface for visualizing and interacting with phoneme clusters that are typically composed of thousands of facial motion capture frames. On top of this novel visualization interface, users can conveniently remove contaminated motion subsequences from a large facial motion dataset. Facial animation synthesis experiments and objective comparisons between synthesized facial motion and captured motion showed that this system is effective for producing realistic expressive speech animations.

15.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
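The PCA reduction used to build an expression eigenspace can be sketched as an SVD over mean-centered motion frames; the data below are random stand-ins for facial capture, and all names are illustrative:

```python
import numpy as np

def build_eigenspace(frames, k):
    """PCA on motion frames: rows are frames, columns are stacked marker coords.

    Returns the mean frame, the top-k principal directions, and per-frame
    low-dimensional coefficients (the 'expression signal' in the eigenspace).
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # (k, dims) principal directions
    coeffs = centered @ basis.T     # (n_frames, k) eigenspace coordinates
    return mean, basis, coeffs

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 30))  # toy stand-in for 3D marker trajectories
mean, basis, coeffs = build_eigenspace(frames, k=5)
recon = mean + coeffs @ basis        # project back from the eigenspace
```

New expression signals generated in the low-dimensional space are mapped back to marker space the same way `recon` is computed here.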

16.
Exploiting the freely editable nature of human motion poses, this paper extracts a human skeleton model, builds a library of human motion poses, analyzes the application of motion capture systems in dance training, and proposes a method based on similarity matching between feature planes to compute the motion data parameters of each body part of the model. Experiments show that the method analyzes human poses with high accuracy and robustness, enabling dancers to precisely identify the differences between their movements and standard dance movements and providing theoretical support for scientific dance training.
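The paper matches feature planes between body parts; as a deliberately simpler stand-in for pose comparison, cosine similarity between joint-angle vectors conveys the idea (the poses and names here are invented for illustration):

```python
import numpy as np

def pose_similarity(pose_a, pose_b):
    """Cosine similarity between two poses given as joint-angle vectors (radians).

    1.0 means identical direction in joint-angle space; lower values flag
    larger deviations from the reference pose.
    """
    a, b = np.asarray(pose_a, float), np.asarray(pose_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

standard = [0.5, 1.2, -0.3, 0.8]      # reference dance pose (illustrative)
student = [0.55, 1.1, -0.25, 0.85]    # learner's captured pose (illustrative)
score = pose_similarity(standard, student)
```

A per-joint difference report would be more actionable for training feedback; a single score like this is just the coarsest form of the comparison the paper describes.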

17.
黄建峰  林奕城 《软件学报》2000,11(9):1139-1150
A new method for generating facial animation is proposed: a motion capture system records the subtle movements of a real person's face, and the captured motion data then drive a face model to produce the animation. First, 23 reflective markers are attached to the performer's face and captured with the Oxford Metrics VICON8 system. The resulting 3D motion data require post-processing before use, so a method is proposed to remove head motion and estimate the pivot of head rotation; after processing, the remaining motion data represent changes in facial expression and can therefore be applied directly to the face model.

18.
黄建峰  林奕成  欧阳明 《软件学报》2000,11(9):1141-1150
A new method for generating facial animation is proposed: a motion capture system records the subtle movements of a real person's face, and the captured motion data then drive a face model to produce the animation. First, 23 reflective markers are attached to the performer's face and captured with the Oxford Metrics VICON8 system. The resulting 3D motion data require post-processing before use, so a method is proposed to remove head motion and estimate the pivot of head rotation; after processing, the remaining motion data represent changes in facial expression and can therefore be applied directly to the face model. The system is implemented with a 2.5D face model, which combines the advantages of 2D and 3D models: it is simple, yet looks lively and natural under small rotations. During facial animation production, a special interpolation formula computes the displacement of non-feature points, and the face is divided into several regions to constrain the movement of the model's 3D points, making the animation more natural. On a Pentium III 500 MHz machine with an OpenGL accelerator card, the system's update rate exceeds 30 frames per second.
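Removing rigid head motion so that only expression remains can be sketched as a Kabsch-style rigid alignment of markers assumed fixed to the skull; the marker set and 90° test rotation below are illustrative, and the paper's actual pivot-estimation method differs:

```python
import numpy as np

def remove_rigid_motion(ref, cur):
    """Best-fit rotation + translation (Kabsch) mapping cur onto ref, then apply it.

    ref, cur: (n_markers, 3) positions of markers assumed rigid on the head.
    Returns cur expressed in ref's frame, i.e. with rigid head motion removed.
    """
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    h = (cur - cur_c).T @ (ref - ref_c)              # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))           # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return (cur - cur_c) @ r.T + ref_c

ref = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
# The same rigid cluster after the head turns 90° about z and translates
rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
cur = ref @ rz.T + np.array([0.5, -0.2, 0.1])
restored = remove_rigid_motion(ref, cur)
```

In a real pipeline the alignment would be fit only on skull-rigid markers (forehead, temples) and then applied to all face markers, leaving the non-rigid expression motion behind.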

19.
赵威  李毅 《计算机应用》2022,42(9):2830-2837
To generate more accurate and fluid virtual-human animation, a Kinect device captures 3D human pose data while a monocular 3D human pose estimation algorithm infers skeleton joint data from Kinect's color stream, optimizing the pose estimate in real time and driving a virtual character model to produce animation. First, a spatio-temporally optimized skeleton joint data processing method is proposed to improve the stability of monocular 3D human pose estimation. Second, a human pose estimation method fusing Kinect with the Occlusion-Robust Pose-Maps (ORPM) algorithm is proposed to solve Kinect's occlusion problem. Finally, a virtual-human animation system based on quaternion vector interpolation and inverse-kinematics constraints is developed, capable of motion simulation and real-time animation generation. Compared with generating animation from Kinect capture alone, the proposed method yields more robust pose estimation data with some resistance to occlusion; compared with ORPM-based animation generation, it improves the frame rate of the generated animation twofold, with more realistic and fluid results.
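Quaternion interpolation between poses is typically spherical linear interpolation (slerp); a standard sketch in (w, x, y, z) convention, not the paper's exact implementation:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1 at t in [0, 1]."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:               # take the short way around the 4D sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:            # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Identity to a 90° rotation about z; halfway should be a 45° rotation
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
qh = slerp(qa, qb, 0.5)
```

Slerp keeps the angular velocity constant between keyframe rotations, which is why it produces smoother joint motion than interpolating Euler angles.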

20.
Motion capture cannot generate cartoon‐style animation directly. We emulate the rubber‐like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon‐like movement. We achieve this using trajectory‐based motion exaggeration while allowing the violation of link‐length constraints. We extend this technique to obtain smooth, rubber‐like motion by dividing the original links into shorter sub‐links and computing the positions of joints using Bézier curve interpolation and a mass‐spring simulation. This method is fast enough to be used in real time.
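The Bézier interpolation of sub-link joint positions can be sketched with De Casteljau evaluation of a cubic curve; the control points below are illustrative, and the mass-spring part is omitted:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """De Casteljau evaluation of a cubic Bézier curve at parameter t in [0, 1]."""
    p0, p1, p2, p3 = (np.asarray(p, float) for p in (p0, p1, p2, p3))
    a = (1 - t) * p0 + t * p1          # first level of linear interpolation
    b = (1 - t) * p1 + t * p2
    c = (1 - t) * p2 + t * p3
    d = (1 - t) * a + t * b            # second level
    e = (1 - t) * b + t * c
    return (1 - t) * d + t * e         # point on the curve

# Bend a straight link into a rubber-like arc: sample sub-joint positions
ctrl = ([0.0, 0.0], [0.3, 0.6], [0.7, 0.6], [1.0, 0.0])
sub_joints = [cubic_bezier(*ctrl, t) for t in np.linspace(0.0, 1.0, 5)]
```

Placing the original joints at the curve's endpoints and the exaggerated offsets in the inner control points yields the smooth rubber-limb bend the paper describes.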
