Similar Literature
20 similar records were retrieved.
1.
Expressive facial animations are essential to enhance the realism and the credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that gathers the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space. The appearance space provides a coherent and continuous parameterization of human facial movements, while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance space for a synthetic character. The link between both appearance spaces makes it possible to retarget facial animation onto a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them in a low-dimensional control space. The control space acts as an interactive user interface for manipulating the facial expressions of any synthetic face. This interface makes it simple and intuitive to generate still facial configurations for keyframe animation, as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system and the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.
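The abstract does not spell out how the principal variation patterns are detected; as a rough, hedged illustration, the sketch below uses plain PCA over captured facial frames to build a low-dimensional control space and map control coordinates back to a facial configuration. The function names and the choice of plain PCA are assumptions, not the authors' exact appearance-space construction.

```python
import numpy as np

def build_control_space(frames, n_components=5):
    """Project captured facial frames (n_frames x n_features) onto their
    principal variation patterns, yielding a low-dimensional control space.
    Plain PCA via SVD; the paper's appearance-space construction is richer."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD gives the principal directions of facial variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # (n_components, n_features)
    coords = centered @ basis.T          # low-dimensional coordinates per frame
    return mean, basis, coords

def synthesize_expression(mean, basis, control_params):
    """Map a point of the control space back to a full facial configuration."""
    return mean + np.asarray(control_params) @ basis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frames = rng.normal(size=(200, 60))   # stand-in for captured marker data
    mean, basis, coords = build_control_space(fake_frames)
    expression = synthesize_expression(mean, basis, coords[0])
    print(expression.shape)
```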

2.
Video-Based Human Body Animation (cited 14 times; 0 self-citations, 14 by others)
Existing motion-capture-based animation generally suffers from high equipment cost, and the performer's movement is constrained by the capture devices. This paper presents a new video-based human animation technique: human motion is first captured from video, the motion is then edited, and finally it is retargeted to an animated character to produce clips that satisfy the animator's requirements. The key techniques involved, namely motion capture, motion editing and motion retargeting, are studied in depth, and a two-camera video-based human animation system (VBHA) was developed on top of them. The system's results demonstrate the feasibility of capturing motion from widely available, low-cost video and, after motion editing and retargeting, generating realistic animation.

3.
This paper proposes a new generative model for flexible editing of human motion. Different from previous work, three intuitive factors of motion, namely content, identity and style, can be manipulated directly with the new model. With the new generative model, motion editing can be achieved in various aspects, including transferring an unknown style from one actor to another, synthesizing other styles for an unknown actor and generating a new motion with other content. Copyright © 2013 John Wiley & Sons, Ltd.

4.
A Computer-Aided Choreography System for Rhythmic Gymnastics Routines (cited 1 time; 0 self-citations, 1 by others)
Targeting the characteristics of choreographing individual and group rhythmic gymnastics routines, a computer-aided choreography system was developed using motion capture, motion editing and synthesis, and music feature extraction techniques. The system assists the entire choreography workflow for both individual and group routines, thereby shortening choreography time, widening the coaches' creative space and improving the quality of the choreographed routines.

5.
To give readers an overview of 3D human motion editing and synthesis, this paper systematically reviews the current state of research in the field. It first analyzes and compares the four main categories of 3D human motion synthesis methods; it then examines the development of motion-capture-data-driven synthesis from three aspects: the representation of motion capture data, motion editing techniques and motion synthesis techniques; finally, it discusses the difficult open problems and the likely focus of future research in 3D human motion editing and synthesis.

6.
The processing of captured motion is an essential task for undertaking the synthesis of high-quality character animation. The motion decomposition techniques investigated in prior work extract meaningful motion primitives that help to facilitate this process. Carefully selected motion primitives can play a major role in various motion-synthesis tasks, such as interpolation, blending, warping, editing or the generation of new motions. Unfortunately, for a complex character motion, finding generic motion primitives by decomposition is an intractable problem due to the compound nature of the behaviours of such characters. Additionally, decomposed motion primitives tend to be too limited for the chosen model to cover a broad range of motion-synthesis tasks. To address these challenges, we propose a generative motion decomposition framework in which the decomposed motion primitives are applicable to a wide range of motion-synthesis tasks. Technically, the input motion is smoothly decomposed into three motion layers. These are base-level motion, a layer with controllable motion displacements and a layer with high-frequency residuals. The final motion can easily be synthesized simply by changing a single user parameter that is linked to the layer of controllable motion displacements or by imposing suitable temporal correspondences to the decomposition framework. Our experiments show that this decomposition provides a great deal of flexibility in several motion synthesis scenarios: denoising, style modulation, upsampling and time warping.
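A minimal sketch of the three-layer idea, assuming simple moving-average filtering stands in for the paper's smooth decomposition: a 1-D joint trajectory is split into a base layer, a controllable displacement layer and high-frequency residuals, and a single gain parameter edits only the middle layer. Window sizes and the filter are illustrative assumptions.

```python
import numpy as np

def moving_average(signal, window):
    """Simple low-pass filter used as a stand-in for the paper's smooth decomposition."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def decompose(signal, base_window=31, mid_window=7):
    """Split a joint trajectory into base motion, controllable displacements
    and high-frequency residuals (a simplified three-layer decomposition)."""
    base = moving_average(signal, base_window)
    mid = moving_average(signal, mid_window) - base    # controllable displacement layer
    residual = signal - base - mid                      # high-frequency residuals
    return base, mid, residual

def resynthesize(base, mid, residual, gain=1.0):
    """Edit the motion by scaling only the displacement layer (one user parameter)."""
    return base + gain * mid + residual

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 400)
    traj = np.sin(t) + 0.2 * np.sin(8 * t) + 0.02 * np.random.default_rng(1).normal(size=t.size)
    base, mid, res = decompose(traj)
    exaggerated = resynthesize(base, mid, res, gain=2.0)
    print(exaggerated.shape)
```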

7.
杨春玲, 董传良. 《计算机仿真》 (Computer Simulation), 2007, 24(1): 186-187, 195
Motion capture records the details of human joint movement and is currently one of the most promising computer animation techniques. The reuse of captured motion data, however, remains difficult, and many motion editing techniques have been proposed to address it. Motion transition is a common editing technique that splices two input motion sequences into a new sequence, and the quality of the result depends directly on how well the transition point is chosen. Selecting a transition point between two motions requires computing the distance between every pair of frames of the input motions, which has O(n²) complexity. By introducing a multiresolution model, this paper reduces that complexity to O(n log₂ n), and experiments show that the reduction does not degrade the quality of the resulting motion.
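As a hedged illustration of how a multiresolution model can avoid evaluating all O(n²) frame pairs, the sketch below searches for the best transition pair exhaustively only on a coarse grid and then refines it locally at each finer level. The distance metric, level count and refinement radius are placeholders, not the paper's exact scheme.

```python
import numpy as np

def frame_distance(a, b):
    """Placeholder frame-to-frame distance (Euclidean over joint parameters)."""
    return np.linalg.norm(a - b)

def find_transition(motion_a, motion_b, levels=4, radius=2):
    """Coarse-to-fine search for the best transition pair (i, j).
    Exhaustive search only at the coarsest level; finer levels refine locally."""
    step = 2 ** levels
    # Exhaustive search on the coarse grid.
    _, bi, bj = min((frame_distance(motion_a[i], motion_b[j]), i, j)
                    for i in range(0, len(motion_a), step)
                    for j in range(0, len(motion_b), step))
    # Refine around the current best pair at each finer resolution.
    while step > 1:
        step //= 2
        cand_i = range(max(0, bi - radius * step), min(len(motion_a), bi + radius * step + 1), step)
        cand_j = range(max(0, bj - radius * step), min(len(motion_b), bj + radius * step + 1), step)
        _, bi, bj = min((frame_distance(motion_a[i], motion_b[j]), i, j)
                        for i in cand_i for j in cand_j)
    return bi, bj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(size=(256, 30)), rng.normal(size=(256, 30))
    print(find_transition(a, b))
```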

8.
A Demonstration-Driven Motion Data Retrieval Method and Its Usability Evaluation (cited 2 times; 1 self-citation, 1 by others)
Retrieving motion data in an intuitive and natural way can effectively improve the efficiency of character animation production. This paper proposes a method that retrieves motion data through live demonstration, enabling animators to find the desired motion clips in a large motion database intuitively and quickly. The user's demonstration is first captured with motion sensors, and the classic DTW time-series matching algorithm is then applied to perform example-based motion data retrieval. A set of observational experiments analyzing and evaluating the method's usability shows that it significantly improves both the efficiency of motion data retrieval and the user experience.
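A minimal sketch of the retrieval step described above: classic DTW distance between the demonstrated query and each database clip, followed by ranking. The feature representation and the brute-force scan over the database are simplifying assumptions.

```python
import numpy as np

def dtw_distance(query, clip):
    """Classic dynamic-time-warping distance between two motion sequences
    (each frame is a feature vector, e.g. one sensor reading per joint)."""
    n, m = len(query), len(clip)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(query[i - 1] - clip[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def retrieve(query, database, k=3):
    """Return indices of the k clips whose DTW distance to the demonstration is smallest."""
    ranked = sorted(range(len(database)), key=lambda idx: dtw_distance(query, database[idx]))
    return ranked[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = [rng.normal(size=(rng.integers(40, 80), 12)) for _ in range(5)]
    demo = rng.normal(size=(50, 12))
    print(retrieve(demo, db))
```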

9.
王忆源, 陈福民. 《计算机应用》 (Journal of Computer Applications), 2005, 25(8): 1951-1953
In virtual reality applications, a library of basic motions is typically built with a motion capture system, and the basic motions are then processed with motion editing techniques. Motion blending is one of the most practical yet most complex of these editing techniques. This paper proposes a real-time synchronization algorithm for motion blending so that motions can be blended dynamically without producing unexpected artifacts, enabling the creation of complex virtual animations.

10.
We introduce “Crowd Sculpting”: a method to interactively design populated environments by using intuitive deformation gestures to drive both the spatial coverage and the temporal sequencing of a crowd motion. Our approach assembles large environments from sets of spatial elements which contain inter-connectible, periodic crowd animations. Such a “Crowd Patches” approach allows us to avoid expensive and difficult-to-control simulations. It also overcomes the limitations of motion editing, which would result in animations delimited in space and time. Our novel method allows the user to control the crowd patches layout in ways inspired by elastic shape sculpting: the user creates and tunes the desired populated environment through stretching, bending, cutting and merging gestures, applied either in space or time. Our examples demonstrate that our method allows the space-time editing of very large populations and results in endless animations, while offering real-time, intuitive control and maintaining animation quality.

11.
To overcome the limitations of the real-time character animation software DI-Guy in motion editing and new motion generation, this paper proposes a development approach that generates virtual human motions on the DI-Guy platform in a Vega environment by combining a 3D human animation package with standard motion capture data files. The detailed process of generating motion data files in the human animation package Poser is described, and the use of motion capture data files in the Vega environment is analyzed. Following this approach, user-defined motions were added to DI-Guy; the added motions are realistic and vivid, the system's runtime performance is unaffected, and DI-Guy's original high-level motion functions can still be called directly, which greatly simplifies the development of application systems. Experimental results show that this development approach is of practical value for extending the range of DI-Guy applications and improving the development efficiency of virtual human application systems.

12.
An Editing Algorithm for Human Motion Paths (cited 1 time; 0 self-citations, 1 by others)
This paper substantially improves on existing path editing methods by introducing automatic motion simplification into the traditional spacetime optimization approach and incorporating physical constraints into the traditional path transformation method, which ensures both fast convergence of the optimization and physical plausibility of the resulting motion. The algorithmic framework consists of five parts: model simplification, motion simplification, path editing, motion synthesis and motion restoration. Experimental results show that the algorithm successfully maps a human motion from its original path onto a new path.

13.
This paper proposes an analytical method for multi-joint inverse kinematics (IK) that gives closed-form formulas for the positions and rotation variables of all unknown joints in a joint chain. Compared with traditional iterative methods, it avoids tedious iterative computation and solves more efficiently, making it suitable for highly interactive, real-time applications such as the design, editing and optimization of character motions and motion trajectories.
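The abstract gives no formulas, so the sketch below shows only the standard closed-form special case of two-bone planar IK via the law of cosines, not the paper's general multi-joint solution. The bone lengths, the clamping of far targets and the elbow sign convention are assumptions.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Closed-form planar two-bone IK: return (shoulder, elbow) angles in radians
    that place the end effector at (tx, ty), clamping targets beyond the chain's reach."""
    dist = min(math.hypot(tx, ty), l1 + l2 - 1e-9)     # clamp to the reachable radius
    # Law of cosines for the elbow angle.
    cos_elbow = (dist ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = direction to the target minus the interior offset of the elbow.
    shoulder = math.atan2(ty, tx) - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

if __name__ == "__main__":
    s, e = two_bone_ik(1.0, 1.0, 1.2, 0.8)
    print(round(s, 3), round(e, 3))
```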

14.
Stitching different character motions is one of the most commonly used techniques, as it allows the user to make new animations that fit one's purpose from pieces of motion. However, current motion stitching methods often produce unnatural motion with foot-sliding artefacts, depending on the performance of the interpolation. In this paper, we propose a novel motion stitching technique based on a recurrent motion refiner (RMR) that connects discontinuous locomotions into a single natural locomotion. Our model receives different locomotions as input, in which the root of the last pose of the previous motion and that of the first pose of the next motion are aligned. During runtime, the model slides through the sequence, editing frames window by window to output a smoothly connected animation. Our model consists of a two-layer recurrent network that comes between a simple encoder and decoder. To train this network, we created a sufficient number of paired data with a newly designed data generation process. This process employs a K-nearest neighbour search that explores a predefined motion database to create the input corresponding to the ground truth. Once trained, the suggested model can connect locomotion sequences of various lengths into a single natural locomotion.
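As a hedged sketch of the KNN-based data generation step, the code below builds one (input, ground-truth) training pair by keeping the first part of a ground-truth window and substituting the remainder with the nearest clip from a motion database, producing the kind of discontinuity the refiner is trained to smooth out. All array shapes, names and the k=1 brute-force search are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def nearest_frame(query_pose, database):
    """Brute-force nearest-neighbour (k=1) search over flattened database poses."""
    return int(np.argmin(np.linalg.norm(database - query_pose, axis=1)))

def make_training_pair(ground_truth, database, cut):
    """Build one (input, target) pair: keep the ground-truth frames up to `cut`,
    then substitute the remainder with the nearest database clip so the input
    contains a discontinuity that the refiner must learn to connect smoothly."""
    start = nearest_frame(ground_truth[cut], database)
    tail_len = len(ground_truth) - cut
    tail = database[start:start + tail_len]
    if len(tail) < tail_len:                            # pad by repeating the last frame
        tail = np.vstack([tail, np.repeat(tail[-1:], tail_len - len(tail), axis=0)])
    noisy_input = np.vstack([ground_truth[:cut], tail])
    return noisy_input, ground_truth

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(60, 69))                      # one ground-truth locomotion window
    db = rng.normal(size=(500, 69))                     # predefined motion database (flattened poses)
    x, y = make_training_pair(gt, db, cut=30)
    print(x.shape, y.shape)
```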

15.
In this paper, an intuitive pen-based interface is presented that controls a virtual human figure interactively. Recent commercial pen devices can detect not only the pen position but also the pressure and tilt of the pen. We utilize such information to make a human figure perform various types of motions in response to the pen movements performed by the user. The figure walks, runs, turns and steps following the trajectory and speed of the pen. The figure also bends, stretches and tilts in response to the tilt of the pen. Moreover, it ducks and jumps in response to the pen pressure. Using our interface, the user controls a virtual human figure intuitively, as if he or she were holding a virtual puppet and playing with it.

In addition to the interface design and implementation, this paper describes a motion generation engine that produces various motions based on the varying parameters given by the pen interface. We take a motion blending approach and construct motion blending modules from a small set of motion capture data for each type of motion. Finally, we present the results of user experiments and a comparison with a traditional gamepad-based interface.


16.
We present a novel approach for style retargeting to non-humanoid characters that allows stylistic features extracted from one character to be added to the motion of another character with a different body morphology. We introduce the concept of groups of body parts (GBPs), for example the torso, legs and tail, and we argue that they can be used to capture the individual style of a character motion. By separating GBPs from a character, the user can define mappings between characters with different morphologies. We automatically extract the motion of each GBP from the source, map it to the target and then use a constrained optimization to adjust all joints in each GBP in the target to preserve the original motion while expressing the style of the source. We show results on characters whose morphologies differ from that of the source motion from which the style is extracted. The style transfer is intuitive and provides a high level of control. For most of the examples in this paper, the definition of the GBPs takes around 5 min and the optimization about 7 min on average. For the most complicated examples, the definition of three GBPs and their mapping takes about 10 min and the optimization another 30 min.

17.
Motion Editing and Motion Retargeting Based on Spacetime Constraints (cited 8 times; 2 self-citations, 8 by others)
Motion capture, which has emerged in recent years, has become one of the most promising techniques in human animation. Although many capture methods exist, they are usually costly and the types of motion they capture are rather limited. To improve the reusability of motion capture data and to generate varied animations that fit complex scenes, the captured motion data must be edited and retargeted. This paper presents a motion editing and retargeting method based on spacetime constraints: a set of spacetime constraints is specified, a corresponding objective function is constructed, and inverse kinematics combined with numerical optimization is used to solve for motion poses that satisfy the constraints. Experimental results show that the method can generate a variety of realistic motions adapted to different scenes, improving the reusability of the data.
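A minimal per-frame sketch of the "objective function + IK + numerical optimization" step, assuming a toy planar chain: the objective penalizes violation of a positional constraint while keeping the solved pose close to the captured reference. The chain layout, weights and optimizer choice are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

LENGTHS = np.array([1.0, 1.0, 0.5])        # illustrative 3-link planar chain

def forward_kinematics(angles):
    """End-effector position of a planar chain with the given joint angles."""
    acc, pos = 0.0, np.zeros(2)
    for length, angle in zip(LENGTHS, angles):
        acc += angle
        pos = pos + length * np.array([np.cos(acc), np.sin(acc)])
    return pos

def solve_pose(target, reference):
    """Find joint angles that satisfy the positional constraint while staying
    close to the captured reference pose (a per-frame spacetime-style objective)."""
    def objective(angles):
        constraint = np.sum((forward_kinematics(angles) - target) ** 2)
        deviation = np.sum((angles - reference) ** 2)   # preserve the original motion
        return 100.0 * constraint + deviation
    return minimize(objective, reference, method="L-BFGS-B").x

if __name__ == "__main__":
    captured = np.array([0.3, 0.4, -0.2])               # pose from motion capture
    retargeted = solve_pose(np.array([1.8, 0.9]), captured)
    print(np.round(retargeted, 3))
```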

18.
We describe a system to synthesize facial expressions by editing captured performances. For this purpose, we use the actuation of expression muscles to control facial expressions. We note that numerous algorithms have already been developed for editing gross body motion. While the joint angles have a direct effect on the configuration of the gross body, muscle actuation has to go through a complicated mechanism to produce facial expressions. Therefore, we devote a significant part of this paper to establishing the relationship between muscle actuation and facial surface deformation. We model the skin surface using the finite element method to simulate the deformation caused by expression muscles. Then, we implement the inverse relationship, muscle actuation parameter estimation, to find the muscle actuation values from the trajectories of the markers on the performer's face. Once the forward and inverse relationships are established, retargeting or editing a performance becomes straightforward. We apply the original performance data to different facial models with equivalent muscle structures to produce similar expressions. We also produce novel expressions by deforming the original data curves of muscle actuation to satisfy the key-frame constraints imposed by animators. Copyright © 2001 John Wiley & Sons, Ltd.
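A hedged sketch of the inverse step (muscle actuation parameter estimation), assuming a linearized muscle-to-marker basis in place of the paper's finite element skin model: non-negative least squares recovers actuation values from observed marker displacements, which can then drive a different face with an equivalent muscle structure. The basis matrices and dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_actuations(basis, marker_displacements):
    """Estimate non-negative muscle actuation values from observed marker
    displacements, assuming a linearized muscle-to-marker basis
    (columns = per-muscle displacement patterns)."""
    actuations, _ = nnls(basis, marker_displacements)
    return actuations

def retarget(actuations, target_basis):
    """Drive a different facial model with an equivalent muscle structure."""
    return target_basis @ actuations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src_basis = rng.normal(size=(90, 18))        # 30 markers x 3 coords, 18 muscles
    true_act = np.abs(rng.normal(size=18))
    observed = src_basis @ true_act              # stand-in for tracked marker trajectories
    act = estimate_actuations(src_basis, observed)
    tgt_basis = rng.normal(size=(90, 18))        # target face with equivalent muscles
    print(retarget(act, tgt_basis).shape)
```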

19.
Compactly representing time-varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for given animated meshes by analyzing the variation of edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for poses and then evaluates the difference (residuals) between the LAs of an arbitrary pose and their counterparts in a reference one. Performing sparse localized decomposition on the residuals yields a set of components which can perfectly capture local motion of articulations. It supports intuitive articulation motion editing through manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection-map-based algorithm which consists of two steps of linear optimization. A variety of experiments show that our decomposition is truly localized with respect to rotational motions and outperforms state-of-the-art approaches in precisely capturing local articulated motion.
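A small sketch of the LA feature extraction that feeds the decomposition: per-pose edge lengths and dihedral angles are computed and their residuals against a reference pose are stacked into a matrix. The sparse localized decomposition itself and the connection-map reconstruction are not reproduced here, and the mesh data structures are illustrative assumptions.

```python
import numpy as np

def edge_lengths(vertices, edges):
    """Length of each edge for one pose (vertices: V x 3, edges: E x 2)."""
    return np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)

def dihedral_angles(vertices, edge_tris):
    """Dihedral angle at each interior edge, given the two opposite triangle
    apices per edge (edge_tris rows: v0, v1, left_apex, right_apex)."""
    angles = []
    for v0, v1, a, b in edge_tris:
        e = vertices[v1] - vertices[v0]
        n1 = np.cross(e, vertices[a] - vertices[v0])
        n2 = np.cross(vertices[b] - vertices[v0], e)
        cos_ang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2) + 1e-12)
        angles.append(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return np.array(angles)

def la_residuals(poses, reference, edges, edge_tris):
    """Stack per-pose residuals of edge lengths and dihedral angles against the
    reference pose; decomposing this matrix yields the localized components."""
    ref = np.concatenate([edge_lengths(reference, edges), dihedral_angles(reference, edge_tris)])
    rows = [np.concatenate([edge_lengths(p, edges), dihedral_angles(p, edge_tris)]) - ref
            for p in poses]
    return np.vstack(rows)

if __name__ == "__main__":
    ref = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, -1, 0]], dtype=float)
    poses = [ref + 0.05 * np.random.default_rng(i).normal(size=ref.shape) for i in range(3)]
    edges = np.array([[0, 1], [0, 2], [1, 2], [0, 3], [1, 3]])
    edge_tris = np.array([[0, 1, 2, 3]])        # shared edge (0, 1) with apices 2 and 3
    print(la_residuals(poses, ref, edges, edge_tris).shape)
```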

20.
Interactive Multiresolution Editing of Arbitrary Meshes (cited 5 times; 0 self-citations, 0 by others)
This paper presents a novel approach to multiresolution editing of a triangular mesh. The basic idea is to embed an editing area of a mesh onto a 2D rectangle and interpolate the user-specified editing information over the 2D rectangle. The result of the interpolation is mapped back to the editing area and then used to update the mesh. We adopt harmonic maps for the embedding and multilevel B-splines for the interpolation. The proposed mesh editing technique can handle an arbitrary mesh without any preprocessing such as remeshing. It runs fast enough to support interactive editing and produces intuitive editing results.
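As a rough illustration of the "interpolate in 2D, map back to the mesh" step, the sketch below interpolates user-specified displacements over a given 2D embedding using radial basis functions. The paper instead uses harmonic maps for the embedding and multilevel B-splines for the interpolation, so this is a simplified stand-in with assumed names and shapes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_edit(uv, handle_uv, handle_offsets):
    """Interpolate user-specified displacements over the 2D embedding of the
    editing area and return a per-vertex 3D offset. `uv` are the embedded vertex
    coordinates (the paper obtains them with a harmonic map); RBF interpolation
    stands in for multilevel B-splines here."""
    interp = RBFInterpolator(handle_uv, handle_offsets, smoothing=0.0)
    return interp(uv)                     # (n_vertices, 3) offsets to add back in 3D

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    uv = rng.uniform(size=(500, 2))                        # embedded editing region
    handles = np.array([[0.2, 0.2], [0.8, 0.8], [0.5, 0.1], [0.15, 0.85]])
    offsets = np.array([[0, 0, 0.10], [0, 0, -0.05], [0, 0, 0.0], [0, 0, 0.02]], dtype=float)
    vertex_offsets = propagate_edit(uv, handles, offsets)
    print(vertex_offsets.shape)
```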
