Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
To automate character animation and extend it to 3-D we need to create and manipulate three-dimensional models of articulated figures as well as the worlds they will inhabit. Abstraction and adaptive motion are key mechanisms for dealing with the degrees of freedom problem, which refers to the sheer volume of control information necessary for coordinating the motion of an articulated figure when the number of links is large. A three-level hierarchy of control modes for animation is proposed: guiding, animator-level, and task-level systems. Guiding is best suited for specifying fine details but unsuited for controlling complex motion. Animator-level programming is powerful but difficult. Task-level systems give us facile control over complex motions and tasks by trading off explicit control over the details of motion. The integration of the three control levels is discussed.

3.
4.
Expressive facial animations are essential to enhance the realism and the credibility of virtual characters. Parameter‐based animation methods offer a precise control over facial configurations while performance‐based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that gathers the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space. The appearance space provides a coherent and continuous parameterization of human facial movements, while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance face for a synthetic character. The link between both appearance spaces makes it possible to retarget facial animation on a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them on a low‐dimensional control space. The control space acts as an interactive user‐interface to manipulate the facial expressions of any synthetic face. This interface makes it simple and intuitive to generate still facial configurations for keyframe animation, as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter‐based system and the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.
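The low-dimensional control space of entry 4 is reminiscent of a principal-component construction over captured facial motion. The following is only a rough sketch of that idea; the data, dimensions, and variable names are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical database: 200 frames of 30 tracked facial points (x, y, z),
# flattened to 90-dimensional vectors.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 90))

# Center the data and extract principal variation patterns via SVD.
mean_face = frames.mean(axis=0)
_, singular_values, components = np.linalg.svd(frames - mean_face,
                                               full_matrices=False)

# Keep the first k components as a low-dimensional control space.
k = 5
control_basis = components[:k]                       # shape (k, 90)

# Project a frame into control coordinates, then map back to a face.
coords = (frames[0] - mean_face) @ control_basis.T   # shape (k,)
reconstruction = mean_face + coords @ control_basis  # shape (90,)
```

Sliding the `k` control coordinates interactively would then correspond to moving through the detected variation patterns of the face.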

5.
A survey of human body animation is presented dealing with its geometrical representation, motion control techniques and rendering. A classification of human body animation systems is presented according to different criteria. The human body movement notations are described, and the different existing geometric models of the body and the face are analysed and compared. The body and face motion control techniques are presented and discussed, as well as human body motion in its environment. Finally, the different problems of human body rendering are presented.

6.
This paper presents a novel approach for the generation of realistic speech synchronized 3D facial animation that copes with anticipatory and perseveratory coarticulation. The methodology is based on the measurement of 3D trajectories of fiduciary points marked on the face of a real speaker during the speech production of CVCV non-sense words. The trajectories are measured from standard video sequences using stereo vision photogrammetric techniques. The first stationary point of each trajectory associated with a phonetic segment is selected as its articulatory target. By clustering according to geometric similarity all articulatory targets of a same segment in different phonetic contexts, a set of phonetic context-dependent visemes accounting for coarticulation is identified. These visemes are then used to drive a set of geometric transformation/deformation models that reproduce the rotation and translation of the temporomandibular joint on the 3D virtual face, as well as the behavior of the lips, such as protrusion, and opening width and height of the natural articulation. This approach is being used to generate 3D speech synchronized animation from both natural and synthetic speech generated by a text-to-speech synthesizer.

7.
We develop a new 3D hierarchical model of the human face. The model incorporates a physically-based approximation to facial tissue and a set of anatomically-motivated facial muscle actuators. Despite its sophistication, the model is efficient enough to produce facial animation at interactive rates on a high-end graphics workstation. A second contribution of this paper is a technique for estimating muscle contractions from video sequences of human faces performing expressive articulations. These estimates may be input as dynamic control parameters to the face model in order to produce realistic animation. Using an example, we demonstrate that our technique yields sufficiently accurate muscle contraction estimates for the model to reconstruct expressions from dynamic images of faces.

8.
This paper presents a strategy for the production of a computer-generated film showing the complete motion of a human heart. The model for construction and animation is discussed. The heart is decomposed into different parts; each part is considered as a subactor of the heart which is viewed as the main actor. The left ventricle simulation is emphasized, because it is based on medical measures. The atrium simulation is based on beta-spline surfaces and blood volume considerations.

9.
An object-oriented architecture for a computer animation system
This paper describes the architecture, capabilities and implementation of The Clockworks, an object-oriented computer animation system encompassing a wide variety of modeling, image synthesis, animation, programming and simulation capabilities in a single integrated environment. The object-oriented features of The Clockworks are implemented in portable C under UNIX using a programming discipline. These features include objects with methods and instance variables, class hierarchies, inheritance, instantiation and message passing.

10.
This paper describes a computer graphics system that enables users to define virtual marionette puppets, operate them using relatively simple hardware input devices, and display the scene from a given viewpoint on the computer screen. This computerized marionette theater has the potential to become a computer game for children, an interaction tool over the Internet enabling a marionette show to be simultaneously viewed and operated by users on the World Wide Web, and, most importantly, a versatile and efficient professional animation system.

11.
This paper compares the state of computer animation in China and abroad, and surveys the current development of computer animation technology.

12.
Recently, the dynamics of linked articulated rigid bodies has become a valuable tool for making realistic three-dimensional computer animations. An exact treatment of rigid body dynamics, however, is based on rather non-intuitive results from classical mechanics (e.g. the Euler equations for rotating bodies) and it relies heavily on sophisticated numerical schemes to solve (large) sets of coupled non-linear algebraic and differential equations. As a result, articulated rigid bodies are not yet supported by most real-time animation systems. This paper discusses an approach to rigid body dynamics which is based on (both conceptually and algorithmically much simpler) point mechanics; this gives rise to an asymptotically exact numerical scheme (NS1) which is useful in the context of real-time animation, provided that the number of degrees of freedom of the simulated system is not too large. Based on NS1, a second scheme (NS2) is derived which is useful for approximating the motions of linked articulated rigid bodies; NS2 turns out to be sufficiently fast to give at least qualitative results in real-time simulation. In general, the algorithm NS2 is not necessarily (asymptotically) exact, but a quantitative analysis shows that in the absence of reaction forces it conserves angular momentum.

13.
Image-based animation of facial expressions
We present a novel technique for creating realistic facial animations given a small number of real images and a few parameters for the in-between images. This scheme can also be used for reconstructing facial movies where the parameters can be automatically extracted from the images. The in-between images are produced without ever generating a three-dimensional model of the face. Since facial motion due to expressions is not well defined mathematically, our approach is based on utilizing image patterns in facial motion. These patterns were revealed by an empirical study which analyzed and compared image motion patterns in facial expressions. The major contribution of this work is showing how parameterized “ideal” motion templates can generate facial movies for different people and different expressions, where the parameters are extracted automatically from the image sequence. To test the quality of the algorithm, image sequences (one of which was taken from a TV news broadcast) were reconstructed, yielding movies hardly distinguishable from the originals. Published online: 2 October 2002. Correspondence to: A. Tal. Work has been supported in part by the Israeli Ministry of Industry and Trade, The MOST Consortium.

14.
As living standards continue to rise, people's pursuit of quality of life is no longer confined to the material level; more and more attention is paid to spiritual and cultural enjoyment. Animation is an important part of people's spiritual life that cannot be overlooked. Traditional animation, with its delicate style, opened the first front of China's animation industry and also promoted the development of Flash animation, but the two differ in many of their characteristics.

15.
We present a lightweight non-parametric method to generate wrinkles for 3D facial modeling and animation. The key lightweight feature of the method is that it can generate plausible wrinkles using a single low-cost Kinect camera and one high quality 3D face model with details as the example. Our method works in two stages: (1) offline personalized wrinkled blendshape construction. User-specific expressions are recorded using the RGB-Depth camera, and the wrinkles are generated through example-based synthesis of geometric details. (2) Online 3D facial performance capturing. These reconstructed expressions are used as blendshapes to capture facial animations in real-time. Experiments on a variety of facial performance videos show that our method can produce plausible results, approximating the wrinkles in an accurate way. Furthermore, our technique is low-cost and convenient for common users.
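The blendshape step described in entry 15 (expressions reconstructed offline, then combined online) can be sketched as a weighted sum of per-expression deltas over a neutral face. The shapes and weights below are toy values, not the paper's data:

```python
import numpy as np

# A face is a flat array of vertex coordinates; each blendshape target
# stores one captured expression, and blending adds weighted deltas
# (target minus neutral) onto the neutral face.
neutral = np.zeros(12)                    # 4 vertices * xyz, toy example
smile = neutral.copy(); smile[0] = 1.0    # hypothetical expression target
frown = neutral.copy(); frown[3] = -1.0   # hypothetical expression target

def blend(neutral, targets, weights):
    """Weighted sum of per-target deltas added to the neutral face."""
    out = neutral.copy()
    for t, w in zip(targets, weights):
        out += w * (t - neutral)
    return out

face = blend(neutral, [smile, frown], [0.5, 1.0])
# face[0] == 0.5 and face[3] == -1.0; all other entries stay 0
```

In a real-time capture loop, the weights would be re-estimated every frame from the tracked performance and fed to `blend`.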

16.
17.
To perform digital image morphing rapidly within irregular polygonal regions, an image-morphing algorithm based on triangle skeleton coordinates is proposed. The image region is first partitioned into several triangular regions, and skeleton coordinates are then established for the pixels of each triangle, so that changing a triangle's skeleton shell drives the morphing of the image inside it. From the skeleton-coordinate transformation, formulas are derived for how the coordinates of pixels within a triangular region change with the vertices of the enclosing triangle, and a colour correspondence is established between the new pixels after the shell deformation and the original pixels. This image-morphing algorithm for irregular polygonal regions can solve common image-deformation problems such as motion simulation.
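The core of entry 17 is that a pixel's "skeleton" (barycentric) coordinates stay fixed while its triangle's vertices move, so deforming the shell carries the pixel along. A minimal illustration of that idea, not the paper's full pipeline:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    u, v = np.linalg.solve(m, np.asarray(p) - np.asarray(a))
    return 1.0 - u - v, u, v

# Original triangle and a pixel inside it.
a, b, c = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)
w0, w1, w2 = barycentric((0.5, 0.5), a, b, c)

# Deform the shell: move vertex b. The pixel keeps its coordinates
# and is re-evaluated against the new vertices.
b2 = (4.0, 0.0)
p_new = w0 * np.array(a) + w1 * np.array(b2) + w2 * np.array(c)
# p_new == [1.0, 0.5]
```

Applying this to every pixel of every triangle, with the colour of `p` copied to `p_new`, gives the per-triangle warp the abstract describes.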

18.
汪卫平, 袁芳. 《软件》, 2014, (7): 121-125
Flash has become the software of choice for web animation thanks to its strong expressive power and small file sizes. In particular, the enhanced capabilities of the ActionScript 3.0 scripting language give Flash powerful interactivity, moving users from merely watching animations to experiencing them. With the authoring environment one can produce striking animation effects, build a complete Web site, or even develop a Rich Internet Application (RIA) project. Taking ActionScript 3.0 as its starting point, this paper introduces the major advantages the language brings to Flash animation design and production, and uses concrete examples to show how it differs from traditional Flash animation.

19.
An artificial life approach for the animation of cognitive characters   总被引:4,自引:0,他引:4  
This paper addresses the problem of cognitive character animation. We propose the use of finite state machines for the behavioral control of characters. Our approach rests on the idea that the cognitive character arises from the evolutionary computation embedded in the artificial life simulation, which in our case is implemented by the finite state machine. We present some of the results of the WOXBOT/ARENA research project. This virtual-worlds project aims at the graphic simulation of an arena, where small mobile robots can perform requested tasks while behaving according to their own motivation and reasoning. Each robot is an intelligent agent that perceives the virtual environment through a simulated vision system and reacts by moving away from or approaching the object it sees. The conception and specification of the robots and environment are being done very carefully to create an open distributed object architecture that could serve as a test-bed freely available and ready to use for testing theories in computational areas such as evolutionary computation, artificial life, pattern recognition, artificial intelligence, cognitive neurosciences and distributed object architectures. Furthermore, it is a first step towards building a cognitive animated character.
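The finite-state-machine control in entry 19 can be sketched as a transition table mapping (state, percept) pairs to new states. The states and percepts below are invented for illustration, loosely following the abstract's approach/flee behaviour:

```python
# Minimal finite-state-machine sketch for behavioural character control.
TRANSITIONS = {
    ("wander", "object_seen"):  "approach",
    ("approach", "object_bad"): "flee",
    ("approach", "object_ok"):  "wander",
    ("flee", "object_lost"):    "wander",
}

def step(state, percept):
    """Return the next state; stay in the current state if no rule matches."""
    return TRANSITIONS.get((state, percept), state)

state = "wander"
for percept in ["object_seen", "object_bad", "object_lost"]:
    state = step(state, percept)
# state == "wander"
```

In an artificial-life setting, the transition table itself would be the genome that the evolutionary computation modifies between generations.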

20.
The availability of high‐performance 3D workstations has increased the range of application for interactive real‐time animation. In these applications the user can directly interact with the objects in the animation and direct the evolution of their motion, rather than simply watching a pre‐computed animation sequence. Interactive real‐time animation has fast‐growing applications in virtual reality, scientific visualization, medical training and distant learning. Traditional approaches to computer animation have been based on the animator having complete control over all aspects of the motion. In interactive animation the user can interact with any of the objects, which changes the current motion path or behaviour in real time. The objects in the animation must be capable of reacting to the user's actions and not simply replay a canned motion sequence. This paper presents a framework for interactive animation that allows the animator to specify the reactions of objects to events generated by other objects and the user. This framework is based on the concept of relations that describe how an object reacts to the influence of a dynamic environment. Each relation specifies one motion primitive triggered by either its enabling condition or the state of the environment. A collection of the relations is structured through several hierarchical layers to produce responsive behaviours and their variations. This framework is illustrated by several room‐based dancing examples that are modelled by relations. Copyright © 2000 John Wiley & Sons, Ltd.
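The "relation" concept in entry 20 (a motion primitive paired with an enabling condition, evaluated against the dynamic environment) might be sketched like this. The class, environment keys, and primitives are hypothetical, not the paper's API:

```python
# Sketch of a relation: an enabling condition paired with one motion
# primitive; every frame, the enabled relations fire.
class Relation:
    def __init__(self, condition, primitive):
        self.condition = condition    # env -> bool
        self.primitive = primitive    # env -> action name

    def react(self, env):
        """Fire the primitive if the condition holds, else do nothing."""
        return self.primitive(env) if self.condition(env) else None

relations = [
    Relation(lambda e: e["music"], lambda e: "dance"),
    Relation(lambda e: e["user_near"], lambda e: "turn_to_user"),
]

env = {"music": True, "user_near": False}
actions = [a for a in (r.react(env) for r in relations) if a is not None]
# actions == ["dance"]
```

Stacking such relations into layers, where higher layers gate or reprioritize lower ones, would approximate the hierarchical structuring the abstract describes.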
