Similar Literature
20 similar documents found (search time: 15 ms)
1.
True real-time animation of complex hairstyles on animated characters is the goal of this work, and the challenge is to build a mechanical model of the hairstyle that is sufficiently fast for real-time performance while preserving the particular behavior of the hair medium and maintaining sufficient versatility for simulating any kind of complex hairstyle. Rather than building a complex mechanical model directly related to the structure of the hair strands, we take advantage of a volume free-form deformation scheme. We detail the construction of an efficient lattice mechanical deformation model which represents the volume behavior of the hair strands. The lattice is deformed as a particle system using state-of-the-art numerical methods, and animates the hairs using quadratic B-spline interpolation. The hairstyle reacts to the body skin through collisions with a metaball-based approximation. The model is highly scalable and allows hairstyles of any complexity to be simulated in any rendering context with the appropriate trade-off between accuracy and computation speed, fitting the needs of level-of-detail optimization schemes.
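The quadratic B-spline interpolation step mentioned above can be sketched as follows: each hair vertex is weighted over a 3x3x3 neighbourhood of deformed lattice nodes using tensor-product uniform quadratic B-spline basis functions. This is only an illustrative sketch, not the authors' code; the lattice array layout and index conventions are assumptions.

import numpy as np

def quad_bspline_basis(t):
    # Uniform quadratic B-spline basis functions on a single knot span, t in [0, 1].
    return np.array([0.5 * (1.0 - t) ** 2,
                     0.5 * (-2.0 * t ** 2 + 2.0 * t + 1.0),
                     0.5 * t ** 2])

def interpolate_hair_vertex(lattice, cell, local):
    """Evaluate one hair vertex from the deformed lattice.

    lattice : (Nx, Ny, Nz, 3) array of lattice node positions (the particle system).
    cell    : integer index (i, j, k) of the lattice cell containing the vertex.
    local   : fractional coordinates (u, v, w) of the vertex inside that cell.
    """
    bu, bv, bw = (quad_bspline_basis(t) for t in local)
    i, j, k = cell
    pos = np.zeros(3)
    # Tensor-product weighting over the 3x3x3 node neighbourhood of the cell.
    for a in range(3):
        for b in range(3):
            for c in range(3):
                w = bu[a] * bv[b] * bw[c]
                pos += w * lattice[i + a, j + b, k + c]
    return pos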

2.
Real-time animation of realistic virtual humans   Cited by: 10 (self-citations: 0, others: 10)
The authors have been working on simulating virtual humans for several years. Until recently, these constructs could not act in real time. Today, however, many applications need to simulate in real time virtual humans that look realistic. They have invested considerable effort in developing and integrating several modules into a system capable of animating humans in real-time situations. This includes interactive modules for building realistic individuals and a texture-fitting method suitable for all parts of the head and body. Animating the body, including the hands and their deformations, is the key aspect of the system; to their knowledge, no competing system integrates all these functions. They also included facial animation, as demonstrated below with virtual tennis players. They have developed a single system containing all the modules needed for simulating real-time virtual humans in distributed virtual environments (VEs). The system lets one rapidly clone any individual and animate the clone in various contexts. People cannot mistake virtual humans for real ones, but they find them recognizable and realistic.

3.
Qi Jin, Gu Yaolin. Journal of Computer Applications (《计算机应用》), 2007, 27(6): 1556-1558
A lattice-based mechanical deformation method is proposed. The lattice is deformed as a particle system using numerical methods, and the hair is rendered with quadratic B-spline interpolation. A metaball model handles collisions between the hair and the body surface. The mechanical hairstyle model offers high real-time performance and versatility and can simulate a wide range of complex hairstyles, achieving a good trade-off between accuracy and computation speed in various rendering contexts.

4.
Graphical Models, 2014, 76(3): 172-179
We present a performance-based facial animation system capable of running on mobile devices at real-time frame rates. A key component of our system is a novel regression algorithm that accurately infers the facial motion parameters from 2D video frames of an ordinary web camera. Compared with the state-of-the-art facial shape regression algorithm [1], which takes a two-step procedure to track facial animations (i.e., first regressing the 3D positions of facial landmarks, and then computing the head poses and expression coefficients), we directly regress the head poses and expression coefficients. This one-step approach greatly reduces the dimension of the regression target and significantly improves the tracking performance while preserving the tracking accuracy. We further propose to collect the training images of the user under different lighting environments, and make use of the data to learn a user-specific regressor, which can robustly handle lighting changes that frequently occur when using mobile devices.
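To make the "one-step regression" idea concrete, here is a toy sketch in which a single linear map is fitted from image features directly to the concatenated head-pose and expression parameters; the feature representation, the 6-parameter pose, and the 46 expression coefficients are assumptions, and the paper's actual regressor is considerably more sophisticated.

import numpy as np

N_POSE = 6          # assumed: 3 rotation + 3 translation parameters
N_EXPR = 46         # assumed number of expression/blendshape coefficients

def fit_one_step_regressor(features, poses, expressions):
    """Fit a single linear map from image features to [pose | expression].

    features    : (n_samples, n_features) image-derived feature vectors.
    poses       : (n_samples, N_POSE) ground-truth head poses.
    expressions : (n_samples, N_EXPR) ground-truth expression coefficients.
    """
    # One-step target: the concatenated motion parameters, not 3D landmark positions.
    targets = np.hstack([poses, expressions])
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias term
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def predict(W, feature_vec):
    x = np.append(feature_vec, 1.0)
    out = x @ W
    return out[:N_POSE], out[N_POSE:]   # head pose, expression coefficients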

5.
This paper proposes a GPU-based method for rendering 3D scenes with a real-time watercolor effect. Most of the pipeline is implemented with image-space techniques. The algorithm renders the image as separate detail, environment, and brush-stroke layers and then composites them. Ambient occlusion, shadow mapping, and related techniques are used for fast shadow computation, and image filters simulate the main characteristics of watercolor. Because the method is primarily image-space based, the computation can be accelerated by GPU parallel processing to reach real-time rendering speeds. Finally, an animation-script analysis system is built for real-time animation rendering, demonstrating the method's considerable application potential in computer animation, games, and other digital entertainment industries.

6.
A method for simulating deformable objects is presented. Objects are deformed by geometric means: the approach manipulates point-based objects, needs no connectivity information or preprocessing, is computationally simple, and yields an unconditionally stable dynamic simulation. The main idea is to replace energies with geometric constraints and forces with distances from the current positions to goal positions. These goal positions are determined by shape matching between a common undeformed rest state and the current deformed state of the point cloud. Because the points are always drawn toward well-defined locations, the overshooting instability of explicit integration schemes is eliminated. The flexibility of the object's behavior can be controlled, memory and computation costs are low, and the unconditional stability of the dynamic simulation makes the method particularly suitable for game development.
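A minimal sketch of the shape-matching update described above, assuming uniform particle masses and illustrative stiffness and time-step values: goal positions come from rigidly matching the rest shape to the current point cloud, and velocities are pulled toward those goals instead of being driven by forces.

import numpy as np

def shape_match_step(x, v, x_rest, dt=0.016, alpha=0.5):
    """One unconditionally stable update of a point-based deformable object.

    x, v   : (n, 3) current positions and velocities.
    x_rest : (n, 3) undeformed rest positions.
    alpha  : stiffness in [0, 1]; 1 gives rigid behaviour.
    """
    c = x.mean(axis=0)           # current centre of mass
    c0 = x_rest.mean(axis=0)     # rest centre of mass
    p = x - c
    q = x_rest - c0
    # Optimal rotation of the rest shape onto the current shape (via SVD of A_pq).
    A_pq = p.T @ q
    U, _, Vt = np.linalg.svd(A_pq)
    R = U @ Vt
    if np.linalg.det(R) < 0:     # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (R @ q.T).T + c      # goal positions: rotated rest shape at the current centre of mass
    # Replace forces by a bounded pull toward the goal positions, which removes
    # the overshooting of plain explicit integration.
    v = v + alpha * (goals - x) / dt
    x = x + dt * v
    return x, v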

7.
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expression using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks using the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of the effectiveness of their influences on bimodal human emotion perception.
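The linear MU representation stated above (a facial deformation approximated as a MUP-weighted combination of motion units) reduces to a single weighted sum; a minimal sketch, with array shapes assumed for illustration:

import numpy as np

def deform_face(neutral, motion_units, mups):
    """Approximate a facial deformation as a linear combination of motion units.

    neutral      : (n_vertices, 3) neutral face geometry.
    motion_units : (n_MUs, n_vertices, 3) learned motion units (MUs).
    mups         : (n_MUs,) motion unit parameters (MUPs), e.g. predicted
                   from audio by the trained neural networks.
    """
    displacement = np.tensordot(mups, motion_units, axes=1)   # MUP-weighted sum of MUs
    return neutral + displacement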

8.
Real-time 3D human animation based on a Kinect depth camera   Cited by: 1 (self-citations: 0, others: 1)
A real-time 3D human animation method based on the H-Anim standard is studied. First, the hierarchical limb structure defined by H-Anim is analyzed, and the coordinate transformations used for human animation are given. Second, the data captured by Kinect is reprocessed with OpenNI, and inverse kinematics is used to compute the rotation matrices of the non-root joints. Finally, the system workflow and experimental procedure are described: OpenGL drives a virtual human with the joint rotation matrices acquired in real time to produce the animation. Experimental results show that the algorithm extracts 3D human poses fairly accurately and reconstructs human motion in real time.
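One simple way to obtain a non-root joint rotation from captured joint positions is to rotate the rest-pose bone direction onto the measured bone direction; the sketch below uses an axis-angle (Rodrigues) construction and is only an illustration of that step, not the paper's exact inverse-kinematics procedure.

import numpy as np

def rotation_between(d_rest, d_captured):
    """Rotation matrix turning the rest-pose bone direction into the captured one."""
    a = d_rest / np.linalg.norm(d_rest)
    b = d_captured / np.linalg.norm(d_captured)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)       # sin of the angle between a and b
    c = float(np.dot(a, b))        # cos of the angle
    if s < 1e-8:
        if c > 0.0:
            return np.eye(3)       # directions already aligned
        # Opposite directions: rotate 180 degrees about any axis perpendicular to a.
        perp = np.array([0.0, 1.0, 0.0]) if abs(a[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
        n = np.cross(a, perp)
        n /= np.linalg.norm(n)
        return 2.0 * np.outer(n, n) - np.eye(3)
    axis /= s
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' rotation formula with sin = s, cos = c.
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def joint_rotation(joint_pos, child_pos, rest_dir):
    """Rotation of a non-root joint from captured joint positions.

    rest_dir : bone direction (joint -> child) in the H-Anim rest pose.
    """
    return rotation_between(rest_dir, child_pos - joint_pos)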

9.
Real-time rendering of large animated crowds consisting of thousands of virtual humans is important for several applications including simulations, games, and interactive walkthroughs but cannot be performed using complex polygonal models at interactive frame rates. For that reason, methods using large numbers of precomputed image-based representations, called impostors, have been proposed. These methods take advantage of existing programmable graphics hardware to compensate for computational expense while maintaining visual fidelity. Thanks to these methods, the number of different virtual humans rendered in real time is no longer restricted by computational power but by texture memory consumed for the variety and discretization of their animations. This work proposes a resource-efficient impostor rendering methodology that employs image morphing techniques to reduce memory consumption while preserving perceptual quality, thus allowing higher diversity or resolution of the rendered crowds. Results of the experiments indicated that the proposed method, in comparison with conventional impostor rendering techniques, can obtain 38 % smoother animations or 87 % better appearance quality by reducing the number of key-frames required for preserving the animation quality via resynthesizing them with up to 92 % similarity in real time.

10.
11.
12.
In this paper we argue for our NPAR system as an effective 2D alternative to most NPR research, which is focused on frame coherent stylised rendering of 3D models. Our approach gives a highly stylised look to images without the support of 3D models. Nevertheless, they still behave as though they are animated by drawing, which they are. First, a stylised brush tool is used to freely draw extreme poses of characters. Each character is built of 2D drawn brush strokes which are manually grouped into layers. Each layer is assigned its place in a drawing hierarchy called a hierarchical display model (HDM). Next, multiple HDMs are created for the same character, each corresponding to a specific view. A collection of HDMs essentially reintroduces some correspondence information to the 2D drawings needed for inbetweening and, in effect, eliminates the need for a true 3D model. Once the models are composed the animator starts by defining keyframes from extreme poses in time. Next, brush stroke trajectories defined by the keyframe HDMs are inbetweened automatically across intermediate frames. Finally, each HDM of each generated inbetween frame is traversed and all elements are drawn one on another from back to front. Our techniques support highly rendered styles which are particularly difficult to animate by traditional means including the ‘airbrushed’, scraperboard, watercolour, Gouache, ‘ink-wash’, pastel, and the ‘crayon’ styles. In addition, we describe the data path to be followed to create highly stylised animations by incorporating real footage. We believe our system offers a new fresh perspective on computer-aided animation production and associated tools.

13.
We introduce an easy and intuitive approach to create animations by assembling existing animations. Using our system, the user needs only to simply scribble regions of interest and select the example animations that he/she wants to apply. Our system will then synthesize a transformation for each triangle and solve an optimization problem to compute the new animation for this target mesh. Like playing a jigsaw puzzle game, even a novice can explore his/her creativity by using our system without learning complicated routines, but just using a few simple operations to achieve the goal.

14.
Each section examines a single technique of conventional, manual animation. Within each section various means of automating the technique are considered. It is seen, for each technique, that there are advantages to using electronic rather than mechanical hardware. Desirable characteristics for an electronic base for animation are identified.

15.
This paper introduces an animation system directed by order. High-level operations such as moving, turning, rolling and bouncing on a flat surface provide an easy-to-use interface to build animations. The underlying animation system relies on a constraint-based deformation model. Previously, to build an animation the user had to break up the desired animation into a list of deformations composed of a set of constraints. In addition to each constraint, he had to control the size and the shape of the deformed area as well as the shape of the deformation. The goal of the order-controlled animation is to encapsulate all the parameters of the deformations. Indeed, using these high-level operations the underlying deformations are completely transparent to the user. Before introducing these operations, we will present some extensions of the deformation model such as the generalized shape of the deformed area and a rotating movement combined with the deformation. We also explain how to control velocity and acceleration.

16.
In this paper we present techniques for reusing view-dependent animation. First, we provide a framework for representing view-dependent animations. We formulate the concept of a view space, which is the space formed by the key views and their associated character poses. Tracing a path on the view space generates the corresponding view-dependent animation in real time. We then demonstrate that the framework can be used to synthesize new stylized animations by reusing view-dependent animations. We present three types of novel reuse techniques. In the first we show how to animate multiple characters from the same view space. Next, we show how to animate multiple characters from multiple view spaces. We use this technique to animate a crowd of characters. Finally, we draw inspiration from cubist paintings and create their view-dependent analogues by using different cameras to control different body parts of the same character.

17.
Raza S., Nadda R., Nirala C. K. Microsystem Technologies, 2023, 29(3): 359-376
Owing to the non-isoenergetic discharge pulses in an RC-based micro-electrical discharge machining (µEDM) process, the unit material removal analysis is difficult....

18.
In most currently produced computer generated animation there is significant visual information that is literally never seen because of the characteristics of the human visual system. By tailoring the rendering of animation images to match the characteristics of human visual perception, significant computational and image storage savings can be obtained while retaining perceived animation quality. Vision research indicates that the human visual system processes two information channels. The transient channel rapidly processes low spatial resolution full colour information. The sustained channel processes high spatial resolution luminance information at a much slower rate. This paper discusses how the rendering of animation images can be accomplished to match these perception characteristics and achieve significant savings while maintaining animation quality. Computation and storage savings of up to 80 per cent are possible. Several animation segments have been produced to demonstrate the viability of this approach.
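The savings follow from storing full colour only at low resolution (transient channel) while keeping luminance at full resolution (sustained channel). A minimal sketch of reconstructing a displayable frame from two such channels, assuming a YCbCr-style split and simple replication upsampling (the paper's actual encoding may differ):

import numpy as np

def reconstruct_frame(luma_full, chroma_low, factor=4):
    """Rebuild a displayable frame from the two perceptual channels.

    luma_full  : (H, W) full-resolution luminance (sustained channel).
    chroma_low : (H // factor, W // factor, 2) low-resolution chroma
                 (transient channel), e.g. Cb/Cr in [0, 1].
    """
    # Upsample the chroma by simple replication; the transient channel is
    # insensitive to the missing high-frequency colour detail.
    chroma_full = np.repeat(np.repeat(chroma_low, factor, axis=0), factor, axis=1)
    y = luma_full
    cb = chroma_full[..., 0] - 0.5
    cr = chroma_full[..., 1] - 0.5
    # Standard YCbCr -> RGB conversion (BT.601), values in [0, 1].
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.dstack([r, g, b]), 0.0, 1.0)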

19.
Creating speech-synchronized animation   Cited by: 1 (self-citations: 0, others: 1)
We present a facial model designed primarily to support animated speech. Our facial model takes facial geometry as input and transforms it into a parametric deformable model. The facial model uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. Our facial model has a highly deformable lip model that is grafted onto the input facial geometry to provide the necessary geometric complexity needed for creating lip shapes and high-quality renderings. Our facial model also includes a highly deformable tongue model that can represent the shapes the tongue undergoes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we hierarchically deform the facial surface. We also present a method to animate the facial model over time to create animated speech using a model of coarticulation that blends visemes together using dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing visemes as curves instead of keyframes. We show the utility of the techniques described in this paper by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized. The system then automatically creates accurate real-time animated speech from the input text. It is capable of cheaply producing tremendous amounts of animated speech with very low resource requirements.
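The dominance-function blending mentioned above can be sketched as a normalised weighted average of viseme targets, with each weight given by a dominance curve centred on the viseme's time; the exponential curve shape and its parameters below are assumptions, not the paper's calibrated model.

import numpy as np

def dominance(t, centre, magnitude=1.0, rate=10.0):
    # Assumed exponential dominance curve, peaking at the viseme's centre time.
    return magnitude * np.exp(-rate * np.abs(t - centre))

def blend_visemes(t, centres, targets):
    """Value of one articulatory parameter at time t from overlapping visemes.

    centres : (n_visemes,) centre times of the visemes (seconds).
    targets : (n_visemes,) target value of the parameter for each viseme.
    """
    targets = np.asarray(targets, dtype=float)
    d = np.array([dominance(t, c) for c in centres])
    return float(np.sum(d * targets) / np.sum(d))   # dominance-weighted average

# Example: a lip-rounding parameter blended over three visemes.
times = np.linspace(0.0, 0.6, 7)
curve = [blend_visemes(t, centres=[0.1, 0.3, 0.5], targets=[0.2, 0.9, 0.4])
         for t in times]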

20.
This paper reports on an innovative use of the WWW at the University of Geneva. Indeed, until now, the web has mostly been used to make information publicly available, whether directly in documents, or through search engines querying external databases. With the availability of forms, we are now starting to see WWW viewers used to input information, e.g. to order a pizza or give one's opinion about some topic. In the Computer Science department, as a semester project for the Software Engineering class, we are now implementing an experiment to allow first year students to have not only on-line access to a hypertextual version of the book used in the Data Structures class, but also to “animate” the algorithms that are described in the book. That is, the students can run, on the server, the program (or program segment) and interact with the execution to put breakpoints in the code, display the contents of variables and advance execution either step by step, or until a breakpoint is met, in much the same way as with a symbolic debugger. Doing this required the development of a whole set of tools to facilitate, and even automate, the preparation of the algorithms to allow them to be started and controlled from a WWW client like Mosaic. It also required designing a mechanism to have the server spawn subprocesses to execute the algorithms and have the server query the appropriate subprocess for what has to be displayed next, based on the user's queries. This paper will describe the technical solutions that had to be designed to make this remote control feasible. Even though the project described in this paper has educational purposes, many of the solutions described can prove useful in very different contexts.
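The server-side mechanism described (spawn one subprocess per algorithm run, then query it for what to display whenever the student advances execution) can be sketched as follows; the line-based "step"/"quit" protocol and the worker script are hypothetical, introduced only to illustrate the idea.

import subprocess

class AlgorithmSession:
    """One spawned subprocess executing an algorithm under remote control.

    Assumes a worker program that reads one command per line on stdin
    ("step" or "quit") and prints the state to display after each step.
    """

    def __init__(self, worker_cmd):
        self.proc = subprocess.Popen(
            worker_cmd,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
        )

    def step(self):
        # Advance the algorithm by one step and return what should be displayed.
        self.proc.stdin.write("step\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def close(self):
        self.proc.stdin.write("quit\n")
        self.proc.stdin.flush()
        self.proc.wait()

# Usage (with a hypothetical worker script): the web server keeps one session
# per student and calls step() each time a "next step" request arrives.
# session = AlgorithmSession(["python", "bubble_sort_worker.py"])
# display_fragment = session.step()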
