Similar Documents
20 similar documents found.
1.
The human face is a complex biomechanical system, and non-linearity is a prominent feature of facial expressions. In blendshape animation, however, the facial expression space is linearized by assuming a linear relationship between blending weights and the deformed face geometry, which reduces the realism of the resulting animation. To synthesize more realistic facial animation, this relationship should be non-linear so as to allow the greatest generality and fidelity of facial expressions. Unfortunately, few existing works address how to measure this non-linear relationship. In this paper, we propose an optimization scheme that automatically explores the non-linear relationship of blendshape facial animation from captured facial expressions. Experiments show that the explored non-linear relationship agrees well with the non-linearity of facial expressions and synthesizes more realistic facial animation than the linear one.
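As an illustration of the difference between linear and non-linear blending discussed above, here is a minimal NumPy sketch; the per-weight cubic warp and all names are illustrative assumptions, not the optimization scheme proposed in the paper.

```python
import numpy as np

def blendshape_linear(neutral, deltas, weights):
    """Standard linear blendshape: face = b0 + sum_i w_i * (b_i - b0)."""
    # neutral: (V, 3), deltas: (K, V, 3), weights: (K,)
    return neutral + np.tensordot(weights, deltas, axes=1)

def blendshape_nonlinear(neutral, deltas, weights, poly):
    """Illustrative non-linear variant: each slider weight is first passed
    through a per-shape polynomial w' = a*w + b*w^2 + c*w^3 (assumed fitted
    offline to captured expressions), then blended linearly."""
    a, b, c = poly  # each of shape (K,)
    warped = a * weights + b * weights**2 + c * weights**3
    return neutral + np.tensordot(warped, deltas, axes=1)

# toy usage with random placeholder geometry
V, K = 100, 4
neutral = np.random.rand(V, 3)
deltas = np.random.rand(K, V, 3) * 0.1
w = np.array([0.2, 0.8, 0.0, 0.5])
poly = (np.ones(K), 0.3 * np.ones(K), -0.1 * np.ones(K))
face_lin = blendshape_linear(neutral, deltas, w)
face_nl = blendshape_nonlinear(neutral, deltas, w, poly)
```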

2.
Although avatars may resemble communicative interface agents, they have for the most part not profited from recent research into autonomous embodied conversational systems. In particular, even though avatars function within conversational environments (for example, chat or games), and even though they often resemble humans (with a head, hands, and a body), they are incapable of representing the kinds of knowledge that humans have about how to use the body during communication. Humans, however, make extensive use of the visual channel for interaction management, where many subtle and even involuntary cues are read from stance, gaze, and gesture. We argue that modeling and animating such fundamental behavior is crucial for the credibility and effectiveness of virtual interaction in chat. By treating the avatar as a communicative agent, we propose a method to automate the animation of important communicative behavior, drawing on work in conversation and discourse theory. BodyChat is a system that allows users to communicate via text while their avatars automatically animate attention, salutations, turn taking, back-channel feedback, and facial expression. An evaluation shows that users found an avatar with autonomous conversational behaviors to be more natural than avatars whose behaviors they controlled, and to increase the perceived expressiveness of the conversation. Interestingly, users also felt that avatars with autonomous communicative behaviors provided a greater sense of user control.

3.
We present a novel performance-driven approach to animating cartoon faces starting from pure 2D drawings. A 3D approximate facial model, built automatically from front and side view master frames of the character drawings, enables the animated cartoon faces to be viewed from angles different from that of the input video. The expressive mappings are built by an artificial neural network (ANN) trained on examples of the real face in the video paired with cartoon facial drawings from the character's facial expression graph. The learned mapping model lets the resulting facial animation properly convey the desired expressiveness, rather than merely reproducing the facial actions in the input video sequence. Furthermore, a lit sphere capturing the lighting of the painted facial artwork is used to color the cartoon faces according to the 3D approximate facial model, reinforcing the hand-drawn appearance of the resulting facial animation. We conducted a series of comparative experiments, recreating facial expressions from commercial animation, to test the effectiveness of our method. The results clearly demonstrate the superiority of our method not only in generating high quality cartoon-style facial expressions, but also in speeding up the animation production of cartoon faces. Copyright © 2011 John Wiley & Sons, Ltd.
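A minimal sketch of the expressive mapping idea, assuming tracked landmark features on one side and cartoon expression weights on the other; the dimensions and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed setup: each training pair maps tracked landmark offsets of the real
# face in the video (e.g. 2 * 68 values) to blend weights over the character's
# expression-graph drawings (e.g. 10 values). Dimensions are illustrative.
X_train = np.random.rand(200, 136)   # real-face features per example frame
Y_train = np.random.rand(200, 10)    # corresponding cartoon expression weights

ann = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
ann.fit(X_train, Y_train)

# At runtime, tracked features from a new video frame are mapped to cartoon
# expression weights, which then drive the approximate 3D facial model.
new_frame_features = np.random.rand(1, 136)
cartoon_weights = ann.predict(new_frame_features)
```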

4.
Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation; if the face animation is derived from motion capture, it is typically recorded in a mo-cap booth while the actor sits relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often produces an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest-poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance exactly matches the artist-created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. The system automatically combines facial animation and head motion so that they are consistent, while preserving the original animation as closely as possible. It is easy to use and readily integrates with existing animation pipelines.
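A minimal sketch, under assumed constants, of the per-frame rest-pose idea: each vertex is a damped spring pulled toward the artist's animation frame, so without external forces the output follows the blendshape animation. This is not the paper's solver, only an illustration of the concept.

```python
import numpy as np

def simulate_secondary_motion(anim_frames, dt=1.0 / 30.0,
                              stiffness=400.0, damping=12.0,
                              external_force=None):
    """Per-vertex damped springs pulled toward the artist animation.

    anim_frames: (F, V, 3) blendshape animation used as per-frame rest poses.
    Without external forces the simulated result reproduces the input frames
    (up to the spring transient); head acceleration etc. can be injected via
    external_force(frame_index) -> (V, 3).
    """
    F, V, _ = anim_frames.shape
    x = anim_frames[0].copy()          # simulated positions
    v = np.zeros_like(x)               # velocities
    out = np.empty_like(anim_frames)
    for f in range(F):
        rest = anim_frames[f]          # per-frame rest pose, not a single bind pose
        force = stiffness * (rest - x) - damping * v
        if external_force is not None:
            force = force + external_force(f)
        v = v + dt * force             # unit mass, semi-implicit Euler
        x = x + dt * v
        out[f] = x
    return out
```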

5.
This paper presents a novel data-driven expressive speech animation synthesis system with phoneme-level controls. The system is based on a pre-recorded facial motion capture database, in which an actress was directed to recite a pre-designed corpus with four facial expressions (neutral, happiness, anger and sadness). Given new phoneme-aligned expressive speech and its emotion modifiers as inputs, a constrained dynamic programming algorithm searches the processed facial motion database for the best-matched captured motion clips by minimizing a cost function. Users can optionally specify 'hard constraints' (motion-node constraints for expressing phoneme utterances) and 'soft constraints' (emotion modifiers) to guide this search. We also introduce a phoneme–Isomap interface for visualizing and interacting with phoneme clusters, which are typically composed of thousands of facial motion capture frames. On top of this visualization interface, users can conveniently remove contaminated motion subsequences from a large facial motion dataset. Facial animation synthesis experiments and objective comparisons between synthesized and captured facial motion show that the system is effective for producing realistic expressive speech animations.
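A sketch of how such a constrained search can be organized as a Viterbi-style dynamic program over motion nodes; the cost terms and the way soft/hard constraints enter are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np

def dp_select_clips(data_cost, trans_cost, hard_constraints=None):
    """Pick one database motion node per phoneme slot.

    data_cost: (T, N) cost of node n for phoneme slot t (emotion modifiers,
               i.e. 'soft constraints', can simply be folded into this term).
    trans_cost: (N, N) cost of concatenating node i before node j.
    hard_constraints: optional {slot: node} forcing specific choices.
    Returns the list of T node indices minimising the total cost.
    """
    T, N = data_cost.shape
    cost = data_cost.copy()
    if hard_constraints:
        for t, n in hard_constraints.items():
            cost[t, :] = np.inf
            cost[t, n] = data_cost[t, n]
    acc = cost[0].copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        total = acc[:, None] + trans_cost        # (N, N): previous -> current
        back[t] = np.argmin(total, axis=0)
        acc = total[back[t], np.arange(N)] + cost[t]
    path = [int(np.argmin(acc))]
    for t in range(T - 1, 0, -1):                # backtrack through the slots
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```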

6.
Expressive facial animations are essential to enhance the realism and credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that combines the advantages of both approaches. By analyzing a database of facial motion, we create a human appearance space, which provides a coherent and continuous parameterization of human facial movements while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance space for a synthetic character. The link between the two appearance spaces makes it possible to retarget facial animation onto a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them in a low-dimensional control space. The control space acts as an interactive user interface to manipulate the facial expressions of any synthetic face, making it simple and intuitive to generate still facial configurations for keyframe animation as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system with the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.
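A minimal PCA sketch of an appearance/control space of this kind: a handful of retained modes parameterize facial configurations and can be interpolated to produce temporal sequences. The data shapes and the plain SVD are assumptions, not the authors' construction.

```python
import numpy as np

# Assumed data: F captured frames of V tracked facial points, flattened.
frames = np.random.rand(500, 3 * 60)           # placeholder motion database
mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)

k = 8                                          # retained variation patterns
basis = Vt[:k]                                 # principal deformation modes

def encode(face_flat):
    """Project a facial configuration into the low-dimensional control space."""
    return (face_flat - mean) @ basis.T

def decode(controls):
    """Map control-space coordinates back to a full facial configuration."""
    return mean + controls @ basis

# A slider UI would expose the k coordinates of encode(frame) directly;
# interpolating them between keyframes yields full temporal sequences.
key_a, key_b = encode(frames[10]), encode(frames[200])
sequence = np.array([decode((1 - t) * key_a + t * key_b)
                     for t in np.linspace(0, 1, 30)])
```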

7.
Physically-based animation techniques enable more realistic and accurate animation to be created. We present a fully physically-based approach for efficiently producing realistic-looking animations of facial movement, including animation of expressive wrinkles. This involves simulation of detailed voxel-based models using a graphics processing unit-based total Lagrangian explicit dynamic finite element solver with an anatomical muscle contraction model, and advanced boundary conditions that can model the sliding of soft tissue over the skull. The flexibility of our approach enables detailed animations of gross and fine-scale soft-tissue movement to be easily produced with different muscle structures and material parameters, for example, to animate skin of different ages. Although we focus on the forehead, our approach can be used to animate any multi-layered soft body. © 2014 The Authors. Computer Animation and Virtual Worlds published by John Wiley & Sons, Ltd.

8.
Facial animation is a time-consuming and cumbersome task that requires years of experience and/or a complex and expensive set-up. This becomes an issue especially when animating the multitude of secondary characters required, e.g. in films or video-games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. Common poses are identified using a Euclidean-based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; we simplify it by optimizing for the desired graph compression instead. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels extracted from the database provide the control mechanism for animation. We present a way of creating facial animation from reduced input that automatically controls timing and pose detail. Our technique fits easily within video-game and crowd animation contexts, allowing characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
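A compact sketch of the two core ingredients, pose merging by a Euclidean threshold and synthesis by Dijkstra traversal; the graph construction and costs are simplified relative to the paper (the compression-driven threshold selection, for instance, is not reproduced).

```python
import heapq
import numpy as np

def build_motion_graph(poses, threshold):
    """Merge landmark poses whose Euclidean distance falls below `threshold`
    into the same node; edges follow the original clip order."""
    nodes, node_of_frame = [], []
    for p in poses:
        for i, q in enumerate(nodes):
            if np.linalg.norm(p - q) < threshold:
                node_of_frame.append(i)
                break
        else:
            nodes.append(p)
            node_of_frame.append(len(nodes) - 1)
    edges = {}
    for a, b in zip(node_of_frame, node_of_frame[1:]):
        if a != b:
            edges.setdefault(a, {})[b] = np.linalg.norm(nodes[a] - nodes[b])
    return nodes, edges

def dijkstra_path(edges, start, goal):
    """Shortest pose-to-pose path; traversing it plays back the animation."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == goal:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v, w in edges.get(u, {}).items():
            if v not in seen:
                heapq.heappush(pq, (d + w, v, path + [v]))
    return None
```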

9.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.

10.
11.
For the last decades, the challenge of producing convincing facial animation has garnered great interest, which has only accelerated with the recent explosion of 3D content in both entertainment and professional activities. The use of motion capture and retargeting has arguably become the dominant solution to address this demand. Yet, despite their high level of quality and automation, performance-based animation pipelines still require manual cleaning and editing to refine raw results, which is a time- and skill-demanding process. In this paper, we look to leverage machine learning to make facial animation editing faster and more accessible to non-experts. Inspired by recent image inpainting methods, we design a generative recurrent neural network that generates realistic motion into designated segments of an existing facial animation, optionally following user-provided guiding constraints. Our system handles different supervised or unsupervised editing scenarios such as motion filling during occlusions, expression corrections, semantic content modifications, and noise filtering. We demonstrate the usability of our system on several animation editing use cases.
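A minimal PyTorch sketch of a recurrent infilling model in this spirit: the network receives the animation with a zeroed-out segment plus a mask channel and predicts the missing frames. The architecture, feature size and training loop are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class MotionInfiller(nn.Module):
    """GRU that predicts facial-animation frames inside a masked segment."""
    def __init__(self, n_features=136, hidden=256):
        super().__init__()
        # input = frame (zeroed where masked) + 1 mask channel
        self.rnn = nn.GRU(n_features + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, frames, mask):
        # frames: (B, T, n_features), mask: (B, T, 1) with 1 = missing
        x = torch.cat([frames * (1 - mask), mask], dim=-1)
        h, _ = self.rnn(x)
        pred = self.out(h)
        # keep observed frames, fill only the designated segment
        return frames * (1 - mask) + pred * mask

# toy training step: reconstruction loss on the masked segment only
model = MotionInfiller()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(4, 120, 136)              # batch of animation windows
mask = torch.zeros(4, 120, 1)
mask[:, 40:70] = 1.0                           # segment designated for editing
loss = ((model(frames, mask) - frames) ** 2 * mask).mean()
opt.zero_grad(); loss.backward(); opt.step()
```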

12.
Speech-driven facial animation is an important intelligent technology in virtual reality (VR) and related fields; the rapid development of VR in recent years has further highlighted the pressing need for natural human–computer communication in immersive environments. Speech-driven facial animation can produce natural, vivid and emotionally expressive animation, and compared with traditional pre-scripted facial animation it better supports human–computer interaction and improves the user experience. To advance the intelligence and application of this technology, this survey focuses on the key problem of speech-driven facial animation, the audio-visual mapping, and reviews frame-by-frame, multi-frame and phoneme-level mapping methods. It also summarizes the ideas behind various face models, methods for animation synthesis, emotion fusion and facial animation evaluation, and possible directions for future research.
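A minimal sketch of the frame-by-frame and multi-frame audio-visual mappings surveyed above, using nearest-neighbour lookup in a paired database; the feature choices (MFCC-like audio vectors, blendshape-weight visual parameters) are illustrative assumptions.

```python
import numpy as np

def frame_by_frame_mapping(audio_feats, db_audio, db_visual):
    """Frame-by-frame audio-visual mapping: each input audio feature frame
    (e.g. an MFCC vector) is matched to its nearest database frame and the
    paired visual parameters (e.g. blendshape weights) are copied over."""
    out = []
    for a in audio_feats:
        idx = np.argmin(np.linalg.norm(db_audio - a, axis=1))
        out.append(db_visual[idx])
    return np.array(out)

def multi_frame_mapping(audio_feats, db_audio, db_visual, window=5):
    """Multi-frame variant: match a short context window instead of a single
    frame, which helps capture coarticulation between neighbouring sounds."""
    pad = window // 2
    padded = np.pad(audio_feats, ((pad, pad), (0, 0)), mode='edge')
    db_pad = np.pad(db_audio, ((pad, pad), (0, 0)), mode='edge')
    db_windows = np.array([db_pad[i:i + window].ravel()
                           for i in range(len(db_audio))])
    out = []
    for i in range(len(audio_feats)):
        query = padded[i:i + window].ravel()
        idx = np.argmin(np.linalg.norm(db_windows - query, axis=1))
        out.append(db_visual[idx])
    return np.array(out)
```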

13.
This paper presents a parametric performance-based model for facial animation inspired by the Facial Action Coding System (FACS) developed by P. Ekman and W. V. Friesen. FACS consists of 44 Action Units (AUs) corresponding to visual changes on the face; in addition, predefined co-occurrence rules describe how different AUs influence each other. In our model, each facial animation parameter corresponds to one of the AUs defined in FACS. The model is completed with methods for accumulating the displacements of separate AUs and a fuzzy-logic adaptation of the co-occurrence rules from FACS. We also describe a method for adapting our model to a specific person.
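A minimal sketch of AU-based displacement accumulation with a crude co-occurrence attenuation standing in for the fuzzy-logic rules; the rule format and constants are assumptions, not the paper's adaptation.

```python
import numpy as np

def apply_action_units(neutral, au_displacements, au_intensities, co_occurrence):
    """Accumulate AU displacements on a neutral mesh.

    neutral:          (V, 3) neutral face vertices
    au_displacements: {au_id: (V, 3)} displacement at full intensity
    au_intensities:   {au_id: float in [0, 1]} animation parameters
    co_occurrence:    {(au_i, au_j): factor} attenuation applied to au_j when
                      au_i is active -- a crude stand-in for the fuzzy rules.
    """
    effective = dict(au_intensities)
    for (ai, aj), factor in co_occurrence.items():
        if au_intensities.get(ai, 0.0) > 0.0 and aj in effective:
            effective[aj] *= 1.0 - factor * au_intensities[ai]
    face = neutral.copy()
    for au, w in effective.items():
        face += w * au_displacements[au]
    return face

# e.g. AU12 (lip corner puller) damping AU15 (lip corner depressor) by 80%
co_rules = {(12, 15): 0.8}
```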

14.
We propose a design framework to assist with user-generated content in facial animation, without requiring any animation experience or a ground-truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach encodes the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Through a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of discarding unfavourable control values, the parameters converge towards the user's ideal. We demonstrate our framework through two non-trivial applications: creating highly nuanced expressions by evolving the control values of a face rig, and creating non-linear motion by evolving the control point positions of animation curves.
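A minimal sketch of the interactive select-and-resample loop described above; the mutation operator, population size and shrinking step size are assumptions rather than the paper's evolutionary algorithm.

```python
import numpy as np

def evolve_controls(n_controls, render, ask_user, generations=10,
                    population=8, sigma=0.3, rng=None):
    """Interactive evolution of rig control values in [0, 1].

    render(controls) shows the posed face to the user; ask_user(candidates)
    returns a list of indices the user marks as favourable. Mutation strength
    shrinks each generation so the population converges on the user's ideal.
    """
    rng = rng or np.random.default_rng()
    pop = rng.random((population, n_controls))
    for g in range(generations):
        for c in pop:
            render(c)
        chosen = ask_user(pop)
        if not chosen:
            continue
        parents = pop[chosen]
        children = []
        for _ in range(population):
            p = parents[rng.integers(len(parents))]
            children.append(np.clip(p + rng.normal(0, sigma, n_controls), 0, 1))
        pop = np.array(children)
        sigma *= 0.8                      # favour exploitation over time
    return pop[0]
```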

15.
Colouration Issues in Computer Generated Facial Animation
In everyday interactions with one another we use the face for recognising people and for communicating with them. Despite the considerable amount of research into computer generated facial animation, one particular aspect, the colouration of the face, appears to have been neglected. In this paper we address issues pertinent to the use of colour both for modelling the appearance of the face and for enhancing communication during facial expression and animation. Colouration is an integral part of the face, which helps in the recognition of faces as well as in the interpretation of the often subtle signals emitted by the human face.

16.
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method achieves high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the actor's appearance, and we further condition it on the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that regressing on multiple video streams achieves higher quality than previous approaches designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration in which cameras are mounted outside the field of view of the actor. This is very beneficial, as the cameras are then less of a distraction for the actor and allow an unobstructed line of sight to the director and other actors. Our real-time facial capture approach has immediate application in on-set virtual production, in particular given the ever-growing demand for motion-captured facial animation in visual effects and video games.
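A single-view sketch of fern regression (binary feature-difference tests indexing bins that store mean regression targets); the multi-dimensional, multi-view extension proposed in the paper is not reproduced, and all constants are illustrative.

```python
import numpy as np

class RandomFern:
    """One fern: S binary feature-difference tests index 2^S regression bins."""
    def __init__(self, n_features, n_tests=6, rng=None):
        rng = rng or np.random.default_rng()
        self.pairs = rng.integers(0, n_features, size=(n_tests, 2))
        self.thresh = rng.normal(0, 0.05, size=n_tests)
        self.bins = None

    def _index(self, feats):
        bits = (feats[self.pairs[:, 0]] - feats[self.pairs[:, 1]]) > self.thresh
        return int(np.dot(bits, 1 << np.arange(len(bits))))

    def fit(self, X, Y):
        # average the regression targets (e.g. facial shape updates) per bin
        n_bins = 1 << len(self.thresh)
        self.bins = np.zeros((n_bins, Y.shape[1]))
        counts = np.zeros(n_bins)
        for x, y in zip(X, Y):
            i = self._index(x)
            self.bins[i] += y
            counts[i] += 1
        self.bins /= np.maximum(counts, 1)[:, None]
        return self

    def predict(self, x):
        return self.bins[self._index(x)]

# An ensemble averages many ferns; regressing jointly on features from all
# helmet-camera views (rather than one) is the multi-view idea of the paper.
```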

17.
The creation of stylistic animation through the use of high-level controls has always been a design goal for computer animation software. In this paper, we propose a procedural animation system, called rhythmic character animation playacting (RhyCAP), which allows a designer to interactively direct animated characters by adjusting rhythmic parameters such as tempo, exaggeration, and timing. The motions thus generated reflect the intention of the director and also adapt to environmental obstacle constraints. We use a sequence of martial-art steps in the performance of a Chinese lion dance to illustrate the effectiveness of the system. The animation is generated by composing common motion elements, concisely represented in an action graph. We have implemented an animation control program that allows a Chinese lion dance to be choreographed interactively. This authoring tool also serves as a useful means of preserving this piece of world cultural heritage. Copyright © 2006 John Wiley & Sons, Ltd.

18.
This paper presents new methods for efficient modeling and animation of a hierarchical facial model that conforms to human facial anatomy, enabling realistic and fast 3D facial expression synthesis. The facial model has a skin–muscle–skull structure. The deformable skin model directly simulates the nonlinear visco-elastic behavior of soft tissue and effectively prevents model collapse. Facial muscles are constructed using an efficient muscle mapping approach: based on a cylindrical projection of the texture-mapped facial surface and the wire-frame skin and skull meshes, this approach ensures that the different muscles are located at anatomically correct positions between the skin and skull layers. For computational efficiency, we devise an adaptive simulation algorithm that uses either a semi-implicit integration scheme or a quasi-static solver to compute the relaxation, traversing the designed data structures in breadth-first order. The algorithm runs in real time and has successfully synthesized realistic facial expressions. ACM CCS: I.3.5 Computer Graphics: Computational Geometry and Object Modelling—physically based modelling; I.3.7 Computer Graphics: Three-Dimensional Graphics and Realism—animation
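Two small sketches of ingredients mentioned above: a strain-dependent (collapse-resisting) spring force as a stand-in for the nonlinear visco-elastic skin behavior, and the semi-implicit update used when the simulation is dynamic. The constants and the specific force law are assumptions, not the paper's model.

```python
import numpy as np

def biphasic_spring_force(xi, xj, rest_len, k_low=30.0, k_high=300.0,
                          strain_limit=0.15):
    """Soft at small strains, much stiffer beyond strain_limit -- one simple
    way to keep a skin layer from collapsing under large deformations."""
    d = xj - xi
    length = np.linalg.norm(d)
    if length < 1e-9:
        return np.zeros(3)
    strain = (length - rest_len) / rest_len
    k = k_low if abs(strain) < strain_limit else k_high
    return k * (length - rest_len) * (d / length)

def semi_implicit_step(x, v, forces, mass, dt):
    """Semi-implicit Euler used when the simulation is dynamic; a quasi-static
    solver would instead iterate the force balance to equilibrium each frame."""
    v = v + dt * forces / mass
    x = x + dt * v
    return x, v
```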

19.
We present a new real-time approach to simulating deformable objects that uses a learnt statistical model to achieve a high degree of realism. Our approach improves upon state-of-the-art interactive shape-matching meshless simulation methods by capturing important nuances not only of an object's kinematics but also of its dynamic texture variation. We achieve this in an automated pipeline from data capture to simulation. Our system allows the capture of idiosyncratic characteristics of an object's dynamics, which for many simulations (e.g. facial animation) is essential, and permits the plausible simulation of mechanically complex objects without knowledge of their inner workings. The main idea of our approach is to use a flexible statistical model to achieve a geometrically-driven simulation that allows arbitrarily complex yet easily learned deformations while preserving the desirable properties (stability, speed and memory efficiency) of current shape-matching simulation systems. The principal advantage of our approach is the ease with which a pseudo-mechanical model can be learned from 3D scanner data to yield realistic animation. We present examples of non-trivial biomechanical objects simulated on a desktop machine in real time, demonstrating superior realism over current geometrically motivated simulation techniques.
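For context, a minimal sketch of the shape-matching core that such methods build on (best-fit rigid transform via SVD and an explicit pull toward the goal positions); the learnt statistical model that replaces the fixed rest shape in the paper is not reproduced here.

```python
import numpy as np

def shape_matching_goals(rest, current):
    """Best-fit rigid transform (Kabsch) of the rest shape onto the current
    points; the returned goal positions are what each point is pulled toward."""
    c_rest = rest.mean(axis=0)
    c_cur = current.mean(axis=0)
    P = rest - c_rest
    Q = current - c_cur
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return (R @ P.T).T + c_cur

def shape_matching_step(rest, x, v, dt=1.0 / 60.0, alpha=0.5):
    """One explicit step pulling the current points toward their goals."""
    goals = shape_matching_goals(rest, x)
    v = v + alpha * (goals - x) / dt
    x = x + dt * v
    return x, v
```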

20.