Similar Documents
20 similar documents found (search time: 0 ms)
1.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: first, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input; then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model; finally, the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
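A minimal sketch of the PIEES idea described above, with hypothetical data shapes and random stand-ins for the captured marker motions (the time-warping step is omitted): time-aligned neutral speech is subtracted from expressive capture, PCA builds the eigenspace, and a sampled expression signal is blended back onto neutral visual speech.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_markers = 500, 90          # e.g. 30 markers x 3 coords (assumed)
expressive = rng.normal(size=(n_frames, n_markers))   # stand-in for mocap data
neutral = rng.normal(size=(n_frames, n_markers))

# Motion-signal processing: subtract time-aligned neutral speech to isolate
# the dynamic expression signal.
expression_signal = expressive - neutral

# Build the phoneme-independent expression eigenspace (PIEES) by PCA reduction.
piees = PCA(n_components=10).fit(expression_signal)

# Synthesize a new expression signal in the eigenspace and map it back.
coeffs = rng.normal(scale=piees.singular_values_ / np.sqrt(n_frames),
                    size=(n_frames, 10))
new_expression = piees.inverse_transform(coeffs)

# Blend the expression signal with synthesized neutral visual speech.
alpha = 0.7                            # blend weight (assumed)
expressive_speech = neutral + alpha * new_expression
```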

2.
We present a lightweight non-parametric method for generating wrinkles for 3D facial modeling and animation. The key lightweight feature of the method is that it can generate plausible wrinkles using a single low-cost Kinect camera and one high-quality, detailed 3D face model as the example. Our method works in two stages: (1) offline personalized wrinkled-blendshape construction, in which user-specific expressions are recorded with the RGB-depth camera and wrinkles are generated through example-based synthesis of geometric details; and (2) online 3D facial performance capture, in which the reconstructed expressions are used as blendshapes to capture facial animation in real time. Experiments on a variety of facial performance videos show that our method produces plausible results, approximating the wrinkles accurately. Furthermore, our technique is low-cost and convenient for common users.
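A minimal sketch of the standard linear blendshape model that the online stage relies on; the meshes and weights below are hypothetical stand-ins for the personalized wrinkled blendshapes and the weights recovered by performance capture.

```python
import numpy as np

n_vertices = 5000
neutral = np.zeros((n_vertices, 3))                  # neutral face mesh
blendshapes = np.random.rand(4, n_vertices, 3)       # wrinkled expression meshes

def apply_blendshapes(neutral, blendshapes, weights):
    """Linear blendshape model: neutral + sum_i w_i * (B_i - neutral)."""
    deltas = blendshapes - neutral                   # per-expression offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# In the real system, weights would come from the Kinect capture each frame.
frame_weights = np.array([0.6, 0.1, 0.0, 0.3])
posed = apply_blendshapes(neutral, blendshapes, frame_weights)
```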

3.
Realistic rendering and animation of knitwear (total citations: 2; self-citations: 0; cited by others: 2)
We present a framework for knitwear modeling and rendering that accounts for characteristics that are particular to knitted fabrics. We first describe a model for animation that considers knitwear features and their effects on knitwear shape and interaction. With the computed free-form knitwear configurations, we present an efficient procedure for realistic synthesis based on the observation that a single cross section of yarn can serve as the basic primitive for modeling entire articles of knitwear. This primitive, called the lumislice, describes radiance from a yarn cross section that accounts for fine-level interactions among yarn fibers. By representing yarn as a sequence of identical but rotated cross sections, the lumislice can effectively propagate local microstructure over arbitrary stitch patterns and knitwear shapes. The lumislice accommodates varying levels of detail, allows for soft shadow generation, and capitalizes on hardware-assisted transparency blending. These modeling and rendering techniques together form a complete approach for generating realistic knitwear.
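A minimal geometric sketch of the "rotated cross section" idea behind the lumislice: one 2D yarn profile is swept along a stitch curve and rotated about the yarn axis at each step. This is purely illustrative; the real lumislice stores radiance, not just geometry, and a production sweep would use parallel-transport frames.

```python
import numpy as np

def sweep_yarn(path, cross_section, twist_per_step=0.3):
    """path: (n,3) centerline points; cross_section: (m,2) profile points."""
    points = []
    for i in range(len(path) - 1):
        tangent = path[i + 1] - path[i]
        tangent /= np.linalg.norm(tangent)
        # Simplified frame around the tangent (assumes tangent is never
        # parallel to the z-axis, which holds for this example path).
        ref = np.array([0.0, 0.0, 1.0])
        u = np.cross(tangent, ref); u /= np.linalg.norm(u)
        v = np.cross(tangent, u)
        theta = i * twist_per_step                   # rotate the slice
        c, s = np.cos(theta), np.sin(theta)
        for x, y in cross_section:
            xr, yr = c * x - s * y, s * x + c * y
            points.append(path[i] + xr * u + yr * v)
    return np.array(points)

circle = np.array([[np.cos(t), np.sin(t)] for t in np.linspace(0, 2 * np.pi, 8)])
path = np.array([[t, np.sin(t), 0.0] for t in np.linspace(0, 6, 50)])
yarn_points = sweep_yarn(path, 0.1 * circle)
```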

4.
A combustion-based technique for fire animation and visualization (total citations: 1; self-citations: 0; cited by others: 1)
In this paper, we present a new fire animation and visualization scheme. The most difficult problem in creating fire animation is simulating the mechanism by which fire emits light and heat. We attack this difficulty with a simulation scheme for the combustion process in a voxelized space, in which the classical fluid equations are solved numerically. The combustion process is thus simulated at each voxel, and the amount of heat generated at the voxel is estimated. The generated heat increases the temperature at the voxel, which in turn increases the turbulent motion of the fire. We also propose a visualization scheme based on a photon mapping algorithm to render the fire and its various lighting effects on the environment.
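A minimal sketch of the per-voxel combustion step described above: fuel burns where the temperature exceeds ignition, releasing heat that raises the local temperature. All constants are illustrative, and the Navier-Stokes advection step is omitted.

```python
import numpy as np

N = 32
fuel = np.random.rand(N, N, N)                 # fuel density per voxel
temperature = np.full((N, N, N), 300.0)        # Kelvin
temperature[14:18, 14:18, 0:2] = 1500.0        # ignition source

T_IGNITE, BURN_RATE, HEAT_YIELD, COOLING = 800.0, 0.1, 900.0, 0.99

def combustion_step(fuel, temperature, dt=0.1):
    burning = temperature > T_IGNITE           # voxels above ignition point
    burned = np.where(burning, np.minimum(fuel, BURN_RATE * dt), 0.0)
    fuel = fuel - burned                       # consume fuel where burning
    # Released heat raises the local temperature; everything slowly cools.
    temperature = temperature * COOLING + HEAT_YIELD * burned
    return fuel, temperature

for _ in range(100):
    fuel, temperature = combustion_step(fuel, temperature)
```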

5.
The performance of an automatic facial expression recognition system can be significantly improved by modeling the reliability of different streams of facial expression information using multistream hidden Markov models (HMMs). In this paper, we present an automatic multistream HMM facial expression recognition system and analyze its performance. The proposed system uses facial animation parameters (FAPs), supported by the MPEG-4 standard, as features for facial expression classification. Specifically, the FAPs describing the movement of the outer-lip contours and eyebrows are used as observations. Experiments are first performed with single-stream HMMs under several scenarios, using outer-lip and eyebrow FAPs individually and jointly. A multistream HMM approach is then proposed that introduces stream reliability weights dependent on the facial expression and FAP group. The stream weights are determined from the facial expression recognition results obtained when the FAP streams are used individually. The proposed multistream HMM facial expression system, which uses stream reliability weights, achieves a 44% relative reduction in facial expression recognition error compared to the single-stream HMM system.
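A minimal sketch of the multistream combination rule: per-stream HMM log-likelihoods are weighted by reliability and summed, and the expression with the highest combined score wins. The scores and weights below are hypothetical; in the real system the log-likelihoods come from the forward algorithm on FAP observation streams.

```python
import numpy as np

EXPRESSIONS = ["joy", "sadness", "anger", "disgust", "fear", "surprise"]

# Hypothetical log-likelihoods of each expression under each stream's HMMs.
loglik = {
    "lips":     np.array([-120.0, -150.0, -140.0, -135.0, -160.0, -125.0]),
    "eyebrows": np.array([-130.0, -128.0, -145.0, -150.0, -133.0, -122.0]),
}

# Stream reliability weights, set from single-stream recognition accuracy
# and normalized to sum to one (values here are illustrative).
weights = {"lips": 0.6, "eyebrows": 0.4}

combined = sum(weights[s] * loglik[s] for s in loglik)
print("recognized:", EXPRESSIONS[int(np.argmax(combined))])
```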

6.
7.
This article outlines the development of a DRE at Purdue University, which faculty and students are currently beta testing. The project's goal is to make DREs available to faculty and students throughout the US at no cost. The design addresses several pedagogical paradigms that are critical for enhanced learning outcomes. Central to the development of Purdue's DRE is the notion of an enhanced, positive user experience. To this end, the project team is focusing on engineering methods for one-touch job deployment; ease of job-status monitoring; anywhere, anytime job submission; rendering and animation error correction; and reduced job wait times.

8.
9.
This paper describes a technique for the automatic adaptation of a canonical facial model to data obtained with a 3D laser scanner. The facial model is a B-spline surface with 13×16 control points. We introduce a technique by which this canonical model is fitted to the scanned data while taking into account the requirements of facial expression animation. The animation of facial expressions is based on the facial action coding system (FACS). Using B-splines in combination with FACS, we automatically create the impression of moving skin. To increase the realism of the animation, we map textural information onto the B-spline surface.
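A minimal sketch of fitting a smooth B-spline surface to scanned range data, in the spirit of the adaptation described above. Note the paper fits a fixed 13×16 control grid, whereas scipy chooses knots itself, so this only approximates that setup; the synthetic depth samples stand in for scanner output.

```python
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(1)
# Stand-in for scan data: depth z sampled over a (u, v) parameter grid.
u, v = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
z = np.sin(3 * u) * np.cos(2 * v) + 0.01 * rng.normal(size=u.shape)

# Fit a bicubic B-spline surface; s controls the smoothing tradeoff.
tck = interpolate.bisplrep(u.ravel(), v.ravel(), z.ravel(), kx=3, ky=3, s=0.5)

# Evaluate the fitted surface on a finer grid for rendering/animation.
surface = interpolate.bisplev(np.linspace(0, 1, 100),
                              np.linspace(0, 1, 100), tck)
```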

10.
In this paper, we propose a novel gender classification framework that uses not only facial features but also external information, i.e., hair and clothing. Instead of using the whole face, we consider five facial components: forehead, eyes, nose, mouth, and chin. We also design feature extraction methods for hair and clothing; these features have seldom been used in previous work because of their large variability. For each type of feature, we train a single support vector machine classifier with probabilistic output. The outputs of these classifiers are combined using various strategies, namely the fuzzy integral and the max, sum, voting, and product rules. The major contributions of this paper are (1) investigating the gender-discriminative ability of clothing information; (2) using facial components instead of the whole face to obtain higher robustness to occlusion and noise; and (3) exploiting hair and clothing information to facilitate gender classification. Experimental results show that our proposed framework improves classification accuracy, even when images contain occlusions, noise, and illumination changes.
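A minimal sketch of the fusion scheme: one probabilistic-output SVM per feature stream, combined with the sum or product rule. The random features below are stand-ins for the real component, hair, and clothing descriptors.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 200
y = rng.integers(0, 2, size=n)                        # 0 = female, 1 = male
streams = {name: rng.normal(size=(n, 16)) + y[:, None] * 0.5
           for name in ["eyes", "nose", "mouth", "hair", "clothing"]}

# Train a probabilistic-output SVM per feature stream.
clfs = {name: SVC(probability=True).fit(X, y) for name, X in streams.items()}

def fuse(sample_per_stream, rule="product"):
    """Combine per-stream class posteriors with the sum or product rule."""
    probs = np.array([clfs[s].predict_proba(sample_per_stream[s])[0]
                      for s in clfs])
    score = probs.sum(axis=0) if rule == "sum" else probs.prod(axis=0)
    return int(np.argmax(score))

test = {name: X[:1] for name, X in streams.items()}
print("predicted gender class:", fuse(test, rule="product"))
```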

11.
This paper describes a new method for generating facial animation in which facial expression and shape can be changed simultaneously in real time. A 2D parameter space independent of facial shape is defined, on which facial expressions are superimposed so that the expressions can be applied to various facial shapes. A facial model is transformed by bilinear interpolation, which enables rapid changes of facial expression combined with metamorphosis. The practical efficiency of this method has been demonstrated by a real-time animation system based on it, used in live theater.
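A minimal sketch of a 2D expression parameter space of the kind described above: four key shapes sit at the corners of the unit square, and a point (s, t) blends them bilinearly, independent of the underlying facial shape. The meshes here are random stand-ins for real key-expression geometry.

```python
import numpy as np

n_vertices = 1000
# Hypothetical key expressions, e.g. neutral, smile, anger, surprise.
corners = np.random.rand(4, n_vertices, 3)

def bilinear_expression(corners, s, t):
    """Bilinear interpolation over corners at (0,0), (1,0), (0,1), (1,1)."""
    w = np.array([(1 - s) * (1 - t), s * (1 - t), (1 - s) * t, s * t])
    return np.tensordot(w, corners, axes=1)

# Sweeping (s, t) over time morphs the expression in real time.
face = bilinear_expression(corners, s=0.3, t=0.8)
```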

12.
We present in this paper a new facial feature localizer. It uses an auto-associative neural network trained to localize specific facial features (such as eye and mouth corners) in orientation-free face images, i.e., images in which faces are rotated both in-plane and out-of-plane. To increase localization accuracy, two extensions are presented. The first uses space displacement neural networks instead of classical fully connected networks. The second combines several specialized networks, each trained to deal with one face orientation; a gating network is then used for the combination. Finally, a two-stage localizer is presented, which increases speed. A thorough evaluation is performed, including sensitivity to identity, noise, and occlusions. The mean localization error (estimated on more than 4000 test images) is about 15%, and the system can process 40 images/s.
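A minimal sketch of the gating idea: several orientation-specialized localizers each propose feature coordinates, and a gating network's softmax weights blend them. The experts and the gate below are stand-in functions, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in orientation experts, each returning (x, y) for one feature.
def expert_frontal(img): return rng.normal([30.0, 40.0], 1.0, size=2)
def expert_left(img):    return rng.normal([28.0, 41.0], 1.0, size=2)
def expert_right(img):   return rng.normal([32.0, 39.0], 1.0, size=2)

def gating_weights(img):
    """Stand-in for the gating network's orientation posterior (softmax)."""
    logits = np.array([2.0, 0.5, 0.1])      # would come from the gate NN
    e = np.exp(logits - logits.max())
    return e / e.sum()

def localize(img):
    preds = np.stack([f(img) for f in (expert_frontal, expert_left, expert_right)])
    return gating_weights(img) @ preds      # weighted average of experts

print("left-eye estimate:", localize(np.zeros((64, 64))))
```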

13.
14.
Multimedia Tools and Applications - The color of skin is one of the key indicators of bodily changes that affect facial expressions. The skin colour tenacity is majorly determined by the light...

15.
Multimedia Tools and Applications - In this paper, an approach for Facial Expressions Recognition (FER) based on a multi-facial patches (MFP) aggregation network is proposed. Deep features are...

16.
Physics-based animation programs can often be modeled in terms of hybrid automata. A hybrid automaton includes both discrete and continuous dynamical variables. The discrete variables define the automaton's modes of behavior. The continuous variables are governed by mode-dependent differential equations. This paper describes a system for specifying and automatically synthesizing physics-based animation programs based on hybrid automata. The system presents a program developer with a family of parameterized specification schemata. Each schema describes a pattern of behavior as a hybrid automaton passes through a sequence of modes. The developer specifies a program by selecting one or more schemata and supplying application-specific instantiation parameters for each of them. Each schema is associated with a set of axioms in a logic of hybrid automata. The axioms serve to document the semantics of the specification schema. Each schema is also associated with a set of implementation rules. The rules synthesize program components implementing the specification in a general physics-based animation architecture. The system allows animation programs to be developed and tested in an incremental manner. The system itself can be extended to incorporate additional schemata for specifying new patterns of behavior, along with new sets of axioms and implementation rules. It has been implemented and tested on over a dozen examples. We believe this research is a significant step toward a specification and synthesis system that is flexible enough to handle a wide variety of animation programs, yet restricted enough to permit programs to be synthesized automatically.
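A minimal sketch of a hybrid automaton in the sense described above, using the classic bouncing-ball example (not taken from the paper): one continuous mode (free fall, governed by an ODE) and a discrete transition (the bounce) whose guard and reset map change the continuous state.

```python
G, RESTITUTION, DT = -9.8, 0.8, 0.001

def simulate(y=1.0, v=0.0, t_end=3.0):
    t, trace = 0.0, []
    while t < t_end:
        # Continuous dynamics: integrate dy/dt = v, dv/dt = G (forward Euler).
        y += v * DT
        v += G * DT
        # Discrete transition: guard (y <= 0, moving down) fires the bounce,
        # and the reset map reflects and damps the velocity.
        if y <= 0.0 and v < 0.0:
            y, v = 0.0, -RESTITUTION * v
        trace.append((t, y))
        t += DT
    return trace

trajectory = simulate()
```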

17.
We have coupled NaSt3DGPF, a three-dimensional solver for the two-phase incompressible Navier-Stokes equations, with Autodesk Maya, the industry-standard software framework for creating three-dimensional animations. The parallel, level-set-based fluid solver NaSt3DGPF simulates the interaction of two fluids, such as air and water, using high-order finite difference discretization methods designed for physics applications. By coupling the two applications, we can now set up scientific fluid simulations in an easy-to-use user interface. Moreover, the rendering techniques provided by Maya allow us to create photorealistic visualizations of computational fluid dynamics problems and support the creation of highly realistic fluid simulations for animated movies. Altogether, we obtain an easy-to-use, fully coupled fluid animation toolkit for two-phase fluid simulations. To our knowledge, these are the first published results of fully integrating a physics-oriented, high-order, grid-based parallel two-phase fluid solver into Maya.

18.
We present a new method for using texture and color to visualize multivariate data elements arranged on an underlying height field. We combine simple texture patterns with perceptually uniform colors to increase the number of attribute values we can display simultaneously. Our technique builds multicolored perceptual texture elements (or pexels) to represent each data element. Attribute values encoded in an element are used to vary the appearance of its pexel. Texture and color patterns that form when the pexels are displayed can be used to rapidly and accurately explore the dataset. Our pexels are built by varying three separate texture dimensions: height, density, and regularity. Results from computer graphics, computer vision, and human visual psychophysics have identified these dimensions as important for the formation of perceptual texture patterns. The pexels are colored using a selection technique that controls color distance, linear separation, and color category. Proper use of these criteria guarantees colors that are equally distinguishable from one another. We describe a set of controlled experiments that demonstrate the effectiveness of our texture dimensions and color selection criteria. We then discuss new work that studies how texture and color can be used simultaneously in a single display.
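A minimal sketch of the pexel mapping: each data element's attribute values drive a texture element's height, density, and regularity, plus a color drawn from a fixed palette of distinguishable colors. The mapping ranges and the palette are illustrative, not the paper's calibrated values.

```python
import numpy as np

# Hypothetical palette standing in for perceptually selected colors (RGB).
COLORS = [(0.9, 0.2, 0.2), (0.2, 0.6, 0.9), (0.3, 0.8, 0.3), (0.9, 0.8, 0.2)]

def make_pexel(attrs):
    """attrs: four attribute values normalized to [0, 1]."""
    a_height, a_density, a_regularity, a_color = attrs
    return {
        "height": 0.5 + 1.5 * a_height,             # element height
        "density": int(1 + 8 * a_density),          # strips per element
        "regularity": 1.0 - a_regularity,           # placement jitter amount
        "color": COLORS[min(int(a_color * len(COLORS)), len(COLORS) - 1)],
    }

pexel = make_pexel(np.array([0.7, 0.2, 0.9, 0.4]))
```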

19.
Automatic perception of human affective behaviour from facial expressions, together with recognition of intentions and social goals from dialogue context, would greatly enhance natural human-robot interaction. This research concentrates on intelligent neural-network-based facial emotion recognition and Latent Semantic Analysis based topic detection for a humanoid robot. The work first incorporates the Facial Action Coding System, which describes physical cues and anatomical knowledge of facial behaviour, for the detection of neutral and the six basic emotions from real-time posed facial expressions. Feedforward neural networks (NNs) implement the upper- and lower-face Action Unit (AU) analysers, recognising six upper and 11 lower facial actions, including Inner and Outer Brow Raiser, Lid Tightener, Lip Corner Puller, Upper Lip Raiser, Nose Wrinkler, and Mouth Stretch. An artificial neural network based facial emotion recogniser then takes the 17 derived Action Units as inputs to decode neutral and the six basic emotions from facial expressions. Moreover, to enable the robot to respond appropriately to the detected affective facial behaviours, Latent Semantic Analysis is used to focus on the underlying semantic structure of the data and go beyond linguistic restrictions to identify topics embedded in the users' conversations. The overall development is integrated with a modern humanoid robot platform under its Linux C++ SDKs. The work presented here shows great potential for developing personalised intelligent agents/robots with emotional and social intelligence.
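A minimal sketch of the pipeline's final step: a feedforward network maps the 17 detected Action Unit activations to neutral plus the six basic emotions. The training data here are random stand-ins, so the sketch shows only the shape of the classifier, not its real accuracy.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["neutral", "anger", "disgust", "fear", "joy", "sadness", "surprise"]

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(500, 17)).astype(float)   # AU on/off vectors
y = rng.integers(0, len(EMOTIONS), size=500)           # stand-in labels

recognizer = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# At run time, the upper/lower AU analysers produce the 17-dim AU vector.
au_vector = rng.integers(0, 2, size=(1, 17)).astype(float)
print("detected emotion:", EMOTIONS[int(recognizer.predict(au_vector)[0])])
```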

20.
There are still many challenging problems in facial gender recognition, mainly owing to the complex variation in face appearance. Although there has been tremendous research effort to develop robust gender recognition over the past decade, none has explicitly exploited the domain knowledge of the differences in appearance between male and female faces. A moustache contributes substantially to this difference and could be a good feature to incorporate into facial gender recognition, yet little work on moustache segmentation has been reported in the literature. In this paper, a novel real-time moustache detection method is proposed that combines face feature extraction, image decolorization, and texture detection. Image decolorization converts a color image to grayscale while enhancing color contrast and preserving grayscale structure. A moustache, in turn, normally appears achromatic and is surrounded by skin-colored facial tissue, so decolorization offers a fast and efficient way to segment it. To make the algorithm robust to variations in illumination and head pose, an adaptive decolorization segmentation is proposed in which both the selection of the segmentation threshold and the tracking of the moustache region are guided by special regions defined by their geometric relationship to salient facial features. Furthermore, a texture-based moustache classifier is developed to compensate for the decolorization-based segmentation, which can mistake darker skin or shadows around the mouth, caused by smile lines or thicker skin, for a moustache. A face is verified as containing a moustache only when (1) a sufficiently large moustache region is found by the decolorization segmentation, and (2) the segmented region is classified as moustache by the texture-based detector. Experimental results on the color FERET database show that the proposed approach achieves an 89% moustache detection rate at a 0.1% false acceptance rate. By incorporating the moustache detector into a facial gender recognition system, gender recognition accuracy on a large database improved from 91% to 93.5%.
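A minimal sketch of the adaptive decolorization-based segmentation step: within a region of interest below the nose (located from facial landmarks in the real system), dark, nearly achromatic pixels are segmented as candidate moustache, with the threshold adapted from a reference skin patch. The decolorization here is a plain channel mean and all thresholds are illustrative.

```python
import numpy as np

def moustache_mask(rgb_roi, skin_patch):
    """rgb_roi, skin_patch: float arrays in [0, 1], shape (h, w, 3)."""
    gray = rgb_roi.mean(axis=2)                  # simple decolorization
    skin_level = skin_patch.mean()               # reference skin brightness
    threshold = 0.6 * skin_level                 # adaptive threshold (assumed)
    saturation = rgb_roi.max(axis=2) - rgb_roi.min(axis=2)
    # Moustache pixels: clearly darker than skin and nearly achromatic.
    return (gray < threshold) & (saturation < 0.15)

roi = np.random.rand(40, 80, 3)                  # stand-in mouth/nose region
skin = np.full((10, 10, 3), 0.7)                 # stand-in skin patch
has_moustache = moustache_mask(roi, skin).mean() > 0.2   # coverage heuristic
```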
