Similar Literature
20 similar records found (search time: 31 ms)
1.
Bilinear Models for 3-D Face and Facial Expression Recognition
In this paper, we explore bilinear models for jointly addressing 3-D face and facial expression recognition. An elastically deformable model algorithm that establishes correspondence among a set of faces is proposed first and then bilinear models that decouple the identity and facial expression factors are constructed. Fitting these models to unknown faces enables us to perform face recognition invariant to facial expressions and facial expression recognition with unknown identity. A quantitative evaluation of the proposed technique is conducted on the publicly available BU-3DFE face database in comparison with our previous work on face recognition and other state-of-the-art algorithms for facial expression recognition. Experimental results demonstrate an overall 90.5% facial expression recognition rate and an 86% rank-1 face recognition rate.
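As a rough illustration of the decoupling idea (not the paper's fitting procedure), the following Python sketch factors a stack of face feature vectors into expression-specific bases and expression-invariant identity codes via a single SVD, in the spirit of asymmetric bilinear models; all sizes and data are placeholders.

```python
import numpy as np

# Hypothetical data: observations y_{e,i} ~ A_e @ b_i, where A_e is an
# expression-specific basis and b_i an identity code. Rank J is assumed.
rng = np.random.default_rng(0)
n_id, n_expr, n_dim, J = 10, 6, 300, 5
Y = rng.standard_normal((n_id, n_expr, n_dim))   # (identity, expression, feature)

# Stack per-expression blocks and factor with one SVD.
Y_stack = Y.transpose(1, 2, 0).reshape(n_expr * n_dim, n_id)
U, S, Vt = np.linalg.svd(Y_stack, full_matrices=False)
A = (U[:, :J] * S[:J]).reshape(n_expr, n_dim, J)  # expression-specific bases
B = Vt[:J]                                        # identity codes (J x n_id)

def identity_code(y, e):
    """Expression-invariant identity code of a new face y whose
    expression index e is known: least-squares fit against A_e."""
    return np.linalg.lstsq(A[e], y, rcond=None)[0]

code = identity_code(Y[3, 2], 2)   # should resemble B[:, 3]
```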

2.
3.

Generating dynamic 2D image-based facial expressions is a challenging task in facial animation. Much research has focused on performance-driven facial animation from given videos or images of a target face, while animating a single face image driven by emotion labels is a less explored problem. In this work, we treat animating a single face image from emotion labels as a conditional video prediction problem and propose a novel framework combining factored conditional restricted Boltzmann machines (FCRBM) and a reconstruction contractive auto-encoder (RCAE). A modified RCAE with an associated efficient training strategy is used to extract low-dimensional features and reconstruct face images. The FCRBM serves as the animator, predicting the facial expression sequence in feature space given discrete emotion labels and a frontal neutral face image as input. Quantitative and qualitative evaluations on two facial expression databases, and comparisons with the state of the art, show the effectiveness of our proposed framework for animating a frontal neutral face image from given emotion labels.
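The contractive penalty that distinguishes an RCAE from a plain auto-encoder has a simple closed form for sigmoid units. Below is a minimal numpy sketch of that loss for one tied-weight layer; the FCRBM predictor, the training loop, and the paper's specific modifications are omitted, and all shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rcae_loss(x, W, b, c, lam=0.1):
    """Reconstruction + contractive penalty for one tied-weight sigmoid
    autoencoder layer. For sigmoid units the encoder Jacobian is
    diag(h * (1 - h)) @ W, so its squared Frobenius norm has a closed form."""
    h = sigmoid(W @ x + b)                # low-dimensional feature code
    x_hat = sigmoid(W.T @ h + c)          # reconstructed face image
    recon = np.sum((x - x_hat) ** 2)
    contract = np.sum((h * (1.0 - h)) ** 2 * (W ** 2).sum(axis=1))
    return recon + lam * contract

# Placeholder shapes: a 64x64 face flattened to 4096 pixels, 128 features.
rng = np.random.default_rng(0)
x = rng.random(4096)
W = 0.01 * rng.standard_normal((128, 4096))
loss = rcae_loss(x, W, np.zeros(128), np.zeros(4096))
```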


4.
5.
Reflectance from images: a model-based approach for human faces
In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and estimate their parameters in a least-squares fit from the image data. For each surface point, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered under novel orientations and lighting conditions.
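To make the least-squares step concrete: if the specular exponent is held fixed, a Lambert-plus-Phong model is linear in its diffuse and specular coefficients, so a per-region fit reduces to one linear solve. The sketch below illustrates this under that assumption; it is not the paper's BRDF models or data.

```python
import numpy as np

def fit_region_brdf(n, l, v, intensity, shininess=20.0):
    """Least-squares fit of diffuse (kd) and specular (ks) coefficients
    for one material region, given per-sample unit normal (n), light (l),
    and view (v) directions plus observed intensities."""
    diff = np.clip(np.einsum('ij,ij->i', n, l), 0.0, None)         # n . l
    r = 2.0 * diff[:, None] * n - l                                # mirror of l about n
    spec = np.clip(np.einsum('ij,ij->i', r, v), 0.0, None) ** shininess
    A = np.stack([diff, spec], axis=1)
    (kd, ks), *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return kd, ks

# Placeholder observations with random unit directions.
rng = np.random.default_rng(0)
def unit(x): return x / np.linalg.norm(x, axis=1, keepdims=True)
n, l, v = (unit(rng.standard_normal((100, 3))) for _ in range(3))
kd, ks = fit_region_brdf(n, l, v, rng.random(100))
```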

6.
Emotive audio–visual avatars are virtual computer agents with the potential to significantly improve the quality of human-machine interaction and human-human communication. However, the understanding of human communication has not yet advanced to the point where it is possible to make realistic avatars that demonstrate interactions with natural-sounding emotive speech and realistic-looking emotional facial expressions. In this paper, we propose the technical approaches of a novel multimodal framework leading to a text-driven emotive audio–visual avatar. Our primary work focuses on emotive speech synthesis, realistic emotional facial expression animation, and the co-articulation between speech gestures (i.e., lip movements) and facial expressions. A general framework for emotive text-to-speech (TTS) synthesis using a diphone synthesizer is designed and integrated into a generic 3-D avatar face model, under whose guidance we developed a realistic 3-D avatar prototype. A rule-based emotive TTS synthesis module based on the Festival-MBROLA architecture has been designed to demonstrate the effectiveness of the framework design. Subjective listening experiments were carried out to evaluate the expressiveness of the synthetic talking avatar.

7.
An approach to the analysis of dynamic facial images for the purposes of estimating and resynthesizing dynamic facial expressions is presented. The approach exploits a sophisticated generative model of the human face originally developed for realistic facial animation. The face model, which may be simulated and rendered at interactive rates on a graphics workstation, incorporates a physics-based synthetic facial tissue and a set of anatomically motivated facial muscle actuators. The estimation of dynamic facial muscle contractions from video sequences of expressive human faces is considered. An estimation technique that uses deformable contour models (snakes) to track the nonrigid motions of facial features in video images is developed. The technique estimates muscle actuator controls with sufficient accuracy to permit the face model to resynthesize transient expressions.
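A minimal sketch of the snake idea, assuming a simple greedy formulation with a continuity term and an edge-strength term (the paper's dynamic muscle-contraction estimator is considerably more elaborate):

```python
import numpy as np

def greedy_snake_step(pts, edge_map, alpha=0.5):
    """One greedy iteration of an active contour: each control point moves
    to the 8-neighbourhood pixel minimising a continuity term minus the
    local edge strength (e.g. gradient magnitude of the video frame)."""
    new_pts = pts.copy()
    h, w = edge_map.shape
    m = len(pts)
    for i in range(m):
        mid = 0.5 * (pts[i - 1] + pts[(i + 1) % m])   # neighbours' midpoint
        best, best_e = pts[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = pts[i, 0] + dy, pts[i, 1] + dx
                if not (0 <= y < h and 0 <= x < w):
                    continue
                e = alpha * ((y - mid[0])**2 + (x - mid[1])**2) - edge_map[y, x]
                if e < best_e:
                    best, best_e = np.array([y, x]), e
        new_pts[i] = best
    return new_pts

# Placeholder: a closed contour of 8 points on a random edge-strength map.
rng = np.random.default_rng(0)
contour = rng.integers(10, 50, size=(8, 2))
contour = greedy_snake_step(contour, rng.random((64, 64)))
```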

8.
Reanimating Faces in Images and Video

9.
We develop a new 3D hierarchical model of the human face. The model incorporates a physically-based approximation to facial tissue and a set of anatomically-motivated facial muscle actuators. Despite its sophistication, the model is efficient enough to produce facial animation at interactive rates on a high-end graphics workstation. A second contribution of this paper is a technique for estimating muscle contractions from video sequences of human faces performing expressive articulations. These estimates may be input as dynamic control parameters to the face model in order to produce realistic animation. Using an example, we demonstrate that our technique yields sufficiently accurate muscle contraction estimates for the model to reconstruct expressions from dynamic images of faces.

10.
Face and gesture recognition: overview
Computerised recognition of faces and facial expressions would be useful for human-computer interfaces, and provision for facial animation is to be included in the ISO standard MPEG-4 by 1999; this could also be used for face image compression. The technology could be used for personal identification and would be proof against fraud. Degrees of difference between people are discussed, with particular regard to identical twins. A particularly good feature for personal identification is the texture of the iris. One problem is that there is sometimes more difference between images of the same face, e.g., under different expression or illumination, than between images of different faces. Face recognition by the brain is also discussed.

11.
A Practical System for Customizing Specific Faces and Editing Facial Expression Animation
This paper presents a simple and effective editing system for customizing specific faces and their expression animation. Starting from a generic 3D polygonal face mesh with embedded muscle vectors and orthogonal front and profile images of a specific face, snake techniques automatically fit the facial feature lines, and the generic model is deformed to produce a 3D virtual face of the specific person. A multi-resolution spline technique then produces a seamless facial texture mosaic, yielding a highly realistic virtual face. Because the embedded muscle vectors provide a parameterized representation, editing the parameters endows the 3D virtual face with a rich variety of expressions. Since the deformation preserves the topology of the face model, triangulation-based image metamorphosis can realize 3D morphs between different specific faces. The system runs on inexpensive PC platforms and is fast, simple, and realistic, giving it great practical value.

12.
We present a novel performance‐driven approach to animating cartoon faces starting from pure 2D drawings. A 3D approximate facial model, automatically built from front- and side-view master frames of character drawings, enables the animated cartoon faces to be viewed from angles different from that of the input video. The expressive mappings are built by an artificial neural network (ANN) trained on examples of the real face in the video and the cartoon facial drawings in the facial expression graph for a specific character. The learned mapping model enables the resulting facial animation to achieve the desired expressiveness, instead of merely reproducing the facial actions in the input video sequence. Furthermore, the lit sphere, which captures the lighting in painted artwork of faces, is utilized to color the cartoon faces in terms of the 3D approximate facial model, reinforcing the hand‐drawn appearance of the resulting facial animation. We conducted a series of comparative experiments recreating facial expressions from commercial animation to test the effectiveness of our method. The results clearly demonstrate the superiority of our method both in generating high-quality cartoon-style facial expressions and in speeding up the animation production of cartoon faces. Copyright © 2011 John Wiley & Sons, Ltd.
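As a sketch of such a learned expressive mapping, the following uses scikit-learn's MLPRegressor to regress hypothetical cartoon drawing parameters from tracked face features; the feature and parameter spaces are placeholders, not the paper's representation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training pairs would come from the video / expression-graph examples;
# random placeholders stand in for them here.
rng = np.random.default_rng(0)
real_feats = rng.standard_normal((200, 30))       # tracked facial features
cartoon_params = rng.standard_normal((200, 12))   # cartoon drawing space
ann = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
ann.fit(real_feats, cartoon_params)
frame_params = ann.predict(real_feats[:1])        # drive one cartoon frame
```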

13.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
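A minimal sketch of the PIEES construction under the assumption that phoneme-based time-warping has already aligned the takes: subtract the neutral trajectory from each expressive take, then PCA-reduce the residual expression signals. All shapes are placeholders.

```python
import numpy as np

def build_piees(expressive, neutral, k=10):
    """Subtract the time-aligned neutral trajectory from each expressive
    take, then keep the top-k principal directions of the residual
    'pure expression' signals."""
    X = (expressive - neutral).reshape(len(expressive), -1)  # residuals
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # eigenspace basis of expression signals

# Placeholders: 50 takes, 120 frames, 90 marker coordinates per frame.
rng = np.random.default_rng(0)
takes = rng.standard_normal((50, 120, 90))
mean, basis = build_piees(takes, takes.mean(axis=0))
```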

14.
Recognizing expressions is a key part of human social interaction, and processing of facial expression information is largely automatic for humans, but it is a non-trivial task for a computational system. The purpose of this work is to develop computational models capable of differentiating between a range of human facial expressions. Raw face images are examples of high-dimensional data, so here we use two dimensionality reduction techniques: principal component analysis and curvilinear component analysis. We also preprocess the images with a bank of Gabor filters so that important features in the face images may be identified. Subsequently, the faces are classified using a support vector machine. We show that it is possible to differentiate faces with a prototypical expression from the neutral expression, and that we can achieve this with data that has been massively reduced in size: in the best case the original images are reduced to just 5 components. We also investigate effect size on face images, a measure that has not previously been reported for faces; this enables us to identify those areas of the face that are involved in the production of a facial expression.
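The pipeline in this abstract (Gabor filtering, PCA down to a handful of components, SVM classification) is straightforward to prototype. The sketch below assumes a single hand-built Gabor kernel and random placeholder data; the paper's curvilinear component analysis variant is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real part of a Gabor filter, used to emphasise facial features."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

# Faces -> Gabor responses (FFT convolution) -> 5 principal components -> SVM.
rng = np.random.default_rng(0)
faces = rng.standard_normal((40, 32, 32))     # placeholder face images
labels = rng.integers(0, 2, 40)               # expression vs. neutral
k = gabor_kernel(0.2, np.pi / 4)
feats = np.array([np.abs(np.fft.ifft2(np.fft.fft2(f) *
                  np.fft.fft2(k, f.shape))).ravel() for f in faces])
clf = make_pipeline(PCA(n_components=5), SVC(kernel='rbf')).fit(feats, labels)
```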

15.
杨璞  易法令  刘王飞  杨远发 《微机发展》2006,16(11):131-133
The human face is an important channel of human communication, carrying speech and complex expressions such as joy, anger, sorrow, and delight. Constructing and deforming realistic 3D face models is therefore a research hotspot in computer graphics, and generating realistic facial expressions and motions on a 3D face model is one of its difficulties. This paper introduces the Dirichlet Free-Form Deformation (DFFD) algorithm, based on Delaunay triangulations and Dirichlet/Voronoi diagrams, to address this problem. DFFD is described in detail and applied to deform a generic face according to the facial definition parameters of MPEG-4. A hierarchical control scheme using the facial definition parameters (FDPs) and the facial animation parameters (FAPs) is also proposed; this two-level arrangement of control points produces smooth deformations of the 3D face model, so that various facial expressions can be rendered smoothly and accurately.

16.
Generating virtual faces with lifelike expressions and natural poses quickly and in real time has long been a challenging research problem. This paper proposes a real-time facial expression transfer method that combines a 3D morphable model (3DMM) with a GAN. Given a performance video of the target face, a mapping is established between the performer's and the target's facial landmarks; an ordinary 2D RGB camera tracks the performer's landmarks in real time, a GAN generates the corresponding landmarks of the target virtual face, and the head pose is then estimated. The 3DMM reconstructs a 3D face model from the 2D input and renders the 2D facial expression at the current pose in real time, after which the performer's expression is fused with the target's to produce a lifelike target face. Comparative experiments show that the method yields more realistic facial expressions, can imitate the target face's true expressions, runs in real time, and offers greater flexibility for creating realistic videos. A verification method for the effect of facial expression transfer is also proposed, allowing the simulated faces to be evaluated objectively.

17.
Synthesizing the image of a 3-D scene as it would be captured by a camera from an arbitrary viewpoint is a central problem in computer graphics. Given a complete 3-D model, it is possible to render the scene from any viewpoint, but the construction of models is a tedious task. Here, we propose to bypass the model construction phase altogether and to generate images of a 3-D scene from any novel viewpoint using prestored images. Unlike methods presented so far, we completely avoid inferring and reasoning in 3-D by using projective invariants derived from corresponding points in the prestored images. The correspondences between features are established off-line in a semi-automated way. It is then possible to generate wireframe animation in real time on a standard computing platform, and well-understood texture mapping methods can be applied to the wireframes to realistically render new images from the prestored ones. The method proposed here should allow the integration of computer-generated and real imagery for applications such as walkthroughs in realistic virtual environments. We illustrate our approach on synthetic and real indoor and outdoor images.

18.
To generate facial expression animation driven by video, this paper proposes a performance-driven method for synthesizing 2D facial expressions. An active appearance model (AAM) locates the key points of the face, and the face's motion parameters are extracted from them; the face is divided into regions, and several sample images of the target face are acquired; interpolation coefficients for the sample images are derived from the motion parameters, and the sample images are linearly combined to synthesize the target face's expression image. The method is computationally simple, effective, and realistic, and can be applied in digital entertainment, video conferencing, and similar fields.
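The synthesis step reduces to a convex combination of sample images once the interpolation coefficients are known. A minimal sketch, with AAM tracking and coefficient estimation assumed done elsewhere:

```python
import numpy as np

def blend_expression(samples, weights):
    """Synthesize the target expression image as a convex combination of
    pre-captured sample images; weights are the interpolation
    coefficients derived from the tracked motion parameters."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    w /= w.sum()                              # convex interpolation weights
    return np.tensordot(w, samples, axes=1)   # weighted sum over samples

# Placeholders: four 64x64 grayscale sample images of the target face.
rng = np.random.default_rng(0)
samples = rng.random((4, 64, 64))
frame = blend_expression(samples, [0.1, 0.6, 0.2, 0.1])
```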

19.
A Texture-Feature-Oriented Method for Realistic 3D Facial Animation
Texture changes are an important component of facial expressions, yet traditional facial animation methods usually apply only simple stretching transforms to the texture image, ignoring changes in fine facial texture features such as wrinkles and dimples. This paper proposes a realistic 3D facial animation method oriented toward texture feature changes. The concept of the Partial Expression Ratio Image (PERI) and a method for obtaining it are introduced. On this basis, an MPEG-4-oriented PERI parameterization and a multi-directional PERI method for 3D facial animation are presented: the former, combined with the MPEG-4 facial animation parameters (FAPs), achieves a parameterized representation of subtle expression features in facial animation; the latter adjusts PERI texture features in multiple directions so that the 3D face model exhibits good subtle expression features from different viewing angles. The proposed method overcomes the defect of traditional facial animation, which considers only control of surface deformation while neglecting texture changes, and achieves realistic, texture-aware 3D facial animation with subtle expression features. Experiments show that the method effectively captures the details of texture changes and improves the realism of facial animation.
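A rough sketch of the ratio-image idea behind PERI, assuming aligned textures in [0, 1] and an optional region mask; the MPEG-4 parameterization and the multi-directional handling from the paper are not modeled here.

```python
import numpy as np

def apply_peri(src_neutral, src_expr, dst_neutral, mask=None, eps=1e-3):
    """The per-pixel ratio of an expressive to a neutral texture captures
    wrinkle-scale shading changes; multiplying it onto another face's
    neutral texture (within a local region mask) transfers those details."""
    ratio = src_expr / np.maximum(src_neutral, eps)
    if mask is not None:
        ratio = ratio * mask + (1.0 - mask)   # identity ratio outside region
    return np.clip(dst_neutral * ratio, 0.0, 1.0)

# Placeholders: aligned 64x64 textures in [0, 1].
rng = np.random.default_rng(0)
out = apply_peri(rng.random((64, 64)), rng.random((64, 64)),
                 rng.random((64, 64)))
```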

20.
A Method for Generating Realistic 3D Virtual Specific Faces
晏洁  高文  尹宝才 《计算机学报》1999,22(2):147-153
Computer generation of 3D faces remains a challenging topic. Modeling the complex, irregular surface of the face and capturing the individual differences between specific faces are the two main difficulties in realistic face simulation. Addressing the latter, this paper proposes a new method for generating specific 3D faces. Based on face-model deformation techniques, it allows the modeler to interactively align the features of a generic face geometry with multiple previously supplied images of a specific face, yielding a 3D model of that face which accurately reflects its characteristic features. Moreover, this deformation technique…
