Similar Literature
20 similar documents found (search time: 31 ms)
1.
Research on facial animation is as broad as the many interests and needs found in the general public and in television and film production. For Mac Guff Ligne, a company specializing in special effects and computer-generated images, the needs and constraints in this area are considerable. Morphing, which consists of mixing several expression models, is often used in facial animation. As we will see, the advantages of morphing are numerous, but the animation workload remains long and time-consuming. Our goal is to propose a fast and reliable animation tool based on the same morphing technique with which graphic artists are familiar. Our method inverts the classical morphing process, matching a real facial animation to a 3D facial animation by automatically computing the weights of the expression models. First, we discretize the real facial animation using a number of characteristic points. We then follow the path of each point by optical or magnetic motion capture, or through the filmed images. For each frame, this tracked animation is decomposed over a basis of characteristic expressions (joy, anger, etc.) that can be extracted automatically from the real animation during a calibration stage. That is, we express a simplified shape of the face as a linear combination of a series of basic faces. Finally, we can reintroduce the results of this decomposition into a more complex facial morph with a completely different topology and geometry. The user can thus complete and modify the resulting animation using a tool he knows well. This method, used in production at Mac Guff Ligne, has proved to be a solid, effective and easy-to-use working basis for facial animation.
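The per-frame decomposition this abstract describes, expressing a tracked face shape as a weighted sum of basis expressions, can be sketched as a least-squares solve. The function name, array shapes and toy data below are illustrative assumptions, not the paper's actual interface:

```python
import numpy as np

def expression_weights(frame_points, basis_shapes):
    """Solve for the blending weights of the basis expressions.

    frame_points: (N, 3) tracked characteristic points for one frame.
    basis_shapes: list of K (N, 3) arrays, one per basic expression.
    Returns the K weights whose linear combination best fits the frame.
    """
    A = np.stack([b.ravel() for b in basis_shapes], axis=1)  # (3N, K)
    w, *_ = np.linalg.lstsq(A, frame_points.ravel(), rcond=None)
    return w

# Toy example: a frame that is exactly halfway between two basis faces.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
smile   = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
frame   = 0.5 * neutral + 0.5 * smile
w = expression_weights(frame, [neutral, smile])  # ~[0.5, 0.5]
```

The recovered weights can then be applied to morph targets of any other topology, which is the "reintroduction" step the abstract mentions.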

2.
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses, while mimicking realistic facial expressions and intensities, is strongly needed in various applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often impractical for low-polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal-based applications, by generating a customized face as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can be easily cloned by using the same set of parameters. Through experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.
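The cloning idea above, reusing one parameter set on any target face that shares the parameterization, scaled by an intensity value, might be sketched as follows. The displacement-field representation and all names are assumptions for illustration; the paper works with embedded muscles rather than raw vertex deltas:

```python
import numpy as np

def clone_expression(target_neutral, target_deltas, params, intensity):
    """Apply a source expression's parameters to a target face.

    target_neutral: (N, 3) rest-pose vertices of the target face.
    target_deltas: (K, N, 3) per-parameter displacement fields.
    params: (K,) parameter values captured on the source face.
    intensity: scalar in [0, 1] scaling the expression magnitude.
    """
    offset = np.tensordot(intensity * params, target_deltas, axes=1)  # (N, 3)
    return target_neutral + offset

neutral = np.zeros((4, 3))
deltas = np.ones((2, 4, 3))  # two toy "muscle" displacement fields
face = clone_expression(neutral, deltas, np.array([0.2, 0.3]), 0.5)
```

Halving the intensity halves every displacement, which is how an intensity mapping lets the same parameter set produce graded versions of one expression.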

3.
Wu Xiaojun, Ju Guangliang. Acta Electronica Sinica, 2016, 44(9): 2141-2147
We propose a markerless facial expression capture method. First, a uniform face mesh model covering 85% of the facial features is generated from ASM (Active Shape Model) facial feature points. Second, an expression capture method is proposed on the basis of this face model: optical flow tracks the displacements of the feature points, with particle filtering stabilizing the tracking results; the feature-point displacements drive the overall mesh deformation as the initial value for mesh tracking, and a mesh deformation algorithm then drives the mesh itself. Finally, the captured expression data drive different face models, with different driving methods chosen according to each model's dimensionality to reproduce the expression animation. Experimental results show that the proposed algorithm captures facial expressions well, and that mapping the captured expressions onto both 2D cartoon faces and 3D virtual face models yields good animation results.

4.
We propose a novel approach for face tracking, resulting in a visual feedback loop: instead of trying to adapt a more or less realistic artificial face model to an individual, we construct from precise range data a specific texture and wireframe face model, whose realism allows the analysis and synthesis modules to visually cooperate in the image plane, by directly using 2D patterns synthesized by the face model. Unlike other feedback loops found in the literature, we do not explicitly handle the complex 3D geometric data of the face model, making real-time manipulation possible. Our main contribution is a complete face tracking and pose estimation framework, with few assumptions about the face's rigid motion (allowing large rotations out of the image plane), and without marks or makeup on the user's face. Our framework feeds the feature-tracking procedure with synthesized facial patterns, controlled by an extended Kalman filter. Within this framework, we present original and efficient geometric and photometric modelling techniques, and a reformulation of a block-matching algorithm that matches synthesized patterns against real images while avoiding background areas. We also offer numerical evaluations assessing the validity of our algorithms, and new developments in the context of facial animation. Our face-tracking algorithm may be used to recover the 3D position and orientation of a real face and generate an MPEG-4 animation stream to reproduce the rigid motion of the face with a synthetic face model. It may also serve as a pre-processing step for further facial expression analysis algorithms, since it locates the position of the facial features in the image plane and gives precise 3D information to account for the possible coupling between pose and expression in the analysed facial images.
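The abstract's tracking loop is controlled by an extended Kalman filter over the face's pose. As a much-reduced sketch of that control step, here is a linear Kalman filter on a 1D constant-velocity state (position and velocity); the state model, noise levels and measurement sequence are toy assumptions, not the paper's actual pose parameterization:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict the next state and its covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # only position is observed
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for t in range(1, 6):                    # target moving at unit speed
    x, P = kalman_step(x, P, np.array([float(t)]), F, H, Q, R)
```

After a few frames the filter's state settles near position 5 and velocity 1, illustrating how the filter smooths noisy per-frame feature measurements into a stable motion estimate.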

5.
This paper presents a novel view-based approach to quantify and reproduce facial expressions, by systematically exploiting the degrees of freedom allowed by a realistic face model. This approach embeds efficient mesh morphing and texture animations to synthesize facial expressions. We suggest using eigenfeatures, built from synthetic images, and designing an estimator to interpret the responses of the eigenfeatures on a facial expression in terms of animation parameters.

6.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video-game industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic model. The person-specific 3D face is finally synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method lies in the automatic generation of the 3D model and the synthesis of faces with different expressions from a single frontal neutral face image. Our method has the advantage of being fully automatic, robust and fast, and of generating various views of the face by rotating the 3D model. It can be used in a variety of applications for which depth accuracy is not critical, such as games, avatars and face recognition. We have tested and evaluated our system using a standard database, BU-3DFE.
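The transition step described above, 3D shape morphing between expression meshes combined with a cross-blend of their textures, reduces to linear interpolation under one parameter. This is a minimal sketch with illustrative array shapes; the paper's endpoint expressions come from MPEG-4 facial animation parameters:

```python
import numpy as np

def morph(verts_a, verts_b, tex_a, tex_b, t):
    """Blend two expression meshes and their textures at parameter t in [0, 1]."""
    verts = (1.0 - t) * verts_a + t * verts_b   # linear shape morph
    tex = (1.0 - t) * tex_a + t * tex_b         # texture cross-blend
    return verts, tex

va = np.zeros((3, 3))                  # toy "neutral" vertices
vb = np.ones((3, 3))                   # toy "smile" vertices
ta = np.zeros((2, 2, 3))               # toy neutral texture (RGB)
tb = np.full((2, 2, 3), 255.0)         # toy smile texture
verts, tex = morph(va, vb, ta, tb, 0.25)
```

Sampling t from 0 to 1 over successive frames yields the smooth expression transition; both meshes must share vertex ordering and texture resolution for the blend to be meaningful.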

7.
Animation and its enabling technologies currently receive wide attention in the industry, yet the realism of facial animations expressing joy, anger, sorrow and happiness is still insufficient. Building on the Waters muscle model, we propose a NURBS elastic muscle model that, guided by anatomical knowledge, simulates muscles with non-uniform rational B-spline curves. By changing the weights of the curve's control points, an action vector can be found to control the muscle's motion and thereby synthesize various facial expressions. The more control points there are, the finer the control over the muscle, and the more realistically facial expressions can be simulated.
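The weight mechanism this abstract relies on, raising a control point's weight to pull the curve toward it, can be demonstrated on a single rational Bezier segment (a NURBS curve with one span), evaluated by de Casteljau's algorithm on homogeneous coordinates. The control polygon and weights below are illustrative, not taken from the paper:

```python
import numpy as np

def rational_bezier(points, weights, t):
    """Evaluate a rational Bezier curve at t via de Casteljau on
    homogeneous coordinates (x*w, y*w, w)."""
    pts = np.column_stack([points * weights[:, None], weights]).astype(float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    h = pts[0]
    return h[:-1] / h[-1]   # project back from homogeneous coordinates

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
mid_low  = rational_bezier(ctrl, np.array([1.0, 1.0, 1.0]), 0.5)  # (1, 1)
mid_high = rational_bezier(ctrl, np.array([1.0, 4.0, 1.0]), 0.5)  # pulled up
```

With equal weights the midpoint sits at (1, 1); quadrupling the middle weight pulls it to (1, 1.6), which is exactly the "weight as muscle action" lever the model exploits.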

8.
Chen Na. Laser & Infrared, 2022, 52(6): 923-930
Reconstructing a 3D face model from a single face image is a highly challenging research direction in both computer graphics and visible-light imaging, and is of great significance for practical applications such as face recognition, face imaging and facial animation. To address the high complexity, heavy computational load, local optima and poor initialization of current algorithms, this paper proposes an automatic single-image-to-3D-face reconstruction algorithm based on a deep convolutional neural network. The algorithm first extracts dense information from the 2D face image using a 3D transformation model, then builds a deep convolutional network architecture and designs an overall loss function to learn the pixel-to-3D-coordinate mapping of the 2D face image directly, thereby constructing the 3D face model automatically. Comparisons and simulation experiments show that the algorithm achieves a lower normalized mean error in 3D face reconstruction and needs only a single 2D face image to reconstruct a 3D face model automatically. The generated 3D models are robust and accurate, fully preserve expression details, reconstruct faces in different poses well, and can be rendered freely from any angle in 3D space, meeting the needs of a wider range of practical applications.

9.
Real-time, realistic facial expression animation is a challenging topic in computer graphics. Addressing the computational complexity and coarse results of existing physical-model algorithms, this paper describes a combined morphing animation algorithm, implemented on the Direct3D platform, that couples a physical model with an expression-morphing algorithm, and presents the process of generating realistic expression animation with it. Experiments show that this method greatly enhances the realism of the generated facial expression animation.

10.
Human facial expressions are the most direct portrayal of changes in psychological state, and facial expressions vary greatly between individuals. Existing expression recognition methods distinguish expressions using statistical facial features and lack deep mining of facial detail information. According to psychologists' definition of facial action coding, the local details of a face determine the meaning of its expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Since facial expressions are strongly affected by image detail, a Gaussian pyramid is used to extract image detail information, and the image is detail-enhanced to strengthen the expression information. Given the local nature of facial expressions, a hierarchical local gradient feature computation is proposed to describe the local shape around facial feature points. Finally, a support vector machine (SVM) classifies the expressions. Experimental results on the CK+ expression database show not only that image detail plays an important role in facial expression recognition, but also that the method achieves very good results with small-scale training data, reaching an average recognition rate of 98.19%.
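The detail-enhancement idea above, low-pass the image, treat the residual as detail, and add back an amplified copy, can be sketched with a single Gaussian blur level standing in for the full multi-scale pyramid. Kernel size and gain are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel for separable filtering."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def enhance_details(img, sigma=1.0, gain=1.5):
    """Amplify the high-frequency residual of a grayscale image."""
    k = gaussian_kernel1d(sigma, radius=3)
    # Separable blur: convolve rows, then columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    detail = img - blurred                      # high-frequency residual
    return np.clip(img + gain * detail, 0.0, 255.0)

img = np.zeros((9, 9))
img[4, 4] = 100.0                # a single bright "detail" pixel
out = enhance_details(img)       # the spike is boosted, flat areas untouched
```

In the full method this difference-of-Gaussians residual is taken at several pyramid levels, so that expression-relevant detail at multiple scales is strengthened before feature extraction.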

11.
Lifelike talking faces for interactive services
Lifelike talking faces for interactive services are an exciting new modality for man-machine interactions. Recent developments in speech synthesis and computer animation enable the real-time synthesis of faces that look and behave like real people, opening opportunities to make interactions with computers more like face-to-face conversations. This paper focuses on the technologies for creating lifelike talking heads, illustrating the two main approaches: model-based animation and sample-based animation. The traditional model-based approach uses three-dimensional wire-frame models, which can be animated from high-level parameters such as muscle actions, lip postures, and facial expressions. The sample-based approach, on the other hand, concatenates segments of recorded video instead of trying to model the dynamics of the animation in detail. Recent advances in image analysis enable the creation of large databases of mouth and eye images suited for sample-based animation. The sample-based approach tends to generate more natural-looking animations, at the expense of larger size and less flexibility than model-based animation. Besides lip articulation, a talking head must show appropriate head movements in order to appear natural. We illustrate how such "visual prosody" is analyzed and added to the animations. Finally, we present four applications where the use of face animation in interactive services results in engaging user interfaces and an increased level of trust between user and machine. Using an RTP-based protocol, face animation can be driven with only 800 bits/s in addition to the rate for transmitting audio.

12.
Scalable low-bit-rate video coding is vital for the transmission of video signals over wireless channels. A scalable model-based video coding scheme is proposed in this paper to achieve this. The paper mainly addresses automatic scalable face model design. First, a robust and adaptive face segmentation method is proposed, based on piecewise skin-colour distributions. 43 million skin pixels from 900 images are used to train the skin-colour model, which can identify skin-colour pixels reliably under different lighting conditions. Next, reliable algorithms are proposed for detecting the eyes, mouth and chin, which are used to verify face candidates. Then, based on the detected facial features and the muscular distribution of the human face, a heuristic scalable face model is designed to represent the rigid and non-rigid motion of the head and facial features. A novel motion estimation algorithm is proposed to estimate the object-model motion hierarchically. Experimental results illustrate the performance of the proposed algorithms for facial feature detection and the accuracy of the designed scalable face model in representing face motion.
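The skin-colour classification step can be sketched as a chrominance test: convert RGB to YCbCr and keep pixels whose Cb/Cr fall inside a skin range. The paper trains piecewise distributions from 43 million pixels; the fixed box thresholds below are a commonly used simplification, stated here as an assumption rather than the paper's learned model:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-coloured pixels via a fixed Cb/Cr box.

    rgb: (H, W, 3) uint8 image. Uses the BT.601 RGB->YCbCr chrominance
    equations; the [77, 127] x [133, 173] box is a common heuristic range.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 140, 120)   # a typical skin tone
img[1, 1] = (0, 0, 255)       # pure blue
mask = skin_mask(img)
```

Working in Cb/Cr rather than RGB discards most of the luminance variation, which is why such models stay reasonably stable under different lighting conditions.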

13.
Li Xiaofeng, Zhao Hai, Ge Xin, Cheng Xianyong. Acta Electronica Sinica, 2010, 38(5): 1167-1171
Owing to the uncertainty of the environment and the complexity of the human face, tracking facial expressions and depicting them graphically is a difficult problem. For this problem, we propose a relatively simple solution, distinct from traditional approaches such as pattern recognition and sample learning. Under video capture, the frame images are analysed; after comparing several edge-detection methods, a facial expression modelling method based on edge-feature extraction is adopted to extract and model the facial features used for expression depiction. Combined with curve fitting and model control, cartoon face generation and 2D expression animation are then produced. The system generates a cartoon portrait from the input data and faithfully reproduces the changes in expression.

14.
This paper presents a new method in computer facial animation that models facial deformation by modifying the NURBS curve. In this work, displacement vectors of curve points are decomposed into physically meaningful components. Three independent vectors are determined for each curve sample point and the directions of the vectors possess anatomic meaning. The movement of the curve point is then resolved into components along the three vectors. These components can be processed individually to simulate a real face, and then summed again into a vector. The resultant vector can thus affect the polygonal vertices of a face model that are associated with the curve point. This method represents an extension of our previous work in computer facial animation. Experiments suggest this technique can be used to facilitate subtle modeling and animation of facial deformation.
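The resolve-process-recombine step this abstract describes can be sketched as solving for a displacement's coefficients along three independent direction vectors, scaling each coefficient individually, and summing the result back into one vector. The basis directions and per-component gains below are illustrative assumptions:

```python
import numpy as np

def decompose_and_recombine(disp, basis, scales):
    """Resolve a displacement along three directions, scale, and recombine.

    disp: (3,) displacement of a curve point.
    basis: (3, 3) rows are unit direction vectors (assumed independent).
    scales: (3,) gains applied to the individual components.
    """
    coeffs = np.linalg.solve(basis.T, disp)   # components along each vector
    return (scales * coeffs) @ basis          # recombined displacement

basis = np.eye(3)                             # orthonormal toy basis
disp = np.array([1.0, 2.0, 3.0])
out = decompose_and_recombine(disp, basis, np.array([1.0, 0.5, 0.0]))
```

Because each coefficient is processed independently before recombination, one anatomical direction (for example, motion normal to the skin) can be damped or exaggerated without disturbing the others.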

15.
16.
A 3D scanner can accurately capture the geometry and texture of a face, but the raw face scan data form only a single continuous surface: they do not match the actual structure of a face and cannot be used for facial animation. To address this, a face modelling method based on 3D scan data is proposed: a generic face model with a complete structure is first roughly fitted to the scan data, and detail-reconstruction techniques then recover the specific face's surface detail and skin texture. Experiments show that the 3D face models built with this method are realistic and structurally complete, and can generate continuous, natural expression animation.

17.
This letter presents a face normalization algorithm based on a 2D face model, to recognize faces with varying postures against front-view faces. A 2D face mesh model can be extracted from faces rotated to the left or right, and the corresponding front-view mesh model can be estimated according to facial symmetry. Based on the relationship between the two mesh models, the normalized front-view face is then formed by gray-level mapping. Finally, face recognition is performed using Principal Component Analysis (PCA). Experiments show that better face recognition performance is achieved in this way.
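The PCA recognition stage mentioned above follows the standard eigenface recipe: learn principal components from a gallery of face vectors, project gallery and probe into that subspace, and classify by nearest neighbour. The tiny synthetic "faces" below are illustrative stand-ins for normalized face images:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector samples; return the mean and top-k components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # rows of Vt are principal directions

def pca_project(x, mean, comps):
    """Project one sample into the learned subspace."""
    return comps @ (x - mean)

rng = np.random.default_rng(0)
# Two classes of toy 16-pixel "faces", clustered around 0 and around 1.
gallery = np.vstack([rng.normal(i, 0.1, size=(3, 16)) for i in range(2)])
labels = [0, 0, 0, 1, 1, 1]
mean, comps = pca_fit(gallery, k=2)
feats = np.array([pca_project(g, mean, comps) for g in gallery])

probe = rng.normal(1, 0.1, size=16)            # drawn from class 1
pf = pca_project(probe, mean, comps)
pred = labels[int(np.argmin(np.linalg.norm(feats - pf, axis=1)))]
```

In the letter's pipeline, the PCA operates on the pose-normalized front-view faces, so the subspace captures identity variation rather than pose variation.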

18.
This paper presents an overview of some of the synthetic visual objects supported by MPEG-4 version 1, namely animated faces and animated arbitrary 2D uniform and Delaunay meshes. We discuss both the specification and the compression of face animation and 2D-mesh animation in MPEG-4. Face animation makes it possible to animate a proprietary face model or a face model downloaded to the decoder. We also address integration of the face animation tool with the text-to-speech interface (TTSI), so that face animation can be driven by text input.

19.
A 3D facial reconstruction and expression modeling system which creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual’s 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth in only the associated subregion so that the user generates new composite expressions unseen in the original source video sequences. Such an interactive manipulation of dynamic 3D face reconstructions requires as little preparation on the test subject as possible. Dense depth data combined with video-based texture results in realistic and convincing facial animations, a feature lacking in conventional marker-based motion capture systems.

20.
An automatic facial motion image synthesis scheme driven by speech and a real-time image synthesis design are presented. The purpose of this research is to realize an intelligent human-machine interface or intelligent communication system with talking-head images. A human face is reconstructed on a terminal display using a 3D surface model and a texture mapping technique. Facial motion images are synthesized naturally by transforming the lattice points of 3D wire frames. Two driving methods are proposed: a text-to-image conversion scheme and a voice-to-image conversion scheme. With the first method, the synthesized head image can appear to speak given words and phrases naturally. With the second, mouth and jaw motions can be synthesized in synchronization with a speaker's voice signals. Facial expressions other than mouth shape and jaw position can be added at any moment, so it is easy to make the facial model appear angry, smile, appear sad, etc., through special modification rules. These schemes were implemented on a parallel image computer system, and a real-time image synthesizer was able to generate facial motion images on the display at TV video rate.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号