Similar Documents
20 similar documents found (search time: 93 ms)
1.
Graphical Models, 2007, 69(2): 106-118
Facial Motion Cloning (FMC) is a technique for transferring the motion of a virtual face (the source) to a mesh representing another face (the target), generally one with different geometry and connectivity. In this paper, we describe a novel method that combines Radial Basis Function (RBF) volume morphing with the encoding capabilities of the widely used MPEG-4 Face and Body Animation (FBA) international standard. First, we find the morphing function G(P) that precisely fits the shape of the source face to the shape of the target face. Then, all the MPEG-4 encoded movements of the source face are transformed by the same function G(P) and mapped to the corresponding vertices of the target mesh. In this straightforward way we obtain the whole set of MPEG-4 encoded facial movements for the target face in a short time. The resulting animatable version of the target face can perform generic facial animation stored in an MPEG-4 FBA data stream.
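The RBF fitting step behind a morphing function G(P) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the biharmonic kernel phi(r) = r is an assumed choice, the usual affine term is omitted, and the name `rbf_morph` is hypothetical.

```python
import numpy as np

def rbf_morph(src_landmarks, tgt_landmarks, points, eps=1e-9):
    """Fit an RBF warp G(P) that maps source landmarks onto target landmarks,
    then apply it to arbitrary points (illustrative sketch)."""
    def phi(r):
        return r  # biharmonic kernel phi(r) = r; an assumed kernel choice

    n = len(src_landmarks)
    # Pairwise distances between source landmarks
    D = np.linalg.norm(src_landmarks[:, None, :] - src_landmarks[None, :, :], axis=-1)
    A = phi(D) + eps * np.eye(n)  # small ridge for numerical stability
    # Per-axis RBF weights so that G maps each source landmark (almost) exactly
    W = np.linalg.solve(A, tgt_landmarks - src_landmarks)
    # Evaluate the warp at the query points
    d = np.linalg.norm(points[:, None, :] - src_landmarks[None, :, :], axis=-1)
    return points + phi(d) @ W
```

Applied to every vertex of the source mesh, such a warp produces the fitted target shape; applied to displaced landmark positions, it carries encoded movements across.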

2.
A Practical System for Person-Specific Face Customization and Expression-Animation Editing
This paper presents a simple and effective system for customizing person-specific faces and editing expression animation. First, a generic 3D polygonal face mesh with embedded muscle vectors is given, together with orthogonal frontal and profile images of a specific person; a snake technique then automatically fits the facial feature lines, and the generic model is deformed to customize a 3D virtual face. Next, a multiresolution spline technique produces a seamless facial texture mosaic, generating a highly realistic person-specific virtual face. Because the embedded muscle vectors are represented parametrically, editing the parameters endows the 3D virtual face with a rich variety of expressions. Since the deformation leaves the face model's topology unchanged, triangulation-based image metamorphosis can realize 3D morphs between different specific faces. The system runs on inexpensive PC platforms and is fast, simple, realistic, and of considerable practical value.

3.
Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human, social, and dramatic realism to computer games, films, and interactive multimedia, and is growing in use and importance. A high level of realism is demanded in applications such as computer games and cinema, yet authoring lip syncing with complex and subtle expressions remains difficult and fraught with problems. This research proposes a lip-syncing method for a realistic, expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face assumes during speech, and a method to produce the correct lip shape at the correct time. The paper presents a 3D face model designed to support lip syncing aligned with an input audio file; it deforms using a Raised Cosine Deformation (RCD) function grafted onto the input facial geometry. The face model is based on the MPEG-4 Facial Animation (FA) standard. The paper also proposes a method to animate the 3D face model over time, creating lip syncing from a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. Emotions are integrated by considering the Ekman model and Plutchik's wheel, together with emotive eye movements implemented through the Emotional Eye Movements Markup Language (EEMML), to produce a realistic 3D face model.
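A raised-cosine deformation of the kind named above can be sketched with a standard raised-cosine falloff; the paper's exact RCD formula and parameterization are not reproduced here, so the weight function and the name `raised_cosine_deform` are illustrative assumptions.

```python
import numpy as np

def raised_cosine_deform(vertices, center, radius, displacement):
    """Move vertices near `center` by `displacement`, weighted by a
    raised-cosine falloff that reaches zero at `radius` (assumed form)."""
    r = np.linalg.norm(vertices - center, axis=1)
    # w(r) = 0.5 * (1 + cos(pi * r / radius)) inside the radius, 0 outside
    w = np.where(r < radius, 0.5 * (1.0 + np.cos(np.pi * r / radius)), 0.0)
    return vertices + w[:, None] * displacement
```

The smooth falloff is what lets a single control displacement (e.g. at a lip landmark) deform the surrounding geometry without visible seams.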

4.
Rapid Customization of Specific Faces and Muscle-Driven Expression Animation
This paper provides a simple and effective method for realistically simulating specific human faces. First, given orthogonal frontal and profile images of a specific face and a generic 3D polygonal face model with embedded muscle vectors, a snake technique automatically fits the facial feature lines; based on the displacements of the specific face's feature lines relative to those of the generic model, variational interpolation deforms the generic mesh to fit the specific facial geometry. Next, a multiresolution spline technique produces a seamless facial texture mosaic, and texture mapping yields a highly realistic specific face that can be viewed from any direction. Finally, by combining the movements of the specific face's muscle vectors to deform the model, various expressions of the specific face are composed. The method runs on inexpensive PC platforms and is fast, simple, and realistic.

5.
This paper proposes a new 3D face animation model that is based on a 3D morphable face model and compatible with MPEG-4. Correspondence among prototype 3D faces is established by uniform mesh resampling, and the 3D facial animation rules defined in MPEG-4 drive the model to generate realistic facial animation automatically. Given a single face image, the model automatically reconstructs a realistic 3D face and, driven by FAP parameters, automatically generates facial animation.

6.
In this paper, we present a fully automatic, real-time approach for person-independent recognition of facial expressions from dynamic sequences of 3D face scans. In the proposed solution, a set of 3D facial landmarks is first detected automatically; then the local characteristics of the face in the neighborhoods of the landmarks, together with their mutual distances, are used to model the facial deformation. By training two hidden Markov models for each facial expression to be recognized and combining them into a multiclass classifier, an average recognition rate of 79.4% is obtained on the 3D dynamic sequences showing the six prototypical facial expressions of the Binghamton University 4D Facial Expression database. Comparisons with competing approaches on the same database show that our solution obtains effective results, with the advantage of being able to process facial sequences in real time.

7.
8.
Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka "spontaneous") facial expressions differ along several dimensions, including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and 3D video archives are therefore required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both the 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind available to the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial actions.

9.
3D Facial Expression Control Based on Free-Form Deformation
Facial expression control is an important topic in biometrics research. This paper proposes a method for synthesizing 3D facial expressions based on the FAP and FAT standards of MPEG-4. Key feature points are first defined on the face and the face is segmented into regions; facial expressions are then generated using a surface-oriented free-form deformation (SOFFD) method. For generating specific facial expressions, proportional blending of basis expressions is used. Experiments show that the method can effectively synthesize a variety of realistic facial expressions.

10.
Chen Jingying, Xu Ruyi, Liu Leyuan. Multimedia Tools and Applications, 2018, 77(22): 29871-29887

Facial expression recognition (FER) is important in vision-related applications. Deep neural networks demonstrate impressive performance for face recognition; however, this approach relies heavily on a great deal of manually labeled training data, which is not available for facial expressions in real-world applications. Hence, we propose a powerful facial feature for FER called the deep peak-neutral difference (DPND), defined as the difference between the deep representations of the fully expressive (peak) and neutral facial expression frames. The difference tends to emphasize the facial parts that change in the transition from the neutral to the expressive face, and to eliminate the face-identity information retained in the deep network, which was trained on a large-scale face-recognition dataset and fine-tuned for facial expression. Furthermore, unsupervised clustering and semi-supervised classification methods are presented to automatically acquire the neutral and peak frames from an expression sequence. The proposed feature achieved encouraging results on public databases, suggesting strong potential for recognizing facial expressions in real-world applications.
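The DPND feature itself reduces to a subtraction of two deep descriptors. A minimal sketch, assuming L2 normalization of the difference (the abstract does not specify one) and with `feat_peak`/`feat_neutral` standing in for outputs of the fine-tuned network:

```python
import numpy as np

def dpnd(feat_peak, feat_neutral):
    """Deep peak-neutral difference: expressive-frame descriptor minus the
    neutral-frame descriptor, L2-normalized (normalization is an assumption)."""
    d = np.asarray(feat_peak, dtype=float) - np.asarray(feat_neutral, dtype=float)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d
```

Note that any additive component shared by both descriptors, such as identity information encoded the same way in both frames, cancels in the subtraction, which is the intuition the abstract describes.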


11.
Facial anthropometry plays an important role in ergonomic applications, since most ergonomically designed products depend on stable and accurate human body measurement data. Our research automatically identifies human facial features based on three-dimensional geometric relationships, yielding a total of 67 feature points and 24 feature lines, more than the definitions associated with MPEG-4. In this study, we also verify the replicability, robustness, and accuracy of this feature set. Even with a lower-density point cloud from a non-dedicated head scanner, the method provides robust results, with 86.6% of points valid within a 5 mm range. For the main 31 feature points on the human face, 96.7% are valid within 5 mm.

12.
Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit face motion in isolation; and if the face animation is derived from motion capture, this is typically performed in a mo-cap booth while sitting relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often yields an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest poses without adding spurious forces; as a result, in the absence of external forces or rigid head motion, the facial performance exactly matches the artist-created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for material properties that change with muscle activation. The system automatically combines facial animation and head motion so that they are consistent, while preserving the original animation as closely as possible; it is easy to use and readily integrates with existing animation pipelines.
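The per-frame rest-pose idea can be illustrated with a single damped spring whose rest position is the current animation frame. This is a one-dimensional sketch under assumed constants (`k`, `c`, `dt` are illustrative), not the paper's solver, but it shows the key property: with no external force and the state already at the rest pose, the output exactly follows the artist's animation.

```python
import numpy as np

def step(x, v, rest, ext=0.0, dt=1.0 / 30.0, k=200.0, c=10.0):
    """One explicit step of a damped spring pulled toward the per-frame rest
    pose `rest`. If x == rest, v == 0 and ext == 0, the state is unchanged,
    so no spurious forces perturb the original animation."""
    a = k * (rest - x) - c * v + ext  # spring + damping + external force
    v = v + dt * a
    x = x + dt * v
    return x, v
```

External forces or head motion enter through `ext`, producing secondary dynamics (e.g. tissue bounce) layered on top of the blendshape animation.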

13.
Robust Shot Boundary Detection and Motion-Based Video Summarization
To meet the needs of applications such as content-based video indexing and retrieval, a video summarization method is proposed. First, robust shot boundary detection is performed: an initial detection computes color-histogram distances between adjacent frames, and false detections caused by camera motion are removed by analyzing inter-frame motion vectors. Shots are then classified by their motion-indication maps into static shots, shots containing object motion, and shots containing significant camera motion. Finally, a multi-instance distance measure between shots and an initialization method for the clustering algorithm are proposed; kernel k-means clustering is applied within each shot class, the shot closest to each cluster center is extracted as a key shot, and the key shots are ordered in time to form the video summary. Compared with existing methods, the proposed method performs more robust shot boundary detection, recognizes motion information within shots, and processes each shot class separately, strengthening the summarization capability of the video summary.
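The initial histogram-based detection step can be sketched as follows. The L1 distance and the fixed threshold are illustrative assumptions, and the subsequent motion-vector filtering stage is omitted:

```python
import numpy as np

def shot_boundaries(histograms, threshold=0.5):
    """Flag frame indices where the L1 distance between consecutive
    normalized color histograms exceeds `threshold`; these are candidate
    cuts, before motion-vector analysis removes camera-motion false hits."""
    h = np.asarray(histograms, dtype=float)
    d = 0.5 * np.abs(np.diff(h, axis=0)).sum(axis=1)  # per-transition distance
    return (np.where(d > threshold)[0] + 1).tolist()  # index of first frame of new shot
```

In the method above this candidate list would then be pruned by checking whether the inter-frame motion vectors explain the histogram change as camera motion.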

14.
王振. 《电脑与信息技术》, 2010, 18(5): 11-12, 37
Rendering facial wrinkles is an important factor in improving the realism of facial animation. This paper proposes a keyframe-based wrinkle animation method that describes wrinkle-animation keyframes using height maps, normal maps, and MPEG-4 facial animation parameters, and generates in-between wrinkle frames by interpolating the height and normal maps. The method places only modest demands on the complexity of the face mesh, and the synthesized wrinkle animation is both highly realistic and real-time.

15.
Facial Expression Image Warping Based on MPEG-4
To generate natural, realistic facial expressions in real time, a facial-expression image-warping method based on the MPEG-4 facial animation framework is proposed. The method first extracts 88 feature points from a face photograph using a face-alignment tool; on this basis, a standard face mesh is aligned and deformed to generate a person-specific triangular mesh. Corresponding key facial feature points and their neighboring associated feature points are then moved according to the facial animation parameters (FAPs), while the topology of the triangular mesh is kept unchanged under the action of multiple FAPs. Finally, facial texture is filled into all deformed triangles by affine transformation, producing the expression image defined by the FAPs. The input is a neutral face photograph and a set of facial animation parameters; the output is the corresponding facial expression image. To synthesize subtle expression movements and a virtual talking head, algorithms for generating eye-gaze expressions and inner-mouth texture detail are also designed. A subjective evaluation based on a 5-point mean opinion score (MOS) shows that expressions generated by this method score 3.67 for naturalness. Virtual-talking-head experiments show that the method has good real-time performance, averaging 66.67 fps on an ordinary PC, making it suitable for real-time video processing and facial animation generation.

16.
Reanimating Faces in Images and Video

17.
18.
We present four techniques for modeling and animating faces starting from a set of morph targets. The first technique obtains parameters to control individual facial components and learns the mapping from one type of parameter to another through machine learning. The second fuses visible speech and facial expressions in the lower part of the face. The third combines coarticulation rules with kernel smoothing techniques. Finally, a new 3D tongue model with flexible and intuitive skeleton controls is presented. Results for eight animated character models demonstrate that these techniques are powerful and effective.

19.
Image-based animation of facial expressions
We present a novel technique for creating realistic facial animations given a small number of real images and a few parameters for the in-between images. The scheme can also be used for reconstructing facial movies, where the parameters are extracted automatically from the images. The in-between images are produced without ever generating a three-dimensional model of the face. Since facial motion due to expressions is not well defined mathematically, our approach utilizes image patterns in facial motion, revealed by an empirical study that analyzed and compared image motion patterns in facial expressions. The major contribution of this work is showing how parameterized "ideal" motion templates can generate facial movies for different people and different expressions, where the parameters are extracted automatically from the image sequence. To test the quality of the algorithm, image sequences (one of which was taken from a TV news broadcast) were reconstructed, yielding movies hardly distinguishable from the originals. Published online: 2 October 2002. Correspondence to: A. Tal. This work was supported in part by the Israeli Ministry of Industry and Trade and the MOST Consortium.

20.
Facial Animation Retargeting Based on Spherical Parameterization
Animation retargeting is an important way to generate realistic facial animation. To retarget facial animation quickly, a retargeting algorithm based on spherical parameterization is proposed. The algorithm introduces spherical parameterization to guarantee that triangles do not overlap in the parameter domain, and then uses barycentric-coordinate interpolation to achieve simple, unambiguous retargeting. The algorithm not only significantly accelerates retargeting while preserving realism, but also reduces the number of key points that must be marked. In addition, automatic lip-contour segmentation and region partitioning can correct errors produced by direct interpolation. Experiments show that the new algorithm can retarget facial animation data with sparse key points acquired by motion capture.
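The barycentric-interpolation step can be sketched in 2D: compute a point's barycentric coordinates in its source-parameter triangle, then evaluate the same coordinates on the corresponding target triangle. Function names are illustrative, and the spherical-parameterization stage that produces the non-overlapping triangles is assumed done.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def remap_point(p, tri_src, tri_tgt):
    """Carry p from the source triangle to the corresponding target triangle
    by reusing its barycentric coordinates (unambiguous by construction,
    since the parameterization keeps triangles from overlapping)."""
    w = barycentric(p, *tri_src)
    return w @ np.asarray(tri_tgt)
```

Because each parameter-domain point lies in exactly one triangle, the lookup plus interpolation is simple and needs no conflict resolution.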
