1.
Rendering wrinkle features is one of the key factors in improving the realism of facial animation. This paper proposes a keyframe-based wrinkle animation method that describes wrinkle animation keyframes with height maps, normal maps, and MPEG-4 facial animation parameters, and generates the in-between frames by interpolating the height and normal maps. The method places only modest demands on the complexity of the face mesh, and the synthesized wrinkle animation is both highly realistic and suitable for real-time use.
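A minimal sketch of the in-between frame generation described above, assuming the height and normal maps are stored as NumPy arrays; the function name and array shapes are illustrative, not the paper's implementation:

```python
import numpy as np

def wrinkle_inbetween(height_a, height_b, normal_a, normal_b, t):
    """Blend two wrinkle keyframes at parameter t in [0, 1].

    height_*: (H, W) height maps; normal_*: (H, W, 3) normal maps.
    """
    height = (1.0 - t) * height_a + t * height_b
    normal = (1.0 - t) * normal_a + t * normal_b
    # Re-normalize the blended normals so they stay unit length.
    length = np.linalg.norm(normal, axis=-1, keepdims=True)
    return height, normal / np.maximum(length, 1e-8)
```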
2.
Expression generation is an important research topic in intelligent human-computer interfaces. Most image-based expression generation techniques operate directly on images, requiring no 3D modeling while retaining rich texture detail, which gives them clear advantages in realism and in reduced complexity; some of these methods also extend to 3D applications. Grouping the published research by the technical essence of each algorithm into morphing techniques, expression mapping, statistical methods, 2D mesh methods, facial-color expression, and performance-driven techniques, this paper discusses representative results in terms of concepts, theory, and methods, analyzes the characteristics and open problems of the different algorithms as well as future directions, and thereby provides a reference for related research.
3.
We present a novel data-driven skinning model, the rigidity-aware skinning (RAS) model, for simulating both active and passive 3D facial animation of different identities in real time. Our model builds upon a linear blend skinning (LBS) scheme, where the bone set and skinning weights are shared across diverse identities and learned from data via a sparse and localized skinning decomposition algorithm. Our model separates the animated face into an active expression and a passive deformation: the former is represented by an LBS-based multi-linear model learned from the FaceWarehouse data set, and the latter by a spatially varying as-rigid-as-possible deformation applied to the multi-linear model, whose rigidity parameters are learned from data by a novel rigidity estimation algorithm. Our RAS model is not only generic and expressive enough to faithfully model medium-scale facial deformation, but also compact and lightweight enough to generate vivid facial animation in real time. We validate the efficiency and effectiveness of the RAS model for real-time 3D facial animation and expression editing.
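The LBS scheme the RAS model builds on can be written compactly. Below is a minimal NumPy sketch of plain linear blend skinning under assumed array shapes, without the paper's learned bone set, multi-linear model, or rigidity estimation:

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_mats, weights):
    """Deform vertices with linear blend skinning.

    rest_verts: (V, 3) rest-pose vertex positions.
    bone_mats:  (B, 4, 4) per-bone affine transforms.
    weights:    (V, B) skinning weights; each row sums to 1.
    """
    V = rest_verts.shape[0]
    homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)  # (V, 4)
    # Transform every vertex by every bone: (B, V, 4).
    per_bone = np.einsum('bij,vj->bvi', bone_mats, homo)
    # Blend the per-bone results with the skinning weights: (V, 4).
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

In a learned decomposition such as the one the abstract describes, both `bone_mats` and `weights` would come from fitting the data rather than from a manual rig.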
4.
A new facial image morphing algorithm based on the Kohonen self-organizing feature map (SOM) is proposed to generate a smooth 2D transformation that reflects anchor point correspondences. Using only a 2D face image and a small number of anchor points, we show that the proposed morphing algorithm provides a powerful mechanism for processing facial expressions.
5.
6.
Shuang Liu, Xiaosong Yang, Zhao Wang, Zhidong Xiao, Jianjun Zhang. Computer Animation and Virtual Worlds, 2016, 27(3-4): 301-310
Facial expression transfer has been actively researched in the past few years. Existing methods either suffer from depth ambiguity or require special hardware. We present a novel marker‐less, real‐time facial transfer method that requires only a single video camera. We develop a robust model, adaptive to user‐specific facial data, that computes expression variations in real time and rapidly transfers them onto a target character from either images or videos. Our method can be applied to videos without prior camera calibration or focal adjustment. It enables realistic online facial expression editing and performance transfer in many scenarios, such as video conferencing, news broadcasting, and lip‐syncing for song performances. With low computational cost and hardware requirements, our method tracks a single user at an average of 38 fps and runs smoothly even in web browsers. Copyright © 2016 John Wiley & Sons, Ltd.
7.
To reuse the expression information contained in video, this paper proposes a method for constructing semantic expressions and optimizing semantic expression parameters. Semantic information about the facial expression is first defined from noisy, sparse feature points; the optimal expression parameters are then solved for in the semantic expression space, improving the realism of the resulting facial animation. The method requires neither camera calibration nor a pre-built 3D face model and expression basis for the performer, so besides reusing expressions from web video it can also support real-time online applications such as social networking. Experiments show that for source videos whose head pitch and yaw stay within [−15°, 15°], the method synthesizes stable, lifelike expression animation in real time.
8.
Cartoon Facial Portrait Generation Based on Feature Discovery
Features are extracted and statistically analyzed from 100 real photographs each of adult men and women to obtain the distribution of average facial features. A newly input face photograph is then compared against this distribution to discover its relatively prominent features, and a method combining active shape model feature extraction with feature line pairs automatically exaggerates those prominent features to produce a cartoon portrait of the subject. Experiments show the method benefits from a large face data set, automated feature extraction and discovery, and good deformation results.
9.
This paper proposes a flexible and practical texture mapping method aimed at future handheld mobile devices such as high-end phones and PDAs. The method takes only a single frontal face photograph as input, does not require exact alignment between model and texture, and extracts the facial texture under low resource budgets through simple interaction. An interactive mapping-adjustment scheme lets the user edit feature points on the model and their regions of influence, thereby defining local texture coordinates and achieving a satisfactory mapping. Experiments show the method is efficient and realistic, and can be used to produce realistic 3D facial expression animation.
10.
Facial Expression Synthesis from Multiple Expression Sources
An image warping method based on locally constrained offsets and relative offsets is proposed for facial expression synthesis. It automatically avoids the deformation of non-feature regions that afflicts current expression synthesis methods, and it matches the appearance of the face shape. On this basis, a facial expression synthesis technique driven by multiple expression sources is proposed: the face image is divided into three main expressive regions, each region is locally warped according to the weighted average of the relative offsets from multiple expression sources, and finally a weighted-average expression ratio image reinforces the expression. Experiments show that the resulting facial expressions are natural and lifelike; more importantly, adjusting the weight vector yields different expression patterns, enriching the expressiveness of the synthesis.
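A minimal sketch of the weighted-average offset blend described above, assuming per-source relative offsets are given for a set of feature points; the names and shapes are illustrative, and the paper's region partitioning and expression-ratio-image step are omitted:

```python
import numpy as np

def blend_expression_offsets(neutral_pts, source_offsets, weights):
    """Warp neutral feature points by a weighted average of offsets.

    neutral_pts:    (N, 2) feature points of the neutral face.
    source_offsets: (S, N, 2) relative offsets from S expression sources.
    weights:        (S,) blending weights; varying them mixes expressions.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to sum to 1
    mean_offset = np.einsum('s,snk->nk', weights, source_offsets)
    return neutral_pts + mean_offset
```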
11.
As a component of speech-driven facial animation, expression animation plays an important role in making facial animation more lifelike, yet prior work has not quantitatively analyzed the relationship between facial expression animation and speech. This paper studies that correlation using canonical correlation analysis (CCA) to quantify the intrinsic connection between the two and to draw intuitive, quantitative conclusions. The canonical correlation coefficients between expression animation and speech are first computed to measure their degree of correlation; the canonical loadings and canonical cross-loadings are then analyzed to uncover the connections between the individual components of each, and finally the stability of the conclusions is verified. The analysis shows a strong correlation between the two and reveals specific intrinsic links between the components of facial expression animation and acoustic speech features. The results offer a theoretical reference and an evaluation basis for speech-driven facial animation.
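The core computation, canonical correlation coefficients between paired animation and speech features, can be sketched with scikit-learn's CCA. The feature choices and dimensions below are assumptions, and random data stands in only to show the call pattern (real paired data would be needed to observe the strong correlation the paper reports):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical paired per-frame sequences: facial animation parameters
# (e.g. blendshape weights) and speech acoustic features (e.g. MFCCs).
rng = np.random.default_rng(0)
anim = rng.normal(size=(500, 12))    # 500 frames x 12 animation components
speech = rng.normal(size=(500, 13))  # 500 frames x 13 acoustic features

cca = CCA(n_components=3)
anim_c, speech_c = cca.fit_transform(anim, speech)

# Canonical correlation coefficients: the correlation of each pair of
# projected variates, analogous to the coefficients the paper analyzes.
coeffs = [np.corrcoef(anim_c[:, i], speech_c[:, i])[0, 1] for i in range(3)]
print(coeffs)
```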
12.
13.
Mouth images are difficult to synthesize because they vary greatly with illumination, the size and shape of the mouth opening, and especially the visibility of teeth and tongue. Conventional approaches such as manipulating a 3D model or warping images do not produce very realistic animation. To overcome these difficulties, we describe a method of producing large variations of mouth shape and gray-level appearance using a compact parametric appearance model that represents both shape and gray-level appearance. We find a high correlation between the shape model parameters and the gray-level model parameters, and design a shape appearance dependence mapping (SADM) strategy that converts one to the other. Once mouth shape parameters are derived from speech analysis, a proper full-mouth appearance can be reconstructed with SADM. Synthetic results for representative mouth appearances, shown in our experiments, are very close to real mouth images. The proposed technique can be integrated into a speech-driven face animation system. In effect, SADM can synthesize not only the mouth image but also various kinds of dynamic facial texture, such as furrows, dimples and cheekbone shadows.
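If the shape-appearance dependence is approximated as linear, a mapping from shape parameters to gray-level parameters can be fit by least squares. This is a minimal sketch under that assumption, not the paper's actual SADM formulation:

```python
import numpy as np

def fit_shape_to_appearance(shape_params, appearance_params):
    """Fit a linear map from shape to gray-level model parameters.

    shape_params:      (N, S) training shape-model parameters.
    appearance_params: (N, A) corresponding gray-level parameters.
    Returns (W, b) such that appearance ~= shape @ W + b.
    """
    X = np.concatenate([shape_params,
                        np.ones((shape_params.shape[0], 1))], axis=1)
    sol, *_ = np.linalg.lstsq(X, appearance_params, rcond=None)
    return sol[:-1], sol[-1]

def shape_to_appearance(shape_vec, W, b):
    """Predict full-mouth gray-level parameters from shape parameters."""
    return shape_vec @ W + b
```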
14.
To improve post-production efficiency for computer-synthesized facial expression animation, a spatio-temporal editing method is proposed. Laplacian-based mesh deformation first propagates the user's edit across the whole face model in the spatial domain, preserving the geometric detail of the neutral face model and thereby improving the realism of the synthesized expression; a Gaussian function then propagates the edit in the temporal domain to neighboring frames of the expression sequence, keeping the animation smooth and consistent with the originally given data. The method gives the user local control over expression editing: the range of influence of an edit can be specified, and the edit propagates naturally within it. Experiments show the synthesized expression animation is natural and realistic, and that the method effectively improves the editing efficiency of data-driven facial expression animation.
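A minimal sketch of the temporal-domain step, assuming the spatial (Laplacian) propagation has already produced a per-vertex displacement at the edited frame; a Gaussian weight then spreads it to neighboring frames:

```python
import numpy as np

def propagate_edit(frames, edit_frame, delta, sigma=5.0):
    """Spread a single-frame edit to nearby frames with Gaussian falloff.

    frames:     (T, V, 3) vertex positions of the expression sequence.
    edit_frame: index of the frame the user edited.
    delta:      (V, 3) displacement applied at the edited frame.
    sigma:      temporal falloff in frames; larger spreads the edit wider.
    """
    T = frames.shape[0]
    t = np.arange(T)
    falloff = np.exp(-((t - edit_frame) ** 2) / (2.0 * sigma ** 2))  # (T,)
    return frames + falloff[:, None, None] * delta[None, :, :]
```

The falloff plays the role of the user-specified range of influence: choosing `sigma` bounds how far the edit reaches along the sequence.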
15.
Real-time performance and realism are the key issues in facial expression reconstruction for facial animation. This paper proposes a new fast facial expression reconstruction method based on Kinect face tracking and geometric deformation. A Microsoft Kinect detects the performer's face and records its feature point data; the captured feature points are used to build a mesh covering the face, from which the control points used for deformation are selected. Because the Kinect tracks the performer's face automatically in real time, the target model can be reconstructed quickly and in real time using three different deformation algorithms. Experiments show the method is simple to implement, requires no markers on the performer's face, automatically transfers facial expression motion onto the target model for fast expression reconstruction, and keeps the target model's expressions natural and realistic.
16.
We describe a system that synthesizes facial expressions by editing captured performances, using the actuation of expression muscles to control facial expressions. Numerous algorithms have already been developed for editing gross body motion; but while joint angles have a direct effect on the configuration of the gross body, muscle actuation must go through a complicated mechanism to produce facial expressions. We therefore devote a significant part of this paper to establishing the relationship between muscle actuation and facial surface deformation. We model the skin surface with the finite element method to simulate the deformation caused by expression muscles. We then implement the inverse relationship, muscle actuation parameter estimation, to find the muscle actuation values from the trajectories of markers on the performer's face. Once the forward and inverse relationships are established, retargeting or editing a performance becomes straightforward. We apply the original performance data to different facial models with equivalent muscle structures to produce similar expressions, and we produce novel expressions by deforming the original muscle actuation curves to satisfy key‐frame constraints imposed by animators. Copyright © 2001 John Wiley & Sons, Ltd.
17.
Visual Speech Synthesis by Morphing Visemes
We present MikeTalk, a text-to-audiovisual speech synthesizer that converts input text into an audiovisual speech stream. MikeTalk is built using visemes, a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject, specifically designed to elicit one instantiation of each viseme. Using optical flow methods, the correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images can be generated, and a complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer determines which viseme transitions to use and the rate at which the morphing should occur. In this manner, we can synchronize the visual speech stream with the audio speech stream and hence give the impression of a photorealistic talking face.
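A simplified sketch of one viseme-transition frame, assuming a dense correspondence field has already been computed (e.g. by optical flow). It warps both endpoint images along a fraction of the flow and cross-dissolves, sampling the flow at the target pixel as a common approximation; function names and shapes are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def viseme_transition(img_a, img_b, flow_ab, t):
    """One in-between frame of a viseme morph at parameter t in [0, 1].

    img_a, img_b: (H, W) grayscale viseme images.
    flow_ab:      (H, W, 2) dense correspondence (dy, dx) mapping
                  pixels of img_a to img_b.
    """
    H, W = img_a.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    # Warp img_a forward by a fraction t of the flow, img_b backward
    # by the remaining fraction, then cross-dissolve the two warps.
    a_warp = map_coordinates(img_a, [yy - t * flow_ab[..., 0],
                                     xx - t * flow_ab[..., 1]], order=1)
    b_warp = map_coordinates(img_b, [yy + (1 - t) * flow_ab[..., 0],
                                     xx + (1 - t) * flow_ab[..., 1]], order=1)
    return (1 - t) * a_warp + t * b_warp
```

Stepping `t` from 0 to 1 over successive frames yields one concatenable viseme transition.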
18.
Facial Expression Animation Synthesis Driven by an Artificial Psychology Model
This paper presents a facial expression animation synthesis method that takes the probability values output by an artificial psychology model as a weight vector and controls the expression animation model parameters through weighted factor synthesis. Experiments show that the method lets the psychological state drive the expression in real time, and that the synthesized facial expression animation is realistic and natural.
19.
A Survey of Computer Facial Expression Animation Techniques
Realistic computer facial expression animation is one of the most fundamental problems in computer graphics, and its broad application prospects have attracted growing attention and interest from researchers. This paper surveys computer facial expression animation techniques in light of the field's development over recent decades. Dividing the techniques into geometry-based and image-based methods, it describes and compares the relevant research results in detail, analyzes and discusses their strengths and weaknesses, and looks ahead to the future development of facial expression animation.
20.
When a facial expression changes, the facial texture changes with it. To simulate this dynamic process conveniently and effectively, a facial expression synthesis method based on the active appearance model is proposed. The relationship between facial expressions and the face's shape and appearance parameters is first learned offline, and this learned relationship is then used to synthesize expressions on an input face image; to address the blurred eyes and teeth in the synthesized images, the blurry textures are replaced with synthesized eye images and a tooth template. Experiments show the method can synthesize expression images of different intensities and types, and the synthesized eye images not only enhance the realism of the expression but also make eye animation easy to implement.