Similar Literature
20 similar articles found.
1.
This paper designs and implements a text-driven facial expression synthesis system, which can be applied to assistive teaching for the deaf.

2.
Facial expression animation is an important research area in computer graphics, and its development has been driven by applications of virtual humans in film, television, and games. It mainly studies the generation of a number of typical expressions and the use of existing expressions to produce intermediate, transitional ones. This paper reviews the state of the art in expression animation and designs and implements a text-driven facial expression change system.

3.
A text-driven lip motion synthesis system (Total citations: 11, self-citations: 1, citations by others: 11)
In China there has been no corresponding research on converting Chinese pinyin into mouth shapes. Based on the structure of pinyin and the way the lips move during pronunciation, this paper first defines a basic mouth-shape set containing six basic shapes, and then derives from it a mouth-shape library for the finals, so that every Chinese character has a corresponding mouth shape when pronounced. Given any input text, the system segments it into individual characters and maps each to the corresponding articulation of a 3D virtual face. The system has application value in assistive teaching for the deaf and in improving everyday communication between deaf and hearing people.
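The six-shape scheme described above lends itself to a short sketch. The viseme names and the final-to-viseme table below are illustrative assumptions; the paper's actual mouth-shape assignments are not given in the abstract.

```python
# Minimal sketch of a text-to-viseme pipeline in the spirit of the paper.
# The viseme IDs and the final-to-viseme table are illustrative assumptions,
# not the paper's actual data.

# Six hypothetical basic mouth shapes (visemes).
BASIC_VISEMES = ["closed", "narrow", "mid_open", "wide_open", "rounded", "spread"]

# Hypothetical mapping from pinyin finals to one of the six basic shapes.
FINAL_TO_VISEME = {
    "a": "wide_open", "o": "rounded", "e": "mid_open",
    "i": "spread", "u": "rounded", "ü": "narrow",
    "ai": "wide_open", "ao": "rounded", "an": "mid_open", "ang": "wide_open",
}

def text_to_viseme_track(pinyin_syllables):
    """Map a sequence of pinyin syllables to a viseme sequence.

    Each syllable is reduced to its final (vowel part), which selects one
    of the six basic mouth shapes; unknown finals fall back to a closed mouth.
    """
    track = []
    for syllable in pinyin_syllables:
        # Strip the initial consonant: keep from the first vowel onward.
        final = syllable.lstrip("bpmfdtnlgkhjqxzcsryw")
        track.append(FINAL_TO_VISEME.get(final, "closed"))
    return track

# Usage: "ni hao" -> visemes for the finals "i" and "ao".
print(text_to_viseme_track(["ni", "hao"]))   # ['spread', 'rounded']
```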

4.
Facial expression synthesis from multiple expression sources (Total citations: 2, self-citations: 0, citations by others: 2)
This paper proposes an image-warping method based on locally constrained offsets and relative offsets for facial expression synthesis. The method automatically avoids the deformation of non-feature regions produced by current expression synthesis methods and matches the appearance of the face. On this basis, a facial expression synthesis technique driven by multiple expression sources is proposed: the face image is divided into three main expression regions; each region is locally warped according to the weighted average of the relative offsets of several expression sources; finally, the expression is enhanced using a weighted-average expression ratio image. Experimental results show that the synthesized expressions are natural and realistic and, more importantly, that different expression patterns can be obtained by adjusting the weight vector, enriching the expressiveness of the result.
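A minimal sketch of the weighted-offset blending step, assuming per-source landmark offsets have already been extracted; the array shapes and the helper name `blend_region_offsets` are illustrative, not from the paper.

```python
import numpy as np

def blend_region_offsets(source_offsets, weights):
    """Blend per-source landmark offsets for one facial region.

    source_offsets: (n_sources, n_landmarks, 2) relative offsets of the
                    region's landmarks, one slice per expression source.
    weights:        (n_sources,) non-negative blending weights.
    Returns the weighted-average offset field used to warp the region.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so offsets stay in range
    return np.tensordot(w, source_offsets, axes=1)

# Usage: blend a smile source and a surprise source 70/30 for the mouth region.
smile = np.random.randn(20, 2) * 0.5      # hypothetical mouth-landmark offsets
surprise = np.random.randn(20, 2) * 0.5
mouth_offsets = blend_region_offsets(np.stack([smile, surprise]), [0.7, 0.3])
print(mouth_offsets.shape)                # (20, 2)
```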

5.
An artificial expression synthesis algorithm based on expression decomposition and warping (Total citations: 1, self-citations: 0, citations by others: 1)
To generate face images with expressions of arbitrary intensity quickly and effectively, this paper proposes a robust synthesis algorithm. It first applies higher-order singular value decomposition (HOSVD) to factor the training set into three subspaces, person, expression, and feature, and maps them into the expression subspace, which is then used to synthesize an image with any expression at any intensity from any frontal face photograph. When generating images, instead of the commonly used linear combination of basis images, the algorithm warps the source image directly. This not only greatly reduces the required training data and computation, but also allows expression images of arbitrary size, background, illumination, color, or pose; with quadratic interpolation, expressions of arbitrary intensity can be obtained as well. Experiments show that the algorithm is efficient and produces images of high quality.
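The person/expression/feature factorization can be sketched with a plain-NumPy HOSVD; the tensor layout and mode order here are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def unfold(tensor, mode):
    """Unfold a 3-way tensor along the given mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Higher-order SVD: one factor matrix per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, U in enumerate(factors):
        # Mode-m product with U^T: project mode m onto its singular vectors.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

# Hypothetical training set: 10 people x 7 expressions x 60 feature values.
T = np.random.randn(10, 7, 60)
core, (U_person, U_expr, U_feat) = hosvd(T)
# U_expr spans the expression subspace used to parameterize expression/intensity.
print(core.shape, U_expr.shape)   # (10, 7, 60) (7, 7)
```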

6.
To reuse the expression information in video, this paper proposes a method for constructing semantic expressions and optimizing semantic expression parameters. Semantic facial-expression information is first defined from sparse, noisy feature points; the optimal expression parameters are then solved for in the semantic expression space, improving the realism of the resulting facial animation. The method requires neither camera calibration nor a pre-built 3D face model and expression basis for the performer, so besides reusing expressions from web video it can also be used to build real-time online applications such as social networking. Experimental results show that for source videos whose head pitch and yaw remain within [-15°, 15°], the method synthesizes stable, realistic expression animation in real time.
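The parameter-optimization step can be sketched as a regularized least-squares fit of expression parameters to the tracked landmarks; the linear expression model and regularizer below are generic assumptions, since the abstract does not state the actual objective.

```python
import numpy as np

def fit_expression_params(landmarks, mean_shape, basis, reg=1e-2):
    """Solve min_p ||B p + mean - landmarks||^2 + reg * ||p||^2.

    landmarks:  (2n,) tracked feature points, flattened (sparse and noisy).
    mean_shape: (2n,) neutral landmark layout.
    basis:      (2n, k) semantic expression basis (e.g. smile, frown, ...).
    Returns the k semantic expression parameters.
    """
    A = basis.T @ basis + reg * np.eye(basis.shape[1])
    b = basis.T @ (landmarks - mean_shape)
    return np.linalg.solve(A, b)

# Usage with a hypothetical 3-dimensional semantic expression space.
n, k = 34, 3
basis = np.random.randn(2 * n, k)
mean = np.zeros(2 * n)
obs = basis @ np.array([0.8, 0.1, -0.2]) + 0.01 * np.random.randn(2 * n)
print(fit_expression_params(obs, mean, basis))   # approx. [0.8, 0.1, -0.2]
```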

7.
This paper details the techniques used to develop an automatic facial expression generation system, including 3D face modeling, facial expression synthesis, and implementation of the user interface. The goal is a realistic and practical system for automatically generating 3D facial expressions. The 3D face modeling part covers key techniques in 3DMAX and facial feature extraction; the expression generation part, built on the face model, describes expressions through muscle movements; and the user interface is implemented with MFC. This systematic account of the techniques offers an effective path for developing and implementing such a system and convenience for its users.
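The muscle-driven expression technique mentioned above can be illustrated with a simplified linear muscle in the style of Waters' classic model; the falloff function and all parameters below are simplifying assumptions, not the system's actual implementation.

```python
import numpy as np

def linear_muscle_pull(vertices, attachment, insertion, contraction, radius):
    """Displace mesh vertices toward a muscle's attachment point.

    Simplified linear muscle: each vertex within `radius` of the insertion
    point moves along the insertion->attachment direction, scaled by
    `contraction` and a cosine falloff with distance.
    """
    direction = attachment - insertion
    direction = direction / np.linalg.norm(direction)
    dist = np.linalg.norm(vertices - insertion, axis=1)
    falloff = np.where(dist < radius, np.cos(dist / radius * np.pi / 2), 0.0)
    return vertices + contraction * falloff[:, None] * direction

# Usage: pull mouth-corner vertices toward a cheekbone attachment (a smile).
verts = np.random.rand(50, 3)
smiled = linear_muscle_pull(verts, attachment=np.array([1.0, 1.0, 0.0]),
                            insertion=np.array([0.5, 0.2, 0.0]),
                            contraction=0.2, radius=0.6)
```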

8.
This paper analyzes dynamic expression synthesis methods for face models and classifies them according to their intrinsic characteristics. Although the literature in this area is already extensive, dynamic facial expression synthesis remains a very active research topic. By output type, the survey covers synthesis algorithms on the 2D image plane and on 3D facial surfaces. For the 2D image plane, the main algorithms include: synthesis driven by active expression contour models; methods based on Laplacian-transfer computation; algorithms using the expression ratio image framework; synthesis driven by offsets of principal facial landmarks; methods based on generic expression mapping functions; and recent deep-learning techniques. For 3D faces, they include: synthesis based on physical muscle models, deformation-based synthesis, 3D-shape linear regression, facial motion graphs, and recent deep-learning approaches. For each category, the methodology and the main strengths and weaknesses are discussed. This work should help future researchers better position their research directions and identify technical breakthroughs.

9.
This paper surveys progress in realistic 3D facial expression synthesis. By underlying technique, existing methods are divided into five categories: sample-blending expression synthesis, direct expression transfer, sketch-based expression synthesis and editing, machine-learning-based realistic expression synthesis, and extraction and synthesis of high-resolution expressions and details. The state of the art in each category is reviewed and their strengths and weaknesses compared. The paper concludes with a summary and an outlook on future research directions in facial expression synthesis.

10.
Facial expression synthesis aims to reconstruct a face's expression while preserving its identity, generating a source face image with a new expression. Deep learning has provided entirely new solutions for expression synthesis. This paper surveys the development of facial expression synthesis from the perspectives of feature extraction, GAN-based expression synthesis, and experimental evaluation. First, it introduces facial feature extraction, a key technique in expression synthesis, since facial features can objectively and comprehensively describe the expression state of a face. It then analyzes the mainstream deep-learning methods in the field, focusing on the development of generative adversarial networks (GANs) and discussing GAN-based expression synthesis methods. Through an in-depth look at face datasets and evaluation protocols, it summarizes the widely used expression-synthesis datasets and several objective evaluation metrics. Finally, based on the limitations of existing methods, it proposes directions for future work.

11.
An animation model of emotional behavior for virtual humans (Total citations: 6, self-citations: 0, citations by others: 6)
In recent years, behavioral animation of virtual humans has become a new branch of computer animation. Previous research mostly focused on localized emotional expression, such as facial animation, without considering what causes an emotion in a given virtual scene. Emotion is the result of the interaction between a virtual human and its virtual environment, yet it has not been clearly described in the computer-animation literature. Drawing on psychological theory, this paper therefore proposes an animation model of emotional behavior for virtual humans. First, it introduces the notions of an emotion set and an emotion-expression set and builds a mapping from emotional states to emotional expressions. Second, it analyzes the causes of emotion and introduces the notion of an emotion source: an emotion arises when the intensity of its stimulus exceeds the emotion's resistance threshold. Emotional states are then described with a finite state machine, from which the emotion-transition workflow is derived. Finally, the emotional behavior animation of a virtual human is implemented on a PC using the Microsoft Direct3D API.
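The stimulus-versus-resistance rule and the finite-state-machine description suggest a small sketch; the emotion set, thresholds, and decay rule below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the paper's emotion model: an emotion fires when a
# stimulus exceeds its resistance threshold, and the current emotional
# state is tracked with a finite state machine. All values are assumed.

RESISTANCE = {"joy": 0.3, "anger": 0.5, "fear": 0.4, "sadness": 0.45}

class EmotionFSM:
    def __init__(self):
        self.state = "neutral"

    def stimulate(self, emotion, intensity):
        """Transition to `emotion` only if the stimulus beats its resistance."""
        if intensity > RESISTANCE.get(emotion, 1.0):
            self.state = emotion
        return self.state

    def decay(self):
        """Without further stimuli, the state relaxes back to neutral."""
        self.state = "neutral"
        return self.state

fsm = EmotionFSM()
print(fsm.stimulate("anger", 0.4))   # 'neutral': 0.4 <= 0.5 resistance
print(fsm.stimulate("anger", 0.7))   # 'anger':   0.7 >  0.5
```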

12.
Facial expressional image synthesis controlled by emotional parameters (Total citations: 2, self-citations: 0, citations by others: 2)

13.
Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval, and human emotion analysis. Facial expressions reflect not only emotions but also other mental activities, social interaction, and physiological signals. In this survey, we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. Facial motion and deformation extraction approaches, as well as classification methods, are discussed with respect to issues such as face normalization, facial expression dynamics, and facial expression intensity, and also with regard to their robustness to environmental changes.

14.
The use of avatars with emotionally expressive faces is potentially highly beneficial to communication in collaborative virtual environments (CVEs), especially when used in a distance learning context. However, little is known about how, or indeed whether, emotions can effectively be transmitted through the medium of a CVE. Given this, an avatar head model with limited but human-like expressive abilities was built, designed to enrich CVE communication. Based on the facial action coding system (FACS), the head was designed to express, in a readily recognisable manner, the six universal emotions. An experiment was conducted to investigate the efficacy of the model. Results indicate that the approach of applying the FACS model to virtual face representations is not guaranteed to work for all expressions of a particular emotion category. However, given appropriate use of the model, emotions can effectively be visualised with a limited number of facial features. A set of exemplar facial expressions is presented.
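The FACS-based approach can be illustrated by mapping each universal emotion to a set of action units (AUs). The AU sets below follow common FACS summaries (e.g., happiness ≈ AU6 + AU12); the exact sets used by the authors' head model are not given in the abstract.

```python
# Hypothetical emotion -> FACS action unit mapping, in the spirit of the
# avatar head model. AU sets follow common FACS summaries; the authors'
# exact choices are not specified in the abstract.
EMOTION_TO_AUS = {
    "happiness": [6, 12],        # cheek raiser, lip corner puller
    "sadness":   [1, 4, 15],     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  [1, 2, 5, 26],  # brow raisers, upper lid raiser, jaw drop
    "fear":      [1, 2, 4, 5, 20, 26],
    "anger":     [4, 5, 7, 23],
    "disgust":   [9, 15, 16],
}

def pose_expression(emotion, intensity=1.0):
    """Return per-AU activation levels for the avatar's facial rig."""
    return {f"AU{au}": intensity for au in EMOTION_TO_AUS[emotion]}

print(pose_expression("happiness", 0.8))   # {'AU6': 0.8, 'AU12': 0.8}
```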

15.
To test whether synthetic emotions expressed by a virtual human elicit positive or negative emotions in a human conversation partner and affect satisfaction with the conversation, an experiment was conducted where the emotions of a virtual human were manipulated during both the listening and speaking phases of the dialogue. Twenty-four participants were recruited and were asked to have a real conversation with the virtual human on six different topics. For each topic the virtual human’s emotions in the listening and speaking phases were different, including positive, neutral and negative emotions. The results support our hypotheses that (1) negative compared to positive synthetic emotions expressed by a virtual human can elicit a more negative emotional state in a human conversation partner, (2) synthetic emotions expressed in the speaking phase have more impact on a human conversation partner than emotions expressed in the listening phase, (3) humans with less speaking confidence also experience a conversation with a virtual human as less positive, and (4) random positive or negative emotions of a virtual human have a negative effect on satisfaction with the conversation. These findings have practical implications for the treatment of social anxiety as they allow therapists to control the anxiety-evoking stimuli, i.e., the expressed emotion of a virtual human in a virtual reality exposure environment of a simulated conversation. In addition, these findings may be useful to other virtual applications that include conversations with a virtual human.

16.
A deformation-based linear fitting method for facial expressions (Total citations: 1, self-citations: 0, citations by others: 1)
This paper proposes a deformation-based linear fitting method for facial expression synthesis. Using the basic idea of approximating a face image with a linear combination of morphable models, the method determines the shape and texture information of the synthesized expression image; the procedure is simple and easy to implement. It effectively synthesizes expressive images from a neutral-expression face image, and the resulting expressions are natural, realistic, and convincing. More importantly, it can synthesize an open-mouth expression showing teeth from a closed-mouth neutral image, overcoming a limitation of most current expression synthesis methods.
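The linear-combination idea can be sketched as fitting combination coefficients over a set of example faces and reusing them to transfer an expression; the data shapes, solver, and transfer step below are generic assumptions in the spirit of morphable-model fitting, not the paper's exact algorithm.

```python
import numpy as np

def fit_coefficients(neutral_examples, target_neutral):
    """Express the target neutral face as a linear combination of examples.

    neutral_examples: (m, d) stacked neutral shape/texture vectors.
    target_neutral:   (d,) the input face's neutral vector.
    """
    coeffs, *_ = np.linalg.lstsq(neutral_examples.T, target_neutral, rcond=None)
    return coeffs

def synthesize_expression(coeffs, expressive_examples):
    """Apply the same combination to the examples' expressive vectors."""
    return expressive_examples.T @ coeffs

# Hypothetical data: 5 example identities, 100-dim shape/texture vectors.
m, d = 5, 100
neutral = np.random.randn(m, d)
smiling = neutral + np.random.randn(m, d) * 0.1   # same people, smiling
target = neutral.T @ np.array([0.4, 0.3, 0.1, 0.1, 0.1])
c = fit_coefficients(neutral, target)
print(synthesize_expression(c, smiling).shape)     # (100,)
```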

17.
With technology allowing for increased realism in video games, realistic, human-like characters risk falling into the Uncanny Valley. The Uncanny Valley phenomenon implies that virtual characters approaching full human-likeness will evoke a negative reaction from the viewer, due to aspects of the character’s appearance and behavior differing from the human norm. This study investigates whether “uncanniness” is increased for a character with a perceived lack of facial expression in the upper parts of the face. More importantly, our study also investigates whether the magnitude of this increased uncanniness varies depending on which emotion is being communicated. Individual parameters for each facial muscle in a 3D model were controlled for the six emotions: anger, disgust, fear, happiness, sadness and surprise, in addition to a neutral expression. The results indicate that even fully and expertly animated characters are rated as more uncanny than humans and that, in virtual characters, a lack of facial expression in the upper parts of the face during speech exaggerates the uncanny effect by inhibiting effective communication of the perceived emotion, significantly so for fear, sadness, disgust, and surprise but not for anger and happiness. Based on our results, we consider the implications for virtual character design.

18.
As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell et al., 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck and Reichenbach, 2005; Courgeon et al., 2009, 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, which differs in human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of expressions produced by three types of virtual characters differing in human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition for the emotions of anger, disgust, fear, happiness, sadness, and neutral. These age-related differences might be explained by older adults having difficulty discriminating similarity in the configural arrangement of facial features for certain emotions; for example, older adults often mislabeled the similar emotion of fear as surprise. Second, our results did not provide evidence that dynamic formation improves emotion recognition, but, in general, the intensity of the emotion improved recognition. Lastly, we learned that emotion recognition, for older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as for the development of a framework of age-related differences in emotion recognition.

19.
Synthesis of facial behaviors for virtual humans (Total citations: 17, self-citations: 2, citations by others: 17)
Virtual humans are an important part of virtual reality environments. Research on virtual-human behavior should consider not only group behavior at the macro level but also individual behavioral attributes. Individual behavior comprises natural behavior and conscious behavior: natural behavior mainly concerns movements of the face, head, and limbs, while conscious behavior includes the expressions, vocalizations, and corresponding lip movements and gestures associated with speech and mental activity. This paper studies facial image synthesis techniques for virtual humans related to conscious behavior, discusses a parametric synthesis method for standard face images, and presents the correspondence between specific face images and standard face images.

20.
Modeling an e-Learning system based on affective computing (Total citations: 3, self-citations: 0, citations by others: 3)
e-Learning, also called digital learning, is learning and teaching conducted over the Internet or other digital media. Affective computing is computing that relates to, arises from, or influences emotion; it seeks to build computer systems that can perceive, recognize, and understand human emotions and respond to them intelligently, sensitively, and in a friendly manner. This paper combines e-Learning with affective computing and proposes an affective-computing-based e-Learning system model, aiming to effectively address the lack of emotional interaction in e-Learning systems.
