Similar Documents
20 similar documents found (search time: 15 ms)
1.
This work describes an approach to synthesizing facial expressions, including intermediate ones, with the tools provided by the MPEG-4 standard, based on real measurements and on universally accepted assumptions about their meaning, taking into account the results of Whissel's study. Additionally, MPEG-4 facial animation parameters (FAPs) are used to evaluate theoretical predictions for intermediate expressions of a given emotion episode, based on Scherer's appraisal theory. MPEG-4 FAPs and action units are combined to model the effects of appraisal checks on facial expressions, and the temporal evolution of facial expressions is investigated. The results of the synthesis process can then be applied to Embodied Conversational Agents (ECAs), rendering their interaction with humans, or with other ECAs, more affective.
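
The synthesis of intermediate expressions can be illustrated with a minimal sketch: an archetypal expression profile, stored as an MPEG-4 FAP vector, is scaled toward the neutral face to obtain any intermediate intensity. This is an illustrative reading of the approach rather than the paper's code, and the FAP values below are invented placeholders.

```python
import numpy as np

# Hypothetical MPEG-4 FAP profile; a real profile has 68 low-level
# FAPs, and these amplitudes are invented placeholders.
neutral = np.zeros(6)
joy = np.array([120.0, 120.0, 80.0, 80.0, 200.0, 200.0])

def intermediate_expression(target_faps, intensity):
    """Scale an archetypal FAP profile toward neutral to obtain an
    intermediate expression at the given intensity in [0, 1]."""
    return neutral + intensity * (target_faps - neutral)

print(intermediate_expression(joy, 0.5))  # a half-strength smile
```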

2.
This study used event-related potentials (ERP) to observe the N170 effect elicited at temporo-occipital electrode sites while participants identified positive, neutral, and negative facial emotions, in order to explore whether reading serious literary fiction affects one's response to others' emotions. Participants in the reading group read serious literary fiction between two facial emotion recognition tests, while the control group did not. N170 amplitude increased from the first test to the second, but reading serious literary fiction suppressed this amplitude gain, and the suppression was larger for more positive stimulus images. Accordingly, reading does affect the recognition of others' facial emotions. The study speculates that reading may suppress face-emotion specificity in the brain, thereby potentially improving the perception of facial emotion.

3.
Synthesizing dynamic facial expressions with active appearance models. Cited by: 2 (self-citations: 2, others: 0)
When a facial expression changes, the facial texture changes accordingly. To simulate this dynamic process conveniently and effectively, a facial expression synthesis method based on active appearance models (AAM) is proposed. The relationship between facial expressions and the AAM shape and appearance parameters is first learned offline, and this learned mapping is then used to synthesize expressions for an input face image. To address the blurring of the eyes and teeth in synthesized images, the blurred texture is replaced with synthesized eye images and a teeth template. Experimental results show that the method can synthesize expression images of different intensities and types; the synthesized eye images not only enhance the realism of the expressions but also make eye animation easy to implement.
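
As an illustration of the offline learning stage described above, a hedged sketch follows in which a ridge regression (one plausible choice; the abstract does not specify the regressor) maps an expression code plus intensity to AAM shape/appearance parameters. All data here are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Random stand-ins: X holds a one-hot expression code plus an
# intensity value; Y holds fitted AAM shape/appearance parameters.
X = rng.random((200, 7))             # 6 expression indicators + intensity
Y = rng.standard_normal((200, 30))   # 30 AAM parameters per face

model = Ridge(alpha=1.0).fit(X, Y)   # offline learning stage

# Online stage: predict AAM parameters for "happy" at intensity 0.8;
# in the paper these parameters would then drive the AAM rendering.
query = np.array([[1, 0, 0, 0, 0, 0, 0.8]])
aam_params = model.predict(query)
```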

4.
Academic emotions influence and regulate learners' attention, memory, thinking, and other cognitive activities, and automatic emotion recognition underpins affective interaction and instructional decision-making in smart learning environments. Current emotion recognition research focuses mainly on discrete emotions, which are non-continuous along the time axis and therefore cannot precisely characterize how a student's academic emotions evolve. To address this, a dimensional affect dataset of secondary school students in authentic online learning situations was built via crowdsourcing, and a deep learning model for continuous dimensional affect prediction was designed. Learning materials that trigger academic emotions were chosen according to students' learning styles, and 32 participants were recruited for self-paced online learning while their facial images were captured in real time, yielding 157 videos of students' academic emotions. Each video was annotated along the two affect dimensions of Arousal and Valence, producing a dimensional database of 2,178 student facial expressions. A dimensional affect model based on a ConvLSTM network was built and evaluated on this database, achieving a mean Concordance Correlation Coefficient (CCC) of 0.581; on the public Aff-Wild dataset it achieved a mean CCC of 0.222. The experiments show that the proposed dimensional affect model improves the CCC metric on Aff-Wild dimensional emotion recognition by 7.6% to 43.0%.
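
For reference, the Concordance Correlation Coefficient used as the evaluation metric above has a standard closed form; a minimal NumPy implementation (not the paper's code) is:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient for dimensional affect."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

a = np.array([0.1, 0.4, 0.6, 0.9])
print(ccc(a, a))  # perfect agreement -> 1.0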

5.
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and the facial actions. The use of such a tracker makes recognition pose- and texture-independent. Two different schemes are studied. The first adopts a dynamic time warping (DTW) technique for recognizing expressions, where the training data are temporal signatures associated with the different universal facial expressions. The second models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments, carried out on CMU video sequences and home-made video sequences, quantified the performance of the different schemes. The results show that applying dimensionality reduction to the extracted time series can improve classification performance, and that the best recognition rate can exceed 90%.
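
The first scheme rests on dynamic time warping; a minimal textbook DTW implementation for 1-D temporal signatures, offered as an illustration rather than the paper's implementation, is:

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic time warping distance between two 1-D temporal
    signatures, e.g. a facial action parameter over time."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Nearest-neighbour recognition: assign the expression whose training
# signature is closest (in DTW distance) to the test sequence.
```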

6.
The performance of expression recognition depends on the effectiveness of the extracted expression features. Features extracted by existing methods are essentially a fusion of face and expression, yet inter-individual facial differences are the main interference in expression recognition. Ideally, identity-related facial features should be separated from identity-independent expression features. To this end, a face tensor is built in three-dimensional space, and tensor analysis is used to separate facial features from expression features so that the obtained expression parameters are independent of identity, thereby removing the interference caused by individual facial differences. The effectiveness of the method is verified on the JAFFE expression database.
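
A hedged sketch of the identity/expression separation idea: arrange training faces in a 3-way tensor (identity × expression × pixels) and take SVDs of its mode unfoldings, HOSVD-style, to obtain an expression subspace shared across identities. This is a generic tensor-analysis illustration with random data, not the paper's exact decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy face tensor: 10 identities x 7 expressions x 400 pixels.
T = rng.standard_normal((10, 7, 400))

def mode_unfold(tensor, mode):
    """Unfold a 3-way tensor along one mode into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# HOSVD-style factor matrices: the columns of U_expr span an
# expression subspace approximately shared across identities.
U_id, _, _ = np.linalg.svd(mode_unfold(T, 0), full_matrices=False)
U_expr, _, _ = np.linalg.svd(mode_unfold(T, 1), full_matrices=False)
# The expression-mode factor, together with the core tensor, can then
# be used to estimate identity-independent expression coefficients.
```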

7.
Driver emotion modeling under simplified road conditions. Cited by: 1 (self-citations: 0, others: 1)
解仑, 王志良, 任冬淳, 滕少冬. 《自动化学报》 (Acta Automatica Sinica), 2010, 36(12): 1732-1743
Driver models in driving-assistance systems are relatively simplistic and do not consider the influence of the driver's emotional state on driving strategy. This paper therefore studies a driver emotion model under simplified road conditions. Based on the OCC (Ortony-Clore-Collins) model, a Markov model of spontaneous emotional state transfer, and a hidden Markov model (HMM) of stimulus-driven emotional state transfer, emotion models are proposed for two cases: changing road conditions and no road conditions, and the driver's emotional changes during car-following, lane-changing, and overtaking are studied. For spontaneous transfer, a time-varying transfer process is proposed to capture the real-time variation of emotion; for stimulus-driven transfer, the memory effect of emotion on stimuli is considered, i.e., the same stimulus affects emotion differently depending on when it occurs. The influence of cognitive-emotional change on driving strategy is discussed. Simulation experiments examine how strongly inter-vehicle distance, road width, and surrounding vehicle speed affect the driver's emotion, the driver's sensitivity to stimuli, and the process by which specific events influence the driver, predicting which driving strategy a driver will adopt under the stimulus of a specific event. Validation on measured data confirms the effectiveness of the proposed model, providing a useful theoretical basis for building driver models in driving-assistance systems.
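
The spontaneous-transfer component can be illustrated with a plain Markov chain over a few driver emotion states; the transition matrix below is invented for illustration, and the paper's time-varying transfer and HMM stimulus model are not reproduced.

```python
import numpy as np

states = ["calm", "irritated", "angry"]
# Invented row-stochastic spontaneous-transfer matrix.
P = np.array([[0.90, 0.08, 0.02],
              [0.30, 0.60, 0.10],
              [0.05, 0.35, 0.60]])

def simulate(start, steps, seed=2):
    """Sample a spontaneous emotion trajectory from the chain."""
    rng = np.random.default_rng(seed)
    idx, path = states.index(start), [start]
    for _ in range(steps):
        idx = rng.choice(len(states), p=P[idx])
        path.append(states[idx])
    return path

print(simulate("calm", 10))
```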

8.
Authentic facial expression analysis. Cited by: 1 (self-citations: 0, others: 1)
There is a growing trend toward emotional intelligence in human–computer interaction paradigms. In order to react appropriately to a human, the computer needs some perception of the human's emotional state. We assert that the most informative channel for machine perception of emotions is facial expressions in video. One current difficulty in evaluating automatic emotion detection is that there are no international databases based on authentic emotions: the existing facial expression databases contain expressions that are not naturally linked to the emotional state of the test subject. Our contributions in this work are twofold. First, we create the first authentic facial expression database, in which the test subjects show natural facial expressions driven by their actual emotional state. Second, we evaluate several promising machine learning algorithms for emotion detection, including Bayesian networks, SVMs, and decision trees.
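
The evaluation described above can be mirrored with scikit-learn on synthetic stand-in features (GaussianNB standing in for a Bayesian model); this is an illustrative harness, not the paper's experimental code:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for facial feature vectors with emotion labels.
X, y = make_classification(n_samples=600, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf")),
                  ("decision tree", DecisionTreeClassifier(max_depth=8))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```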

9.
Objective: Most current video emotion recognition methods rely only on facial expressions and ignore the emotional information carried by the physiological signals latent in facial video. This paper proposes a bimodal video emotion recognition method based on facial expressions and the blood volume pulse (BVP) physiological signal. Method: The video is first preprocessed to obtain the facial video; two spatio-temporal expression features, LBP-TOP and HOG-TOP, are then extracted from it, and video color magnification is used to recover the BVP signal, from which physiological affect features are extracted. The two feature sets are fed into back-propagation (BP) classifiers to train classification models, and fuzzy-integral decision-level fusion produces the final emotion recognition result. Results: On a self-built laboratory facial video emotion database, the average recognition rates of the expression-only and physiological-signal-only modalities were 80% and 63.75% respectively, while the fused result reached 83.33%, higher than either single modality, demonstrating the effectiveness of the bimodal fusion. Conclusion: The proposed bimodal spatio-temporal feature fusion method exploits the emotional information in video more fully and effectively improves video emotion classification; comparative experiments with similar video emotion recognition algorithms verify its superiority. Moreover, the fuzzy-integral decision-level fusion effectively reduces the interference of unreliable decision information, yielding better recognition accuracy.
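
Decision-level fusion by fuzzy integral can be sketched with a Sugeno fuzzy integral over the two modality confidences; the fuzzy densities and scores below are invented, and the paper's exact fusion operator may differ.

```python
import numpy as np

def sugeno_fuse(scores, densities):
    """Sugeno fuzzy integral over two classifier confidences.
    scores: per-modality confidence for one class, each in [0, 1].
    densities: fuzzy densities g_i (importance of each modality)."""
    g1, g2 = densities
    lam = (1 - g1 - g2) / (g1 * g2)   # lambda-measure, 2-source case
    fused, g = 0.0, 0.0
    for i in np.argsort(scores)[::-1]:          # descending scores
        g = g + densities[i] + lam * g * densities[i]
        fused = max(fused, min(scores[i], g))
    return fused

# Fuse an expression confidence (0.80) with a BVP confidence (0.55),
# weighting the expression channel more heavily.
print(sugeno_fuse(np.array([0.80, 0.55]), np.array([0.6, 0.3])))
```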

10.
Manual creation of facial expressions needs to be supported by a set of easy-to-operate rules involving adjectives and combinations of facial design elements. This article analyses viewers' cognition of artificial facial expressions in an objective and scientific way. We chose four adjectives – ‘satisfied’, ‘sarcastic’, ‘disdainful’, and ‘nervous’ – as the experimental subjects. The key manipulable factors of facial expressions (eyebrows, eyes, pupils, mouth, and head rotation) were permuted and combined to create 81 stimuli of different facial expressions with a 3-D face model, which were used in a survey. We then used Quantification Theory Type I to find the combinations that participants most agreed on as representing each adjective. The conclusions of this research are that: (1) there are differences in the facial features used when creating artificial characters' expressions versus recognising real humans' expressions; (2) surveys and statistics can scientifically analyse viewers' cognition of facial expressions as the face's form changes; and (3) the results can improve designers' efficiency in working with subtler facial expressions.
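
Quantification Theory Type I is, in effect, least-squares regression on dummy-coded categorical factors, so the analysis can be sketched as follows (toy factor levels and ratings; assumes scikit-learn >= 1.2 for the sparse_output argument):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder

# Toy stimuli: categorical facial factors and mean viewer ratings.
factors = np.array([["raised", "wide", "smile"],
                    ["flat", "wide", "smile"],
                    ["raised", "narrow", "neutral"],
                    ["flat", "narrow", "neutral"]])
ratings = np.array([4.2, 3.1, 2.5, 1.8])

# QT-I reduces to least squares on dummy-coded factor levels.
X = OneHotEncoder(drop="first", sparse_output=False).fit_transform(factors)
model = LinearRegression().fit(X, ratings)
print(model.coef_)   # each level's contribution to the adjective rating
```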

11.
Facial expressions are among the most powerful, natural, and immediate means for human beings to communicate their emotions and intentions. Recognition of facial expressions has many applications, including human–computer interaction, cognitive science, human emotion analysis, and personality development. In this paper, we propose a new method for recognizing facial expressions from a single image frame that combines appearance and geometric features with support vector machine classification. Appearance features for facial expression recognition are generally computed by dividing the face region into a regular grid (holistic representation); here, we instead extract region-specific appearance features by dividing the whole face region into domain-specific local regions. Geometric features are also extracted from the corresponding domain-specific regions. In addition, the important local regions are determined using an incremental search approach, which reduces the feature dimension and improves recognition accuracy. The results of facial expression recognition using features from domain-specific regions are also compared with those obtained using the holistic representation. The performance of the proposed system has been validated on the publicly available extended Cohn-Kanade (CK+) facial expression data set.
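
The incremental search over local regions can be read as greedy forward selection scored by cross-validated SVM accuracy; a hedged sketch with random stand-in feature blocks follows (the paper's exact search criterion may differ).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, n_regions, dim = 300, 5, 12
# Random stand-in feature blocks, one per local face region
# (e.g. brows, eyes, nose, mouth, cheeks), and expression labels.
blocks = [rng.standard_normal((n, dim)) for _ in range(n_regions)]
y = rng.integers(0, 6, n)

selected, best = [], 0.0
while len(selected) < n_regions:     # greedy incremental search
    gains = []
    for r in set(range(n_regions)) - set(selected):
        X = np.hstack([blocks[i] for i in selected + [r]])
        gains.append((cross_val_score(SVC(), X, y, cv=3).mean(), r))
    score, r = max(gains)
    if score <= best:
        break                        # no remaining region helps
    best, selected = score, selected + [r]
print(selected, best)
```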

12.
This paper presents a fuzzy relational approach to human emotion recognition from facial expressions and its control. The proposed scheme uses external stimulus to excite specific emotions in human subjects whose facial expressions are analyzed by segmenting and localizing the individual frames into regions of interest. Selected facial features such as eye opening, mouth opening, and the length of eyebrow constriction are extracted from the localized regions, fuzzified, and mapped onto an emotion space by employing Mamdani-type relational models. A scheme for the validation of the system parameters is also presented. This paper also provides a fuzzy scheme for controlling the transition of emotion dynamics toward a desired state. Experimental results and computer simulations indicate that the proposed scheme for emotion recognition and control is simple and robust, with good accuracy.
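
A minimal illustration of the fuzzification and a Mamdani-style rule evaluation, with invented membership breakpoints rather than the paper's validated parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzify normalized facial features (invented breakpoints).
mouth_large = tri(0.7, 0.4, 1.0, 1.6)   # mouth opening = 0.7
eye_large = tri(0.8, 0.4, 1.0, 1.6)     # eye opening = 0.8

# One Mamdani-style rule: IF mouth opening is large AND eye opening
# is large THEN "surprise" (min for AND; max would combine rules).
surprise = min(mouth_large, eye_large)
print(f"surprise membership: {surprise:.2f}")
```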

13.
The ability to recognize facial emotions is a target behaviour when treating people with social impairment. When assessing this ability, the most widely used facial stimuli are photographs. Although their use has been shown to be valid, photographs cannot capture the dynamic aspects of human expressions. This limitation can be overcome by creating virtual agents with feasible expressed emotions. The main objective of the present study was to create a new set of highly realistic dynamic virtual faces that could be integrated into a virtual reality (VR) cyberintervention to train people with schizophrenia in the full repertoire of social skills. A set of highly realistic virtual faces was created based on the Facial Action Coding System, and facial movement animation was included to mimic the dynamism of human facial expressions. Consecutive healthy participants (n = 98) completed a facial emotion recognition task using both natural faces (photographs) and virtual agents expressing five basic emotions plus a neutral one. Repeated-measures ANOVA revealed no significant difference in recognition accuracy between the two presentation conditions; however, anger was better recognized in the VR images, and disgust was better recognized in the photographs. Age, participant gender, and reaction times were also explored. Implications of using virtual agents with realistic human expressions in cyberinterventions are discussed.

14.
This article investigates the role of different categories of postures in the detection, recognition, and interpretation of emotion in contextually rich scenarios, including ironic items. Animated scenarios were designed with 3D virtual agents in order to test three conditions: in the “still” condition, the narrative content was accompanied by emotional facial expressions without any body movements; in the “idle” condition, emotionally neutral body movements were introduced; and in the “congruent” condition, emotional body postures congruent with the character's facial expressions were displayed. These conditions were examined by 27 subjects, and their impact on the viewers' attentional and emotional processes was assessed. The results highlight the importance of contextual information for emotion recognition and irony interpretation. They also show that both idle and emotional postures improve the detection of emotional expressions, and that emotional postures increase the perceived intensity of emotions and the realism of the animations.

15.
Most studies use facial expressions to recognize a user's emotion; however, gestures such as nodding, shaking the head, or stillness can also be indicators of it. In our research, we use both facial expressions and gestures to detect and recognize a user's emotion. The pervasive Microsoft Kinect sensor captures video data, from which several features representing facial expressions and gestures are extracted. An in-house extensible markup language-based genetic programming engine (XGP) evolves the emotion recognition module of our system. To improve the computational performance of the recognition module, we implemented and compared several approaches to an automated voting system, including directed evolution, collaborative filtering via canonical voting, and a genetic algorithm. The experimental results indicate that XGP is feasible for evolving emotion classifiers, and they verify that collaborative filtering improves the generality of recognition. From a psychological viewpoint, the results confirm that different people may express their emotions differently, as emotion classifiers evolved for particular users might not apply successfully to other users.

16.
We present techniques for improving performance driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the different factors of variation linked to identity separately from factors related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models through replacing the underlying point distribution models which are typically constructed using principal component analysis with identity–expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation and facial key-point tracking.

17.
The use of expressive virtual characters is an effective complementary means of communication for social networks offering a multi-user 3D-chatting environment. In such contexts, the facial expression channel offers a rich medium for conveying the emotions carried by the text-based exchanges. Until recently, however, only purely symmetric facial expressions have been considered for that purpose. In this article we examine human sensitivity to facial asymmetry in the expression of both basic and complex emotions. The rationale for introducing asymmetry stems from two well-established observations in cognitive neuroscience: first, that the expression of basic emotions generally displays a small asymmetry; second, that more complex emotions, such as ambivalent feelings, may show in the partial display of different, potentially opposite, emotions on each side of the face. A frequent instance of this second case results from the conflict between the truly felt emotion and the one that should be displayed due to social conventions. Our main hypothesis is that a much larger expressive and emotional space can be synthesized automatically only by means of facial asymmetry when modeling emotions with a general Valence-Arousal-Dominance dimensional approach. We also explore general human sensitivity to introducing a small degree of asymmetry into the expression of basic emotions. We conducted an experiment presenting 64 pairs of static facial expressions, one symmetric and one asymmetric, illustrating eight emotions (three basic and five complex) alternately on a male and a female character. Each emotion was presented four times, swapping the symmetric and asymmetric positions and mirroring the asymmetrical expression. Participants graded, on a continuous scale, the correctness of each facial expression with respect to a short definition. The results confirm the potential of introducing facial asymmetry for a subset of the complex emotions, and guidelines are proposed for designers of embodied conversational agents and emotionally reflective avatars.

18.
This paper studies Isomap-based nonlinear dimensionality reduction, applying it to facial animation parameter (FAP) features extracted from facial expression sequences, and analyzes the relationship between the reduced manifold feature space and the emotion space of cognitive psychology. Experimental results show that the emotion manifold features obtained by Isomap reduction capture variations in emotion intensity, and describe emotion intensity more reasonably and smoothly than PCA-reduced features. Emotion recognition experiments also show that the recognition rate with Isomap manifold features is higher than with the original emotion features or PCA-reduced features, and that the recognition results are more balanced across emotions.
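
The Isomap-versus-PCA comparison can be illustrated on any nonlinear manifold; below, swiss-roll data stands in for the FAP feature sequences (illustrative only):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Swiss-roll data stands in for FAP sequences: its intrinsic 2-D
# parameter plays the role of emotion intensity on the manifold.
X, _ = make_swiss_roll(n_samples=800, random_state=0)

Z_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
Z_pca = PCA(n_components=2).fit_transform(X)
# Isomap unrolls the manifold, so coordinates in Z_iso track the
# intrinsic parameter more smoothly than the linear PCA projection.
```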

19.
A social robot should be able to autonomously interpret human affect and adapt its behavior accordingly in order for successful social human–robot interaction to take place. This paper presents a modular non-contact automated affect-estimation system that employs support vector regression over a set of novel facial expression parameters to estimate a person’s affective states using a valence-arousal two-dimensional model of affect. The proposed system captures complex and ambiguous emotions that are prevalent in real-world scenarios by utilizing a continuous two-dimensional model, rather than a traditional discrete categorical model for affect. As the goal is to incorporate this recognition system in robots, real-time estimation of spontaneous natural facial expressions in response to environmental and interactive stimuli is an objective. The proposed system can be combined with affect detection techniques using other modes, such as speech, body language and/or physiological signals, etc., in order to develop an accurate multi-modal affect estimation system for social HRI applications. Experiments presented herein demonstrate the system’s ability to successfully estimate the affect of a diverse group of unknown individuals exhibiting spontaneous natural facial expressions.
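
Support vector regression onto a valence-arousal plane can be sketched with scikit-learn, one SVR per affect dimension; the features and labels below are random stand-ins for the paper's facial expression parameters:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 24))   # stand-in expression parameters
Y = rng.uniform(-1, 1, (500, 2))     # (valence, arousal) labels

# One SVR per affect dimension, wrapped for 2-D continuous output.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X, Y)
valence, arousal = model.predict(X[:1])[0]
```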

20.
In order to enable personalized natural interaction in service robots, artificial emotion is needed to help robots appear as individuals. In the emotion modeling theory of the emotional Markov chain model (eMCM) for spontaneous transfer and the emotional hidden Markov model (eHMM) for stimulated transfer, there are three problems: 1) the emotion-distinguishing problem: whether adjusting the parameters of the model has any effect on individual emotions; 2) how large that effect is; and 3) the problem of different initial emotional states leading to different resultant emotions under a given stimulus. To solve these problems, a method for studying individual emotional difference is proposed based on metric multidimensional scaling theory. From a dissimilarity matrix, a scalar product matrix is calculated; an individual attribute reconstruction matrix is then obtained by principal component factor analysis, which can display individual emotion differences in low dimension. In addition, mathematical proofs are carried out to explain the experimental results, and conclusions are drawn by synthesizing the results and proofs. This new method provides guidance for the adjustment of parameters of emotion models in artificial emotion theory.
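
Metric multidimensional scaling from a dissimilarity matrix is readily sketched with scikit-learn; the matrix below is invented for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Invented dissimilarity matrix between four individuals' emotion
# model parameterizations; symmetric with a zero diagonal.
D = np.array([[0.0, 1.2, 2.0, 2.6],
              [1.2, 0.0, 1.1, 2.2],
              [2.0, 1.1, 0.0, 1.4],
              [2.6, 2.2, 1.4, 0.0]])

# Metric MDS embeds individuals so that Euclidean distances
# approximate D, displaying individual differences in 2-D.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords)
```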
