Similar Documents
20 similar documents found.
1.
Authentic facial expression analysis   (Total citations: 1; self-citations: 0; citations by others: 1)
There is a growing trend toward emotional intelligence in human–computer interaction paradigms. In order to react appropriately to a human, the computer would need some perception of the human's emotional state. We assert that the most informative channel for machine perception of emotions is facial expressions in video. One difficulty in evaluating automatic emotion detection is that no international databases based on authentic emotions currently exist: the existing facial expression databases contain expressions that are not naturally linked to the emotional state of the test subject. Our contributions in this work are twofold. First, we create the first authentic facial expression database, in which the test subjects show natural facial expressions driven by their emotional state. Second, we evaluate several promising machine learning algorithms for emotion detection, including Bayesian networks, SVMs, and decision trees.
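As an illustration of the kind of classifier comparison described above (not the authors' code), here is a minimal scikit-learn sketch; the feature matrix `X` and labels `y` are random placeholders, and `GaussianNB` stands in for the paper's Bayesian-network classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB       # stand-in for a Bayesian-network classifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))                   # placeholder facial-expression features
y = rng.integers(0, 6, size=200)                 # placeholder labels for six emotions

for name, clf in [("naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf")),
                  ("decision tree", DecisionTreeClassifier(max_depth=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```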

2.
In our previously developed method for recognizing the facial expressions of a speaker, some feature vectors in the image-processing feature space were generated with imperfections. These imperfect vectors, which caused misrecognition of the facial expression, tended to lie far from the center of gravity of the class to which they belonged. In the present study, to discard such feature vectors, reject criteria in the feature vector space were applied to facial expression recognition. Using the proposed method, the facial expressions of two subjects were discriminated with 86.8% accuracy for the three classes "happy", "neutral", and "others" when the subjects exhibited one of the five intentional facial expressions "angry", "happy", "neutral", "sad", and "surprised", whereas the conventional method achieved 78.0% accuracy. Moreover, the proposed method effectively judged whether the training data were acceptable for facial expression recognition at a given moment.
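The abstract does not give the exact reject criterion, but a distance-from-centroid rule in the feature space captures the general idea. The sketch below is an assumption-laden illustration, not the authors' method; the `threshold` parameter is hypothetical.

```python
import numpy as np

def reject_outliers(X, y, threshold=2.0):
    """Keep only feature vectors within `threshold` standard deviations of the
    mean distance to their class centroid (a hedged reading of the reject criteria)."""
    keep = np.zeros(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centroid, axis=1)   # distance to class centroid
        keep[idx] = d <= d.mean() + threshold * d.std()
    return X[keep], y[keep]
```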

3.
Understanding facial expressions in image sequences is an easy task for humans, and some of us can lipread by interpreting the motion of the mouth. Automatic lipreading by a computer is a challenging task with limited success so far. The inverse problem, synthesizing real-looking lip movements, is also highly non-trivial: the technology to automatically generate an image series that imitates natural postures is far from perfect. We introduce a new framework for facial image representation, analysis, and synthesis that focuses on the lower half of the face, specifically the mouth. It includes interpretation and classification of facial expressions and visual speech recognition, as well as a synthesis procedure for facial expressions that yields natural-looking mouth movements. Our analysis and synthesis processes are based on a parametrization of the set of mouth-configuration images. These images are represented as points on a two-dimensional flat manifold, which enables us to efficiently define the pronunciation of each word and thereby analyze or synthesize the motion of the lips. We present examples of automatic lip-motion synthesis and lipreading, and propose a generalization of our solution to the problem of lipreading different subjects.
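A two-dimensional embedding of mouth images can be sketched with off-the-shelf manifold learning; `Isomap` here is only a stand-in for the paper's flat-manifold parametrization, and `mouth_images` is a random placeholder for flattened grayscale mouth crops.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
mouth_images = rng.random((300, 32 * 48))   # placeholder flattened mouth crops

# Map each mouth image to a point in a 2-D space; a spoken word then traces a
# curve through this space that can be compared (lipreading) or resampled (synthesis).
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(mouth_images)
print(embedding.shape)   # (300, 2)
```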

4.
Facial expressions are one of the most powerful, natural, and immediate means for human beings to communicate their emotions and intentions. Facial expression recognition has many applications, including human-computer interaction, cognitive science, human emotion analysis, and personality development. In this paper, we propose a new method for recognizing facial expressions from a single image frame that combines appearance and geometric features with support vector machine classification. Appearance features for facial expression recognition are generally computed by dividing the face region into a regular grid (holistic representation); here, instead, we extract region-specific appearance features by dividing the whole face region into domain-specific local regions. Geometric features are also extracted from the corresponding domain-specific regions. In addition, the important local regions are determined by an incremental search approach, which reduces the feature dimension and improves recognition accuracy. The results obtained using features from domain-specific regions are also compared with those obtained using the holistic representation. The performance of the proposed facial expression recognition system has been validated on the publicly available extended Cohn-Kanade (CK+) facial expression data set.
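A hedged sketch of region-specific appearance features: uniform-LBP histograms are pooled per local region instead of over a regular grid. The region coordinates are hypothetical, and the paper's geometric features are only noted in a comment.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def region_appearance_features(face, regions, P=8, R=1):
    """Concatenate uniform-LBP histograms over domain-specific local regions.
    `regions` is a list of (row0, row1, col0, col1) boxes, hypothetical here."""
    feats = []
    for (r0, r1, c0, c1) in regions:
        lbp = local_binary_pattern(face[r0:r1, c0:c1], P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# The descriptor fed to the SVM would concatenate these appearance features
# with geometric features (e.g. landmark distances) from the same regions.
```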

5.
颜文靖, 蒋柯, 傅小兰. 《智能系统学报》, 2022, 17(5): 1039-1053
Automatic facial expression recognition is a frontier field at the deep intersection of psychology and computer science. Researchers in emotion psychology, pattern recognition, and affective computing have developed the theories, databases, and algorithms underlying expression recognition, greatly advancing automatic facial expression recognition technology. From a psychological perspective, and drawing on our earlier work, this article first reviews the theoretical views and practical progress concerning the psychological foundations of automatic expression recognition, the facial expression of emotion, the evolution of expression data, and the annotation of expression samples. It then analyzes the main problems facing automatic expression recognition. Finally, building on the constructivist view of predictive processing theory, it argues that emphasizing expression "understanding" during interaction is expected to further improve the effectiveness of automatic expression recognition, and anticipates that this may be the future direction of the field.

6.
The current work describes an approach to synthesizing expressions, including intermediate ones, via the tools provided in the MPEG-4 standard, based on real measurements and on universally accepted assumptions about their meaning, taking into account the results of Whissell's study. Additionally, MPEG-4 facial animation parameters (FAPs) are used to evaluate theoretical predictions for intermediate expressions of a given emotion episode, based on Scherer's appraisal theory. MPEG-4 FAPs and action units are combined to model the effects of appraisal checks on facial expressions, and the temporal evolution of facial expressions is investigated. The results of the synthesis process can then be applied to Embodied Conversational Agents (ECAs), rendering their interaction with humans, or with other ECAs, more affective.

7.
A facial-expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions that adapt to them. A facial emotion recognition method based on 2D Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition on the robots. The robots' facial expressions are represented by simple cartoon symbols displayed on an LED screen mounted on the robot, which humans can easily understand. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is achieved through recognition of human facial expressions and generation of robot facial expressions within 2 seconds. As prospective applications, the FEER-HRI system can be used in home service, smart homes, safe driving, and so on.
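The ELM classifier at the end of this pipeline is simple enough to sketch directly: a fixed random hidden layer followed by a least-squares readout. This is a generic minimal ELM, not the FEER-HRI implementation; labels are assumed to be integers 0..k-1.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, least-squares output."""
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random hidden-layer activations
        T = np.eye(n_classes)[y]                # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```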

8.
An effective method for editing and synthesizing 3D facial animation data is proposed, in which the user selects control points on a 3D face model and specifies constraints on the expression in the 2D plane. A prior probability model trained on facial animation data propagates the sparse user constraints to the rest of the face mesh, generating complete, vivid facial expressions. The Isomap learning algorithm is used to model knowledge of 3D facial animation, and smooth geodesics on the high-dimensional surface are fitted through user-specified keyframes to automatically synthesize new facial animation sequences. Experimental results show that the method offers intuitive, interactive control over facial animation generation and produces fairly realistic expression animations.

9.
A fully automated, multistage system for real-time recognition of facial expressions is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is then determined using a real-time implementation of a robust gradient model. The expression recognition system averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The resulting motion signatures are classified by support vector machines as either nonexpressive or one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real time to the user's facial expressions, thereby offering the potential to improve the interaction between a computer user and technology.
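The rigid-motion cancellation step can be illustrated with OpenCV's dense Farneback flow; the region list, the choice of reference region, and the use of flow magnitude are assumptions, not details from the paper.

```python
import cv2
import numpy as np

def region_motion_signature(prev_gray, gray, regions):
    """Average dense optical-flow magnitude per face region; ratios against a
    reference region cancel rigid head motion (a hedged reading of the scheme)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    means = [np.linalg.norm(flow[r0:r1, c0:c1], axis=2).mean()
             for (r0, r1, c0, c1) in regions]
    ref = means[0] + 1e-8   # assume regions[0] reflects mostly rigid motion
    return np.array([m / ref for m in means[1:]])
```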

10.
For effective interaction between humans and socially adept, intelligent service robots, a key capability required of this class of sociable robots is the successful interpretation of visual data. Beyond crucial techniques such as human face detection and recognition, an important next step for enabling intelligence and empathy in social robots is emotion recognition. In this paper, an automated and interactive computer vision system for recognizing and tracking human facial expressions is investigated, based on facial structure features and movement information. Twenty facial features are adopted, as they are informative and prominent enough to reduce ambiguity during classification. An unsupervised learning algorithm, distributed locally linear embedding (DLLE), is introduced to recover the inherent properties of scattered data lying on a manifold embedded in the high-dimensional input facial images, and person-dependent facial expression images in a video are classified using it. In addition, facial expression motion energy is introduced to describe the tension of the facial muscles during expressions, enabling person-independent recognition and tracking; this measure takes advantage of optical flow, which tracks the movement of the feature points. Finally, experimental results show that our approach separates different expressions successfully.
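DLLE is a specialized variant not available off the shelf; as a rough stand-in, standard locally linear embedding from scikit-learn shows the dimensionality-reduction step on placeholder face images.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))   # placeholder flattened face images

# Standard LLE as a stand-in for the paper's distributed variant (DLLE):
# recover a low-dimensional manifold structure from high-dimensional images.
emb = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(faces)
print(emb.shape)   # (200, 2)
```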

11.
Facial expressions are nonverbal cues that play an important role in interpersonal communication, and many human-computer interaction methods analyze them in static images. In this work, Gabor filters are first used to extract features from static images; after dimensionality reduction by PCA, a Bayesian classifier whose parameters are obtained by Bayesian estimation performs the classification, thereby achieving expression recognition.
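The Gabor → PCA → Bayesian-classifier pipeline maps naturally onto a scikit-learn pipeline. The Gabor responses below are random placeholders, since the paper's filter-bank settings are not given, and `GaussianNB` plays the role of the parameter-estimated Bayesian classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
gabor_feats = rng.random((150, 1024))      # placeholder Gabor-filter responses
labels = rng.integers(0, 6, size=150)      # placeholder labels for six expressions

# PCA reduces the Gabor feature dimension; a Gaussian Bayes classifier
# (parameters estimated from the training data) does the final classification.
model = make_pipeline(PCA(n_components=50), GaussianNB())
model.fit(gabor_feats, labels)
print(model.score(gabor_feats, labels))
```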

12.
13.
Academic emotions influence and regulate learners' attention, memory, thinking, and other cognitive activities, and automatic emotion recognition underpins affective interaction and instructional decision-making in smart learning environments. Current emotion recognition research focuses mainly on discrete emotions, which are non-continuous along the time axis and therefore cannot precisely characterize how students' academic emotions evolve. To address this, a dimensional affect data set of secondary school students in authentic online learning situations was built via crowdsourcing, and a deep learning model for continuous dimensional emotion prediction was designed. In the experiment, learning materials that trigger academic emotions were selected according to each student's learning style, and 32 participants were recruited for self-paced online learning; their facial images were captured in real time, yielding 157 videos of students' academic emotions. Each video was annotated on the two affect dimensions of arousal and valence, producing a dimensional database of 2,178 student facial expression images. A dimensional affect model based on a ConvLSTM network was built and evaluated on this database, achieving a mean concordance correlation coefficient (CCC) of 0.581; on the public Aff-Wild data set it achieved a mean CCC of 0.222. The experiments show that the proposed dimensional affect model improves the CCC by 7.6% to 43.0% over dimensional emotion recognition baselines on the Aff-Wild data set.
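The evaluation metric used above, the concordance correlation coefficient, has a standard closed form that is easy to compute for a pair of prediction/ground-truth sequences:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient between two 1-D arrays
    (e.g. predicted vs. annotated valence over a video)."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```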

14.
Affective computing is important in human-computer interaction. In interactive cloud computing over big data in particular, affective modeling and analysis face extremely high complexity and uncertainty in the emotional state, along with reduced computational accuracy. In this paper, an approach for evaluating affective experience in an interactive environment is presented. Taking a person-independent model and cooperative interaction as its core factors, it uses facial expression features and states as affective indicators to perform synergetic dependence evaluation and to construct a participant's affective experience distribution map in the interactive big-data space. The resulting model can potentially analyze the consistency between a participant's inner emotional state and external facial expressions, even in the presence of hidden emotions, within interactive computing. Experiments were conducted to evaluate the soundness of this affective experience modeling approach. Satisfactory results on real-time camera input demonstrate availability and validity comparable to the best results achieved from facial expressions alone on real-world big data. The findings suggest that the person-independent model with cooperative interaction and synergetic dependence evaluation can construct a participant's affective experience distribution and accurately analyze affective experience consistency in real time from interactive big data. The affective experience distribution serves both as an analysis model and as a basis for affective computing, from which facial expression recognition and synthesis in interactive cloud computing can be further understood.

15.
Appraisal theories in psychology study facial expressions in order to deduce information about the underlying emotion elicitation processes. Scherer's component process model provides predictions about the particular facial muscle deformations that arise as reactions to cognitive appraisal stimuli during emotion episodes. In the current work, MPEG-4 facial animation parameters are used to evaluate these theoretical predictions for intermediate and final expressions of a given emotion episode, manipulating parameters such as the intensity and temporal evolution of the synthesized facial expressions. For emotion episodes originating from identical stimuli, varying the cognitive appraisals of the stimuli and mapping them to different expression intensities and timings generates various behavioral patterns, so that different agent character profiles can be defined. The results of the synthesis process are then applied to Embodied Conversational Agents (ECAs), aiming to render their interaction with humans, or with other ECAs, more affective.

16.
This paper describes a new method for generating facial animation in which facial expression and shape can be changed simultaneously in real time. A 2D parameter space independent of facial shape is defined, onto which facial expressions are superimposed so that they can be applied to various facial shapes. The facial model is transformed by bilinear interpolation, which enables rapid changes in facial expression with metamorphosis. The practical efficiency of this method has been demonstrated by a real-time animation system used in live theater.
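A minimal reading of the bilinear interpolation step, assuming four key meshes at the corners of the 2D parameter space; the paper's actual parametrization may differ.

```python
import numpy as np

def bilinear_face(V, s, t):
    """Blend four corner meshes with bilinear weights.
    V: (4, n_vertices, 3) array of key meshes at the parameter-space corners;
    s, t in [0, 1] control the two parameter-space axes independently."""
    w = np.array([(1 - s) * (1 - t),   # corner (0, 0)
                  s * (1 - t),         # corner (1, 0)
                  (1 - s) * t,         # corner (0, 1)
                  s * t])              # corner (1, 1)
    return np.tensordot(w, V, axes=1)  # -> (n_vertices, 3) blended mesh
```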

17.
Manual creation of facial expressions needs to be supported by a set of easy-to-apply rules relating adjectives to combinations of facial design elements. This article analyses viewers' cognition of artificial facial expressions in an objective, scientific way. We chose four adjectives – 'satisfied', 'sarcastic', 'disdainful', and 'nervous' – as the experimental subjects. The key manipulable factors of facial expression (eyebrows, eyes, pupils, mouth, and head rotation) were permuted and combined to create 81 stimuli of different facial expressions on a 3D face model, which were used in a survey. We then used Quantification Theory Type I to find the combinations that participants most agreed represented each adjective. The conclusions of this research are that: (1) the facial features used in creating artificial characters' expressions differ from those used in recognising real humans' expressions; (2) surveys and statistics can scientifically analyse viewers' cognition of facial expressions as the form changes; and (3) the results can improve designers' efficiency when working with subtler facial expressions.
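Quantification Theory Type I is, in essence, linear regression on dummy-coded categorical factors. This hypothetical sketch (invented factor levels and ratings, far fewer stimuli than the paper's 81) only shows the mechanics.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder

# Hypothetical stimuli: levels of (eyebrows, eyes, pupils, mouth, head rotation).
factors = np.array([["raised", "wide",   "large", "smile", "up"],
                    ["flat",   "narrow", "small", "open",  "down"],
                    ["raised", "narrow", "large", "open",  "up"]])
ratings = np.array([4.2, 1.8, 3.1])   # hypothetical viewer scores for one adjective

X = OneHotEncoder(sparse_output=False).fit_transform(factors)
model = LinearRegression().fit(X, ratings)
# The fitted coefficients rank each design element's contribution to the adjective.
print(model.coef_)
```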

18.
Automatic perception of human affective behaviour from facial expressions, together with recognition of intentions and social goals from dialogue context, would greatly enhance natural human-robot interaction. This research concentrates on intelligent neural-network-based facial emotion recognition and Latent Semantic Analysis based topic detection for a humanoid robot. The work first incorporates the Facial Action Coding System, which describes the physical cues and anatomical knowledge of facial behaviour, to detect the neutral state and six basic emotions from real-time posed facial expressions. Feedforward neural networks implement the upper and lower facial Action Unit (AU) analysers, recognising six upper and eleven lower facial actions, including Inner and Outer Brow Raiser, Lid Tightener, Lip Corner Puller, Upper Lip Raiser, Nose Wrinkler, and Mouth Stretch. A neural-network-based facial emotion recogniser then takes the derived 17 Action Units as inputs to decode the neutral state and six basic emotions from facial expressions. Moreover, so that the robot can respond appropriately to the detected affective facial behaviours, Latent Semantic Analysis is used to focus on the underlying semantic structure of the data and go beyond surface linguistic forms to identify the topics embedded in users' conversations. The overall development is integrated with a modern humanoid robot platform under its Linux C++ SDKs. The work presented here shows great potential for developing personalised intelligent agents/robots with emotional and social intelligence.
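The LSA topic-detection stage can be sketched with TF-IDF followed by truncated SVD; the utterances are invented, and this is a generic LSA pipeline rather than the robot's actual one.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

utterances = ["could you bring me a drink",
              "what movies are on tonight",
              "I feel a bit tired today"]       # hypothetical dialogue turns

# TF-IDF + truncated SVD is the standard LSA recipe: rows that land near each
# other in `topic_space` share latent topics the robot can respond to.
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
topic_space = lsa.fit_transform(utterances)
print(topic_space)
```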

19.
Facial expressions have always attracted considerable attention as a form of nonverbal communication. In visual applications such as movies, games, and animations, people tend to be more interested in exaggerated expressions than regular ones, since exaggeration delivers more vivid emotion. In this paper, we propose an automatic method for exaggerating facial expressions from motion-captured data according to personality type. The exaggerated facial expressions are generated by an exaggeration mapping (EM) that transforms facial motions into exaggerated motions. As individuals do not all have identical personalities, a conceptual mapping from an individual's personality type to the exaggeration of facial expressions needs to be considered; the Myers-Briggs Type Indicator, a popular method for classifying personality types, is employed to define the personality-type-based EM. We have experimentally validated the EM and the resulting facial expression simulations.
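The abstract does not define the exaggeration mapping precisely; a common minimal form, assumed here, scales each frame's displacement from a neutral pose, with the gain chosen per personality type.

```python
import numpy as np

def exaggerate(frames, neutral, gain=1.5):
    """Scale each frame's displacement from the neutral pose.
    frames: (n_frames, n_markers, 3) motion-capture data;
    neutral: (n_markers, 3) neutral pose;
    gain: per-personality-type factor (the linear form is an assumption)."""
    return neutral + gain * (frames - neutral)
```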

20.
The human face plays a crucial role in interpersonal communication; if we can synthesize vividly expressive faces in cyberspace, interaction between computers and humans becomes more natural and friendly. In this paper, we present a simple methodology for mimicking a realistic face by manipulating emotional states. Compared with traditional methods of facial expression synthesis, our approach offers three advantages at once: (1) it generates facial expressions under quantitative control of emotional states; (2) it renders shape and illumination changes on the face simultaneously; and (3) it synthesizes an expressive face for any new person from only a neutral face image. We discuss the implementation and demonstrate the effects of our approach with a series of experiments, such as predicting unseen expressions for an unfamiliar person, simulating one person's facial expressions in someone else's style, and extracting pure emotional expressions from mixtures.
