Similar Documents
20 similar documents found (search time: 31 ms)
1.
Given a person's neutral face, we can predict his or her unseen expression using machine learning techniques for image processing. Unlike prior expression-cloning or image-analogy approaches, we hallucinate the person's plausible facial expression with the help of a large facial expression database. In the first step, regularization-network-based nonlinear manifold learning is used to obtain a smooth estimate of the unseen facial expression, which is better than the reconstruction results of PCA. In the second step, a Markov network is adopted to learn the relationship between low-level local facial features in the residual neutral and expressional face image patches of the training set; belief propagation is then employed to infer the expressional residual face image for that person. Integrating the two steps yields the final result. Experimental results show that the hallucinated facial expression is not only expressive but also close to the ground truth.
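To make the first step concrete, here is a minimal sketch of a regularization network, approximated by RBF kernel ridge regression (a common realization of regularization networks), mapping neutral-face features to expression features; the feature dimensions, training pairs, and parameter values are hypothetical stand-ins, not the paper's:

```python
# Sketch: RBF regularization network for smooth expression estimation.
# All data here is synthetic; dimensions and hyperparameters are assumptions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_people, dim = 200, 64                       # hypothetical database size
X_neutral = rng.normal(size=(n_people, dim))  # neutral-face feature vectors
Y_smile = X_neutral + 0.3 * rng.normal(size=(n_people, dim))  # paired expression features

# RBF kernel ridge regression; the ridge term alpha controls smoothness
net = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
net.fit(X_neutral, Y_smile)

x_new = rng.normal(size=(1, dim))             # unseen person's neutral face
y_hat = net.predict(x_new)                    # smooth estimate of the expression
```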

2.
In this paper we present our embodied conversational agent (ECA), capable of displaying a vast set of facial expressions to communicate its emotional states as well as its social relations. Our agent can superpose and mask its emotional states, and can fake or inhibit them; we define complex facial expressions as the expressions arising from such displays. We describe a model based on fuzzy methods for generating complex facial expressions of emotions. It uses fuzzy similarity to compute the degree of resemblance between facial expressions of the ECA. We also present an algorithm, based on the politeness theory of Brown and Levinson (1987), that adapts the facial behaviour of the agent to its social relationship with the interactants and outputs complex facial expressions that are socially adequate.
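As an illustration of the fuzzy-similarity idea, the sketch below computes one standard fuzzy similarity measure (the min/max ratio) between two expressions encoded as vectors of normalized parameter intensities; the encoding and the example values are assumptions for illustration, not the paper's model:

```python
# Sketch: fuzzy similarity between two expression parameter vectors in [0, 1].
import numpy as np

def fuzzy_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Degree of resemblance in [0, 1]; 1 means identical expressions."""
    return float(np.minimum(a, b).sum() / np.maximum(a, b).sum())

anger  = np.array([0.9, 0.1, 0.7, 0.0])   # e.g. brow lower, lip corner pull, ...
masked = np.array([0.6, 0.4, 0.7, 0.2])   # anger partially masked by a smile
print(fuzzy_similarity(anger, masked))    # ~0.64: a blended, "complex" display
```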

3.
Face alive icon     
In this paper, we propose a methodology for synthesizing facial expressions from photographs for devices with limited processing power, network bandwidth and display area, referred to as the "LLL" environment. The facial images are reduced to small-sized face alive icons (FAIs). Each expression is decomposed into expression-unrelated facial features and expression-related expressional features, so common features can be identified and reused across expressions using a discrete model constructed from statistical analysis of a training dataset. Semantic synthesis rules are introduced to reveal the inner relations of expressions. As verified by an experimental prototype system and a usability study, the approach produces acceptable facial expression images using far less computing, network and storage resource than traditional approaches.

4.
Facial expressional image synthesis controlled by emotional parameters

5.
The human face forms a canvas on which various non-verbal expressions are communicated. Together with verbal communication, these expressional cues convey a person's actual intent; in many cases, however, the outward expression a person presents differs from the emotion genuinely felt. Even when people try to hide their emotions, the internally felt emotions may still surface as facial micro expressions. These micro expressions cannot be masked and reflect the actual emotional state of the person under study. They are on display for only a tiny time frame, making them difficult for a typical observer to spot and recognize, which creates a role for machine learning: machines can be trained to look for micro expressions and categorize them once they are on display. The study's primary purpose is to spot and correctly classify these micro expressions. This research improves recognition accuracy with a novel learning technique that not only captures and recognizes multimodal facial micro expressions but also aligns, crops, and superimposes the feature frames to produce highly accurate and consistent results. A modified variant of the deep convolutional neural network architecture, combined with the swarm-based optimization technique of the Artificial Bee Colony algorithm, is proposed; it achieves an accuracy of more than 85% in identifying and classifying micro expressions, in contrast to other algorithms with relatively lower accuracy. One key aspect of processing expressions from video or live feeds is aligning the frames homographically and identifying the very brief bursts of micro expressions, which significantly increases the accuracy of the outcomes. The proposed swarm-based technique precisely aligns and crops the subsequent frames, resulting in much superior detection rates for the micro expressions on display.
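A hedged sketch of the homographic frame-alignment step using OpenCV is shown below; the synthetic frames, feature matcher, and thresholds are illustrative assumptions rather than the authors' exact pipeline:

```python
# Sketch: register one frame onto another via a RANSAC-estimated homography.
import cv2
import numpy as np

# Synthetic stand-ins for two consecutive face frames: 'cur' is 'ref'
# shifted by an integer offset, mimicking small head motion between frames.
rng = np.random.default_rng(0)
ref = (rng.random((240, 320)) * 255).astype(np.uint8)
M = np.float32([[1, 0, 4], [0, 1, 2]])            # 4 px right, 2 px down
cur = cv2.warpAffine(ref, M, (320, 240))

orb = cv2.ORB_create(1000)                        # keypoints + binary descriptors
k1, d1 = orb.detectAndCompute(ref, None)
k2, d2 = orb.detectAndCompute(cur, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

aligned = cv2.warpPerspective(cur, H, (320, 240))  # cur registered onto ref
print(np.round(H, 2))                              # ~identity plus the shift
```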

6.
Automatic analysis of human facial expression is a challenging problem with many applications. Most existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Rather than presenting another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing the facial muscle actions that produce expressions. Virtually all existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle the temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into the facial expressions pictured and recognition of the temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in combination in the input face-profile video. A recognition rate of 87% is achieved.
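The following is a generic particle-filter sketch for tracking a single facial point, in the spirit of the tracker described above; the random-walk dynamics, noise levels, and Gaussian likelihood are illustrative assumptions, not the paper's models:

```python
# Sketch: particle filter (predict / weight / resample) for one 2D point.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                    # number of particles
particles = rng.normal([120.0, 80.0], 5.0, size=(N, 2))  # (x, y) hypotheses
weights = np.full(N, 1.0 / N)

def likelihood(pts, observed):
    """Higher weight for particles near the observed feature position."""
    d2 = ((pts - observed) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * 4.0 ** 2))

for observed in [np.array([122.0, 81.5]), np.array([124.0, 83.0])]:
    particles += rng.normal(0.0, 2.0, size=particles.shape)  # predict (random walk)
    weights *= likelihood(particles, observed)               # measurement update
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)                   # resample
    particles, weights = particles[idx], np.full(N, 1.0 / N)
    print("estimate:", particles.mean(axis=0))               # tracked position
```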

7.
A method of facial expression recognition based on Gabor and NMF
Facial expression recognition is a challenging problem in the field of intelligent human-computer interaction. An algorithm based on the Gabor wavelet transform and non-negative matrix factorization (G-NMF) is presented. The main process comprises image preprocessing, feature extraction and classification. First, the face region containing emotional information is obtained and normalized. Then, expressional features are extracted by the Gabor wavelet transform, and the high-dimensional data are reduced by non-negative matrix factorization (NMF). Finally, a two-layer classifier (TLC) is designed for expression recognition. Experiments conducted on the JAFFE facial expression database show that the proposed method performs well.
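A minimal sketch of a G-NMF-style feature pipeline is given below: Gabor filter responses are pooled into non-negative feature vectors and NMF reduces their dimensionality; the filter parameters, image sizes, pooling, and stand-in data are assumptions, and the final comment merely marks where the paper's two-layer classifier would attach:

```python
# Sketch: Gabor filter bank features followed by NMF dimensionality reduction.
import cv2
import numpy as np
from sklearn.decomposition import NMF

def gabor_features(img, wavelengths=(4, 8), orientations=4):
    """Mean response magnitudes of a small Gabor filter bank (non-negative)."""
    feats = []
    for lambd in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0,
                                      ktype=cv2.CV_32F)
            resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
            feats.append(np.abs(resp).mean())
    return np.array(feats)

faces = [np.random.rand(64, 64) for _ in range(50)]        # stand-in face crops
X = np.stack([gabor_features(f) for f in faces])           # 50 x 8 feature matrix
W = NMF(n_components=4, init="nndsvda", max_iter=500).fit_transform(X)
# W (non-negative, low-dimensional) would feed the two-layer classifier.
```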

8.
In this paper, we present an automatic and efficient approach to the capture of dense facial motion parameters, extending our previous work on 3D reconstruction from mirror-reflected multiview video. To narrow the search space and rapidly generate lists of candidate 3D positions, we apply mirrored-epipolar bands. For automatic tracking, we exploit the spatial proximity of facial surfaces and temporal coherence to find the best trajectories and to correct missing and false tracks. More than 300 markers on a subject's face are tracked from video at 9.2 frames per second (fps) on a regular PC. The estimated 3D facial motion trajectories have been applied to our facial animation system and can be used for facial motion analysis.

9.
In this article we discuss aspects of designing facial expressions for virtual humans (VHs) with a specific culture. First we explore the notion of culture and its relevance for applications with a VH. We then give a general scheme for designing emotional facial expressions and identify the stages at which a human is involved, either as a real person in some specific role or as a VH displaying facial expressions. We discuss how the display and the emotional meaning of facial expressions may be measured objectively, and how the culture of the displayers and of the judges may influence the analysis of human facial expressions and the evaluation of synthesized ones. We review psychological experiments on cross-cultural perception of emotional facial expressions. By identifying the culturally critical issues of data collection and interpretation with both real humans and VHs, we aim to provide a methodological reference and inspiration for further research.

10.
This paper presents work towards recognizing the facial expressions used in sign language communication. Facial features are tracked to effectively capture temporal visual cues on the signer's face during signing. Face shape constraints are used for robust tracking within a Bayesian framework. The constraints are specified through a set of face shape subspaces learned by probabilistic principal component analysis (PPCA), and an update scheme adapts to persons with different face shapes. Two tracking algorithms are presented, which differ in how the face shape constraints are enforced. The results show that the proposed trackers can track facial features through large head motions, substantial facial deformations, and temporary occlusions of the face by the hand. The tracked results are input to a recognition system comprising hidden Markov models (HMMs) and a support vector machine (SVM) to recognize six isolated facial expressions representing grammatical markers in American Sign Language (ASL). A tracking error of less than four pixels (on 640×480 video) was obtained with probability greater than 90%; in comparison, the KLT tracker achieved this accuracy with 76% probability. Recognition accuracy for the ASL facial expressions was 91.76% in person-dependent tests and 87.71% in person-independent tests.
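To illustrate the shape-constraint idea, the sketch below learns a low-dimensional face shape subspace (plain PCA as a stand-in for PPCA) and projects a noisy tracked shape onto it; the shape dimensionality and all data are synthetic assumptions:

```python
# Sketch: constrain a tracked face shape to a learned shape subspace.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(1, 2 * 20))                # mean shape: 20 (x, y) points
train = base + 0.1 * rng.normal(size=(300, 40))    # training face shapes

pca = PCA(n_components=8).fit(train)               # face shape subspace

tracked = base + 0.5 * rng.normal(size=(1, 40))    # noisy tracker output
constrained = pca.inverse_transform(pca.transform(tracked))
# 'constrained' is the nearest shape within the subspace: implausible
# deformations caused by tracking noise or occlusion are suppressed.
```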

11.
This paper presents a hierarchical, multi-state, pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of the human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to handle the shape variations of facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are estimated dynamically using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method tracks facial features accurately, robustly, and in real time under different facial expressions and face poses.

12.
A human face not only plays a role in identifying an individual but also communicates useful information about the person's emotional state at a particular time. No wonder automatic facial expression recognition has become an area of great interest within the computer science, psychology, medicine, and human-computer interaction research communities. Various feature extraction techniques, from statistical to geometrical, have been used to recognize expressions from static images as well as real-time video. In this paper, we present a method for automatic recognition of facial expressions from face images by providing discrete wavelet transform features to a bank of seven parallel support vector machines (SVMs). Each SVM is trained to recognize a particular facial expression, so that it is most sensitive to that expression. Multi-class classification is achieved by combining the binary SVMs in a one-against-all scheme, with the outputs of all SVMs combined using a maximum function. The classification efficiency is tested on static images from the publicly available Japanese Female Facial Expression database. Experiments using the proposed method demonstrate promising results.
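A hedged sketch of this classifier design follows: one-level 2D DWT features feed seven one-against-all binary SVMs, and the expression with the maximum decision score wins; the Haar wavelet choice, image size, label names, and random stand-in data are assumptions:

```python
# Sketch: DWT features -> bank of seven one-against-all SVMs -> max score.
import numpy as np
import pywt
from sklearn.svm import SVC

EXPRESSIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

def dwt_features(img):
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")      # one-level 2D Haar DWT
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
X = np.stack([dwt_features(rng.random((32, 32))) for _ in range(140)])
y = np.repeat(np.arange(7), 20)                    # 20 stand-in samples per class

# One binary SVM per expression, trained one-against-all
svms = [SVC(kernel="rbf").fit(X, (y == k).astype(int)) for k in range(7)]

def classify(img):
    f = dwt_features(img).reshape(1, -1)
    scores = [svm.decision_function(f)[0] for svm in svms]  # one score per SVM
    return EXPRESSIONS[int(np.argmax(scores))]              # maximum function

print(classify(rng.random((32, 32))))
```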

13.
This paper describes a method that combines the MPEG-4 standard with a facial muscle model to give a normalized, quantitative description of human facial expressions. By normalizing the facial muscle motion parameters, the method achieves a quantitative description of facial model expressions and provides a concise route to producing facial animation and building facial animation expression libraries.

14.
Computing environments are moving towards human-centered rather than computer-centered designs, and humans tend to communicate a wealth of information through affective states and expressions. Traditional human-computer interaction (HCI) systems ignore the bulk of the information communicated through those affective states and cater only for the user's intentional input. To evaluate and benchmark different facial expression analysis algorithms, standardized databases are needed to enable meaningful comparison; without comparative tests on such databases it is difficult to establish the relative strengths and weaknesses of different facial expression recognition algorithms. In this article we present a novel video database of children's spontaneous facial expressions (LIRIS-CSE). The proposed database contains the six basic spontaneous facial expressions shown by 12 ethnically diverse children between the ages of 6 and 12 years, with a mean age of 7.3 years. To the best of our knowledge, this database is the first of its kind to record spontaneous facial expressions of children: the few previous databases of children's expressions all show posed or exaggerated expressions, which differ from spontaneous, natural ones. The database will thus be a milestone for human behavior researchers and an excellent resource for the vision community for benchmarking and comparing results. We also propose a framework for automatic expression recognition based on a convolutional neural network (CNN) architecture with a transfer learning approach; it achieved an average classification accuracy of 75% on the proposed LIRIS-CSE database.
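A minimal transfer-learning sketch in PyTorch is shown below; since the paper's exact backbone and hyperparameters are not stated here, the ResNet-18 backbone, frozen feature layers, learning rate, and dummy batch are assumptions:

```python
# Sketch: CNN transfer learning with an ImageNet-pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained backbone (assumption)
for p in model.parameters():
    p.requires_grad = False                        # freeze transferred features
model.fc = nn.Linear(model.fc.in_features, 6)      # six basic expressions

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of face crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 6, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```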

15.
We present an algorithm for generating facial expressions for a continuum of pure and mixed emotions of varying intensity. Based on the observation that, in natural interaction among humans, shades of emotion are encountered much more frequently than expressions of basic emotions, a method for generating more than Ekman's six basic emotions (joy, anger, fear, sadness, disgust and surprise) is required. To this end, we have adapted the algorithm proposed by Tsapatsoulis et al. [1] to a physics-based facial animation system and a single, integrated emotion model. The physics-based facial animation system was combined with an equally flexible and expressive text-to-speech synthesis system, based on the same emotion model, to form a talking head capable of expressing non-basic emotions of varying intensities. With a variety of life-like intermediate facial expressions captured as snapshots from the system, we demonstrate the appropriateness of our approach.

16.
17.
Based on 3D scan data, this paper extracts the 3D motion of facial feature points of a specific person and converts it into FAP training data. Independent component analysis (ICA) is then applied to the acquired data to obtain general facial animation modes, and the ICA parameter space is finally used to generate facial expressions for an arbitrary specific person. Experimental results show that ICA yields a more compact and accurate general representation of facial animation than PCA: with the same number of components, the reconstruction error of ICA is smaller than that of PCA. The expression parameters capture the independence and correlation of different parts of the animated face, improving the realism of facial animation across different expressions.
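The comparison procedure can be sketched numerically as follows; since the original FAP training data are not available here, synthetic mixed sources stand in for the motion parameters, and the code only demonstrates how equal-component reconstruction errors would be compared (the smaller ICA error is the paper's finding on its data, not guaranteed by this toy setup):

```python
# Sketch: compare PCA and ICA reconstruction error at equal component counts.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
S = rng.laplace(size=(500, 5))            # independent non-Gaussian sources
A = rng.normal(size=(5, 40))              # mixing into 40-D motion parameters
X = S @ A                                 # stand-in for FAP trajectories

for Model in (PCA, FastICA):
    m = Model(n_components=5)             # same number of components for both
    X_hat = m.inverse_transform(m.fit_transform(X))
    print(Model.__name__, "reconstruction MSE:", np.mean((X - X_hat) ** 2))
```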

18.
Affective computing is important in human-computer interaction. In interactive cloud computing over big data in particular, affective modeling and analysis suffer from extremely high complexity and uncertainty in emotional status, with a corresponding loss of computational accuracy. In this paper, an approach for evaluating affective experience in an interactive environment is presented. Taking a person-independent model and cooperative interaction as its core, it uses facial expression features and states as affective indicators for a synergetic dependence evaluation and constructs a participant's affective experience distribution map in the interactive big-data space. The resulting model can potentially analyze the consistency between a participant's inner emotional status and external facial expressions, even in the presence of hidden emotions during interactive computing. Experiments were conducted to evaluate the rationality of the proposed affective experience modeling approach; satisfactory results on a real-time camera demonstrate availability and validity comparable to the best results achieved from facial expressions alone on real big data. The results suggest that the person-independent model, with cooperative interaction and synergetic dependence evaluation, can construct a participant's affective experience distribution and accurately perform real-time analysis of affective experience consistency on interactive big data. The affective experience distribution serves both as an analysis model and as a basis for affective computing, from which facial expression recognition and synthesis in interactive cloud computing can be further developed.

19.
Advanced Robotics, 2013, 27(4): 341-355
The human face serves a variety of communicative functions in social interaction. The face mediates person identification, the perception of emotional expressions, and lipreading; perceiving the direction of social attention and facial attractiveness also affects interpersonal behaviour. This paper reviews these different uses of facial information and considers their computational demands. The possible link between the perception of faces and deeper levels of social understanding is emphasized through a discussion of developmental deficits affecting social cognition. Finally, the implications for the development of communication between robots and humans are discussed. It is concluded that it could be useful both for robots to understand human faces and for them to display human-like facial gestures themselves.

20.
Advanced Robotics, 2013, 27(6): 585-604
We are attempting to introduce a 3D, realistic, human-like animated face robot into human-robot communication. The face robot can recognize human facial expressions as well as produce realistic facial expressions in real time. For the animated face robot to communicate interactively, we propose a new concept of an 'active human interface', and we investigate the performance of real-time recognition of facial expressions by neural networks (NNs) and the expressiveness of facial messages on the face robot. We find that NN recognition of facial expressions and the face robot's generation of facial expressions both perform at almost the same level as in humans. We also construct an artificial emotion model able to generate the six basic emotions in accordance with the recognition of a given facial expression and the situational context. This implies a high potential for the animated face robot to engage in interactive communication with humans once these three component technologies are integrated into the face robot.
