Similar References
20 similar references found (search time: 46 ms).
1.
This paper presents a novel facial expression recognition scheme based on extension theory. The facial region is detected and segmented by using feature invariant approaches. Accurate positions of the lips are then extracted as the features of a face. Next, based on the extension theory, basic facial expressions are classified by evaluating the correlation functions among various lip types and positions of the corners of the mouth. Additionally, the proposed algorithm is implemented using the XScale PXA270 embedded system in order to achieve real-time recognition for various facial expressions. Experimental results demonstrate that the proposed scheme can recognize facial expressions precisely and efficiently.
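For illustration, a minimal Python sketch of the elementary correlation (dependent) function from extension theory that this kind of scheme evaluates; the lip-feature value and the interval bounds are hypothetical placeholders, not values from the paper:

    def extension_distance(x, lo, hi):
        # Extension distance rho(x, [lo, hi]) of a point from an interval.
        return abs(x - (lo + hi) / 2.0) - (hi - lo) / 2.0

    def correlation(x, classical, joint):
        # Dependent function K(x); K(x) > 0 suggests x belongs to the class
        # interval, K(x) < 0 that it does not. Boundary handling is simplified.
        rho0 = extension_distance(x, *classical)
        rho1 = extension_distance(x, *joint)
        denom = rho1 - rho0
        return rho0 / denom if denom != 0 else -rho0

    # Hypothetical example: score a normalized mouth-corner displacement of 0.42
    # against a "happiness" interval [0.3, 0.6] within the feasible range [0, 1].
    print(correlation(0.42, (0.3, 0.6), (0.0, 1.0)))  # positive, so it matches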

2.
《Advanced Robotics》2013,27(6):585-604
We are attempting to introduce a 3D, realistic human-like animated face robot to human-robot communication. The face robot can recognize human facial expressions as well as produce realistic facial expressions in real time. For the animated face robot to communicate interactively, we propose a new concept of 'active human interface', and we investigate the performance of real-time recognition of facial expressions by neural networks (NN) and the expressionability of facial messages on the face robot. We find that the NN recognition of facial expressions and the face robot's performance in generating facial expressions are at almost the same level as those of humans. We also construct an artificial emotion model able to generate the six basic emotions in accordance with the recognition of a given facial expression and the situational context. This implies a high potential for the animated face robot to undertake interactive communication with humans, once these three component technologies are integrated into the face robot.

3.
4.
A Recognition System for Blended Facial Expressions
金辉  高文 《计算机学报》2000,23(6):602-608
Based on psychologists' studies of facial expression and on previous work, and building on a temporal analysis of dynamic expression image sequences, a recognition system for blended expressions is proposed. The face is divided into separate expression feature regions, the motion features of each region are extracted and assembled into feature sequences in temporal order, and, by analyzing the meaning and the amount of the expression information contained in the different feature regions, the system recognizes complex blended-expression image sequences of varying temporal length.

5.
In this paper, we present a fully automatic, real-time approach for person-independent recognition of facial expressions from dynamic sequences of 3D face scans. In the proposed solution, a set of 3D facial landmarks is first detected automatically; then the local characteristics of the face in the neighborhoods of the landmarks, together with their mutual distances, are used to model the facial deformation. By training two hidden Markov models for each facial expression to be recognized and combining them to form a multiclass classifier, an average recognition rate of 79.4 % has been obtained on the 3D dynamic sequences showing the six prototypical facial expressions of the Binghamton University 4D Facial Expression database. Comparisons with competing approaches on the same database show that our solution obtains effective results, with the advantage of being capable of processing facial sequences in real time.
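As a rough sketch of the two-HMM-per-expression scheme described above (using the third-party hmmlearn package): one model per expression is trained on each of the two dynamic cues, and the pair is fused by summing log-likelihoods. The equal weighting and the Gaussian emissions are assumptions, not the paper's exact configuration:

    import numpy as np
    from hmmlearn import hmm

    def train_pair(local_seqs, dist_seqs, n_states=4):
        # One HMM on landmark-neighborhood features, one on mutual distances.
        models = []
        for seqs in (local_seqs, dist_seqs):
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            m.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
            models.append(m)
        return models

    def classify(local_seq, dist_seq, pairs):
        # pairs: {expression_name: (local_hmm, dist_hmm)}; the highest summed
        # log-likelihood wins, yielding a multiclass decision.
        scores = {name: m1.score(local_seq) + m2.score(dist_seq)
                  for name, (m1, m2) in pairs.items()}
        return max(scores, key=scores.get)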

6.
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expression using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks using the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of the effectiveness of their influences on bimodal human emotion perception.
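The central modeling step, approximating a facial deformation as a linear combination of MUs, reduces to a single matrix-vector product; a minimal sketch assuming the MUs are stored as rows of a matrix:

    import numpy as np

    def deform(neutral, mus, mups):
        # s = s0 + sum_i c_i * MU_i : neutral shape plus weighted motion units
        # neutral: (3N,) vertex coordinates, mus: (K, 3N), mups: (K,) MUPs
        return neutral + mups @ mus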

7.
Facial expressions are one of the most powerful, natural, and immediate means for human beings to communicate their emotions and intentions. Recognition of facial expressions has many applications, including human-computer interaction, cognitive science, human emotion analysis, and personality development. In this paper, we propose a new method for the recognition of facial expressions from a single image frame that uses a combination of appearance and geometric features with support vector machine classification. In general, appearance features for the recognition of facial expressions are computed by dividing the face region into a regular grid (holistic representation). In this paper, however, we extract region-specific appearance features by dividing the whole face region into domain-specific local regions. Geometric features are also extracted from the corresponding domain-specific regions. In addition, important local regions are determined by an incremental search approach (see the sketch below), which reduces the feature dimension and improves recognition accuracy. The results of facial expression recognition using features from domain-specific regions are also compared with the results obtained using the holistic representation. The performance of the proposed facial expression recognition system has been validated on the publicly available extended Cohn-Kanade (CK+) facial expression dataset.
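A hedged sketch of that incremental search, read here as greedy forward selection on cross-validated SVM accuracy; the region dictionary, the 5-fold protocol, and the stopping rule are illustrative assumptions:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def select_regions(region_feats, y):
        # region_feats: {region_name: (n_samples, d_r) feature block}
        chosen, best = [], 0.0
        remaining = set(region_feats)
        while remaining:
            # Try adding each remaining region and keep the best performer.
            scored = []
            for r in remaining:
                X = np.hstack([region_feats[k] for k in chosen + [r]])
                scored.append((cross_val_score(SVC(), X, y, cv=5).mean(), r))
            acc, r = max(scored)
            if acc <= best:        # no region improves accuracy: stop
                break
            best = acc
            chosen.append(r)
            remaining.remove(r)
        return chosen, best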

8.
9.
Objective: Current 2D expression recognition methods achieve low recognition rates for certain easily confused expressions and are sensitive to variations in head pose and illumination. Using 3D facial landmark data captured with the Kinect RGB-D camera, this paper proposes a real-time expression recognition method that combines 2D pixel features with 3D landmark features. Method: First, three classic descriptors, LBP (local binary patterns), Gabor filters, and HOG (histograms of oriented gradients), are used to extract 2D pixel features of the expression. Because 2D pixel features have limited power to describe facial expressions, three kinds of 3D features (the angles, distances, and normal vectors among facial landmarks) are further extracted to describe the changes of different expressions in finer detail. To improve the recognition of highly confusable expressions and increase robustness, three groups of random forest models are trained on the 2D pixel features and another three on the 3D landmark features, and the final expression class is obtained by a weighted combination of the outputs of the six random forest classifiers. Results: The algorithm is evaluated on nine expressions from the 3D expression dataset Face3D. The results show that combining 2D pixel features with 3D landmark features benefits recognition: the average recognition rate reaches 84.7%, which is 4.5% higher than the best method proposed in recent years, and 3.0% and 5.8% higher than using the fused 2D or 3D features alone, respectively. The recognition rates for highly confusable expressions such as anger, sadness, and fear all exceed 80%, and the method runs in real time at 10-15 frames/s. Conclusion: By combining the 2D pixel features and 3D landmark features of expression images, the method improves the ability to describe facial expression changes; for highly confusable expression classes, weighted averaging of the outputs of multiple random forest classifiers effectively reduces interference between confusable expressions and improves robustness. The experimental results show that, compared with ordinary 2D or 3D features alone, the method is not only superior for expression recognition but also preserves real-time performance.
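A minimal sketch of the decision-fusion step (six random forests, three on 2D pixel features and three on 3D landmark features, combined by a weighted average of class probabilities); the uniform weights are a placeholder for whatever weighting the paper tunes:

    import numpy as np
    # forests: six fitted sklearn RandomForestClassifier objects trained on
    # LBP, Gabor, HOG, angle, distance, and normal-vector features respectively

    def fuse_predict(forests, feature_blocks, weights=None):
        # feature_blocks: matching list of (n_samples, d_i) test arrays
        if weights is None:
            weights = np.ones(len(forests)) / len(forests)
        proba = sum(w * f.predict_proba(X)
                    for w, f, X in zip(weights, forests, feature_blocks))
        return np.argmax(proba, axis=1)   # index into the shared class list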

10.
In human-computer interaction, understanding human emotion is one of the skills a computer must have to communicate with people, and facial expressions are the most expressive indicator of human emotion; facial expression recognition is therefore indispensable when designing a human-machine interface for any real-world scenario. In this paper, we propose a new real-time facial expression recognition framework for interactive computing environments. The work makes two main contributions to this field. First, it proposes a new network structure and an AdaBoost-based parameter learning algorithm for embedded HMMs. Second, it applies this optimized embedded HMM to real-time facial expression recognition. Here, the embedded HMM takes the coefficients of the 2D discrete cosine transform as observation vectors, unlike earlier embedded-HMM methods that built observation vectors from pixel intensities. Because the algorithm revises both the network structure and the parameters of the embedded HMM, classification accuracy is greatly improved. The system reduces the complexity of both training and recognition, provides a more flexible framework, and can be used in real-time human-computer interaction applications. Experimental results show that the method is an efficient approach to facial expression recognition.
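A small sketch of the observation-vector construction the abstract contrasts with pixel-based variants: slide a window over the face image, take the 2D discrete cosine transform of each block, and keep only the low-frequency coefficients as the embedded HMM's observations. The block size, step, and number of retained coefficients are illustrative, not the paper's settings:

    import numpy as np
    from scipy.fft import dctn

    def dct_observations(img, block=8, step=4, keep=6):
        rows = []
        for y in range(0, img.shape[0] - block + 1, step):
            for x in range(0, img.shape[1] - block + 1, step):
                coeffs = dctn(img[y:y + block, x:x + block], norm="ortho")
                rows.append(coeffs[:keep, :keep].ravel())  # low-freq corner
        return np.asarray(rows)   # one observation vector per image block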

11.
Face localization, feature extraction, and modeling are the major issues in automatic facial expression recognition. In this paper, a method for facial expression recognition is proposed. A face is located by extracting the head contour points using the motion information. A rectangular bounding box is fitted for the face region using those extracted contour points. Among the facial features, eyes are the most prominent features used for determining the size of a face. Hence eyes are located and the visual features of a face are extracted based on the locations of eyes. The visual features are modeled using support vector machine (SVM) for facial expression recognition. The SVM finds an optimal hyperplane to distinguish different facial expressions with an accuracy of 98.5%.

12.
A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions, but also to generate facial expressions to adapt to human emotions. A facial emotion recognition method based on 2D-Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented, which is applied to real-time facial expression recognition for robots. The facial expressions of the robots are represented by simple cartoon symbols and displayed on an LED screen equipped on the robots, which can easily be understood by humans. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized by facial expression recognition of humans and facial expression generation of robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied in home service, smart home, safe driving, and so on.
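For reference, a compact sketch of a multiclass extreme learning machine of the kind named above: a random, untrained hidden layer followed by output weights solved in closed form with a pseudo-inverse. The input would be the 2D-Gabor plus uniform-LBP feature vector; the hidden size and tanh activation are illustrative choices:

    import numpy as np

    class ELM:
        def __init__(self, n_hidden=512, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            # X: (n, d) feature matrix; y: (n,) integer class labels
            self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)        # random hidden layer
            T = np.eye(y.max() + 1)[y]              # one-hot targets
            self.beta = np.linalg.pinv(H) @ T       # closed-form output weights
            return self

        def predict(self, X):
            return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)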

13.
Eigenmotion-Based Recognition of Faces with Expressions
Facial expressions have long been a difficulty for face recognition. To make the recognition of expressive faces more robust, an eigenmotion-based face recognition method is proposed. The method first uses block matching to determine the motion vectors between an expressive face and the neutral face, then applies principal component analysis (PCA) to these motion vectors to produce a low-dimensional subspace called the eigenmotion space. At test time, the motion vector between the test face and the neutral face is projected onto the eigenmotion space, and recognition is performed according to the residual of this motion vector in that space. Person-specific and common eigenmotion model variants are also introduced. Experimental results show that, for recognizing faces with expressions, the new algorithm outperforms the eigenface method and achieves a very high recognition rate.
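A minimal sketch of the eigenmotion recognition step, assuming the block-matching motion fields are already stacked as row vectors; recognition compares reconstruction residuals in the PCA subspace:

    import numpy as np
    from sklearn.decomposition import PCA

    def fit_motion_space(motion_vectors, n_components=20):
        # motion_vectors: (n_samples, d) displacement fields between an
        # expressive face and its neutral face, from block matching
        return PCA(n_components=n_components).fit(motion_vectors)

    def residual(space, v):
        # Project the probe motion vector into the eigenmotion space and
        # measure how much of it the subspace fails to explain.
        recon = space.inverse_transform(space.transform(v[None, :]))[0]
        return np.linalg.norm(v - recon)   # smaller residual = better match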

14.
Facial expression is a powerful mechanism used by humans to communicate their emotions, intentions, and opinions to each other. The recognition of facial expressions is extremely important for a responsive and socially interactive human-computer interface. Such an interface, with a robust capability to recognize human facial expressions, would enable automated systems to be deployed effectively in a variety of applications, including human-computer interaction, security, law enforcement, psychiatry, and education. In this paper, we examine several core problems in facial expression analysis from the perspective of landmarks and the distances between them, using a statistical approach. We have used statistical analysis to determine the landmarks and features that are best suited to recognizing the expression on a face. We have used a standard database to examine the effectiveness of the landmark-based approach to classify an expression (a) when a face with a neutral expression is available, and (b) when there is no a priori information about the face.

15.
This paper presents a novel emotion recognition model using the system identification approach. A comprehensive data-driven model has been developed using an extended Kohonen self-organizing map (KSOM), whose input is a 26-dimensional facial geometric feature vector comprising eye, lip, and eyebrow feature points. The analytical face model using this 26-dimensional geometric feature vector has been used effectively to describe the facial changes due to different expressions, and the paper includes an automated scheme for generating this geometric facial feature vector. The proposed non-heuristic model has been developed using training data from the MMI facial expression database. The emotion recognition accuracy of the proposed scheme has been compared with radial basis function network, multi-layered perceptron, and support vector machine based recognition schemes. The experimental results show that the proposed model is very efficient in recognizing the six basic emotions while ensuring a significant increase in average classification accuracy over the radial basis function network and the multi-layered perceptron. They also show that the average recognition rate of the proposed method is better than that of the multi-class support vector machine.
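A compact sketch of training a plain Kohonen self-organizing map on the 26-dimensional geometric feature vectors; the grid size and the exponential learning-rate and neighborhood schedules are illustrative, and the paper's extended KSOM variant is not reproduced here:

    import numpy as np

    def train_ksom(X, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
        # X: (n_samples, 26) eye/lip/eyebrow geometric feature vectors
        rng = np.random.default_rng(seed)
        w = rng.standard_normal((grid[0], grid[1], X.shape[1]))
        coords = np.stack(np.meshgrid(*map(np.arange, grid), indexing="ij"), -1)
        for t in range(epochs):
            lr = lr0 * np.exp(-t / epochs)
            sigma = sigma0 * np.exp(-t / epochs)
            for x in rng.permutation(X):
                bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), grid)
                d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                h = np.exp(-d2 / (2 * sigma ** 2))   # neighborhood kernel
                w += lr * h[..., None] * (x - w)     # pull units toward x
        return w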

16.
Extracting and understanding emotion is highly important for the interaction between humans and machine communication systems. The most expressive way humans display emotion is through facial expressions. This paper proposes a multiple-emotion recognition system that can recognize combinations of up to three different emotions using an active appearance model (AAM), the proposed classification standard, and a k-nearest neighbor (k-NN) classifier in mobile environments. The AAM captures the expression variations, which are scored by the proposed classification standard as human expressions change in real time. The proposed k-NN can classify the basic emotions (normal, happy, sad, angry, surprise) as well as more ambiguous emotions obtained by combining the basic emotions in real time, and each recognized component emotion carries a strength. Whereas most previous emotion recognition methods recognize various kinds of single emotions, this paper recognizes compound emotions as combinations of the five basic ones. For ease of understanding, the recognized result is presented in three ways on a mobile camera screen. The experiments achieved an average recognition rate of 85 %, with optimally combined emotions recognized in 40 % of cases. The implemented system can also serve as an example of augmented reality, displaying a combination of live face video and a virtual animation of the user's avatar.
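A hedged sketch of the combined-emotion readout: among the k nearest training samples in AAM-parameter space, report up to the three most frequent basic emotions with their vote shares as strengths. The value of k and the use of raw vote shares are assumptions, not the paper's classification standard:

    import numpy as np
    from collections import Counter
    from sklearn.neighbors import NearestNeighbors

    BASIC = ["normal", "happy", "sad", "angry", "surprise"]

    def combined_emotions(train_X, train_y, query, k=15, max_mix=3):
        # train_y holds integer labels indexing BASIC; query is one AAM vector
        nn = NearestNeighbors(n_neighbors=k).fit(train_X)
        _, idx = nn.kneighbors(query[None, :])
        votes = Counter(int(train_y[i]) for i in idx[0])
        return [(BASIC[label], count / k)            # (emotion, strength)
                for label, count in votes.most_common(max_mix)]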

17.
This study presents a facial expression recognition system which separates the non-rigid facial expression from the rigid head rotation and estimates the 3D rigid head rotation angle in real time. The extracted trajectories of the feature points contain both rigid head motion components and non-rigid facial expression motion components. A 3D virtual face model is used to obtain accurate estimation of the head rotation angle such that the non-rigid motion components can be precisely separated to enhance the facial expression recognition performance. The separation performance of the proposed system is further improved through the use of a restoration mechanism designed to recover feature points lost during large pan rotations. Having separated the rigid and non-rigid motions, hidden Markov models (HMMs) are employed to recognize a prescribed set of facial expressions defined in terms of facial action coding system (FACS) action units (AUs).

18.
Most studies use facial expressions to recognize a user's emotion; however, gestures, such as nodding, shaking the head, or stillness, can also be indicators of the user's emotion. In our research, we use facial expressions and gestures to detect and recognize a user's emotion. The pervasive Microsoft Kinect sensor captures video data, from which several features representing facial expressions and gestures are extracted. An in-house extensible markup language based genetic programming engine (XGP) evolves the emotion recognition module of our system. To improve the computational performance of the recognition module, we implemented and compared several approaches, including directed evolution, collaborative filtering via canonical voting, and a genetic algorithm, for an automated voting system. The experimental results indicate that XGP is feasible for evolving emotion classifiers. In addition, the obtained results verify that collaborative filtering improves the generality of recognition. From a psychological viewpoint, the results confirm that different people may express their emotions differently, as the emotion classifiers evolved for particular users might not be applied successfully to other users.

19.
The automatic recognition of facial expressions is critical to applications that must recognize human emotions, such as multimodal user interfaces. A novel framework for recognizing facial expressions is presented in this paper. First, distance-based features are introduced and integrated to yield an improved discriminative power. Second, a bag-of-distances model is applied to the training images to construct codebooks automatically. Third, the combined distance-based features are transformed into mid-level features using the trained codebooks. Finally, a support vector machine (SVM) classifier for recognizing facial expressions is trained. The results of this study show that the proposed approach outperforms state-of-the-art methods in terms of recognition rate on the CK+ dataset.
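A minimal sketch of that pipeline under stated assumptions: the codebook is built with k-means (the paper's clustering algorithm is not specified here), each image becomes a normalized histogram of codeword assignments, and a linear SVM is trained on the histograms:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def build_codebook(pooled_feats, n_words=64):
        # pooled_feats: (n_descriptors, d) distance features stacked from all
        # training images
        return KMeans(n_clusters=n_words, n_init=10).fit(pooled_feats)

    def encode(codebook, image_feats):
        # Map each per-image descriptor to its nearest codeword and build the
        # normalized histogram that serves as the mid-level feature.
        words = codebook.predict(image_feats)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_classifier(histograms, labels):
        return LinearSVC().fit(histograms, labels)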

20.