Similar Documents
20 similar documents retrieved.
1.
Sparse representation is a new approach that has received significant attention for image classification and recognition. This paper presents a PCA-based dictionary building method for sparse representation and classification of universal facial expressions. In our method, expressive facial images of each subject are subtracted from a neutral facial image of the same subject. The PCA is then applied to these difference images to model the variations within each class of facial expressions, and the learned principal components are used as the atoms of the dictionary. In the classification step, a given test image is sparsely represented as a linear combination of the principal components of the six basic facial expressions. Our extensive experiments on several publicly available face datasets (CK+, MMI, and Bosphorus) show that our framework exceeds the recognition rates of state-of-the-art techniques by about 6%. This approach is promising and can further be applied to visual object recognition.
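As a hedged illustration of the pipeline sketched in this abstract (not the authors' code), the snippet below builds a per-class PCA dictionary from difference images and classifies by class-wise reconstruction residual; the atom counts and the Lasso-based sparse coder are assumptions.

```python
# Minimal sketch: PCA dictionary from difference images + sparse-representation
# classification by class-wise residual. Shapes and the Lasso coder are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def build_dictionary(diff_images_per_class, n_atoms=20):
    """diff_images_per_class: list of (n_samples, n_pixels) arrays,
    one per expression class (expressive minus neutral images)."""
    atoms, labels = [], []
    for c, X in enumerate(diff_images_per_class):
        pca = PCA(n_components=min(n_atoms, X.shape[0]))
        pca.fit(X)
        atoms.append(pca.components_)          # principal components as atoms
        labels.extend([c] * pca.components_.shape[0])
    D = np.vstack(atoms).T                     # columns are dictionary atoms
    return D / np.linalg.norm(D, axis=0), np.array(labels)

def classify(D, atom_labels, test_diff, alpha=0.01):
    """Sparse-code the test difference image and pick the class whose
    atoms give the smallest reconstruction residual."""
    coder = Lasso(alpha=alpha, max_iter=5000)
    coder.fit(D, test_diff)
    x = coder.coef_
    residuals = []
    for c in np.unique(atom_labels):
        mask = atom_labels == c
        residuals.append(np.linalg.norm(test_diff - D[:, mask] @ x[mask]))
    return int(np.argmin(residuals))
```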

2.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from the target faces, our approach can transfer expressions to facial sketches by motion retargeting. In practical applications, however, image noise in each frame reduces the accuracy of feature extraction from the source faces, and the shape difference between source and target faces affects the animation quality when representing expressions. To address these difficulties, we propose a robust neighbor-expression transfer (NET) model, which aims at modeling the spatial relations among sparse facial features. By learning expression behaviors from neighboring face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches: the motion vectors on the source face are adjusted from coarse to fine on the target face, and the animation results are generated to replicate the source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions from noisy animation signals.

3.
Facial expressions are an important source of information for human interaction. Therefore, it would be desirable if computers were able to use this information to interact more naturally with the user. However, facial expressions are not always unambiguously interpreted even by competent humans. Consequently, soft computing techniques in which interpretations are given some belief value would seem appropriate. This paper describes how the mass assignment approach to constructing fuzzy sets from probability distributions has been applied to the low-level classification of pixels into facial feature classes based on their colour. It will also describe how similar approaches can be used for the analysis of facial expressions themselves.
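The mass-assignment idea of turning a probability distribution into a fuzzy set can be sketched as below; this uses the standard least-prejudiced-style bijection and is an illustration, not necessarily the exact variant used in the paper.

```python
# Hedged sketch: convert a discrete probability distribution (e.g. over
# pixel-colour classes) into a fuzzy set via the mass-assignment style bijection.
def probabilities_to_fuzzy_set(prob):
    """prob: dict mapping label -> probability (sums to 1).
    Returns dict mapping label -> membership in [0, 1]."""
    items = sorted(prob.items(), key=lambda kv: kv[1], reverse=True)
    memberships = {}
    for i, (label, p_i) in enumerate(items, start=1):
        tail = sum(p for _, p in items[i:])    # probabilities ranked below p_i
        memberships[label] = i * p_i + tail
    return memberships

# Example: a pixel judged 60% 'skin', 30% 'lips', 10% 'background'
print(probabilities_to_fuzzy_set({"skin": 0.6, "lips": 0.3, "background": 0.1}))
# -> skin: 1.0, lips: 0.7, background: 0.3
```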

4.
Human emotions are largely conveyed by facial expressions. Facial expressions, simple as well as complex, are well decoded by facial action units: any facial expression can be detected and analyzed if the facial action units are decoded well. In the presented work, an attempt has been made to detect facial action unit intensity by mapping the features based on their cosine similarity. Distance metric learning based on cosine similarity maps the data by learning a metric that measures orientation rather than magnitude. The motivation behind using cosine similarity is that changes in facial expression are better represented by changes in orientation than by changes in magnitude. The features are then fed to a support vector machine for classification of the various action unit intensities. Experimental results on widely used databases, the DISFA database and the UNBC-McMaster shoulder pain database, confirm the efficacy of the proposed approach.
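A minimal sketch of a cosine-similarity mapping followed by SVM classification is given below; the synthetic data, the choice of per-intensity mean vectors as references, and the RBF-SVM are assumptions for illustration, not the paper's exact metric learning scheme.

```python
# Hedged sketch: represent each sample by its cosine similarities to a set of
# reference vectors, so orientation rather than magnitude drives classification.
import numpy as np
from sklearn.svm import SVC

def cosine_map(X, references):
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    Rn = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
    return Xn @ Rn.T                           # matrix of cosine similarities

# Synthetic stand-in data: landmark features and AU intensity labels (0-5)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 136))
y_train = rng.integers(0, 6, size=300)

# Per-intensity mean vectors act as the references (an assumption)
references = np.vstack([X_train[y_train == k].mean(axis=0) for k in range(6)])

clf = SVC(kernel="rbf").fit(cosine_map(X_train, references), y_train)
pred = clf.predict(cosine_map(X_train[:5], references))
```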

5.
The main goal of this paper is to illustrate a geometric analysis of 3D facial shapes in the presence of varying facial expressions. This approach consists of the following two main steps: (1) Each facial surface is automatically denoised and preprocessed to result in an indexed collection of facial curves. During this step, one detects the tip of the nose and defines a surface distance function with that tip as the reference point. The level curves of this distance function are the desired facial curves. (2) Comparisons between faces are based on optimal deformations from one to another. This, in turn, is based on optimal deformations of the corresponding facial curves across surfaces under an elastic metric. The experimental results, generated using a subset of the Face Recognition Grand Challenge v2 data set, demonstrate the success of the proposed framework in recognizing people under different facial expressions. The recognition rates obtained here exceed those for a baseline ICP algorithm on the same data set.
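Step (1) can be sketched as follows, with plain Euclidean distance standing in for the surface distance function described in the paper; the number of curves and the band width are assumptions.

```python
# Simplified sketch: extract "facial curves" as level sets of a distance
# function from the nose tip (Euclidean distance used as a stand-in for the
# surface/geodesic distance of the paper).
import numpy as np

def facial_level_curves(points, nose_tip, n_curves=15, band=2.0):
    """points: (N, 3) facial surface samples; nose_tip: (3,) reference point.
    Returns a list of point subsets, one per level curve."""
    d = np.linalg.norm(points - nose_tip, axis=1)
    levels = np.linspace(d.min() + band, d.max() - band, n_curves)
    return [points[np.abs(d - r) < band] for r in levels]

# Toy usage on a random point cloud
pts = np.random.default_rng(1).normal(scale=40.0, size=(5000, 3))
curves = facial_level_curves(pts, nose_tip=np.zeros(3))
print([len(c) for c in curves])
```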

6.
7.

Face recognition has become an accessible topic for experts as well as ordinary people, as it is a central, non-intrusive biometric modality. In this paper, we introduce a new approach to perform face recognition under varying facial expressions. The proposed approach consists of two main steps: facial expression recognition and face recognition. They are two complementary steps that improve face recognition across facial expression variation. In the first step, we select the most expressive regions responsible for facial expression appearance using the mutual information technique. Such a process not only improves the facial expression classification accuracy but also reduces the size of the feature vector. In the second step, we use Principal Component Analysis (PCA) to build Eigenfaces for each facial expression class. Face recognition is then performed by projecting the face onto the Eigenfaces of the corresponding facial expression class. The PCA technique significantly reduces the dimensionality of the original space, since face recognition is carried out in the reduced Eigenfaces space. An experimental study was conducted to evaluate the performance of the proposed approach in terms of face recognition accuracy and spatial-temporal complexity.
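A hedged sketch of the two steps (mutual-information feature selection, then per-expression-class Eigenfaces with nearest-neighbour matching) is shown below; feature counts, component counts, and the matching rule are assumptions.

```python
# Hedged sketch: MI-based selection of the most expressive features, then
# per-expression Eigenfaces and nearest-neighbour identification.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import PCA

def select_features(X, expression_labels, k=500):
    mi = mutual_info_classif(X, expression_labels, discrete_features=False)
    return np.argsort(mi)[::-1][:k]            # most expressive feature indices

def build_class_eigenfaces(X, expression_labels, idx, n_components=30):
    models = {}
    for c in np.unique(expression_labels):
        Xc = X[expression_labels == c][:, idx]
        models[c] = PCA(n_components=min(n_components, Xc.shape[0])).fit(Xc)
    return models

def recognize(face, expression, models, idx, gallery, gallery_ids):
    """Project probe and gallery into the Eigenfaces of the predicted
    expression class, then match by nearest neighbour."""
    pca = models[expression]
    probe = pca.transform(face[idx].reshape(1, -1))
    refs = pca.transform(gallery[:, idx])
    return gallery_ids[np.argmin(np.linalg.norm(refs - probe, axis=1))]
```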


8.
9.
Human facial expressions are the most direct portrayal of changes in psychological and emotional state, and facial expressions differ greatly from person to person. Existing expression recognition methods distinguish expressions using statistical facial features and lack deep mining of facial detail information. From psychologists' definition of facial action coding, it can be seen that the local detail information of a face determines the meaning of its expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Since facial expressions are strongly affected by image details, a Gaussian pyramid is used to extract image detail information and to enhance the image details, thereby strengthening the facial expression information. To exploit the local nature of facial expressions, a hierarchical local gradient feature computation method is proposed to describe the local shape around facial feature points. Finally, a support vector machine (SVM) is used to classify the facial expressions. Experimental results on the CK+ expression database show that the method not only verifies the important role of image details in the facial expression recognition process, but also obtains very good results with small-scale training data, achieving an average expression recognition rate of 98.19%.
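A hedged sketch of the multi-scale detail enhancement step is shown below, taking details as differences between the image and Gaussian-smoothed versions at several scales; the scales and weights are assumptions, and the hierarchical gradient descriptors and SVM stage are only indicated in a comment.

```python
# Hedged sketch: multi-scale detail enhancement via differences against
# Gaussian-smoothed images at several scales (scales/weights are assumed).
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_detail_enhance(gray, sigmas=(1.0, 2.0, 4.0),
                              weights=(0.5, 0.5, 0.25)):
    img = gray.astype(np.float32)
    enhanced = img.copy()
    for sigma, w in zip(sigmas, weights):
        blurred = gaussian_filter(img, sigma)
        enhanced += w * (img - blurred)        # add the detail layer back
    return np.clip(enhanced, 0, 255).astype(np.uint8)

# Downstream (not shown): hierarchical local gradient descriptors are computed
# around facial landmarks on the enhanced image and fed to an SVM classifier.
```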

10.
Animation and its enabling technologies are currently receiving wide attention in industry, yet the realism of facial animation, such as the expression of joy, anger, sorrow, and happiness, is still not good enough. Based on the Waters muscle model, a NURBS elastic muscle model is proposed; drawing on anatomical knowledge, it simulates muscles with non-uniform rational B-spline curves. By changing the weights of the curve's control points, an action vector can be found to control the motion of the muscle and thereby synthesize various facial expressions. The more control points there are, the better the muscle can be controlled, and thus the more realistically facial expressions can be simulated.
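A NURBS curve of this kind can be evaluated as a B-spline in homogeneous coordinates, as sketched below; the control polygon, degree, knot vector, and weight change are illustrative assumptions, not the paper's muscle parameters.

```python
# Hedged sketch: evaluate a NURBS "muscle" curve as a weighted B-spline divided
# by the spline of the weights; raising a weight pulls the curve toward that
# control point, which is how changing weights can deform the muscle.
import numpy as np
from scipy.interpolate import BSpline

def nurbs_curve(ctrl_pts, weights, degree, knots, u):
    """ctrl_pts: (n, 2) control points; weights: (n,); u: parameter values."""
    P = np.asarray(ctrl_pts, float)
    w = np.asarray(weights, float)
    numer = BSpline(knots, P * w[:, None], degree)(u)   # spline of w_i * P_i
    denom = BSpline(knots, w, degree)(u)                # spline of w_i
    return numer / denom[:, None]

# A 4-point quadratic curve with a clamped knot vector (assumed example)
ctrl = [(0, 0), (1, 2), (2, 2), (3, 0)]
knots = [0, 0, 0, 0.5, 1, 1, 1]
u = np.linspace(0, 1, 50)
relaxed = nurbs_curve(ctrl, [1, 1, 1, 1], 2, knots, u)
contracted = nurbs_curve(ctrl, [1, 4, 4, 1], 2, knots, u)
```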

11.
The present paper describes a novel method for the segmentation of faces, extraction of facial features, and tracking of the face contour and features over time. Robust segmentation of faces out of complex scenes is performed based on color and shape information. Additionally, face candidates are verified by searching for facial features in the interior of the face. As facial features of interest we employ the eyebrows, eyes, nostrils, mouth, and chin, and we also consider incomplete feature constellations. Once a face and its features have been detected reliably, we track the face contour and the features over time. Face contour tracking is done using deformable models such as snakes, while facial feature tracking is performed by block matching. The success of our approach was verified by evaluating 38 different color image sequences containing beards, glasses, and changing facial expressions.
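The block-matching step used for facial feature tracking can be sketched as follows; the patch size, search range, and SSD matching cost are assumptions.

```python
# Hedged sketch: track a facial feature by searching the current frame for the
# patch that best matches the patch around the feature in the previous frame.
import numpy as np

def block_match(prev_frame, cur_frame, feat, patch=8, search=12):
    """prev_frame, cur_frame: 2-D grayscale arrays; feat: (row, col) of the
    feature in prev_frame. Returns the best-matching (row, col) in cur_frame."""
    r, c = feat
    template = prev_frame[r - patch:r + patch, c - patch:c + patch].astype(float)
    best, best_pos = np.inf, feat
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = cur_frame[rr - patch:rr + patch,
                             cc - patch:cc + patch].astype(float)
            if cand.shape != template.shape:
                continue                       # candidate window off the image
            ssd = np.sum((cand - template) ** 2)  # sum of squared differences
            if ssd < best:
                best, best_pos = ssd, (rr, cc)
    return best_pos
```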

12.
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in the intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses, while mimicking realistic facial expressions and intensities, is strongly needed in various applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often unrealistic for low-polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal-based applications, by generating a customized face as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can easily be cloned by using the same set of parameters. Through an experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.

13.
In this paper, the authors have developed a system that animates 3D facial agents based on real-time facial expression analysis techniques and research on synthesizing facial expressions and text-to-speech capabilities. This system combines visual, auditory, and primary interfaces to communicate one coherent multimodal chat experience. Users can represent themselves using agents they select from a group that we have predefined. When a user shows a particular expression while typing a text, the 3D agent at the receiving end speaks the message aloud while it replays the recognized facial expression sequences and also augments the synthesized voice with appropriate emotional content. Because the visual data exchange is based on the MPEG-4 high-level Facial Animation Parameter for facial expressions (FAP 2), rather than real-time video, the method requires very low bandwidth.

14.
Facial expressions carry most of the information on the human face that is essential for human–computer interaction. Developing robust algorithms for automatic recognition of facial expressions with high recognition rates has been a challenge for the last ten years. In this paper, we propose a novel feature selection procedure which recognizes basic facial expressions with high recognition rates by utilizing three-dimensional (3D) geometric facial feature positions. The paper presents a system for classifying expressions into one of the six basic emotional categories: anger, disgust, fear, happiness, sadness, and surprise. Its contribution lies in selecting features for each expression independently, achieving high recognition rates with the geometric facial features selected for each expression. The novel feature selection procedure is entropy based and is employed independently for each of the six basic expressions. The system's performance is evaluated using the 3D facial expression database BU-3DFE. Experimental results show that the proposed method outperforms the latest methods reported in the literature.
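An entropy-based, per-expression feature selection of the kind described here might look like the sketch below, which ranks features by the conditional entropy of a one-vs-rest expression label given the binned feature; the binning and the number of kept features are assumptions.

```python
# Hedged sketch: per-expression entropy-based feature selection.
import numpy as np

def feature_entropy(feature, is_target, bins=10):
    """Conditional entropy of the one-vs-rest label given a binned feature."""
    edges = np.histogram_bin_edges(feature, bins=bins)
    idx = np.clip(np.digitize(feature, edges) - 1, 0, bins - 1)
    h = 0.0
    for b in range(bins):
        mask = idx == b
        if not mask.any():
            continue
        p_bin = mask.mean()
        p_pos = is_target[mask].mean()
        for p in (p_pos, 1.0 - p_pos):
            if p > 0:
                h -= p_bin * p * np.log2(p)
    return h

def select_for_expression(X, labels, expression, k=15):
    is_target = labels == expression
    scores = [feature_entropy(X[:, j], is_target) for j in range(X.shape[1])]
    return np.argsort(scores)[:k]              # lowest conditional entropy first
```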

15.
In telecommunication applications such as videophony and teleconferencing, the representation and modeling of the human face and its expressions have seen important development. In this paper, we present the basic principles of image sequence coding, with the main approaches and methods leading to 3D model-based coding. We then introduce our 3D wire-frame model, with which we have developed several compression and triangulated-surface representation methods. An original approach to simulating and reproducing facial expressions with radial basis functions is also presented.
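The radial-basis-function idea for reproducing expressions can be sketched as interpolating control-point displacements over all mesh vertices, as below; the thin-plate kernel and the toy data are assumptions rather than the paper's formulation.

```python
# Hedged sketch: displacements measured at a few control points are interpolated
# with an RBF and applied to every vertex of the 3D wire-frame model.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
vertices = rng.uniform(-1, 1, size=(500, 3))          # wire-frame model vertices
controls = rng.uniform(-1, 1, size=(12, 3))           # control points on the face
control_disp = rng.normal(scale=0.05, size=(12, 3))   # measured displacements

# Fit one vector-valued RBF interpolant and deform every vertex with it
rbf = RBFInterpolator(controls, control_disp, kernel="thin_plate_spline")
deformed = vertices + rbf(vertices)
```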

16.
Facial expressions are the best way of communicating human emotions. This paper proposes a novel Monogenic Directional Pattern (MDP) for extracting features from the face. To reduce the time spent on choosing the best kernel, a novel pseudo-Voigt kernel is chosen as the common kernel: a pseudo-Voigt kernel-based Generalized Discriminant Analysis (PVK-GDA) is proposed for dimension reduction, and a pseudo-Voigt kernel-based Extreme Learning Machine (PVK-ELM) is used for better recognition of facial emotions. The efficiency of the approach is demonstrated by experiments on the Japanese Female Facial Expression (JAFFE), Cohn-Kanade (CK+), Multimedia Understanding Group (MUG), Static Facial Expressions in the Wild (SFEW), and Oulu-Chinese Academy of Sciences, Institute of Automation (Oulu-CASIA) datasets. The approach achieves classification accuracies of 96.7% on JAFFE, 99.4% on CK+, 98.6% on MUG, 35.6% on SFEW, and 88% on Oulu-CASIA, which are higher than those of other techniques reported in the literature.
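A pseudo-Voigt kernel is commonly defined as a mix of a Gaussian and a Lorentzian profile; the sketch below builds such a kernel matrix and plugs it into a precomputed-kernel classifier as a stand-in, since the paper's PVK-GDA and PVK-ELM pipelines are not reproduced here. The mixing ratio and width are assumptions.

```python
# Hedged sketch: pseudo-Voigt kernel matrix (Gaussian/Lorentzian mix of the
# squared pairwise distance) used with a precomputed-kernel classifier.
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import SVC

def pseudo_voigt_kernel(X, Y, gamma=0.5, eta=0.5):
    d2 = euclidean_distances(X, Y, squared=True)
    gaussian = np.exp(-gamma * d2)
    lorentzian = 1.0 / (1.0 + gamma * d2)
    return eta * lorentzian + (1.0 - eta) * gaussian

# Toy usage with a kernel SVM standing in for the PVK-based learner
X = np.random.default_rng(0).normal(size=(100, 20))
y = np.random.default_rng(1).integers(0, 7, size=100)
clf = SVC(kernel="precomputed").fit(pseudo_voigt_kernel(X, X), y)
pred = clf.predict(pseudo_voigt_kernel(X[:5], X))
```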

17.
Face recognition has been addressed with pattern recognition techniques such as composite correlation filters. These filters are synthesized from training sets which are representative of facial classes. For this reason, the filter performance depends greatly on the appropriate selection of the training set. This set can be selected either by a filter designer or by a conventional method. This paper presents an optimization-based methodology for the automatic selection of the training set. Given an optimization algorithm, the proposed methodology uses its main mechanics to iteratively examine a given set of available images in order to find the best subset for the training set. To this end, three objective functions are proposed as optimization criteria for training set selection. The proposed methodology was evaluated by undertaking face recognition under variable illumination and facial expressions. Four optimization algorithms and three composite correlation filters were used to test the proposed methodology. The Maximum Average Correlation Height filter designed by Grey Wolf Optimizer obtained the best performance under homogeneous illumination and facial expressions, while the Unconstrained Nonlinear Composite Filter designed by either Grey Wolf Optimizer or (1+1)-Evolution Strategy obtained the best performance under variable illumination. The proposed methodology selects training sets for the synthesis of composite filters with competitive results comparable to the results reported in the face recognition literature.
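The training-set selection idea can be sketched with a simple random search standing in for GWO or an evolution strategy, and an averaged-spectrum composite filter standing in for the MACH or nonlinear filters; the fitness used here (peak margin between true-class and impostor images) is an assumption, not one of the paper's three objective functions.

```python
# Hedged sketch: an optimizer proposes training subsets, a simple composite
# correlation filter is synthesized from each, and the subset whose filter best
# separates true-class from impostor correlation peaks is kept.
import numpy as np

def synthesize_filter(train_imgs):
    """Very simple composite filter: conjugate of the mean training spectrum."""
    spectra = np.fft.fft2(train_imgs, axes=(-2, -1))
    return np.conj(spectra.mean(axis=0))

def peak_value(filt, img):
    corr = np.fft.ifft2(np.fft.fft2(img) * filt)
    return np.abs(corr).max()

def fitness(subset_idx, pool, true_val, impostor_val):
    filt = synthesize_filter(pool[subset_idx])
    true_peaks = [peak_value(filt, im) for im in true_val]
    imp_peaks = [peak_value(filt, im) for im in impostor_val]
    return min(true_peaks) - max(imp_peaks)    # margin between the two classes

def select_training_set(pool, true_val, impostor_val, k=5, iters=200, seed=0):
    """pool: (n, H, W) array of candidate training images."""
    rng = np.random.default_rng(seed)
    best_idx, best_fit = None, -np.inf
    for _ in range(iters):
        idx = rng.choice(len(pool), size=k, replace=False)
        f = fitness(idx, pool, true_val, impostor_val)
        if f > best_fit:
            best_idx, best_fit = idx, f
    return best_idx
```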

18.
19.
In everyday communication, the use of facial expressions helps make exchanges smoother; for humans, interpreting facial expressions is therefore an important part of obtaining the content of a communication. With the continuous development of science and technology, artificial intelligence is being applied ever more widely in everyday human communication, so the development and innovation of artificial-intelligence-based facial expression recognition has attracted increasing attention. This article carries out an in-depth study and analysis of facial expression recognition technology based on convolutional neural networks.

20.
One crucial application of intelligent robotic systems is remote surveillance using a security robot. A fundamental need in security is the ability to automatically detect an intruder entering a secure or restricted area, to alert remote security personnel, and then to enable them to track the intruder. In this article, we propose an Internet-based security robot system. The face recognition approach is designed to be invariant to variations in facial expression, viewing perspective, three-dimensional pose, individual appearance, and lighting, as well as to the presence of occluding structures. The experiment uses a 33.6-kb/s modem Internet connection to successfully control a mobile robot remotely, proving that the streaming-technology-based approach greatly improves the "sensibility" of robot teleoperation. This improvement ensures that security personnel can effectively, and at low cost, use the Internet to remotely control a mobile robot to track and identify a potential intruder.
