Similar Documents
20 similar documents found.
1.
For effective interaction between humans and socially adept, intelligent service robots, a key capability required by this class of sociable robots is the successful interpretation of visual data. In addition to crucial techniques such as human face detection and recognition, an important next step for enabling intelligence and empathy within social robots is emotion recognition. In this paper, an automated and interactive computer vision system is investigated for human facial expression recognition and tracking based on facial structure features and movement information. Twenty facial features are adopted because they are informative and prominent, reducing ambiguity during classification. An unsupervised learning algorithm, distributed locally linear embedding (DLLE), is introduced to recover the inherent properties of scattered data lying on a manifold embedded in high-dimensional input facial images. Selected person-dependent facial expression images in a video are classified using DLLE. In addition, facial expression motion energy, which exploits the movement information of feature points tracked by optical flow, is introduced to describe facial muscle tension during expressions for person-independent recognition. Finally, experimental results show that our approach is able to separate different expressions successfully.
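A minimal sketch of the embedding-plus-classification stage, assuming scikit-learn's standard locally linear embedding as a stand-in for the paper's distributed variant (DLLE); the data arrays and labels are placeholders:

```python
# Sketch: standard LLE stands in for the paper's DLLE; inputs are placeholders.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(200, 64 * 64)        # placeholder: 200 flattened 64x64 face crops
y = np.random.randint(0, 6, size=200)   # placeholder: six expression labels

# Recover low-dimensional manifold coordinates from the high-dimensional images.
embedding = LocallyLinearEmbedding(n_neighbors=12, n_components=10)
Z = embedding.fit_transform(X)

# Classify expressions in the embedded space (a simple nearest-neighbour rule here).
clf = KNeighborsClassifier(n_neighbors=5).fit(Z, y)
print(clf.score(Z, y))
```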

2.
The current paper presents an automatic, context-sensitive system for the dynamic recognition of the pain expression among the six basic facial expressions and neutral, on both acted and spontaneous sequences. A machine learning approach based on the Transferable Belief Model, previously used successfully to categorize the six basic facial expressions in static images [2], [61], is extended here to the automatic, dynamic recognition of the pain expression from video sequences in a hospital application context. The originality of the proposed method lies in the use of dynamic information for the recognition of pain expression and in the combination of different sensors (permanent facial feature behavior, transient feature behavior, and the context of the study) using the same fusion model. Experimental results on 2-alternative forced choices and, for the first time, on 8-alternative forced choices (i.e., the pain expression is classified among seven other facial expressions) show good classification rates even for spontaneous pain sequences. The mean classification rates on acted and spontaneous data reach 81.2% and 84.5% for the 2-alternative and 8-alternative forced choices, respectively. Moreover, the system's performance compares favorably with the human observer rate (76%) and leads to the same doubt states in the case of blended expressions.
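A minimal sketch of the Transferable Belief Model's fusion step, assuming the unnormalized conjunctive rule of combination over a small frame of discernment; the frame and mass values are invented for illustration:

```python
# Sketch: TBM conjunctive combination of two basic belief assignments (BBAs).
# The frame and the mass values are illustrative, not taken from the paper.
from itertools import product

def conjunctive_combination(m1, m2):
    """Combine two BBAs (dicts: frozenset -> mass). Mass on the empty set is
    kept, since the TBM does not renormalize conflict."""
    m12 = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        m12[inter] = m12.get(inter, 0.0) + wa * wb
    return m12

PAIN, HAPPY = frozenset({"pain"}), frozenset({"happy"})
BOTH = PAIN | HAPPY                               # total ignorance over the frame
m_permanent = {PAIN: 0.6, BOTH: 0.4}              # evidence from permanent features
m_transient = {PAIN: 0.5, HAPPY: 0.2, BOTH: 0.3}  # evidence from transient features
print(conjunctive_combination(m_permanent, m_transient))
```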

3.
In this paper, a novel approach to automatic facial expression recognition from static images is proposed. The face area is first divided automatically into small regions, from which local binary pattern (LBP) histograms are extracted and concatenated into a single feature histogram that efficiently represents the facial expressions: anger, disgust, fear, happiness, sadness, surprise, and neutral. A linear programming (LP) technique is then used to classify the seven facial expressions. Experimental results demonstrate an average recognition accuracy of 93.8% on the JAFFE database, outperforming all other methods reported on the same database.
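A minimal sketch of the feature-extraction stage, assuming scikit-image's uniform LBP and a linear SVM standing in for the paper's linear-programming classifier; the region grid size is an assumption:

```python
# Sketch: region-wise LBP histograms concatenated into one feature vector.
# A linear SVM stands in for the paper's linear-programming classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

P, R, N_BINS = 8, 1, 10          # uniform LBP with 8 neighbours yields 10 codes

def lbp_feature(face, grid=(6, 7)):               # grid size is an assumption
    codes = local_binary_pattern(face, P, R, method="uniform")
    hists = []
    for row in np.array_split(codes, grid[0], axis=0):
        for region in np.array_split(row, grid[1], axis=1):
            h, _ = np.histogram(region, bins=N_BINS, range=(0, N_BINS))
            hists.append(h / max(h.sum(), 1))     # normalized region histogram
    return np.concatenate(hists)

faces = np.random.rand(50, 96, 96)                # placeholder face crops
labels = np.random.randint(0, 7, 50)              # seven expression classes
X = np.stack([lbp_feature(f) for f in faces])
clf = LinearSVC().fit(X, labels)
```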

4.
One of the main challenges in facial expression recognition is illumination invariance. Our long-term goal is to develop a system for automatic facial expression recognition that is robust to light variations. In this paper, we introduce a novel 3D Relightable Facial Expression (ICT-3DRFE) database that enables experimentation in both computer graphics and computer vision. The database contains 3D models for 23 subjects and 15 expressions, as well as photometric information that allows photorealistic rendering. It is also annotated with facial action units, following FACS standards. Using the ICT-3DRFE database, we created an image set of different expressions/illuminations to study the effect of illumination on automatic expression recognition. We compared the output scores from automatic recognition with expert FACS annotations and found that they agree when the illumination is uniform. Our results show that the output distribution of the automatic recognition can change significantly with light variations, sometimes diminishing the discrimination between two different expressions. We propose a ratio-based light transfer method to factor out unwanted illumination from given images and show that it reduces the effect of illumination on expression recognition.
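A minimal sketch of a ratio-based relighting step, assuming the classical quotient-image idea: scale the input by the pixel-wise ratio of the same face rendered under the target and source illuminations. All arrays here are placeholders:

```python
# Sketch: ratio-based light transfer (quotient-image style), placeholder data.
import numpy as np

def transfer_light(image, render_source, render_target, eps=1e-6):
    """Re-illuminate `image` by the ratio of two renderings of the same
    geometry: one under the target light, one under the source light."""
    ratio = render_target / np.maximum(render_source, eps)
    return np.clip(image * ratio, 0.0, 1.0)

img = np.random.rand(128, 128)          # face photographed under the source light
r_src = np.random.rand(128, 128) + 0.5  # rendering under the estimated source light
r_tgt = np.random.rand(128, 128) + 0.5  # rendering under the desired uniform light
normalized = transfer_light(img, r_src, r_tgt)
```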

5.
For facial expression recognition, we selected three images: (i) just before speaking, (ii) while speaking the first vowel, and (iii) while speaking the last vowel in an utterance. In this study, as a pre-processing module, we added a judgment function that identifies front-view faces for facial expression recognition. A frame containing a front-view face in a dynamic image is selected by estimating the face direction. The judgment function measures four feature parameters using thermal image processing and selects the thermal images whose feature parameter values all fall within limited ranges, which were decided on the basis of training thermal images of front-view faces. As an initial investigation, we adopted the utterance of the Japanese name "Taro," which is semantically neutral. The mean judgment accuracy for the front-view face was 99.5% for six subjects who changed their face direction freely. Using the proposed method, the facial expressions of six subjects were distinguished with 84.0% accuracy when they exhibited one of the intentional facial expressions "angry," "happy," "neutral," "sad," and "surprised." We expect the proposed method to be applicable to recognizing facial expressions in daily conversation.
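A minimal sketch of the range-based judgment function, assuming per-parameter acceptance intervals learned from training frames of front-view faces; the feature values here are placeholders:

```python
# Sketch: accept a frame as front-view only if all feature parameters fall
# inside ranges learned from training front-view frames. Data are placeholders.
import numpy as np

def learn_ranges(train_features, margin=0.0):
    lo = train_features.min(axis=0) - margin
    hi = train_features.max(axis=0) + margin
    return lo, hi

def is_front_view(features, lo, hi):
    return bool(np.all((features >= lo) & (features <= hi)))

train = np.random.rand(100, 4)      # 4 thermal feature parameters per frame
lo, hi = learn_ranges(train)
frame = np.random.rand(4)           # parameters measured on a new frame
print(is_front_view(frame, lo, hi))
```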

6.
Facial expression is one of the major distracting factors for face recognition performance. Pose and illumination variations in face images also influence the performance of face recognition systems. The combination of the three variations (facial expression, pose, and illumination) seriously degrades recognition accuracy. In this paper, three experimental protocols are designed so that the successive performance degradation due to increasing variations (expressions; expressions with illumination effects; expressions with illumination and pose effects) in face images can be examined. The experiments are carried out on North-East Indian (NEI) face images using four well-known classification algorithms: Linear Discriminant Analysis (LDA), the K-Nearest Neighbor algorithm (KNN), Principal Component Analysis combined with Linear Discriminant Analysis (PCA + LDA), and Principal Component Analysis combined with the K-Nearest Neighbor algorithm (PCA + KNN). The experimental observations are analyzed through confusion matrices and graphs. This paper also describes the creation of the NEI facial expression database, which contains static visual face images of different ethnic groups of the North-East states. The database will be useful for future researchers in areas such as forensic science, medical applications, affective computing, intelligent environments, lie detection, psychiatry, and anthropology.
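A minimal sketch of the four classifier configurations as scikit-learn pipelines; the data and the PCA component count are placeholders:

```python
# Sketch: the four classifier configurations as scikit-learn pipelines.
# Data and component counts are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(300, 40 * 40)      # placeholder flattened face images
y = np.random.randint(0, 6, 300)      # placeholder expression labels

models = {
    "LDA": make_pipeline(LinearDiscriminantAnalysis()),
    "KNN": make_pipeline(KNeighborsClassifier(n_neighbors=3)),
    "PCA+LDA": make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis()),
    "PCA+KNN": make_pipeline(PCA(n_components=50), KNeighborsClassifier(3)),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=3).mean())
```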

7.
8.
Automatic analysis of human facial expression is a challenging problem with many applications. Most existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing the facial muscle actions that produce expressions. Virtually all existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle the temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into the facial expressions pictured and recognition of the temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in combination in the input face-profile video. A recognition rate of 87% is achieved.
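A minimal sketch of a particle filter tracking a single 2D facial point, assuming a random-walk motion model and a placeholder image-likelihood; the paper tracks 15 points with its own models:

```python
# Sketch: a generic particle filter for one 2D facial point; the dynamics
# (random walk) and the likelihood function are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                                   # number of particles
particles = rng.normal([60.0, 80.0], 5.0, size=(N, 2))    # initial point guess
weights = np.full(N, 1.0 / N)

def likelihood(positions, true_point):
    """Placeholder: score how well the local appearance at each position
    matches the tracked point's template (higher = better)."""
    return np.exp(-0.01 * np.sum((positions - true_point) ** 2, axis=1))

for frame_point in rng.normal([60.0, 80.0], 1.0, size=(10, 2)):  # fake frames
    particles += rng.normal(0.0, 2.0, particles.shape)    # predict: random walk
    weights *= likelihood(particles, frame_point)         # update with evidence
    weights /= weights.sum()
    estimate = weights @ particles                        # weighted mean estimate
    idx = rng.choice(N, size=N, p=weights)                # resample
    particles, weights = particles[idx], np.full(N, 1.0 / N)
print(estimate)
```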

9.
A novel method based on the fusion of texture and shape information is proposed for facial expression and Facial Action Unit (FAU) recognition from video sequences. For facial expression recognition, a subspace method based on Discriminant Non-negative Matrix Factorization (DNMF) is applied to the images to extract the texture information. To extract the shape information, the system first extracts the deformed Candide facial grid that corresponds to the facial expression depicted in the video sequence. A Support Vector Machine (SVM) system designed on a Euclidean space, defined over a novel metric between grids, is used to classify the shape information. For FAU recognition, the texture extraction method (DNMF) is applied to the difference images of the video sequence, calculated from the neutral and the expressive frame. An SVM system is used for FAU classification from the shape information; this time, the shape information consists of the grid-node coordinate displacements between the neutral frame and the frame of the fully expressed facial expression. The fusion of texture and shape information is performed using various approaches, among them SVMs and Median Radial Basis Functions (MRBFs), to detect the facial expression and the set of FAUs present. The accuracy achieved on the Cohn–Kanade database is 92.3% when recognizing the seven basic facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral), and 92.1% when recognizing the 17 FAUs responsible for facial expression development.
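A minimal sketch of the texture branch and a simple score-level fusion, assuming scikit-learn's standard NMF as a stand-in for the discriminant variant (DNMF) and placeholder shape features:

```python
# Sketch: NMF texture features + shape features, fused at score level.
# Standard NMF stands in for the paper's DNMF; all data are placeholders.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

X_tex = np.random.rand(120, 32 * 32)     # non-negative difference-image data
X_shape = np.random.rand(120, 2 * 104)   # Candide grid-node displacements
y = np.random.randint(0, 7, 120)         # seven expression classes

H = NMF(n_components=25, max_iter=500).fit_transform(X_tex)  # texture subspace
svm_tex = SVC(probability=True).fit(H, y)
svm_shape = SVC(probability=True).fit(X_shape, y)

# Score-level fusion: average the two classifiers' class posteriors.
proba = 0.5 * svm_tex.predict_proba(H) + 0.5 * svm_shape.predict_proba(X_shape)
print(proba.argmax(axis=1)[:10])
```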

10.
11.
In this paper, the effect of partial occlusion on facial expression recognition is investigated. Classification of partially occluded images into one of the six basic facial expressions is performed using a method based on Gabor wavelet texture extraction, a supervised image decomposition method based on Discriminant Non-negative Matrix Factorization, and a shape-based method that exploits the geometrical displacement of certain facial features. We demonstrate how partial occlusion affects the above methods in the classification of the six basic facial expressions, and indicate how partial occlusion affects human observers recognizing facial expressions. An attempt is also made to specify which part of the face (left, right, lower, or upper region) contains more discriminant information for each facial expression, and conclusions are drawn regarding the pairs of facial expressions that each type of occlusion causes to be misclassified.
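A minimal sketch of a Gabor texture extractor, assuming scikit-image's gabor filter and a small bank of orientations and frequencies chosen here for illustration:

```python
# Sketch: Gabor filter-bank texture features; the orientations, frequencies,
# and per-filter statistics are illustrative choices; the image is a placeholder.
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)         # Gabor magnitude response
            feats += [mag.mean(), mag.std()]   # simple per-filter statistics
    return np.array(feats)

face = np.random.rand(64, 64)
print(gabor_features(face).shape)   # (24,) = 3 freqs x 4 orientations x 2 stats
```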

12.
Objective: Current 2D expression recognition methods achieve low recognition rates for easily confused expressions and are susceptible to variations in face pose and illumination. Using 3D facial feature-point data acquired with the Kinect RGBD camera, we propose a real-time expression recognition method that combines 2D pixel features with 3D feature-point features. Method: First, three classical descriptors, LBP (local binary patterns), Gabor filters, and HOG (histograms of oriented gradients), are used to extract 2D pixel features of facial expressions. Because 2D pixel features have limited power to describe facial expressions, three further 3D features, the angles, distances, and normal vectors between facial feature points, are extracted to describe the variation of different expressions in finer detail. To improve recognition of highly confusable expressions and increase robustness, three random forest models are trained on the 2D pixel features and three on the 3D feature-point features, and the final expression class is obtained by a weighted combination of the classification results of the six random forest classifiers. Results: The method was evaluated on the Face3D 3D expression dataset with nine different expressions. The results show that combining 2D pixel features and 3D feature-point features benefits expression recognition: the average recognition rate reaches 84.7%, which is 4.5% higher than the best method proposed in recent years and 3.0% and 5.8% higher than using the fused 2D features alone and the 3D features alone, respectively. Recognition rates for highly confusable expressions such as anger, sadness, and fear all exceed 80%, and the method runs in real time at 10 to 15 frames/s. Conclusion: By combining the 2D pixel features and 3D feature-point features of expression images, the method improves the description of facial expression changes; for highly confusable expression classes, the weighted averaging over multiple random forest classifiers effectively reduces interference between confusable expressions and improves robustness. The experimental results show that, compared with plain 2D or 3D features, the method is not only more accurate for expression recognition but also runs in real time.
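A minimal sketch of the decision-level fusion, assuming six scikit-learn random forests (three per feature type) whose class posteriors are combined with assumed weights; all data here are placeholders:

```python
# Sketch: weighted fusion of several random-forest classifiers trained on
# different feature groups; features, labels, and weights are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

y = np.random.randint(0, 9, 200)                 # nine expression classes
feature_groups = [np.random.rand(200, d) for d in (59, 40, 36, 20, 20, 30)]
weights = np.array([1.0, 1.0, 1.0, 1.5, 1.5, 1.5])   # assumed fusion weights
weights /= weights.sum()

forests = [RandomForestClassifier(n_estimators=100).fit(X, y)
           for X in feature_groups]

# Weighted average of the six forests' class posteriors, then argmax.
proba = sum(w * f.predict_proba(X)
            for w, f, X in zip(weights, forests, feature_groups))
pred = proba.argmax(axis=1)
```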

13.

Emotion recognition from facial images is considered a challenging task due to the varying nature of facial expressions. Prior deep learning studies on emotion classification from facial images suffer performance degradation due to poor selection of layers in the convolutional neural network model. To address this issue, we propose an efficient deep learning technique using a convolutional neural network model that classifies emotions from facial images and also detects age and gender from facial expressions. Experimental results show that the proposed model outperformed baseline works, achieving an accuracy of 95.65% for emotion recognition, 98.5% for age recognition, and 99.14% for gender recognition.
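A minimal sketch of a CNN emotion classifier in PyTorch; the layer sizes, input resolution, and class count are assumptions for illustration, not the paper's architecture:

```python
# Sketch: a small CNN emotion classifier; architecture details are assumptions,
# not the paper's model.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, 48, 48) face crops
        return self.classifier(self.features(x))

model = EmotionCNN()
logits = model(torch.randn(4, 1, 48, 48))      # placeholder batch
print(logits.shape)                            # torch.Size([4, 7])
```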


14.

Facial expressions are essential in community-based interactions and in the analysis of emotional behaviour. Automatic identification of the face is a motivating topic for researchers because of its numerous applications, such as health care, video conferencing, and cognitive science. In computer vision with facial images, automatic detection of facial expressions is a very challenging issue. An innovative methodology is introduced in the present work for the recognition of facial expressions, proceeding in the following stages. First, an input image is taken from a facial expression database and pre-processed with high-frequency-emphasis (HFE) filtering and modified histogram equalization (MHE). After image enhancement, the Viola–Jones (VJ) framework is used to detect the face in the image, and the face region is cropped using the detected face coordinates. Afterwards, several effective features are extracted: shape information from an enhanced histogram of oriented gradients (EHOG feature), intensity variation via the mean, standard deviation, and skewness, facial movement variation via facial action coding (FAC), texture using a weighted patch-based local binary pattern (WLBP), and spatial information via an entropy-based spatial feature. The dimensionality of the features is then reduced by retaining the most relevant features using a Residual Network (ResNet). Finally, an extended wavelet deep convolutional neural network (EWDCNN) classifier uses the extracted features to detect the facial expression as one of the classes sad, happy, anger, fear, disgust, surprise, and neutral. The method is implemented in Python and tested on three datasets: JAFFE, CK+, and Oulu-CASIA.
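A minimal sketch of the detection-and-crop stage, assuming OpenCV's stock Haar cascade as the Viola–Jones detector and plain histogram equalization standing in for the paper's MHE step; the input path is hypothetical:

```python
# Sketch: Viola-Jones face detection and cropping with OpenCV; plain histogram
# equalization stands in for the paper's modified histogram equalization (MHE).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_crop(path, size=(96, 96)):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.equalizeHist(gray)                     # enhancement stand-in
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], size)
            for (x, y, w, h) in faces]

# crops = detect_and_crop("face.jpg")   # hypothetical input path
```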


15.
Feature-point-based 3D face recognition under expression variation
Objective: To overcome the effect of expression variation on 3D face recognition, a 3D face recognition method is proposed that extracts local-region features around facial feature points. Method: First, the 2D ASM (active shape model) algorithm is applied to the depth image to coarsely locate the facial feature points, which are then precisely located on the facial point cloud using the shape index feature. Next, a series of iso-geodesic contours centered at the middle of the nose is extracted to characterize the face shape, and pose-invariant Procrustean vector features (distances and angles) are extracted as recognition features. Finally, the classification results of the individual iso-geodesic contour features are compared, and the classification results are fused at the decision level. Results: Feature-point localization and recognition experiments on the FRGC v2.0 face database show a mean localization error below 2.36 mm and a Rank-1 recognition rate of 98.35%. Conclusion: By extracting features from the approximately rigid region of the face around the feature points, the feature-point-based 3D face recognition method effectively avoids the mouth region, which is strongly affected by expression. Experiments confirm that the method achieves high recognition accuracy while remaining robust to pose and expression variations.
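A minimal sketch of the shape index computation on a depth map, using the standard curvature formulas for a graph surface z = f(x, y); the depth map is synthetic and the sign convention varies in the literature:

```python
# Sketch: shape index from a depth map z = f(x, y) via mean (H) and Gaussian (K)
# curvature; the depth map is synthetic.
import numpy as np

def shape_index(z, eps=1e-9):
    zy, zx = np.gradient(z)            # axis 0 is y, axis 1 is x
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    g = 1 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2                    # Gaussian curvature
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
         + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)           # mean curvature
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    k1, k2 = H + disc, H - disc                            # k1 >= k2
    # Shape index in [-1, 1]; the sign convention varies across papers.
    return (2 / np.pi) * np.arctan((k1 + k2) / (k1 - k2 + eps))

yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
depth = np.exp(-(xx ** 2 + yy ** 2) * 4)   # a synthetic "nose" bump
si = shape_index(depth)
```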

16.

Visible-spectrum face recognition systems are prone to failure when recognizing faces in unconstrained scenarios, so recognizing faces under variable and low illumination conditions is particularly important, since most security breaches happen at night. The Near Infrared (NIR) spectrum enables the acquisition of high-quality images even without any external light source, and is therefore a good way to address the illumination problem. Further, the soft biometric trait of gender and the non-verbal cue of facial expression have also been addressed in the NIR spectrum. In this paper, a method is proposed to recognize the face, classify gender, and recognize facial expression in the NIR spectrum. The proposed method is based on transfer learning and consists of three core components: i) training with small-scale NIR images, ii) matching NIR-NIR images (homogeneous matching), and iii) classification. Training on NIR images produces features via transfer learning from a network pre-trained on large-scale VIS face images. Next, matching is performed between the NIR images of the training and testing faces. The result is then classified using three separate SVM classifiers: one for face recognition, a second for gender classification, and a third for facial expression recognition. The method gives state-of-the-art accuracy for face recognition on the publicly available, challenging benchmark datasets CASIA NIR-VIS 2.0, Oulu-CASIA NIR-VIS, PolyU, CBSR, IIT Kh, and HITSZ. Further, Oulu-CASIA NIR-VIS, PolyU, and IIT Kh were analyzed for gender classification, and Oulu-CASIA NIR-VIS for facial expression recognition.
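A minimal sketch of the transfer-learning pipeline, assuming a torchvision ResNet-18 pre-trained on ImageNet as a stand-in for the paper's VIS-pre-trained network, with one SVM per task; all data are placeholders:

```python
# Sketch: features from a pretrained CNN, classified by per-task SVMs.
# ResNet-18/ImageNet is an assumed stand-in for the paper's pretrained network.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()        # drop the classifier; keep 512-d features
backbone.eval()

@torch.no_grad()
def extract(batch):                # batch: (N, 3, 224, 224) NIR images, 3-channel
    return backbone(batch).numpy()

X = extract(torch.randn(40, 3, 224, 224))               # placeholder NIR batch
svm_face = SVC().fit(X, np.random.randint(0, 10, 40))   # identity labels
svm_gender = SVC().fit(X, np.random.randint(0, 2, 40))  # gender labels
svm_expr = SVC().fit(X, np.random.randint(0, 6, 40))    # expression labels
```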


17.
Multi-scale Gabor representations of images are widely used in computer vision. This paper investigates Gabor representations of facial expression images. To reduce the dimensionality of the feature vectors, the Gabor wavelet coefficients are downsampled and then further reduced with PCA. Finally, AdaBoost is used to recognize the facial expressions. Experiments show that the method achieves an expression recognition rate above 95% for known faces and 72% for unknown faces, which is a good recognition performance.
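A minimal sketch of the dimensionality-reduction and classification stages, assuming precomputed Gabor coefficients (placeholders here) and illustrative hyperparameters:

```python
# Sketch: downsampled Gabor coefficients -> PCA -> AdaBoost; the Gabor
# coefficients are placeholders and the hyperparameters are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

gabor_coeffs = np.random.rand(150, 40, 32, 32)     # 40 filters on 32x32 faces
X = gabor_coeffs[:, :, ::4, ::4].reshape(150, -1)  # downsample, then flatten
y = np.random.randint(0, 7, 150)                   # expression labels

model = make_pipeline(PCA(n_components=60),        # second dimensionality cut
                      AdaBoostClassifier(n_estimators=200))
model.fit(X, y)
```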

18.
We previously developed a method for recognizing the facial expressions of a speaker. For facial expression recognition, we selected three static images, taken just before speaking and while speaking the phonemes of the first and last vowels, and only static images of the front-view face were used. However, frequent updates of the training data were time-consuming. To reduce the update time, we found that classification into "neutral," "happy," and "others" was efficient and accurate for facial expression recognition. Using the proposed method with training data for "happy" and "neutral" updated after an interval of approximately three and a half years, the facial expressions of two subjects were discriminated with 87.0% accuracy into "happy," "neutral," and "others" when the subjects exhibited the intentional facial expressions "angry," "happy," "neutral," "sad," and "surprised."

19.
Expression recognition is a further step of research built on face detection and an important research direction in computer vision. This work targets automatic expression recognition from micro-videos and studies how, in a big-data environment, deep learning techniques can support and advance expression recognition. To address several key technical difficulties in intelligent expression recognition, a fully automatic expression recognition model is designed. The model combines a deep autoencoder network with a self-attention mechanism, constructing...
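A minimal sketch of a scaled dot-product self-attention layer of the kind the abstract mentions; the dimensions are illustrative, and this is not the paper's architecture:

```python
# Sketch: scaled dot-product self-attention over a sequence of frame features;
# dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)   # joint query/key/value projection
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, frames, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                      # re-weighted frame features

frames = torch.randn(2, 16, 128)   # placeholder micro-video frame embeddings
out = SelfAttention()(frames)
print(out.shape)                   # torch.Size([2, 16, 128])
```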

20.
Variations in illumination degrade the performance of appearance-based face recognition. We present a novel algorithm for the normalization of color facial images using a single image and its co-registered 3D point cloud (3D image). The algorithm borrows the physically based Phong lighting model from computer graphics, normally used for rendering images, and employs it in reverse to calculate the face albedo from real facial images. Our algorithm estimates the number of dominant light sources and their directions from the specularities in the facial image and the corresponding 3D points. The intensities of the light sources and the parameters of the Phong model are estimated by fitting the model to the facial skin data. Unlike existing approaches, our algorithm takes into account both Lambertian and specular reflections as well as attached and cast shadows. Moreover, the algorithm is invariant to facial pose and expression and can effectively handle multiple extended light sources. The algorithm was tested on the challenging FRGC v2.0 data with satisfactory results: the mean fitting error was 6.3% of the maximum color value. Performing face recognition with the normalized images increased both identification and verification rates.
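A minimal sketch of inverse-Phong albedo recovery for a single directional light, assuming the standard ambient, diffuse, and specular terms; the normals, light direction, and parameters are placeholders, and the paper additionally handles multiple lights and shadows:

```python
# Sketch: per-pixel albedo from the Phong model run in reverse, for one
# directional light; normals, view direction, and parameters are placeholders.
import numpy as np

def albedo_from_phong(I, normals, light, view, ka, kd, ks, alpha, eps=1e-6):
    """I: (H, W) intensities; normals: (H, W, 3) unit normals; light, view:
    unit 3-vectors. Phong: I = rho * (ka + kd * max(0, n.l)) + specular."""
    n_dot_l = np.clip(normals @ light, 0.0, None)        # diffuse shading
    r = 2 * n_dot_l[..., None] * normals - light         # reflection vector
    spec = ks * np.clip(r @ view, 0.0, None) ** alpha    # specular term
    shading = ka + kd * n_dot_l
    return np.clip((I - spec) / np.maximum(shading, eps), 0.0, 1.0)

H = W = 64
normals = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])
I = np.random.rand(H, W)                                 # placeholder image
rho = albedo_from_phong(I, normals, light=np.array([0.0, 0.0, 1.0]),
                        view=np.array([0.0, 0.0, 1.0]),
                        ka=0.1, kd=0.8, ks=0.2, alpha=10)
```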
