Similar Literature
20 similar documents retrieved (search time: 62 ms)
1.
In the real world, each individual expresses emotions in a different way. Based on this observation, this paper proposes a local feature clustering (LFA) loss function that, during deep neural network training, reduces the differences among images of the same class and enlarges the differences between images of different classes, thereby weakening the effect of expression polymorphism on the features extracted by deep learning. Moreover, since local regions rich in expression convey facial expression features better, a deep learning framework incorporating the LFA loss is proposed, and the extracted local features of facial images are used for facial expression recognition. Experimental results demonstrate the effectiveness of the method on the real-world RAF dataset and the lab-controlled CK+ dataset.
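The intra-/inter-class objective described above can be illustrated with a minimal numpy sketch of a clustering-style loss. This is an assumption-laden stand-in, not the paper's exact LFA formulation: it pulls each feature toward its class mean and pushes class means at least a margin apart.

```python
import numpy as np

def clustering_loss(features, labels, margin=1.0):
    """Center-loss-style objective (illustrative stand-in for LFA):
    pull same-class features toward their class mean, and push
    different class means at least `margin` apart."""
    classes = list(np.unique(labels))
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # intra-class term: mean squared distance of each feature to its own center
    intra = np.mean([np.sum((f - centers[classes.index(l)]) ** 2)
                     for f, l in zip(features, labels)])
    # inter-class term: penalize pairs of class centers closer than `margin`
    inter = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - d) ** 2
    return intra + inter
```

Tightly clustered, well-separated classes yield a small loss; overlapping classes yield a larger one.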

2.
Robust Facial Expression Recognition Based on Generative Adversarial Networks (Cited by: 1; self-citations: 0; others: 1)
Natural emotional communication is often accompanied by head rotation and body movement, which frequently cause large-area facial occlusion and thus the loss of expression information from face images. Most existing expression recognition methods rely on generic facial features and recognition algorithms and do not account for the differences between expression and identity, making them insufficiently robust for new users. This paper proposes a subject-independent expression recognition method for partially occluded face images. The method comprises a face image generation network based on the Wasserstein generative adversarial network (WGAN), which completes the occluded regions with context-consistent content, and an expression recognition network that extracts subject-independent expression features and infers expression categories by setting up an adversarial relationship between the expression recognition task and the identity recognition task. Experimental results show that the method achieves a subject-independent average recognition accuracy above 90% on a mixed dataset composed of CK+, Multi-PIE, and JAFFE. On CK+, the subject-independent accuracy reaches 96%, of which 4.5 percentage points are attributable to the proposed adversarial expression feature extraction. Moreover, within a head rotation range of 45°, the method also improves recognition accuracy for non-frontal expressions.

3.

Emotion recognition from facial images is a challenging task due to the varying nature of facial expressions. Prior studies on emotion classification from facial images using deep learning models have suffered performance degradation due to poor layer selection in the convolutional neural network model. To address this issue, we propose an efficient deep learning technique using a convolutional neural network model that classifies emotions from facial images and also detects age and gender from facial expressions. Experimental results show that the proposed model outperforms baseline works, achieving an accuracy of 95.65% for emotion recognition, 98.5% for age recognition, and 99.14% for gender recognition.


4.
The challenge of coping with non-frontal head poses during facial expression recognition results in a considerable reduction of accuracy and robustness when capturing expressions that occur during natural communication. In this paper, we attempt to recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch based 4D expression representation model is proposed. It is reconstructed from 2D dynamic images to delineate continuous spatial changes and temporal context in non-frontal cases. Furthermore, we present an effective deep neural network classifier, which can accurately capture pose-variant expression features from the depth patches and recognize non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions with head rotation angles of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.

5.
A Hierarchical Face Description Model and Recognition Study (Cited by: 7; self-citations: 0; others: 7)
Automatic face recognition is a difficult but important task. This paper presents a recognition method based on a hierarchical description of the face. The method first performs fast and accurate facial feature localization and normalization, then applies principal component analysis neural networks to extract optimal features from the localized face and its feature regions, yielding a two-level feature description of the face at low and higher resolutions for recognition. It offers a high recognition rate, a moderate amount of feature data, and applicability to large-scale face recognition. The method was tested on 1,300 face images; the results show that under face rotation, expression change, or untrained faces… (abstract truncated in the source)

6.
The automatic recognition of facial expressions is critical to applications that must recognize human emotions, such as multimodal user interfaces. A novel framework for recognizing facial expressions is presented in this paper. First, distance-based features are introduced and integrated to yield improved discriminative power. Second, a bag-of-distances model is applied to summarize the training images and to construct codebooks automatically. Third, the combined distance-based features are transformed into mid-level features using the trained codebooks. Finally, a support vector machine (SVM) classifier for recognizing facial expressions is trained. The results of this study show that the proposed approach outperforms state-of-the-art methods in recognition rate on the CK+ dataset.
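The distance-based features above can be sketched as the set of pairwise distances between facial landmarks. The landmark layout and the max-distance normalization below are assumptions of this sketch, not details from the paper:

```python
import numpy as np
from itertools import combinations

def pairwise_distance_features(landmarks):
    """Flatten all pairwise Euclidean distances between facial
    landmarks into a feature vector (hypothetical landmark layout).
    Dividing by the largest distance cancels out face size."""
    idx = list(combinations(range(len(landmarks)), 2))
    d = np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in idx])
    return d / d.max()
```

With n landmarks this produces n(n-1)/2 features, and uniformly scaling the face leaves the vector unchanged.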

7.
To better apply existing deep convolutional neural networks to expression recognition, a method combining pre-training on a purpose-built natural expression image set with multi-task deep learning is proposed. First, a spontaneous facial expression dataset is built from social network images and used to pre-train an existing deep convolutional neural network. Then, the flat softmax classifier in the output layer is replaced with a two-level tree classifier to build a deep multi-task facial expression recognition model. Experimental results show that the proposed method effectively improves facial expression recognition accuracy.

8.
Objective: Expression recognition has broad application prospects in commerce, security, medicine, and other fields; recognizing facial expressions quickly and accurately matters for both research and application. Traditional machine learning methods require handcrafted features and struggle to guarantee accuracy. In recent years, convolutional neural networks have been widely adopted for their self-learning and generalization abilities, but problems remain, such as difficulty in extracting expression features and long training times. To address these problems, an expression recognition method based on a parallel convolutional neural network is proposed. Method: Facial expression images are first preprocessed with face localization, grayscale normalization, and angle correction, removing the effects of complex backgrounds, illumination, and pose to obtain a precise face region. A convolutional neural network with two parallel convolution-pooling units is then designed for the expression images, capable of extracting subtle expression regions. The parallel structure has three different channels that extract different image features and fuse them before a final softmax layer performs classification. Results: The proposed parallel network was evaluated with 10-fold cross-validation on the CK+ and FER2013 expression datasets, averaging over the 10 runs; it achieved accuracies of 94.03% on CK+ and 65.6% on FER2013, with per-iteration times of 0.185 s and 0.101 s, respectively. Conclusion: The approach offers a new idea for convolutional network design: extending width while controlling depth to extract more expression features. The experiments show that on expression datasets differing greatly in size, resolution, and scale, the model achieves high recognition rates while shortening training time.
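The final fuse-then-classify step of a multi-channel network like the one above can be sketched as concatenation followed by a softmax layer. The weights `W` and `b` here are hypothetical placeholders for a trained classification layer:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fuse_and_classify(channel_feats, W, b):
    """Concatenate the feature vectors from parallel channels,
    then apply a linear layer plus softmax to get class probabilities.
    W and b stand in for trained parameters (assumed shapes)."""
    fused = np.concatenate(channel_feats)
    return softmax(W @ fused + b)
```

The output is a probability vector over the expression classes, summing to one regardless of the channel contents.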

9.

Facial expressions are essential in community-based interactions and in the analysis of emotional behaviour. The automatic identification of faces is a motivating topic for researchers because of its numerous applications, such as health care, video conferencing, and cognitive science. In computer vision, the automatic detection of facial expressions from facial images is a very challenging issue. An innovative methodology for the recognition of facial expressions is introduced in the present work, proceeding in the following stages. First, the input image is taken from a facial expression database and pre-processed with high-frequency emphasis (HFE) filtering and modified histogram equalization (MHE). After image enhancement, the Viola-Jones (VJ) framework is used to detect the face, and the face region is cropped from the face coordinates. Then several effective features are extracted: shape information from an enhanced histogram of gradients (EHOG feature); intensity variation via mean, standard deviation, and skewness; facial movement variation via facial action coding (FAC); texture via a weighted patch-based local binary pattern (WLBP); and spatial information via an entropy-based spatial feature. Subsequently, the dimensionality of the features is reduced by selecting the most relevant features with a Residual Network (ResNet). Finally, an extended wavelet deep convolutional neural network (EWDCNN) classifier uses the extracted features to classify the facial expression into the sad, happy, anger, fear, disgust, surprise, and neutral classes. The implementation platform is Python. The presented technique is tested on three datasets: JAFFE, CK+, and Oulu-CASIA.
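The MHE preprocessing step above builds on ordinary histogram equalization; the paper's modification is not detailed here, so the sketch below shows only the textbook baseline on an 8-bit grayscale image:

```python
import numpy as np

def histogram_equalize(img):
    """Plain global histogram equalization for an 8-bit grayscale
    image (the paper's MHE is a modified variant; this is only the
    standard baseline it departs from)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero CDF value
    # map each gray level through the normalized CDF to spread contrast
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

A low-contrast input occupying a narrow gray range is stretched to span the full 0-255 range.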


10.
Kinship verification is an important branch of face recognition, with applications in reuniting separated relatives, searching for missing children, building family trees, and social media analysis. Face images of parents and children often differ considerably, so extracting discriminative features from faces is key to improving kinship verification accuracy. This paper therefore proposes a kinship verification method based on deep learning and local facial feature enhancement, constructing a Local Facial Feature Enhancement Verification Net (LFFEV Net) to obtain highly discriminative face feature representations for kinship verification. LFFEV Net consists of a local feature attention network and a residual verification network. The local feature attention network extracts key local facial features; these features and the corresponding original image are fed together into the residual verification network to obtain more discriminative face features, which are fused and combined with Family ID information for kinship verification. The algorithm was tested on the public kinship datasets KinFaceW-I and KinFaceW-II; experimental results show that the designed method achieves high recognition rates on the kinship verification task.

11.
Objective: Current 2D expression recognition methods perform poorly on highly confusable expressions and are easily affected by face pose and illumination changes. Using 3D facial landmark data captured with the Kinect RGBD camera, a real-time expression recognition method combining 2D pixel features and 3D landmark features is proposed. Method: First, three classic descriptors, LBP (local binary patterns), Gabor filters, and HOG (histograms of oriented gradients), are used to extract 2D pixel features of facial expressions. Because 2D pixel features have limited power to describe facial expressions, three kinds of 3D features (angles, distances, and normal vectors between facial landmarks) are further extracted to describe expression variation in finer detail. To improve recognition of highly confusable expressions and increase robustness, three random forest models are trained on the 2D pixel features and three on the 3D landmark features, and the final expression category is obtained by a weighted combination of the outputs of the six random forest classifiers. Results: The method was validated on the Face3D 3D expression dataset with nine different expressions. Combining 2D pixel and 3D landmark features benefits recognition: the average accuracy reaches 84.7%, 4.5 percentage points above the best method proposed in recent years, and 3.0 and 5.8 percentage points above the 2D-only and 3D-only features, respectively. For highly confusable expressions such as anger, sadness, and fear, accuracy exceeds 80%, and the method runs in real time at 10-15 frames/s. Conclusion: Combining the 2D pixel features and 3D landmark features of expression images improves the method's ability to describe facial expression changes, and for confusable expression classes, weighted averaging of the multiple random forest classifiers effectively reduces interference among confusable expressions and improves robustness. The experimental results show that, compared with plain 2D or 3D features, the method is not only advantageous for expression recognition but also preserves real-time performance.
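The weighted combination of the six random-forest outputs can be sketched as a weighted average of per-classifier class-probability vectors. The weights here are illustrative; in practice they would be tuned on validation data:

```python
import numpy as np

def weighted_vote(prob_list, weights):
    """Combine per-classifier class-probability vectors by weighted
    averaging and return (predicted class index, combined probabilities).
    `weights` are assumed to be tuned separately, e.g. on a validation set."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize so probs still sum to 1
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return int(np.argmax(probs)), probs
```

A classifier given a larger weight pulls the combined decision toward its own prediction.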

12.
苏志明, 王烈, 蓝峥杰. 《计算机工程》, 2021, 47(12): 299-307, 315
Subtle inter-class differences and significant intra-class variation make facial expression recognition difficult. A recognition model based on a multi-scale bilinear pooling neural network is constructed. Networks at three different scales are designed to extract global facial expression features, and a hierarchical bilinear pooling layer is introduced to integrate multi-scale cross-layer bilinear features from within the same network and across different networks, capturing part-level feature relations between layers and enhancing the model's ability to represent and discriminate subtle expression features. Layer-wise deconvolution is also used to fuse multi-layer feature information, addressing the loss of key features that occurs when neural networks extract features through stacked convolution and pooling layers. Experimental results show that the model achieves recognition rates of 73.725% on FER2013 and 98.28% on CK+, outperforming facial expression recognition models such as SLPM, CL, and JNS.

13.
Recognizing expressions is a key part of human social interaction, and processing of facial expression information is largely automatic for humans, but it is a non-trivial task for a computational system. The purpose of this work is to develop computational models capable of differentiating between a range of human facial expressions. Raw face images are examples of high-dimensional data, so here we use two dimensionality reduction techniques: principal component analysis and curvilinear component analysis. We also preprocess the images with a bank of Gabor filters, so that important features in the face images may be identified. Subsequently, the faces are classified using a support vector machine. We show that it is possible to differentiate faces with a prototypical expression from the neutral expression, and that this can be achieved with data that has been massively reduced in size: in the best case the original images are reduced to just 5 components. We also investigate effect size on face images, a measure which has not previously been reported for faces; this enables us to identify those areas of the face that are involved in the production of a facial expression.
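The PCA step above can be sketched with a plain SVD-based projection (curvilinear component analysis is nonlinear and omitted from this sketch):

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components.
    Centering plus SVD of the data matrix gives the component
    directions as the rows of Vt."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

Because the component directions are orthonormal, projecting onto all components preserves distances from the mean; keeping only k of them discards the least-variance directions, which is how a face image can be compressed to a handful of components.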

14.
This study proposes a system that classifies a facial expression into one of six categories, namely Joy, Disgust, Anger, Sadness, Fear, and Surprise, and also assigns each expression an intensity: High, Medium, or Low. This is carried out in two independent, parallel processes. Permanent and transient facial features are detected from still images, and pertinent information is extracted about the presence of transient features in specific facial regions and about facial distances computed from the permanent facial features. Both the classification and quantification processes are based on transient and permanent features. Belief theory is used in both processes because of its ability to fuse data coming from different sensors. The system outputs a recognised and quantified expression. The quantification process allows recognising a new subset of expressions deduced from the basic ones: by associating the three intensities (low, medium, and high) with each expression, three facial expressions are deduced from each basic one, so a set of 18 facial expressions is categorised instead of six. Experimental results are given to show the system's classification accuracy.
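Crossing the six basic expressions with the three intensity levels to obtain the 18-label set can be sketched directly (the label format is an assumption of this sketch):

```python
from itertools import product

EXPRESSIONS = ["Joy", "Disgust", "Anger", "Sadness", "Fear", "Surprise"]
INTENSITIES = ["Low", "Medium", "High"]

def combined_label(expr, intensity):
    """Fuse the outputs of the two independent processes
    (classification and quantification) into a single label."""
    return f"{intensity} {expr}"

# the full 6 x 3 = 18 label set
ALL_LABELS = [combined_label(e, i) for e, i in product(EXPRESSIONS, INTENSITIES)]
```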

15.
Facial Age Estimation Based on Boosting RBF Neural Networks (Cited by: 1; self-citations: 0; others: 1)
胡斓, 夏利民. 《计算机工程》, 2006, 32(19): 199-201
Aging is the main cause of changes in facial appearance, but because each person's lifestyle differs, it is difficult to estimate age accurately from face images. This paper proposes an age estimation method based on face images: NMF is used to extract facial features, and an RBF neural network determines an estimation function between a face image and its corresponding age. On this basis, to improve the neural network's generalization ability and estimation accuracy, Boosting is used to construct a sequence of neural-network-based functions, which are combined into a strengthened estimation function. Experimental results confirm the validity of the method.
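The boosting step can be sketched as residual fitting: each round fits a weak model to the current residuals and adds it to the ensemble. Linear least-squares stages stand in here for the paper's RBF networks, purely to keep the example short:

```python
import numpy as np

def boosted_regression(X, y, rounds=5, lr=0.5):
    """Toy residual boosting: each round fits a linear least-squares
    model to the current residuals and adds a damped copy to the
    ensemble prediction. (The paper boosts RBF networks; linear
    stages are a simplifying assumption of this sketch.)"""
    Xb = np.column_stack([X, np.ones(len(X))])   # add a bias column
    pred = np.zeros(len(y))
    stages = []
    for _ in range(rounds):
        w, *_ = np.linalg.lstsq(Xb, y - pred, rcond=None)  # fit residuals
        stages.append(w)
        pred += lr * (Xb @ w)
    return stages, pred
```

Each round shrinks the residual, so the ensemble's training error falls below that of any single stage acting alone.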

16.
With the development of CG technology, images of human body shapes and movements can be generated by computer. As society becomes increasingly information-driven, adding human figures to human-machine dialogue can make the dialogue more stable and fluent. This paper explores, from a technical perspective, the simulation of human facial expressions, and in particular the generation of natural, varied expressions.

17.
In this paper, we propose a recursive framework to recognize facial expressions from images in real scenes. Unlike traditional approaches that typically focus on developing and refining algorithms to improve recognition performance on an existing dataset, we integrate three important components in a recursive manner: facial dataset generation, facial expression recognition model building, and interactive interfaces for testing and new data collection. We first create the candid images for facial expression (CIFE) dataset. We then apply a convolutional neural network (CNN) to CIFE and build a CNN model for web image expression classification. To increase expression recognition accuracy, we fine-tune the CNN model and thus obtain a better CNN facial expression recognition model. Based on the fine-tuned CNN model, we design a facial expression game engine and collect a new and more balanced dataset, GaMo, whose images are collected from the different expressions our game users make when playing the game. Finally, we run yet another recursive step: a self-evaluation of the quality of the data labeling, with a self-cleansing mechanism for improving the quality of the data. We evaluate the GaMo and CIFE datasets and show that our recursive framework helps build a better facial expression model for dealing with real-scene facial expression tasks.

18.
Image-based animation of facial expressions (Cited by: 1; self-citations: 0; others: 1)
We present a novel technique for creating realistic facial animations given a small number of real images and a few parameters for the in-between images. The scheme can also be used to reconstruct facial movies, where the parameters are extracted automatically from the images. The in-between images are produced without ever generating a three-dimensional model of the face. Since facial motion due to expressions is not well defined mathematically, our approach is based on utilizing image patterns in facial motion. These patterns were revealed by an empirical study which analyzed and compared image motion patterns in facial expressions. The major contribution of this work is showing how parameterized “ideal” motion templates can generate facial movies for different people and different expressions, where the parameters are extracted automatically from the image sequence. To test the quality of the algorithm, image sequences (one of which was taken from a TV news broadcast) were reconstructed, yielding movies hardly distinguishable from the originals. Published online: 2 October 2002. Correspondence to: A. Tal. Work has been supported in part by the Israeli Ministry of Industry and Trade, The MOST Consortium.

19.
To localize and constrain facial contour feature regions, this work combines key landmark extraction with the fusion of adjacent facial color regions and introduces an attention mechanism, proposing a CycleGAN-based algorithm for cartoon-stylizing key facial contour regions. With these as initial samples, a generative adversarial network (GAN) is constructed to obtain locally cartoon-stylized face images with natural blending. Facial contours and key landmarks are extracted, and color feature information is used to constrain the key facial stylization regions… (abstract truncated in the source)

20.
In this paper, the effect of partial occlusion on facial expression recognition is investigated. Classification of partially occluded images into one of the six basic facial expressions is performed using a method based on Gabor-wavelet texture information extraction, a supervised image decomposition method based on Discriminant Non-negative Matrix Factorization, and a shape-based method that exploits the geometrical displacement of certain facial features. We demonstrate how partial occlusion affects the above methods in the classification of the six basic facial expressions, and indicate how partial occlusion affects human observers when recognizing facial expressions. An attempt is also made to specify which part of the face (left, right, lower, or upper region) contains more discriminant information for each facial expression, and conclusions are drawn regarding the pairs of facial expression misclassifications that each type of occlusion introduces.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号