Similar Literature
20 similar articles found (search time: 15 ms)
1.
In this paper we address the problem of 3D facial expression recognition. We propose a local geometric shape analysis of facial surfaces coupled with machine learning techniques for expression classification. Computing the length of the geodesic path between corresponding patches in a shape space, using a Riemannian framework, provides quantitative information about their similarity. These measures are then used as inputs to several classification methods. The experimental results demonstrate the effectiveness of the proposed approach. Using multiboosting and support vector machine (SVM) classifiers, we achieved average recognition rates of 98.81% and 97.75%, respectively, for the six prototypical facial expressions on the BU-3DFE database. A comparative study using the same experimental setting shows that the suggested approach outperforms previous work.
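The core quantity in the abstract above is the length of a geodesic path between corresponding patches. As a minimal sketch (assuming the geodesic has already been discretized into intermediate shapes; the paper's actual Riemannian framework is not reproduced here), the length can be approximated by summing segment norms along the path:

```python
import numpy as np

def path_length(path):
    """Approximate the length of a discretized path in shape space
    by summing Euclidean lengths of consecutive segments.
    `path` is an (n_steps, dim) array of intermediate shapes."""
    diffs = np.diff(path, axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

# Toy path: two segments of length 1 and 2 along one axis.
path = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
print(path_length(path))  # → 3.0
```

The resulting per-patch distances would then form the feature vector handed to the SVM or multiboosting classifier.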

2.
3.
Su  Chan  Wei  Jianguo  Lin  Deyu  Kong  Linghe 《Pattern Analysis & Applications》2023,26(2):543-553
Pattern Analysis and Applications - Both the multiple sources of available in-the-wild datasets and the noisy information in images pose huge challenges for discriminating subtle distinctions...

4.
Multimodal emotion recognition is an important topic in affective computing. This paper studies bimodal emotion recognition from facial expressions and body gestures and proposes a method based on bilateral sparse partial least squares (BSPLS). First, spatio-temporal features of the two modalities, expression and gesture, are extracted from video image sequences as emotion feature vectors. Then, the BSPLS dimensionality-reduction method further extracts emotion features from the two modalities, which are combined into a new emotion feature vector. Finally, two classifiers perform the emotion classification. Experiments on the widely used FABO bimodal (expression and gesture) emotion database, with comparisons against several subspace methods (principal component analysis, canonical correlation analysis, partial least squares regression), evaluate the recognition performance of the proposed method. The results show that fusing the two modalities is more effective than using either modality alone, and that BSPLS achieves the highest emotion recognition rate among the compared methods.
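The dimensionality-reduction step above is bilateral sparse partial least squares. A NIPALS-style iteration with soft-thresholding on both the X-side and Y-side weight vectors can illustrate the "bilateral sparse" idea (an illustrative sketch only; the paper's exact BSPLS penalties, deflation, and number of components are not given in this abstract):

```python
import numpy as np

def soft_threshold(v, t):
    """Shrink entries toward zero; entries below t in magnitude vanish."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_pls_weights(X, Y, lam=0.1, n_iter=50):
    """First pair of PLS weight vectors via a NIPALS-style iteration,
    with soft-thresholding applied on BOTH sides to mimic bilateral
    sparsity. X: (n, p) one modality's features; Y: (n, q) the other's."""
    u = Y[:, 0].astype(float)            # initialize with a Y column
    w = c = None
    for _ in range(n_iter):
        w = soft_threshold(X.T @ u, lam)
        w = w / (np.linalg.norm(w) + 1e-12)
        t = X @ w                         # X-side score
        c = soft_threshold(Y.T @ t, lam)
        c = c / (np.linalg.norm(c) + 1e-12)
        u = Y @ c                         # Y-side score
    return w, c
```

On data where only one X column carries signal, the sparsity drives the remaining weights toward zero.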

5.
To address the diverse ways facial expressions are presented and the susceptibility of facial expression recognition to nonlinear factors such as illumination, pose, and occlusion, this paper proposes a deep multi-scale fusion attention residual network (DMFA-ResNet). Built on the ResNet-50 backbone, the model introduces a new attention residual module composed of seven attention residual learning units, each with three branches, which perform parallel multi-convolution operations on the input image to obtain multi-scale features. An attention mechanism highlights salient local regions, which aids feature learning on occluded images. Transition layers added between the attention residual modules remove redundant information, simplifying the network, reducing computation while preserving the receptive field, and curbing overfitting. Experiments on three datasets show that the proposed algorithm outperforms the other state-of-the-art methods compared.

6.
Objective: Most current video emotion recognition methods rely solely on facial expressions and ignore the emotional information carried by physiological signals latent in facial video. This paper proposes a bimodal video emotion recognition method based on facial expressions and the blood volume pulse (BVP) physiological signal. Method: The video is first preprocessed to obtain the facial video. Two spatio-temporal expression features, LBP-TOP and HOG-TOP, are then extracted from the facial video, while video color magnification recovers the BVP signal, from which physiological emotion features are extracted. The two kinds of features are fed into BP classifiers to train classification models, and a fuzzy integral performs decision-level fusion to produce the final recognition result. Results: On a self-built facial video emotion database, the average recognition rates of the expression-only and physiological-signal-only modalities were 80% and 63.75%, respectively, while the fused result reached 83.33%, higher than either single modality, demonstrating the effectiveness of the bimodal fusion. Conclusion: The proposed bimodal spatio-temporal feature fusion makes fuller use of the emotional information in video and effectively improves classification performance; comparative experiments against similar video emotion recognition algorithms confirm its superiority. In addition, the fuzzy-integral decision-level fusion effectively suppresses the interference of unreliable decision information, yielding better recognition accuracy.
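The decision-level fusion step above uses a fuzzy integral. A small sketch of the Choquet fuzzy integral, one common choice (the abstract does not say which fuzzy integral or measure the authors used, so the measure values below are invented for illustration):

```python
import numpy as np

def choquet_integral(scores, measure):
    """Choquet fuzzy integral for decision-level fusion of per-source
    confidences. `measure` maps frozensets of source indices to their
    fuzzy measure g(A), with g(full set) = 1. Scores are sorted
    ascending; each increment of score is weighted by the measure of
    the coalition of sources scoring at least that much."""
    order = np.argsort(scores)
    total, prev = 0.0, 0.0
    for i, idx in enumerate(order):
        coalition = frozenset(int(j) for j in order[i:])
        total += (scores[idx] - prev) * measure[coalition]
        prev = scores[idx]
    return float(total)

# Invented example: source 0 = expression model, source 1 = BVP model.
g = {frozenset({0, 1}): 1.0, frozenset({0}): 0.7, frozenset({1}): 0.5}
print(choquet_integral(np.array([0.8, 0.6]), g))  # ≈ 0.74
```

A measure with g({0}) > g({1}) encodes more trust in the expression model, matching its higher single-modality accuracy in the abstract.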

7.
In ResNet-50, the 1×1 dimensionality-reduction convolution in the Bottleneck causes the main branch to lose part of the feature information, lowering expression recognition accuracy. This paper replaces the 1×1 and 3×3 convolutions in the Bottleneck with a Ghost module and depthwise separable convolution, respectively, retaining more of the original feature information and strengthening the main branch's feature extraction. The ReLU activation in the Bottleneck is replaced with the Mish activation function, improving expression recognition accuracy. On this basis, asymmetric residual attention blocks (ARABlock) inserted between the improved Bottlenecks enhance the model's representation of important information, yielding a ghost asymmetric residual attention network (GARAN) for expression recognition. Comparative experiments show that the method achieves high recognition accuracy on the FER2013 and CK+ expression datasets.
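The Mish activation mentioned above is easy to state: mish(x) = x · tanh(softplus(x)). A quick numpy comparison against ReLU (illustrative only; not the paper's code):

```python
import numpy as np

def mish(x):
    # Mish: x * tanh(softplus(x)); smooth and non-monotonic, and it
    # preserves small negative values that ReLU discards.
    return x * np.tanh(np.log1p(np.exp(x)))

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([-2.0, 0.0, 2.0])
print(mish(x))  # ≈ [-0.2525, 0.0, 1.944]
print(relu(x))  # [0.0, 0.0, 2.0]
```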

8.

Emotion recognition from facial images is considered a challenging task due to the varying nature of facial expressions. Prior deep learning studies on emotion classification from facial images suffer performance degradation due to poor selection of layers in the convolutional neural network model. To address this issue, we propose an efficient deep learning technique using a convolutional neural network model for classifying emotions from facial images and detecting age and gender from facial expressions. Experimental results show that the proposed model outperformed baseline works, achieving an accuracy of 95.65% for emotion recognition, 98.5% for age recognition, and 99.14% for gender recognition.


9.
Zou  Wei  Zhang  Dong  Lee  Dah-Jye 《Applied Intelligence》2022,52(3):2918-2929
Applied Intelligence - Using lightweight networks for facial expression recognition (FER) is becoming an important research topic in recent years. The key to the success of FER with lightweight...

10.
Zhao  Dezhu  Qian  Yufeng  Liu  Jun  Yang  Min 《The Journal of supercomputing》2022,78(4):4681-4708
The Journal of Supercomputing - A facial expression recognition (FER) algorithm is built on the advanced convolutional neural network (CNN) to improve the current FER algorithms’ recognition...

11.
Facial expression recognition is an active research topic in computer vision. To handle multi-view variation and missing facial information in unconstrained faces, this paper proposes a multi-view facial expression recognition method based on MVFE-LightNet (Multi-View Facial Expression Lightweight Network). First, a convolutional network built on residual blocks extracts expression features under different views, with depthwise separable convolutions introduced to reduce network parameters. Second, squeeze-and-excitation modules are embedded to learn feature weights, recalibrating features to improve the network's representational power, and spatial pyramid pooling is added to enhance robustness. Finally, to further optimize the recognition results, the AdamW (Adam with Weight decay) optimizer is used to speed up convergence. Experiments on the RaFD, BU-3DFE, and Fer2013 expression databases show that the method achieves high recognition rates while reducing network computation time.
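The parameter saving from swapping a standard convolution for a depthwise separable one, as done above, is simple arithmetic. A sketch with assumed channel sizes (64 → 128 channels, 3×3 kernel; these are not figures from the paper):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k×k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k×k conv (one filter per input channel) followed by a
    1×1 pointwise conv — the substitution used to cut parameters."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)                  # 73728
sep = depthwise_separable_params(3, 64, 128)   # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))           # → 73728 8768 8.4
```

Roughly an 8× reduction at this layer size, which is where the "lightweight" in MVFE-LightNet comes from.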

12.
Traditional methods for recognizing miners' facial expressions suffer from low recognition rates and algorithmic complexity. This paper proposes a miner facial expression recognition method that combines a convolutional neural network with the nonlinear mapping function of the support vector machine. The convolutional neural network adopts a weight-sharing strategy, constructs convolutional layers directly with fixed weights, and determines the network structure according to a matched-growth rule. Preprocessed miner facial expression images serve as the training and test sets, and a support vector machine classifies the neurons representing the facial expression features, thereby recognizing the miners' expressions. Experimental results show that the method achieves a recognition rate of 90.71% on miners' facial expressions, sufficient for practical application.
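The fixed-weight convolutional layer described above can be sketched in a few lines of numpy: a "valid" correlation-style convolution (as CNNs compute it) with a hand-fixed kernel rather than a learned one. The edge kernel and toy image below are invented for illustration:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' correlation-style 2D convolution with a fixed
    (non-learned) kernel. Pure-numpy sketch, no padding or stride."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1.0, 0.0, -1.0]] * 3)   # assumed vertical-edge kernel
img = np.zeros((5, 5)); img[:, :2] = 1.0  # left half bright
resp = conv2d_valid(img, edge)
print(resp.shape)  # → (3, 3)
print(resp[0])     # → [3. 3. 0.]
```

The response map over many such fixed kernels would then be flattened into the feature vector the SVM classifies.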

13.
This article proposes a novel framework for the recognition of six universal facial expressions. The framework is based on three sets of features extracted from a face image: entropy, brightness, and local binary pattern. First, saliency maps are obtained using the state-of-the-art saliency detection algorithm “frequency-tuned salient region detection”. The idea is to use saliency maps to determine appropriate weights or values for the extracted features (i.e., brightness and entropy). We have performed a visual experiment to validate the performance of the saliency detection algorithm against the human visual system. Eye movements of 15 subjects were recorded using an eye-tracker in free-viewing conditions while they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of the visual experiment demonstrated that the obtained saliency maps are consistent with the data on human fixations. Finally, the performance of the proposed framework is demonstrated via satisfactory classification results achieved with the Cohn-Kanade database, FG-NET FEED database, and Dartmouth database of children’s faces.
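Two of the three feature types above, entropy and local binary patterns, are compact enough to sketch directly (a minimal illustration; the paper's exact histogram settings and LBP variant are not specified in this abstract):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram
    (bin count and range are assumed)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch,
    thresholding neighbours (clockwise from top-left) against the centre."""
    c = patch[1, 1]
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(neigh))

two = np.zeros((4, 4)); two[:2] = 128.0
print(image_entropy(two))                # → 1.0 (two equally likely levels)
print(lbp_code(np.full((3, 3), 100.0)))  # → 255 (all neighbours >= centre)
```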

14.
由于现实生活场景差异大,人类在不同场景中表现的情感也不尽相同,导致获取到的情感数据集标签分布不均衡;同时传统方法多采用模型预训练和特征工程来增强与表情相关特征的表示能力,但没有考虑不同特征表达之间的互补性,限制了模型的泛化性和鲁棒性.针对上述问题,提出了一种包含网络集成模型Ens-Net的端到端深度学习框架EE-GAN...  相似文献   

15.
We present a system for facial expression recognition that is evaluated on multiple databases. Automated facial expression recognition systems face a number of characteristic challenges. First, obtaining natural training data is difficult, especially for facial configurations expressing emotions like sadness or fear; publicly available databases therefore consist of acted facial expressions and are biased by their authors’ design decisions. Second, evaluating trained algorithms against real-world behavior is challenging, again due to the artificial conditions in available image data. To tackle these challenges, and since our goal is to train classifiers for an online system, we use several databases in our evaluation. Comparing classifiers across databases determines their capability to generalize more reliably than traditional self-classification.

16.
Psychological studies show that facial expression changes are concentrated in facial organs such as the eyes, eyebrows, nose, and mouth. Inspired by this, a facial-structure-based expression recognition method is proposed that analyzes expressions through the coordinated changes of these key regions. First, the robust discriminative response map fitting (DRMF) method automatically detects the local facial regions most critical to expression recognition, i.e., the eyes, eyebrows, nose, and mouth. Haar features are then extracted from these key parts. Finally, Boosting learning with a linkage mechanism trains an expression classifier based on joint Haar features. Experimental results on the CMU and JAFFE expression databases demonstrate the method's good performance: recognizing expressions from automatically detected facial components achieves accuracy close to that obtained with precisely hand-labeled components.
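Haar features over the detected eye/brow/nose/mouth regions are classically computed with an integral image, so each rectangle sum costs four lookups. A minimal sketch (region coordinates would come from the DRMF detection step; the specific two-rectangle template here is assumed):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so any
    rectangle sum is four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_left_minus_right(ii, r, c, h, w):
    """Two-rectangle Haar feature over a region (w even): left-half
    brightness minus right-half brightness."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.zeros((4, 4)); img[:, :2] = 1.0   # toy region: left half bright
ii = integral_image(img)
print(haar_left_minus_right(ii, 0, 0, 4, 4))  # → 8.0
```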

17.
Automatic perception of human affective behaviour from facial expressions and recognition of intentions and social goals from dialogue contexts would greatly enhance natural human robot interaction. This research concentrates on intelligent neural network based facial emotion recognition and Latent Semantic Analysis based topic detection for a humanoid robot. The work has first of all incorporated Facial Action Coding System describing physical cues and anatomical knowledge of facial behaviour for the detection of neutral and six basic emotions from real-time posed facial expressions. Feedforward neural networks (NN) are used to respectively implement both upper and lower facial Action Units (AU) analysers to recognise six upper and 11 lower facial actions including Inner and Outer Brow Raiser, Lid Tightener, Lip Corner Puller, Upper Lip Raiser, Nose Wrinkler, Mouth Stretch etc. An artificial neural network based facial emotion recogniser is subsequently used to accept the derived 17 Action Units as inputs to decode neutral and six basic emotions from facial expressions. Moreover, in order to advise the robot to make appropriate responses based on the detected affective facial behaviours, Latent Semantic Analysis is used to focus on underlying semantic structures of the data and go beyond linguistic restrictions to identify topics embedded in the users’ conversations. The overall development is integrated with a modern humanoid robot platform under its Linux C++ SDKs. The work presented here shows great potential in developing personalised intelligent agents/robots with emotion and social intelligence.

18.
The paper presents novel modifications to radial basis functions (RBFs) and a neural network based classifier for holistic recognition of the six universal facial expressions from static images. The new basis functions, called cloud basis functions (CBFs) use a different feature weighting, derived to emphasize features relevant to class discrimination. Further, these basis functions are designed to have multiple boundary segments, rather than a single boundary as for RBFs. These new enhancements to the basis functions along with a suitable training algorithm allow the neural network to better learn the specific properties of the problem domain. The proposed classifiers have demonstrated superior performance compared to conventional RBF neural networks as well as several other types of holistic techniques used in conjunction with RBF neural networks. The CBF neural network based classifier yielded an accuracy of 96.1%, compared to 86.6%, the best accuracy obtained from all other conventional RBF neural network based classification schemes tested using the same database.

19.
Facial expression recognition based on fusion of local and global features
A method fusing local and global features is proposed to extract facial expression features. Ten distances are measured on each face image and, after normalization, serve as the local expression features; Fisher linear discriminant (FLD) extracts the global expression features. To address the small-sample problem, a "PCA + FLD" strategy is adopted: PCA first projects the face image vectors into a lower-dimensional space, and standard FLD then extracts the expression features. The fused features are fed into a feedforward neural network trained with back-propagation for classification. Experiments on the Yale face database and the Japanese Female Facial Expression database (JAFFE) give satisfactory results.
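The "PCA + FLD" strategy above is the standard remedy for the small-sample problem: PCA first reduces dimensionality so the within-class scatter matrix becomes invertible before Fisher's criterion is applied. A two-class numpy sketch (the paper's multi-class setting and exact dimensions are not reproduced; a small ridge term stands in for cases where scatter is still near-singular):

```python
import numpy as np

def pca_project(X, k):
    """Project row-sample data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fisher_direction(X, y):
    """Two-class Fisher linear discriminant: w ∝ Sw^{-1} (m1 - m0),
    where Sw is the pooled within-class scatter matrix."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)
```

The PCA-projected data would be passed to `fisher_direction`, and the resulting discriminant scores concatenated with the normalized distance features before classification.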

20.
Facial expression recognition generally requires that faces be described in terms of a set of measurable features. The selection and quality of the features representing each face have a considerable bearing on the success of subsequent facial expression classification. Feature selection is the process of choosing a subset of features in order to increase classifier efficiency and allow higher classification accuracy. Many current dimensionality reduction techniques used for facial expression recognition involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. In this paper, we present a feature selection methodology that uses the nondominated sorting genetic algorithm-II (NSGA-II), one of the latest genetic algorithms for solving multiobjective problems with high accuracy. In the proposed feature selection process, NSGA-II optimizes a vector of feature weights, which increases discrimination by means of class separation. The proposed methodology is evaluated on the BU-3DFE 3D facial expression database. Classification results validate the effectiveness and flexibility of the proposed approach when compared with results reported in the literature using the same experimental settings.
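The NSGA-II machinery rests on nondominated sorting: a candidate survives the first front if no other candidate is at least as good on every objective and strictly better on one. A pure-Python sketch of extracting the first Pareto front (objectives assumed to be minimized; the actual objectives used in the paper are not listed in this abstract):

```python
def nondominated_front(points):
    """Indices of the Pareto-nondominated points, all objectives to be
    minimized — the core ranking step of NSGA-II, e.g. trading off
    feature-subset size against classification error."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qv <= pv for qv, pv in zip(q, p)) and
            any(qv < pv for qv, pv in zip(q, p))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)]
print(nondominated_front(pts))  # → [0, 1, 2]
```

NSGA-II then ranks the remaining points into successive fronts and breaks ties by crowding distance; only the first-front extraction is shown here.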

