Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
奚琰 《计算机系统应用》2022,31(11):175-183
Unlike images captured in laboratory settings, facial expression images in real life involve complex scenes; the most common problem, partial occlusion, causes significant changes in facial appearance, so the global features extracted by a model contain emotion-irrelevant redundant information that lowers their discriminative power. To address this, this paper proposes a facial expression recognition method combining contrastive learning with a channel-spatial attention mechanism, which learns salient local emotion features and attends to the relationship between local and global features. Contrastive learning is first introduced: a new positive/negative sample selection strategy is designed via specific data augmentations, and pre-training on large amounts of easily obtained unlabeled emotion data yields occlusion-aware representations, which are then transferred to the downstream facial expression recognition task to improve recognition performance. In the downstream task, expression analysis of each face image is reformulated as emotion detection over multiple local regions: a channel-spatial attention mechanism learns fine-grained attention maps for different local facial regions, the weighted features are fused to suppress the noise introduced by occluded content, and a constraint loss is finally proposed for joint training to optimize the fused features used for classification. Experimental results show that the proposed method achieves results comparable to existing state-of-the-art methods on both public non-occluded facial expression datasets (RAFDB and FER2013) and synthetically occluded facial expression datasets.
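The channel-spatial attention step described above can be sketched as a CBAM-style module in PyTorch. This is a minimal illustration under assumed layer sizes (reduction ratio, 7x7 spatial kernel); it is not the paper's exact architecture, which additionally involves contrastive pre-training and a constraint loss.

```python
# Minimal CBAM-style channel-spatial attention sketch (PyTorch).
# The reduction ratio and kernel size are illustrative assumptions,
# not the configuration used in the cited paper.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight per channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over pooled channel maps.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3))                 # (B, C)
        mx = x.amax(dim=(2, 3))                  # (B, C)
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca.view(b, c, 1, 1)              # channel-refined features
        sa_in = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial_conv(sa_in))
        return x * sa                            # spatially refined features
```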

2.
To better capture local facial expression features, a facial expression feature extraction and recognition method is proposed that fuses Local Binary Patterns (LBP) with local sparse representation. To analyze in depth how expressions affect the facial sub-regions, the face is partitioned non-uniformly according to the facial organs and local LBP features are extracted. To characterize local facial texture in fine detail and integrate the local facial features, a local sparse reconstruction and representation method is designed; the local reconstruction residuals, weighted by each sub-region's expression influence factor, are then fused for facial expression recognition. Comparative experiments on the JAFFE2 facial expression database verify the feasibility and robustness of the method.
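A minimal sketch of the block-wise LBP descriptor stage, using scikit-image's `local_binary_pattern`. A regular grid and standard (P, R) values are assumed here for illustration; the cited method instead partitions the face non-uniformly around the facial organs and adds a sparse-reconstruction step.

```python
# Block-wise uniform-LBP histograms from a grayscale face image (sketch).
# A regular grid is used here for simplicity; the cited method partitions
# the face non-uniformly around the facial organs.
import numpy as np
from skimage.feature import local_binary_pattern

def blockwise_lbp_histograms(gray_face, grid=(4, 4), P=8, R=1):
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                                    # uniform patterns + "other"
    h, w = lbp.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)                      # one descriptor per face
```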

3.
In this paper, we propose a new approach for face representation and recognition based on an Adaptively Weighted Sub-Gabor Array (AWSGA) when only one sample image per enrolled subject is available. Instead of using a holistic representation of face images, which is not effective under different facial expressions and partial occlusions, the proposed algorithm utilizes a local Gabor array to represent faces partitioned into sub-patterns. In particular, in order to perform matching in the sense of the richness of identity information rather than the size of a local area, and to handle the partial occlusion problem, the proposed method employs an adaptive weighting scheme that weights the Sub-Gabor features extracted from local areas based on the importance of the information they contain and their similarities to the corresponding local areas in the general face image. An extensive experimental investigation is conducted using the AR and Yale face databases, covering face recognition under controlled/ideal conditions, different illumination conditions, different facial expressions, and partial occlusion. The system performance is compared with that of four benchmark approaches. The promising experimental results indicate that the proposed method can greatly improve recognition rates under different conditions.

4.
5.
In expression recognition, feature representation is critical for successful recognition since it contains distinctive information about expressions. In this paper, a new approach for representing facial expression features is proposed, with the objective of describing features in an effective and efficient way in order to improve recognition performance. The method combines the Facial Action Coding System (FACS) and "uniform" local binary patterns (LBP) to represent facial expression features from coarse to fine. The facial feature regions are extracted by active shape models (ASM) based on FACS to obtain the gray-level texture. Then, LBP is used to represent expression features and enhance their discriminability. A facial expression recognition system is developed based on this feature extraction method, using a K-nearest neighbor (K-NN) classifier to recognize facial expressions. Finally, experiments are carried out to evaluate this feature extraction method. The results indicate the significance of removing unrelated facial regions and enhancing the discrimination ability of expression features in the recognition process, in addition to the method's convenience.
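The classification stage can be sketched with scikit-learn's K-NN applied to precomputed LBP descriptors. The neighbor count and distance metric are illustrative assumptions, not the paper's settings.

```python
# K-NN classification of precomputed LBP expression descriptors (sketch).
# k and the distance metric are illustrative choices, not the paper's settings.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def knn_expression_classifier(features, labels, k=3):
    # features: (n_samples, n_dims) LBP histograms; labels: expression ids.
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    clf = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)   # trained model and test accuracy
```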

6.
This paper proposes a novel binary particle swarm optimization (PSO) algorithm using an artificial immune system (AIS) for face recognition. Inspired by the face recognition ability of the human visual system (HVS), this algorithm fuses information from holistic and partial facial features. The holistic facial features are extracted using principal component analysis (PCA), while the partial facial features are extracted by non-negative matrix factorization with sparseness constraints (NMFs). Linear discriminant analysis (LDA) is then applied to enhance adaptability to illumination and expression. The proposed algorithm is used to select the fusion rules by minimizing the Bayesian error cost, and the selected fusion rules are finally applied for face recognition. Experimental results on the UMIST and ORL face databases show that the proposed fusion algorithm outperforms the individual algorithms based on PCA or NMFs.
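A minimal sketch of the holistic/partial feature extraction and LDA projection using scikit-learn. The component counts are assumptions, plain NMF stands in for NMF with sparseness constraints, and the PSO/AIS fusion-rule selection is not shown.

```python
# Holistic (PCA) and part-based (NMF) features followed by LDA (sketch).
# Component counts are illustrative; the cited method additionally applies
# sparseness constraints to NMF and a PSO/AIS-driven fusion rule selection.
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pca_nmf_lda_features(X, y, n_pca=50, n_nmf=50):
    # X: (n_samples, n_pixels) non-negative flattened face images; y: identity labels.
    holistic = PCA(n_components=n_pca).fit_transform(X)
    partial = NMF(n_components=n_nmf, init="nndsvda", max_iter=500).fit_transform(X)
    fused = np.hstack([holistic, partial])
    lda = LinearDiscriminantAnalysis()
    return lda.fit_transform(fused, y)       # discriminant features for matching
```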

7.
Face localization, feature extraction, and modeling are the major issues in automatic facial expression recognition. In this paper, a method for facial expression recognition is proposed. A face is located by extracting the head contour points using motion information, and a rectangular bounding box is fitted to the face region using those extracted contour points. Among the facial features, the eyes are the most prominent features for determining the size of a face; hence, the eyes are located and the visual features of the face are extracted based on their locations. The visual features are modeled using a support vector machine (SVM) for facial expression recognition. The SVM finds an optimal hyperplane that distinguishes different facial expressions with an accuracy of 98.5%.
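The SVM modeling step can be sketched with scikit-learn's SVC. The RBF kernel and hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
# SVM classification of eye-anchored visual features (sketch).
# Kernel and hyperparameters are illustrative assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_expression_svm(features, labels):
    # features: (n_samples, n_dims) visual descriptors; labels: expression ids.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    model.fit(features, labels)
    return model                      # model.predict(new_features) gives labels
```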

8.
Objective: The expression information of a 3D face is unevenly distributed around the facial organs and cheeks, so describing expressions adequately and assigning reasonable weights are important ways to improve recognition. To improve the accuracy of 3D facial expression recognition, a 3D facial expression recognition algorithm based on weighted local curl patterns is proposed. Method: First, to extract features with strong expression-discriminating power, the curl vectors of the 3D face are encoded to obtain local curl patterns as expression features. Then, the ICNP (interactive closest normal points) algorithm is combined with a minimum projection deviation algorithm: the former performs an irregular partition of the 3D face into sub-regions, and the 11 resulting sub-regions preserve the integrity of the facial organs and muscles under expression changes, while the latter assigns a weight to each region's local curl pattern features according to the region's contribution to expression recognition. Finally, the weighted local curl pattern features are fed into a classifier for expression recognition. Results: Evaluated on the BU-3DFE 3D facial expression database, the proposed local curl pattern feature shows stronger discriminative power than other expression features. In expression recognition experiments on BU-3DFE, the algorithm achieves the highest average recognition rate among the compared 3D facial expression recognition algorithms, 89.67%, and also has low misclassification rates for easily confused expressions such as sadness, anger, and disgust. Conclusion: Local curl pattern features represent 3D facial expressions well; combining the ICNP algorithm with the minimum projection deviation algorithm enables effective region partitioning and accurate weight computation, effectively improving the features' ability to recognize expressions. The experimental results show that the algorithm achieves a high recognition rate for 3D facial expressions and still performs well on easily confused, similar expressions.

9.
This paper proposes a hybrid-boost learning algorithm for multi-pose face detection and facial expression recognition. To speed up the detection process, the system searches the entire frame for potential face regions using skin color detection and segmentation. It then scans the skin color segments of the image and applies the weak classifiers along with the strong classifier for face detection and expression classification. The system detects human faces at different scales and under various poses, different expressions, partial occlusion, and defocus. Our major contribution is a weak hybrid classifier selection scheme based on Haar-like (local) features and Gabor (global) features. The multi-pose face detection algorithm can also be modified for facial expression recognition. The experimental results show that our face detection system and facial expression recognition system perform better than the other classifiers.
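The skin-color pre-filtering stage can be sketched with a simple YCrCb threshold in OpenCV. The color space and the Cr/Cb ranges are common illustrative choices, not necessarily the ones used in the paper.

```python
# Skin-color candidate-region segmentation in YCrCb (sketch).
# The Cr/Cb thresholds are commonly used illustrative values, not the
# paper's calibrated ranges.
import cv2
import numpy as np

def skin_color_mask(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up the mask before extracting candidate face regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```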

10.
Objective: Expression variation is the main problem facing 3D face recognition. To overcome the influence of expressions, an expression-robust 3D face recognition method based on facial contour curves is proposed. Method: First, the faces are preprocessed, including face region cropping, smoothing, and pose normalization, so that all faces are placed in a pose coordinate system. Then, several vertical contour curves are extracted from the semi-rigid region of the 3D face model to characterize the facial surface. Finally, an elastic curve matching algorithm computes the geodesic distance in preshape space between corresponding contour curves of different 3D face models as the similarity measure, and the similarity vectors of all contour curves are fused with weights to obtain an overall similarity for classification. Results: Recognition experiments on the FRGC v2.0 database achieve a rank-1 recognition rate of 97.1%. Conclusion: By representing the face with multiple facial contour curves extracted from its semi-rigid region, the contour-based 3D face recognition method weakens the influence of expressions to some extent while also speeding up face matching. The experimental results show that the method has strong recognition performance and good robustness to expression variation.

11.
To reduce the influence of factors such as appearance, pose, glasses, and inconsistent expression definitions on facial expression recognition, a collaborative expression recognition algorithm that is independent of facial appearance is proposed. First, an automatic face detection algorithm locates and aligns the face region in every video frame, and the peak-expression face is selected from the face video sequence. Then, the peak face and all faces within an expression class are used to generate intra-class difference face information, and a collaborative expression representation is obtained by computing the difference information between the peak-expression face and the intra-class difference faces. Finally, a sparse-representation-based classifier, together with the expression representation, determines the label of each facial expression. Simulation experiments on databases of European/American and Asian faces show that the algorithm achieves good expression recognition accuracy and also performs well on faces with different appearances and on subjects wearing glasses.

12.
This paper presents a novel facial expression recognition scheme based on extension theory. The facial region is detected and segmented using feature-invariant approaches. Accurate positions of the lips are then extracted as the features of a face. Next, based on extension theory, basic facial expressions are classified by evaluating the correlation functions among various lip types and the positions of the corners of the mouth. Additionally, the proposed algorithm is implemented on the XScale PXA270 embedded system in order to achieve real-time recognition of various facial expressions. Experimental results demonstrate that the proposed scheme can recognize facial expressions precisely and efficiently.

13.
The eyebrows, eyes, and mouth are the three facial regions that contain the most expression feature information. To make full use of these features, reduce the memory occupied by useless image information during recognition, and improve the accuracy and speed of a facial expression recognition system, Haar features and the AdaBoost face detection algorithm are first used to detect the face in an image; the face image is obtained, the eyebrows, eyes, and mouth are extracted, and local (eyebrow, eye, mouth) binarized maps are generated. PCA is applied to reduce the dimensionality of the face image, and the reduced global and local gray-level feature values form a column vector. Samples come from an expression database, and after neural network training on these samples, expression recognition is performed. The results show that the system recognizes facial expressions noticeably faster than a Gabor wavelet algorithm, with higher accuracy than using the PCA algorithm or the neural network algorithm alone, and with lower memory consumption than the Gabor wavelet algorithm, running more smoothly. The conclusion is that extracting the eyebrows, eyes, and mouth, where expression feature information is concentrated, preserves as much of this local feature information as possible and thereby improves expression recognition accuracy, while PCA dimensionality reduction of the original image effectively reduces information redundancy.
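The Haar/AdaBoost detection and PCA reduction steps can be sketched with OpenCV's bundled frontal-face cascade and scikit-learn's PCA. The cascade file, crop size, and component count are illustrative assumptions; the eyebrow/eye/mouth extraction and the neural-network classifier are not shown.

```python
# Haar/AdaBoost face detection followed by PCA on the cropped faces (sketch).
# The cascade file shipped with OpenCV and the 64x64 crop / 50 components
# are illustrative choices, not the paper's exact configuration.
import cv2
import numpy as np
from sklearn.decomposition import PCA

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_crop_faces(gray_images, size=(64, 64)):
    # gray_images: list of grayscale uint8 frames; returns flattened face crops.
    crops = []
    for img in gray_images:
        for (x, y, w, h) in cascade.detectMultiScale(img, 1.1, 5):
            crops.append(cv2.resize(img[y:y + h, x:x + w], size))
    return np.array([c.ravel() for c in crops], dtype=np.float32)

def pca_reduce(face_vectors, n_components=50):
    pca = PCA(n_components=n_components)
    return pca, pca.fit_transform(face_vectors)   # low-dimensional global features
```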

14.
This paper presents a hierarchical multi-state pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of the human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to deal with shape variations of some facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are dynamically estimated using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.

15.
To overcome the degradation of 3D face recognition performance caused by expression variation, an expression-robust 3D face recognition method based on nose-tip region segmentation is proposed. First, exploiting the fact that expressions affect the face regionally, a partition method that depends only on the nose tip is proposed to divide the face into an expression-invariant (rigid) region and an expression-sensitive (non-rigid) region. Then, different feature descriptors are used for the two regions and matching similarities are computed. Finally, the similarities of the expression-invariant and expression-sensitive regions are fused with weights for the final identification. The proposed method achieves rank-1 recognition rates of 98.52% on FRGC v2.0 and 99.01% on the self-built WiseFace expression face database, demonstrating strong robustness to expression variation.

16.
17.
Different sub-regions of a face image contribute differently to expression recognition, and the same sub-region also contributes differently for people of different ages (e.g., the middle-aged and elderly, young adults, and children). A single, fixed sub-region weighting scheme therefore cannot achieve the best recognition rate. To improve the recognition rate, an expression recognition method with variable weights is proposed. Expression databases are built separately for middle-aged/elderly people, young adults, and children, and the pure face region, eye region, and mouth region are segmented. Features extracted from these regions are fused with weights, and the effect of different weight settings on expression recognition for the different age groups is studied. Experimental results show that variable weights achieve a clearly higher recognition rate than fixed weights: the expression recognition rate improves by 8.6% for middle-aged and elderly people, by 4.8% for young adults, and by 1.4% for children.
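The variable-weight fusion can be sketched as a per-age-group weighted concatenation of region descriptors. The region set and the example weights below are hypothetical illustrations, not values reported in the paper.

```python
# Variable-weight fusion of region descriptors (sketch).
# The per-age-group weights are hypothetical illustrations, not the
# values found in the paper's experiments.
import numpy as np

AGE_GROUP_WEIGHTS = {           # weights for (whole face, eyes, mouth)
    "elderly": (0.5, 0.3, 0.2),
    "adult":   (0.4, 0.3, 0.3),
    "child":   (0.3, 0.4, 0.3),
}

def fuse_region_features(face_feat, eye_feat, mouth_feat, age_group):
    w_face, w_eye, w_mouth = AGE_GROUP_WEIGHTS[age_group]
    # Scale each region descriptor by its weight before concatenation,
    # so the downstream classifier sees a single weighted feature vector.
    return np.concatenate([w_face * face_feat, w_eye * eye_feat, w_mouth * mouth_feat])
```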

18.

Facial expressions are essential in community-based interactions and in the analysis of emotional behaviour. The automatic identification of faces is a motivating topic for researchers because of its numerous applications, such as health care, video conferencing, and cognitive science. In computer vision, the automatic detection of facial expressions from facial images is a very challenging issue. An innovative methodology for the recognition of facial expressions is introduced in the presented work and proceeds in the following stages. First, an input image is taken from the facial expression database and pre-processed with high-frequency emphasis (HFE) filtering and modified histogram equalization (MHE). After image enhancement, the Viola-Jones (VJ) framework is utilized to detect the face in the image, and the face region is cropped by finding the face coordinates. Afterwards, several effective features are extracted: shape information via an enhanced histogram of gradients (EHOG feature), intensity variation via the mean, standard deviation, and skewness, facial movement variation via facial action coding (FAC), texture via a weighted patch-based local binary pattern (WLBP), and spatial information via an entropy-based spatial feature. Subsequently, the dimensionality of the features is reduced by retaining the most relevant features using a Residual Network (ResNet). Finally, an extended wavelet deep convolutional neural network (EWDCNN) classifier uses the extracted features and accurately detects the facial expressions as sad, happy, anger, fear, disgust, surprise, and neutral classes. The implementation platform used in the work is PYTHON. The presented technique is tested on three datasets: JAFFE, CK+, and Oulu-CASIA.
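The shape-descriptor and intensity-statistics stages can be sketched with scikit-image's standard HOG (as a stand-in for the paper's enhanced HOG) and simple NumPy statistics; the parameters are illustrative assumptions.

```python
# Standard HOG shape descriptor as a stand-in for the paper's EHOG (sketch).
# Cell/block sizes are illustrative defaults, not the paper's settings.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_shape_descriptor(gray_face, out_size=(128, 128)):
    face = resize(gray_face, out_size, anti_aliasing=True)
    return hog(face,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")        # 1-D shape feature vector

def intensity_stats(gray_face):
    # Mean, standard deviation, and skewness of pixel intensities,
    # matching the abstract's intensity-variation features.
    x = np.asarray(gray_face, dtype=np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
    return np.array([mu, sigma, skew])
```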


19.
Feature-point-based 3D face recognition under expression variation
Objective: To overcome the influence of expression variation on 3D face recognition, a 3D face recognition method is proposed that extracts local-region features around feature points. Method: First, the 2D ASM (active shape model) algorithm is applied to the depth image to coarsely locate the facial feature points, which are then precisely located on the facial point cloud using the shape index feature. Second, a series of iso-geodesic contour curves centered at the middle of the nose is extracted to characterize the facial shape, and pose-invariant Procrustean vector features (distances and angles) are extracted as recognition features. Finally, the classification results of the individual iso-geodesic contours are compared and then fused at the decision level. Results: Feature point localization and recognition experiments on the FRGC v2.0 face database give an average localization error below 2.36 mm and a rank-1 recognition rate of 98.35%. Conclusion: By extracting features around feature points in the approximately rigid region of the face, the feature-point-based 3D face recognition method effectively avoids the mouth region, which is strongly affected by expressions. Experiments show that the method achieves high recognition accuracy and is reasonably robust to pose and expression variation.

20.
For effective extraction of spatio-temporal facial expression features, this paper proposes a new facial expression recognition method that combines CBP-TOP (Centralized Binary Patterns from Three Orthogonal Panels) features with an SVM classifier. The original image sequence is first preprocessed, including face detection, image cropping, and image scale normalization; the CBP-TOP operator is then applied block by block to the image sequence to extract features; finally, an SVM classifier performs expression recognition. The experimental results show that the method extracts the motion features and dynamic texture information of image sequences more effectively and improves expression recognition accuracy. Compared with VLBP features, CBP-TOP features achieve a higher recognition rate and faster recognition speed in expression recognition.
