Similar Literature
20 similar documents retrieved
1.
A novel adaptive feature selection method based on reconstruction residual and accurately located landmarks is proposed in this paper for expression-robust 3D face recognition. First, a coarse-to-fine facial landmark localization method based on the Active Shape Model and the Gabor wavelet transform is proposed to locate facial landmarks in range images automatically and accurately. Second, a multi-scale fusion of pyramid local binary patterns (F-PLBP), built on an irregular segmentation associated with the located landmarks, is proposed to extract discriminative features. Third, a sparse representation-based classifier with adaptive feature selection (A-SRC), which exploits the distribution of the reconstruction residual, is presented to select expression-robust features and identify faces. Finally, experimental evaluation on FRGC v2.0 shows that adaptive feature selection using F-PLBP combined with A-SRC achieves high recognition accuracy and strong discriminative power under facial expression variations.
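The A-SRC classifier is not specified in detail in this abstract; the following is a minimal sketch of the general sparse-representation-classification idea it builds on, where a test feature is coded over a dictionary of training features and assigned to the class with the smallest reconstruction residual. The data, dimensions and the use of scikit-learn's OrthogonalMatchingPursuit are placeholder assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Placeholder gallery: 60 training feature vectors (e.g. F-PLBP histograms)
# of dimension 128, spread over 10 subjects (6 samples each).
n_subjects, per_subject, dim = 10, 6, 128
labels = np.repeat(np.arange(n_subjects), per_subject)
D = rng.normal(size=(dim, n_subjects * per_subject))   # dictionary, one atom per column
D /= np.linalg.norm(D, axis=0)                         # l2-normalise atoms

probe = D[:, 7] + 0.05 * rng.normal(size=dim)          # noisy copy of a gallery sample

# Sparse coding of the probe over the whole dictionary.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
omp.fit(D, probe)
code = omp.coef_

# Class-wise reconstruction residual: keep only one subject's coefficients
# at a time and measure how well they alone reconstruct the probe.
residuals = []
for c in range(n_subjects):
    code_c = np.where(labels == c, code, 0.0)
    residuals.append(np.linalg.norm(probe - D @ code_c))

print("predicted subject:", int(np.argmin(residuals)))
```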

2.
In this paper, a novel feature extraction method is proposed for facial expression recognition that extracts features from facial depth and 3D mesh alongside texture. The 3D Facial Expression Generic Elastic Model (3D FE-GEM) method is used to reconstruct an expression-invariant 3D model of the human face, from which texture, depth and mesh are extracted. The Local Binary Pattern (LBP), the proposed 3D High-Low Local Binary Pattern (3DH-LLBP) and Local Normal Binary Patterns (LNBPs) are then applied to the texture, depth and mesh of the face, respectively, to extract features from the 2D images. Finally, the final feature vectors are generated through feature fusion and classified by a Support Vector Machine (SVM). Convincing facial expression recognition results are obtained on the CK+, CK, JAFFE and Bosphorus image databases compared to several state-of-the-art methods.
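As an illustration of the texture/depth feature-fusion step (not the paper's 3DH-LLBP or LNBP operators, which are specific to this work), the sketch below computes plain LBP histograms on a texture image and a depth map, concatenates them, and feeds the fused vector to an SVM. The image sizes, LBP parameters and random data are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                      # assumed LBP neighbourhood
N_BINS = P + 2                   # bins for 'uniform' LBP codes

def lbp_hist(img):
    """Uniform LBP histogram of one channel (texture or depth)."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def fused_feature(texture, depth):
    # Simple feature-level fusion: concatenate per-channel histograms.
    return np.concatenate([lbp_hist(texture), lbp_hist(depth)])

# Placeholder data: 40 samples, 64x64 texture + depth maps, 4 expression classes.
rng = np.random.default_rng(1)
X = np.stack([fused_feature(rng.random((64, 64)), rng.random((64, 64)))
              for _ in range(40)])
y = rng.integers(0, 4, size=40)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```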

3.
A feature selection technique, along with an information fusion procedure, for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and applied to the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are then projected into higher-dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and temperature-induced variations that affect visual and thermal face recognition techniques. The AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure greatly improves the recognition accuracy for both visual and thermal images compared to conventional techniques. In addition, a decision-level fusion methodology is presented which, together with the feature selection procedure, outperforms various other face recognition techniques in terms of recognition accuracy.
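The modular kernel-eigenspace step can be pictured roughly as follows: each image is split into fixed sub-regions, a kernel PCA is fitted per sub-region, and the per-region projections are concatenated. This is only a schematic sketch with made-up block sizes and scikit-learn's KernelPCA; it omits the phase-congruency maps and the decision-level fusion described above.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)
images = rng.random((30, 32, 32))          # placeholder "phase congruency" maps
BLOCK = 16                                 # assumed sub-region size

def blocks(img):
    """Split an image into non-overlapping BLOCK x BLOCK sub-regions."""
    h, w = img.shape
    return [img[r:r + BLOCK, c:c + BLOCK].ravel()
            for r in range(0, h, BLOCK) for c in range(0, w, BLOCK)]

# One kernel eigenspace per sub-region position (the "modular" part).
per_block = list(zip(*[blocks(im) for im in images]))      # group by position
models, feats = [], []
for block_samples in per_block:
    X = np.stack(block_samples)                            # (n_images, BLOCK*BLOCK)
    kpca = KernelPCA(n_components=5, kernel="rbf").fit(X)
    models.append(kpca)
    feats.append(kpca.transform(X))

features = np.hstack(feats)        # concatenated nonlinear sub-region features
print(features.shape)              # (30, n_blocks * 5)
```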

4.
In this paper, two novel methods for facial expression recognition in facial image sequences are presented. The user manually places some of the Candide grid nodes on face landmarks depicted in the first frame of the image sequence under examination. The grid-tracking and deformation system, based on deformable models, tracks the grid in consecutive video frames over time as the facial expression evolves, up to the frame that corresponds to the greatest facial expression intensity. The geometrical displacement of certain selected Candide nodes, defined as the difference of the node coordinates between the first frame and the frame of greatest expression intensity, is used as input to a novel multiclass Support Vector Machine (SVM) system of classifiers that recognize either the six basic facial expressions or a set of chosen Facial Action Units (FAUs). Results on the Cohn-Kanade database show a recognition accuracy of 99.7% for facial expression recognition using the proposed multiclass SVMs and 95.1% for facial expression recognition based on FAU detection.
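A toy version of the displacement-feature idea: subtract the tracked node coordinates of the first (neutral) frame from those of the apex frame, flatten the differences into one vector, and train a multiclass SVM on such vectors. The node count, the random data and the one-vs-one SVC are assumptions, not the paper's tracker or its specific multiclass SVM construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
N_NODES = 104                      # assumed number of selected Candide nodes
N_SEQS = 60                        # placeholder number of video sequences

def displacement_feature(first_frame_nodes, apex_frame_nodes):
    """Geometric displacement of grid nodes between neutral and apex frames."""
    return (apex_frame_nodes - first_frame_nodes).ravel()   # (N_NODES * 2,)

# Placeholder sequences: random (x, y) node positions for both frames.
X = np.stack([
    displacement_feature(rng.random((N_NODES, 2)), rng.random((N_NODES, 2)))
    for _ in range(N_SEQS)
])
y = rng.integers(0, 6, size=N_SEQS)            # six basic expressions

clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print("training accuracy:", clf.score(X, y))
```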

5.
PCA is used for dimensionality reduction to extract facial expression features, combined with a distance-based hashed K-nearest-neighbor classifier for expression recognition. First, Haar-like features and the AdaBoost algorithm are used for face detection, and the face images are preprocessed. PCA then extracts the expression features, which are inserted into a hash table. Finally, a K-nearest-neighbor classifier recognizes the facial expression. Restructuring the feature library as a hash table greatly improves recognition efficiency.
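A minimal sketch of the PCA-plus-hashed-nearest-neighbor idea: project faces with PCA, bucket the projections with a coarse random-projection hash so that a query is compared only against gallery items in its own bucket, and take the nearest neighbor's label. The hash scheme, dimensions and data below are assumptions; the paper's actual hash table and distance metric are not specified here.

```python
import numpy as np
from sklearn.decomposition import PCA
from collections import defaultdict

rng = np.random.default_rng(4)
faces = rng.random((200, 48 * 48))             # placeholder aligned face images
labels = rng.integers(0, 7, size=200)          # 7 expression classes

pca = PCA(n_components=30).fit(faces)
feats = pca.transform(faces)

# Coarse hash: sign pattern of a few random projections -> bucket key.
proj = rng.normal(size=(30, 8))
def bucket(v):
    return tuple((v @ proj > 0).astype(int))

table = defaultdict(list)                      # hash table over the feature library
for f, lab in zip(feats, labels):
    table[bucket(f)].append((f, lab))

def classify(face_vec):
    q = pca.transform(face_vec.reshape(1, -1))[0]
    candidates = table.get(bucket(q), [])
    if not candidates:                         # fall back to the full library
        candidates = list(zip(feats, labels))
    dists = [np.linalg.norm(q - f) for f, _ in candidates]
    return candidates[int(np.argmin(dists))][1]

print("predicted class:", classify(faces[0]))
```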

6.
This paper presents a method for the recognition of the six basic facial expressions in images or image sequences using landmark points. The proposed technique relies on the observation that the vectors formed by the landmark point coordinates belong to a different manifold for each expression. In addition, experimental measurements validate the hypothesis that each of these manifolds can be decomposed into a small number of linear subspaces of very low dimension. This yields a parameterization of the manifolds that allows computing the distance of a feature vector from each subspace and consequently from each of the six manifolds. Two alternative classifiers are then proposed that use the corresponding distances as input: the first is based on the minimum distance from the manifolds, while the second uses SVMs trained on the vector of all distances from each subspace. The proposed technique is tested in two scenarios, subject-independent and subject-dependent. Extensive experiments for each scenario have been performed on two publicly available datasets, yielding very satisfactory expression recognition accuracy.
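The distance-from-subspace computation can be sketched as follows: for each expression class, fit a low-dimensional linear subspace (here a single PCA basis per class for brevity) and classify a landmark vector by its smallest reconstruction distance. The single subspace per class, the dimensions and the random data are simplifying assumptions; the paper decomposes each manifold into several subspaces and can also feed the distances to an SVM.

```python
import numpy as np

rng = np.random.default_rng(5)
DIM, SUBDIM, N_CLASSES = 136, 4, 6       # 68 landmarks x 2 coords, assumed sizes

# Placeholder training landmark vectors, grouped by expression class.
train = {c: rng.normal(loc=c, size=(40, DIM)) for c in range(N_CLASSES)}

def subspace_basis(X, k):
    """Orthonormal basis of the best rank-k subspace (mean removed)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                   # (DIM,), (k, DIM)

bases = {c: subspace_basis(X, SUBDIM) for c, X in train.items()}

def distance_to_class(x, c):
    mean, V = bases[c]
    resid = (x - mean) - V.T @ (V @ (x - mean))    # component outside the subspace
    return np.linalg.norm(resid)

x = train[3][0] + 0.1 * rng.normal(size=DIM)
dists = [distance_to_class(x, c) for c in range(N_CLASSES)]
print("minimum-distance prediction:", int(np.argmin(dists)))
```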

7.
In everyday communication, facial expressions help conversations flow more smoothly, so interpreting facial expressions is an important way for humans to obtain information during an exchange. With continuing advances in science and technology, artificial intelligence is used ever more widely in everyday human communication, and the development and innovation of facial expression recognition technology have therefore attracted increasing attention. This article investigates facial expression recognition technology based on convolutional neural networks.

8.
In order to recognize facial expressions accurately, this paper proposes a hybrid method combining principal component analysis (PCA) and the local binary pattern (LBP). First, an eight-eye segmentation method is introduced to extract the effective area of the facial expression image, reducing the useless information passed to subsequent feature extraction. PCA then extracts the global grayscale feature of the whole facial expression image while reducing the data size, and LBP extracts the local neighborhood texture feature of the mouth area, which contributes most to facial expression recognition. Fusing the global and local features is more effective for facial expression recognition. Finally, a support vector machine (SVM) uses the fused feature to complete the recognition. Experimental results show that the proposed method classifies different expressions more effectively and achieves a higher recognition rate than traditional recognition methods.
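A schematic of the global/local fusion step: PCA supplies a global grayscale feature of the whole face, a plain LBP histogram of a mouth crop supplies the local texture feature, and the concatenation is classified with an SVM. The crop coordinates, PCA size and random images are assumptions; the eight-eye segmentation of the original method is not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(6)
faces = rng.random((80, 64, 64))             # placeholder normalised face images
y = rng.integers(0, 6, size=80)

def mouth_lbp(face):
    mouth = face[44:60, 16:48]               # assumed mouth region of a 64x64 face
    codes = local_binary_pattern(mouth, 8, 1, method="uniform")
    hist, _ = np.histogram(codes, bins=10, range=(0, 10), density=True)
    return hist

pca = PCA(n_components=20).fit(faces.reshape(80, -1))
global_feat = pca.transform(faces.reshape(80, -1))           # global grayscale feature
local_feat = np.stack([mouth_lbp(f) for f in faces])         # local mouth texture

X = np.hstack([global_feat, local_feat])                     # feature-level fusion
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```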

9.
Facial Expression Recognition Based on Wavelet Transform and Neural Network Ensemble (total citations: 8; self-citations: 6; citations by others: 2)
A new facial expression recognition method is proposed. First, wavelet-based image decomposition and the Karhunen-Loève (K-L) transform are applied to extract effective discriminative features from the facial expression region. A neural network ensemble is then used to recognize six typical expressions. Experiments on the CMU expression database show that the method achieves a high recognition rate and has a degree of insensitivity to illumination changes.
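A rough sketch of this pipeline under stated assumptions: a single-level 2D wavelet decomposition (PyWavelets) provides a compact approximation of the expression region, PCA plays the role of the K-L transform, and a small ensemble of MLPs votes on the expression. The network sizes, the Haar wavelet and the random data are placeholders, not the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
faces = rng.random((120, 32, 32))             # placeholder expression regions
y = rng.integers(0, 6, size=120)              # six typical expressions

def wavelet_feature(img):
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")  # single-level decomposition
    return cA.ravel()                          # keep the low-frequency band

W = np.stack([wavelet_feature(f) for f in faces])
X = PCA(n_components=25).fit_transform(W)      # K-L transform / PCA step

# Small neural-network ensemble with majority voting.
nets = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                      random_state=s).fit(X, y) for s in range(3)]
votes = np.stack([n.predict(X) for n in nets])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble training accuracy:", float((pred == y).mean()))
```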

10.
Human facial expressions are the most direct reflection of changes in emotional state, and facial expressions vary greatly across individuals. Existing expression recognition methods distinguish expressions using statistical facial features and lack deep exploitation of facial detail information. According to psychologists' definition of facial action coding, the local details of a face determine the meaning of its expression. This paper therefore proposes a facial expression recognition method based on multi-scale detail enhancement. Because facial expressions are strongly affected by image detail, a Gaussian pyramid is used to extract image detail information and enhance the image, thereby strengthening the expression information. To exploit the local nature of facial expressions, a hierarchical local gradient feature computation is proposed to describe the local shape around facial feature points. Finally, a support vector machine (SVM) classifies the expressions. Experiments on the CK+ expression database show that image detail plays an important role in facial expression recognition and that the method achieves very good results even with small amounts of training data, with an average recognition rate of 98.19%.
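The detail-enhancement step can be approximated as below: Gaussian blurs at several scales give detail (difference) layers, which are scaled and added back to sharpen expression-relevant detail. The scales, weights and use of scipy's gaussian_filter instead of an explicit image pyramid are assumptions; the paper's hierarchical local gradient features and SVM stage are omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_detail_enhance(img, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.5, 0.25)):
    """Add back Gaussian detail layers at several scales (assumed parameters)."""
    enhanced = img.astype(float).copy()
    for sigma, w in zip(sigmas, weights):
        detail = img - gaussian_filter(img, sigma)   # detail layer at this scale
        enhanced += w * detail
    return np.clip(enhanced, 0.0, 1.0)

rng = np.random.default_rng(8)
face = rng.random((64, 64))                          # placeholder grayscale face
sharp = multiscale_detail_enhance(face)
print("detail energy before/after:", float(np.var(face)), float(np.var(sharp)))
```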

11.
A 3D facial reconstruction and expression modeling system which creates 3D video sequences of test subjects and facilitates interactive generation of novel facial expressions is described. Dynamic 3D video sequences are generated using computational binocular stereo matching with active illumination and are used for interactive expression modeling. An individual’s 3D video set is annotated with control points associated with face subregions. Dragging a control point updates texture and depth in only the associated subregion so that the user generates new composite expressions unseen in the original source video sequences. Such an interactive manipulation of dynamic 3D face reconstructions requires as little preparation on the test subject as possible. Dense depth data combined with video-based texture results in realistic and convincing facial animations, a feature lacking in conventional marker-based motion capture systems.

12.
Sparse representation is a new approach that has received significant attention for image classification and recognition. This paper presents a PCA-based dictionary building method for sparse representation and classification of universal facial expressions. In our method, expressive facial images of each subject are subtracted from a neutral facial image of the same subject. PCA is then applied to these difference images to model the variations within each class of facial expressions, and the learned principal components are used as the atoms of the dictionary. In the classification step, a given test image is sparsely represented as a linear combination of the principal components of the six basic facial expressions. Our extensive experiments on several publicly available face datasets (CK+, MMI, and Bosphorus) show that our framework outperforms the recognition rate of state-of-the-art techniques by about 6%. This approach is promising and can further be applied to visual object recognition.
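A compact sketch of the dictionary construction described above: per expression class, difference images (expressive minus neutral) are decomposed with PCA and the leading components become dictionary atoms; a test difference image is then represented per class by least squares (standing in for a proper sparse solver) and assigned to the class with the smallest residual. Dimensions and data are placeholders, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(9)
DIM, N_CLASSES, PER_CLASS, N_ATOMS = 400, 6, 30, 8   # assumed sizes (20x20 images)

# Placeholder difference images: expressive face minus the subject's neutral face.
diff = {c: rng.normal(loc=0.1 * c, size=(PER_CLASS, DIM)) for c in range(N_CLASSES)}

def pca_atoms(X, k):
    """Leading principal components of one class's difference images."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                          # (DIM, k) dictionary atoms

dictionary = {c: pca_atoms(X, N_ATOMS) for c, X in diff.items()}

def classify(test_diff):
    # Represent the test difference image with each class's atoms and
    # pick the class with the smallest reconstruction residual.
    residuals = []
    for c in range(N_CLASSES):
        D = dictionary[c]
        coef, *_ = np.linalg.lstsq(D, test_diff, rcond=None)
        residuals.append(np.linalg.norm(test_diff - D @ coef))
    return int(np.argmin(residuals))

print("predicted expression:", classify(diff[4][0]))
```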

13.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for many human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic face model. The person-specific 3D face can finally be synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these facial expressions, we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of the 3D model and the synthesis of faces with different expressions from a frontal neutral face image. Our method has the advantage of being fully automatic, robust and fast, and it can generate various views of the face by rotating the 3D model. It can be used in a variety of applications for which the accuracy of depth is not critical, such as games, avatars and face recognition. We have tested and evaluated our system using a standard database, namely BU-3DFE.
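The expression-transition step amounts to linear blending: interpolate corresponding vertex positions between two expression models and alpha-blend their textures with the same weight. The vertex count, texture sizes and random data below are placeholders; the actual models come from the MPEG-4 animated generic face described above.

```python
import numpy as np

rng = np.random.default_rng(10)
N_VERTS = 500                                   # assumed generic-model vertex count
neutral_v = rng.random((N_VERTS, 3))            # placeholder expression A vertices
smile_v = neutral_v + 0.05 * rng.random((N_VERTS, 3))   # placeholder expression B
neutral_tex = rng.random((128, 128, 3))
smile_tex = rng.random((128, 128, 3))

def morph(alpha):
    """Blend shape and texture between two expressions, alpha in [0, 1]."""
    verts = (1.0 - alpha) * neutral_v + alpha * smile_v     # 3D shape morphing
    tex = (1.0 - alpha) * neutral_tex + alpha * smile_tex   # texture blending
    return verts, tex

# Generate an expression transition as a short sequence of in-between frames.
frames = [morph(a) for a in np.linspace(0.0, 1.0, 10)]
print(len(frames), frames[0][0].shape, frames[0][1].shape)
```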

14.
This paper introduces a novel Gabor-Fisher classifier (GFC) for face recognition. The GFC method, which is robust to changes in illumination and facial expression, applies the enhanced Fisher linear discriminant model (EFM) to an augmented Gabor feature vector derived from the Gabor wavelet representation of face images. The novelty of this paper comes from (1) the derivation of an augmented Gabor feature vector, whose dimensionality is further reduced using the EFM by considering both data compression and recognition (generalization) performance; (2) the development of a Gabor-Fisher classifier for multi-class problems; and (3) extensive performance evaluation studies. In particular, we performed comparative studies of different similarity measures applied to various classifiers. We also performed comparative experimental studies of various face recognition schemes, including our novel GFC method, the Gabor wavelet method, the eigenfaces method, the Fisherfaces method, the EFM method, the combination of Gabor and the eigenfaces method, and the combination of Gabor and the Fisherfaces method. The feasibility of the new GFC method has been successfully tested on face recognition using 600 FERET frontal face images corresponding to 200 subjects, which were acquired under variable illumination and facial expressions. The novel GFC method achieves 100% accuracy on face recognition using only 62 features.
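To illustrate the augmented Gabor feature vector and the Fisher step (not the enhanced EFM itself), the sketch below filters a face with a small Gabor bank via OpenCV, downsamples and concatenates the magnitude responses, and projects the result with scikit-learn's LDA. The filter-bank parameters and data are assumptions.

```python
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(11)
faces = (rng.random((60, 64, 64)) * 255).astype(np.float32)   # placeholder faces
y = np.tile(np.arange(6), 10)                                 # six classes, 10 each

# Small Gabor bank: 4 orientations x 2 scales (assumed parameters).
kernels = [cv2.getGaborKernel((15, 15), sigma, theta, lambd, 0.5, 0)
           for theta in np.arange(0, np.pi, np.pi / 4)
           for sigma, lambd in [(2.0, 5.0), (4.0, 10.0)]]

def gabor_feature(img):
    responses = [np.abs(cv2.filter2D(img, cv2.CV_32F, k))[::8, ::8]  # downsample
                 for k in kernels]
    return np.concatenate([r.ravel() for r in responses])    # augmented vector

X = np.stack([gabor_feature(f) for f in faces])
lda = LinearDiscriminantAnalysis(n_components=5).fit(X, y)    # Fisher projection
print("Fisher-space feature shape:", lda.transform(X).shape)
```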

15.
Facial expression recognition plays an important role in human-computer interaction and other areas of artificial intelligence, yet current research neglects the semantic information of the face. This paper proposes a facial expression recognition network that fuses local semantic and global information, consisting of two branches: a local semantic region extraction branch and a local-global feature fusion branch. First, a semantic segmentation network is trained on a face parsing dataset to obtain facial semantic parsing, and transfer learning is used to obtain semantic parsing for the facial expression datasets. Regions meaningful for expression recognition and their semantic features are extracted from the parsing results, and the local semantic features are fused with global features to construct semantic local features. Finally, the semantic local features and the global features are fused into a global semantic composite feature of the facial expression, which a classifier assigns to one of seven basic expressions. A partial-layer unfreezing training strategy is also proposed; it makes the semantic features better suited to expression recognition and reduces the redundancy of the semantic information. The average recognition accuracy reaches 93.81% on JAFFE and 88.78% on KDEF, two public datasets, outperforming current deep learning and traditional methods. The experimental results show that the proposed network fusing local semantic and global information describes expression information well.

16.
17.
贾丽华  宋加涛  谢刚 《电视技术》2012,36(11):107-110,117
The nose is a prominent facial organ whose features are relatively unaffected by changes in facial expression. Nose detection searches for the position and contour of the nose in an image or image sequence, and its study is important for face detection and localization, face recognition, head pose estimation, and 3D face reconstruction. In recent years researchers have done extensive work in this area and proposed many effective algorithms. This paper surveys the relevant literature, dividing existing nose detection methods into 2D image-based and 3D information-based approaches, and analyzes the advantages and disadvantages of each.

18.
To enable computers to recognize facial expressions better, a facial expression recognition method based on the Gabor wavelet transform is studied. First, static grayscale images containing the expression region are preprocessed, including size and grayscale normalization of the detected facial expression region. The two-dimensional Gabor wavelet transform is then used to extract facial expression features, and a fast PCA method performs an initial dimensionality reduction of the extracted Gabor features. In the resulting low-dimensional space, the Fisher criterion is used to select the features that favor classification, and an SVM classifier performs the final classification. Experimental results show that the proposed method recognizes faster than traditional methods, meets real-time requirements, is robust, and achieves a high recognition rate.

19.
The rapid development of 3D acquisition devices has greatly advanced research on 3D data technology, and research results on 3D facial expression recognition based on 3D face data continue to emerge. 3D facial expression recognition can largely overcome the pose and illumination problems of 2D recognition. This paper systematically summarizes 3D expression recognition technology, focusing on its key components: expression feature extraction, expression coding and classification, and expression databases, and offers research suggestions for 3D expression recognition. Current 3D facial expression recognition technology basically meets accuracy requirements, but its real-time performance needs further optimization. The material provides guidance for research in this field.

20.