Similar Literature
20 similar documents found (search time: 31 ms)
1.
2.
Facial Expression Analysis and Recognition Based on the Active Appearance Model   Cited: 15 (self-citations: 2, other citations: 13)
This paper analyzes the feasibility of facial expression recognition (FER) using expression features extracted with the active appearance model (AAM), and attempts FER based on these feature vectors. Exploiting the characteristics of face images, the eye region is first located with an eigen-eye method, and the AAM optimization algorithm is then used to obtain the features of a new subject, shortening the optimization time needed to fit the AAM to a new subject and improving localization accuracy. Multivariate statistical analyses, including rank correlation analysis and non-metric multidimensional scaling (nMDS), show that the expression features extracted by the AAM represent expression changes well. A neural network classifier was built for recognition experiments on facial expression images, achieving a recognition rate of 93.5%.

3.
A Simultaneous Facial Motion Tracking and Expression Recognition Algorithm   Cited: 1 (self-citations: 0, other citations: 1)
於俊  汪增福  李睿 《电子学报》2015,43(2):371-376
To address facial expression recognition from a single video with a dynamically changing background, a simultaneous facial motion tracking and expression recognition algorithm is proposed, and a real-time system is built on top of it. The system proceeds as follows: first, 3D facial motion is tracked within a particle filtering framework that combines an online appearance model with a cylindrical geometric model; next, static expression information is extracted based on physiological knowledge; then, dynamic expression information is extracted via manifold learning; finally, static and dynamic expression information are combined during tracking to recognize the expression. Experimental results show that the system offers good overall performance under large pose variation and rich expressions.

4.
It has been shown that a person-specific active appearance model (AAM), built to model the appearance variation of a single person across pose, illumination, and expression, performs substantially better than a generic AAM built to model the appearance variation of many faces. However, it is not practical to build a personal AAM before tracking an unseen subject. A virtual person-specific AAM is proposed to tackle this problem. The AAM is constructed from a set of virtual personal images with different poses and expressions, synthesized from the annotated first frame via regression. To preserve personal facial details in the virtual images, a Poisson fusion strategy is designed and applied to the virtual facial images generated via bilinear kernel ridge regression. Furthermore, the AAM subspace is sequentially updated during tracking using the sequential Karhunen-Loeve algorithm, which helps the AAM adapt to variation in the facial context. Experiments show the proposed virtual personal AAM is robust to facial context changes during tracking, and outperforms other state-of-the-art AAMs in facial feature tracking accuracy and computation cost.

5.
A Gabor-Filter-Based Method for Extracting Dynamic Expression Features   Cited: 1 (self-citations: 1, other citations: 0)
Existing dynamic feature extraction methods suffer from the drawback that facial appearance features are extracted along with the expression features of an image sequence. To address this, a Gabor-filter-based method for extracting dynamic expression features is proposed. Exploiting the frequency and orientation selectivity of Gabor filters, the method largely suppresses facial appearance features during expression feature extraction, reducing the appearance component in the resulting expression features. Expression recognition experiments on the Cohn-Kanade and CMU-AMP face databases show that the dynamic expression features obtained by the proposed method are more effective for expression recognition.
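As an illustration of the frequency- and orientation-selective filters this abstract relies on, here is a minimal numpy sketch of a real-valued Gabor kernel bank. All parameter values (kernel size, sigma, wavelength, number of orientations) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real part of a 2D Gabor filter: a Gaussian envelope
    modulated by a cosine carrier at orientation `theta`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

# a small bank over 4 orientations, as in typical expression pipelines
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Convolving a face image with each kernel in the bank yields orientation-tuned responses; expression features are typically built from the magnitudes of these responses.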

6.
To recognize facial expressions accurately, this paper proposes a hybrid method combining principal component analysis (PCA) and local binary patterns (LBP). First, an eight-eye segmentation method is introduced to extract the effective area of the facial expression image, removing information that is useless to subsequent feature extraction. PCA then extracts the global grayscale feature of the whole expression image while reducing the data size, and LBP extracts the local neighborhood texture feature of the mouth area, which contributes most to facial expression recognition. Fusing the global and local features is more effective for recognition. Finally, a support vector machine (SVM) classifies the fused feature. Experimental results show that the proposed method classifies different expressions more effectively and achieves a higher recognition rate than traditional methods.
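The LBP texture step described in this abstract can be sketched in a few lines of numpy. This is the basic 3x3, 8-neighbour variant with a normalised code histogram, a simplified stand-in for whatever LBP configuration the paper actually uses.

```python
import numpy as np

def lbp_8neigh(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours against the
    centre pixel and pack the comparison bits into a code in [0, 255]."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= c).astype(np.int32) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: a local texture descriptor."""
    codes = lbp_8neigh(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In the pipeline above, such a histogram computed over the mouth region would be concatenated with the PCA global feature before being passed to the SVM.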

7.
With a better understanding of face anatomy and technical advances in computer graphics, 3D face synthesis has become one of the most active research fields for human-machine applications, ranging from immersive telecommunication to the video games industry. In this paper we propose a method that automatically extracts features such as the eyes, mouth, eyebrows, and nose from a given frontal face image. A generic 3D face model is then superimposed onto the face in accordance with the extracted facial features, fitting the input face image by transforming the vertex topology of the generic model. The person-specific 3D face is finally synthesized by texturing the individualized face model. Once the model is ready, six basic facial expressions are generated with the help of MPEG-4 facial animation parameters. To generate transitions between these expressions we use 3D shape morphing between the corresponding face models and blend the corresponding textures. The novelty of our method is the automatic generation of the 3D model and the synthesis of faces with different expressions from a single frontal neutral face image. The method is fully automatic, robust, and fast, and can generate various views of the face by rotating the 3D model. It can be used in applications for which depth accuracy is not critical, such as games, avatars, and face recognition. We have tested and evaluated our system on a standard database, namely BU-3DFE.

8.
罗元  崔叶  王艳  张毅 《半导体光电》2014,35(2):330-333,349
The discrete cosine transform (DCT) only captures global features of a facial expression image; it ignores relationships between neighboring pixels, cannot extract texture information, and cannot reliably distinguish similar expressions. To address these problems, an expression feature extraction method is proposed that fuses the DCT with local binary pattern (LBP) features. First, the low-frequency DCT coefficients of the face image are taken as the global expression feature; then LBP extracts local texture features from the mouth and eye regions, which contribute most to recognition. Fusing the LBP local texture features with the DCT global features yields a more effective expression feature. Finally, a support vector machine (SVM) performs the recognition. Experimental results show that the fused features are more discriminative than features extracted by the DCT alone and improve recognition accuracy; the method was also applied to the control of an intelligent wheelchair with good results.
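The DCT global-feature step described above can be illustrated with an orthonormal 2D DCT-II built from a cosine matrix; keeping only the top-left block of coefficients gives the low-frequency global descriptor the abstract refers to. The block size `keep` is an illustrative assumption, and this sketch assumes a square grayscale image.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block via matrix multiplication."""
    n = block.shape[0]
    k = np.arange(n)
    # C[i, j] = cos(pi * (2j + 1) * i / (2n)), scaled to be orthonormal
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def dct_global_feature(img, keep=8):
    """Keep only the top-left `keep` x `keep` low-frequency coefficients
    as a compact global appearance descriptor."""
    coeffs = dct2(np.asarray(img, dtype=np.float64))
    return coeffs[:keep, :keep].ravel()
```

Energy compaction is why this works: for typical face images most signal energy concentrates in the low-frequency corner, so a small coefficient block retains the global appearance while discarding fine texture (which LBP then supplies).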

9.
Facial expression recognition plays an important role in human-computer interaction and other artificial intelligence fields, yet current research neglects the semantic information of the face. This paper proposes a facial expression recognition network that fuses local semantic and global information, consisting of two branches: a local semantic region extraction branch and a local-global feature fusion branch. First, a semantic segmentation network is trained on a face parsing dataset, and semantic parses of facial expression datasets are obtained by transfer learning. Regions meaningful for expression recognition and their semantic features are obtained from the parse, and local semantic features are fused with global features to construct semantic local features. Finally, the semantic local features and global features are fused into a global semantic composite feature for the face, which a classifier assigns to one of seven basic expressions. The paper also proposes a partial-layer unfreezing training strategy, which makes the semantic features better suited to expression recognition and reduces semantic redundancy. Average recognition accuracies of 93.81% and 88.78% are achieved on the public JAFFE and KDEF datasets, respectively, outperforming current deep learning and traditional methods. The experimental results demonstrate that the proposed network fusing local semantic and global information describes expression information well.

10.
To address the difficulty of accurately recognizing facial expressions, a network model is proposed that combines a ResNet50 backbone with a bilinear mixed attention mechanism. To counter the incomplete and blurred feature extraction caused by traditional pooling algorithms, an adaptive pooling weight algorithm based on Average-Pooling is proposed, and the hyperparameters of the convolutional neural network are tuned adaptively with particle swarm optimization, further improving recognition accuracy. A real-time facial expression recognition system is built on the improved model. On the Fer2013 and CK+ datasets, the improved model achieves test-set accuracies of 73.51% and 99.86%, respectively.
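For reference, the plain Average-Pooling baseline that the proposed adaptive weighting builds on is simply a non-overlapping block mean. A minimal numpy sketch is below; the adaptive-weight and particle-swarm parts of the paper are not reproduced here.

```python
import numpy as np

def average_pool2d(x, k=2):
    """Non-overlapping k x k average pooling over a 2D feature map.
    Trailing rows/columns that do not fill a block are dropped."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]
    # split into (rows of blocks, k, cols of blocks, k) and average each block
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

An adaptive variant would replace the uniform 1/k² weights inside each block with learned weights; the uniform mean above is the special case every such scheme reduces to.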

11.
Expression cloning plays an important role in facial expression synthesis. In this paper, a novel algorithm is proposed for facial expression cloning. The proposed algorithm first introduces a new elastic model to balance the global and local warping effects, such that the impacts from facial feature diversity among people can be minimized, and thus more effective geometric warping results can be achieved. Furthermore, a muscle-distribution-based (MD) model is proposed, which utilizes the muscle distribution of the human face and results in more accurate facial illumination details. In addition, we also propose a new distance-based metric to automatically select the optimal parameters such that the global and local warping effects in the elastic model can be suitably balanced. Experimental results show that our proposed algorithm outperforms the existing methods.

12.
In this paper, a novel feature extraction method is proposed for facial expression recognition that extracts features from facial depth and 3D mesh alongside texture. The 3D Facial Expression Generic Elastic Model (3D FE-GEM) method is used to reconstruct an expression-invariant 3D model of the human face, from which texture, depth, and mesh are extracted. The Local Binary Pattern (LBP), the proposed 3D High-Low Local Binary Pattern (3DH-LLBP), and Local Normal Binary Patterns (LNBPs) are then applied to the texture, depth, and mesh of the face, respectively, to extract features from 2D images. Finally, the final feature vectors are generated through feature fusion and classified by a Support Vector Machine (SVM). Convincing results are obtained for facial expression recognition on the CK+, CK, JAFFE, and Bosphorus image databases compared to several state-of-the-art methods.

13.
李宏菲  李庆  周莉 《电子学报》2019,47(8):1643-1653
Applications of facial expression recognition are penetrating many fields, such as driver safety, retail, and clinical medicine. This paper studies facial expression recognition techniques; the main work and contributions are as follows. For dynamic expression recognition under unconstrained conditions, a recognition algorithm is proposed based on multiple visual descriptors and an audio-visual feature fusion strategy. Spatio-temporal local descriptions from multiple visual descriptors extract the dynamic expression features, while the fusion of video and audio features improves recognition performance. Dynamic warping based on covariance matrices and temporal-axis segmentation effectively handles the description of dynamic expression sequences of different durations. To further improve the generalization of the recognition model, an ensemble model based on weighted voting over multiple individual recognition models is introduced. For learning the voting weights, two methods are proposed: weight learning based on random resampling, and weight learning based on the relative strength of the individual classification models. Ensemble decision-making further improves recognition performance. Experiments on the AFEW5.0 dynamic expression database validate the effectiveness of the algorithm.

14.
Facial features such as lip corners, eye corners, and the nose tip are critical points in a human face. Robust extraction of such facial feature locations is an important problem used in a wide range of applications. In this work, we propose a probabilistic framework and several methods that can extract critical points on a face using both location and texture information. The new framework enables one to learn facial feature locations probabilistically from training data. The principle is to maximize the joint distribution of location and appearance/texture parameters. We first introduce an independence assumption that enables an independent search for each feature. We then improve upon this model by assuming dependence of the location parameters but independence of the texture parameters. We model the location parameters with a multivariate Gaussian, and the texture parameters with a Gaussian mixture model, which is much richer than standard subspace models such as principal component analysis. The location parameters are found by solving a maximum likelihood optimization problem. We show that the optimization problem can be solved using various search strategies. We introduce local gradient-based methods such as gradient ascent and Newton's method, initialized from the independent-model locations, both of which require certain non-trivial assumptions to work. We also propose a multi-candidate coordinate ascent search and a coarse-to-fine search strategy, both of which depend on efficiently searching among multiple candidate points. Our framework is compared in detail with the conventional statistical approaches of active shape and active appearance models. Extensive experiments show that the new methods outperform the conventional approaches in facial feature extraction accuracy.
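The location-model part of this framework, a multivariate Gaussian over feature locations whose likelihood is maximized by gradient ascent, can be sketched as follows. The fixed step size and the use of a plain gradient step (rather than Newton's method or the multi-candidate searches the paper proposes) are illustrative simplifications.

```python
import numpy as np

def gaussian_loglik(x, mu, cov):
    """Log-density of a multivariate Gaussian at location x."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))

def gradient_ascent_step(x, mu, cov, lr=0.1):
    """One gradient-ascent step on the log-likelihood w.r.t. the
    location x; the gradient is cov^{-1} (mu - x)."""
    return x + lr * (np.linalg.inv(cov) @ (mu - x))
```

In the full model the objective also includes the texture term (a Gaussian mixture evaluated on the patch at x), so the ascent direction trades off moving toward the location prior's mode against matching local appearance.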

15.
邱玉  赵杰煜  汪燕芳 《电子学报》2016,44(6):1307-1313
The spatio-temporal relationships among facial muscles play an important role in facial expression recognition, but current models cannot efficiently capture the complex global spatio-temporal relationships of the face, which has limited their adoption. To solve this problem, this paper proposes a facial expression modeling method based on interval-algebra Bayesian networks, which captures not only the spatial relationships of the face but also its complex temporal relationships, enabling more effective expression recognition. The method uses only tracking-based features and requires no manual labeling of peak frames, which speeds up training and recognition. Experiments on the standard CK+ and MMI databases show that the method effectively improves accuracy in facial expression recognition.

16.

Face recognition has become accessible to experts and ordinary people alike, as it is a central non-intrusive biometric modality. In this paper, we introduce a new approach to face recognition under varying facial expressions. The proposed approach consists of two complementary steps, facial expression recognition and face recognition, which together improve face recognition across facial expression variation. In the first step, we select the most expressive regions responsible for facial expression appearance using the Mutual Information technique. This not only improves facial expression classification accuracy but also reduces the feature vector size. In the second step, we use Principal Component Analysis (PCA) to build Eigenfaces for each facial expression class. Face recognition is then performed by projecting the face onto the Eigenfaces of the corresponding facial expression class. The PCA technique significantly reduces the dimensionality of the original space, since face recognition is carried out in the reduced Eigenfaces space. An experimental study evaluates the performance of the proposed approach in terms of face recognition accuracy and spatial-temporal complexity.
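The per-class Eigenfaces step described above can be sketched with an SVD on row-vectorised face images. Matching by comparing reconstruction errors across class subspaces is one illustrative reading of the projection step, not necessarily the authors' exact decision rule.

```python
import numpy as np

def eigenfaces(X, n_components):
    """Build an Eigenfaces basis from row-vectorised face images X
    (one face per row), e.g. for a single expression class."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data gives the principal axes directly
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]          # basis rows are eigenfaces

def project(face, mean, basis):
    """Coordinates of a face in the reduced Eigenfaces space."""
    return basis @ (face - mean)

def reconstruction_error(face, mean, basis):
    """Distance between a face and its projection onto the subspace;
    a small error means the face is well explained by this basis."""
    recon = mean + basis.T @ project(face, mean, basis)
    return np.linalg.norm(face - recon)
```

With one `(mean, basis)` pair per expression class, a probe face can be projected into the basis matching its recognized expression, so recognition operates in a low-dimensional space tailored to that expression.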


17.
林淑瑞  张晓辉  郭敏  张卫强  王贵锦 《信号处理》2021,37(10):1889-1898
In recent years, affective computing has become key to breakthroughs in human-computer interaction, and emotion recognition, as an important part of affective computing, has received wide attention. This paper implements a facial expression recognition system based on ResNet18 and a speech emotion recognition model based on the HGFM architecture, and trains well-performing models by tuning their parameters. On this basis, two multimodal fusion strategies, feature-level fusion and decision-level fusion, are used to build a multimodal emotion recognition system for video and audio signals, demonstrating the superior performance of multimodal emotion recognition. Under both fusion strategies, the audio-visual emotion recognition models improve accuracy over the video-only and audio-only modalities, confirming that multimodal models generally recognize emotion better than the best unimodal model. The models achieve good emotion recognition performance: the fused audio-visual bimodal model reaches an accuracy of 76.84%, a 3.50% improvement over the best existing model, giving it a performance advantage over existing audio-visual emotion recognition models.

18.
This paper describes a new and efficient method for facial expression generation on cloned synthetic head models. The system uses abstract facial muscles called action units (AUs), based on both anatomical muscles and the Facial Action Coding System. The facial expression generation method has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques. Automatic cloning of a real human head is done by adapting a generic facial and head mesh to Cyberware laser-scanned data. The conformation of the generic head to the individual data and the fitting of texture onto it are based on a fully automatic feature extraction procedure. Individual facial animation parameters are also estimated automatically during the conformation process. The entire animation system is hierarchical: emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes, as well as the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, with limited discussion of the automatic cloning process and higher-level animation control.

19.
In response to the high complexity and low accuracy of current facial expression recognition networks, this paper proposes an E-MobileNeXt network for facial expression recognition. E-MobileNeXt is built on our proposed E-SandGlass block. In addition, we improve the overall performance of the network through RepConv and SGE attention mechanisms. Experimental results show that the network model improves expression recognition accuracy by 6.5% and 7.15% on the RAF-DB and CK+ datasets, respectively.

20.
Graph-model-based global feature representations of finger veins not only reduce the dependence of imaging quality on the acquisition device but also improve matching efficiency. Current finger-vein graph models suffer from unstable graph structure and from matching efficiency that degrades as the graph grows. To address this, this paper proposes a method for constructing weighted graphs based on the SLIC (Simple Linear Iterative Clustering) superpixel segmentation algorithm, and improves the ChebyNet graph convolutional neural network (GCN) to extract graph-level features from the weighted graphs. Since finger-vein samples are generally few while ChebyNet's convolutional layers have many parameters, which easily causes overfitting, and since its fast pooling layer cannot select nodes adaptively, this paper proposes SCheby-MgPool (Simplified Cheby-Multi gPool), an improved GCN model with a global pooling structure. Experimental results show that the finger-vein features extracted by the proposed method perform well in both recognition accuracy and matching efficiency.
