Similar Literature
1.
Machine learning approaches have produced some of the highest reported performances for facial expression recognition. However, to date, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases that were collected under controlled lighting conditions from a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented, containing pictures photographed by the subjects themselves from thousands of different people under many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
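A minimal sketch of the kind of pipeline the paper studies (register faces, extract an appearance descriptor, train a classifier), assuming HOG features and a linear SVM as stand-ins; the paper itself compares several feature representations and learners, and the GENKI data are not bundled here, so random arrays stand in for face crops.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def extract_features(face_images, size=(48, 48)):
    """face_images: list of 2-D grayscale arrays (already face-cropped)."""
    feats = []
    for img in face_images:
        patch = resize(img, size, anti_aliasing=True)   # crude registration to a canonical size
        feats.append(hog(patch, orientations=8, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.asarray(feats)

# Hypothetical data: random "images" stand in for smile / non-smile faces.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)                   # 1 = smile, 0 = non-smile

X = extract_features(images)
clf = LinearSVC(C=1.0)
print(cross_val_score(clf, X, labels, cv=5).mean())     # chance-level on random data
```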

2.
This paper studies unsupervised cross-domain facial expression recognition based on transfer learning. Over the past few years, many proposed methods have achieved satisfactory results on facial expression recognition. However, these methods usually assume that the training and test data come from the same dataset and therefore share the same distribution. In practical applications this assumption often does not hold, particularly when the training and test sets come from different datasets, i.e., the cross-domain facial expression recognition problem. To address this, the paper applies a transfer learning method based on joint distribution alignment (domain align learning) to cross-domain facial expression recognition. The method finds a feature transformation that maps source-domain and target-domain data into a common subspace, in which the marginal and conditional distributions are jointly aligned to reduce the distribution discrepancy between domains; a domain-adaptive classifier is then trained on the transformed features to predict the labels of target-domain samples. To verify the effectiveness of the proposed algorithm, extensive experiments were conducted on three databases: CK+, Oulu-CASIA NIR, and Oulu-CASIA VIS. The results demonstrate that the proposed algorithm is effective for cross-domain expression recognition.
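A minimal numpy sketch of joint marginal-plus-conditional distribution alignment in the spirit described above, following the classic Joint Distribution Adaptation recipe (MMD matrices plus a generalized eigenproblem); the paper's exact "domain align learning" formulation is not reproduced, and target pseudo-labels from a source-only classifier supply the conditional term.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import KNeighborsClassifier

def jda_align(Xs, ys, Xt, yt_pseudo, dim=20, lam=1.0):
    """One joint-alignment step: learn a projection A that reduces both the
    marginal and the (pseudo-label based) conditional MMD between domains."""
    X = np.vstack([Xs, Xt]).T                        # d x n, one column per sample
    X = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    d, n = X.shape
    ns, nt = len(Xs), len(Xt)

    # Marginal-distribution MMD matrix.
    e = np.vstack([np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt])
    M = e @ e.T
    # Conditional-distribution MMD matrices from source labels and target pseudo-labels.
    for c in np.unique(ys):
        e = np.zeros((n, 1))
        s_idx = np.where(ys == c)[0]
        t_idx = ns + np.where(yt_pseudo == c)[0]
        if len(t_idx) == ns or len(t_idx) == 0 or len(s_idx) == 0:
            continue
        e[s_idx] = 1.0 / len(s_idx)
        e[t_idx] = -1.0 / len(t_idx)
        M += e @ e.T
    M /= np.linalg.norm(M, 'fro')

    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    # Smallest generalized eigenvectors of (X M X' + lam I) a = w (X H X') a.
    vals, vecs = eigh(X @ M @ X.T + lam * np.eye(d),
                      X @ H @ X.T + 1e-6 * np.eye(d))
    A = vecs[:, np.argsort(vals)[:dim]]
    Z = A.T @ X                                      # projected features, dim x n
    return Z[:, :ns].T, Z[:, ns:].T

# Toy usage with random "expression features"; the pseudo-labels could be
# refined over a few alternating iterations.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 64)), rng.integers(0, 6, 100)
Xt = rng.normal(loc=0.5, size=(80, 64))
yt_pseudo = KNeighborsClassifier(1).fit(Xs, ys).predict(Xt)
Zs, Zt = jda_align(Xs, ys, Xt, yt_pseudo)
print(KNeighborsClassifier(1).fit(Zs, ys).predict(Zt)[:10])
```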

3.
Objective: Large amounts of labeled data and deep learning methods have greatly improved image recognition performance. However, labeled data for expression recognition are scarce, so trained deep models easily overfit; research shows that pre-trained face recognition networks can alleviate this problem. A pre-trained face network, however, may retain a large amount of identity information, which is unfavorable for expression recognition. This paper investigates how to effectively exploit a pre-trained face recognition network to improve expression recognition performance. Method: We introduce the idea of continual learning and use the connection between face recognition and expression recognition to guide the latter. The method holds that the parameters contributing most to the decrease of the overall face recognition loss are those that capture common facial features; they are important for expression recognition and help the network perceive facial characteristics. The method consists of two stages: first, a face recognition network is trained while the importance of each parameter is computed and recorded; then the pre-trained model is trained for expression recognition, where restricting the change of important parameters preserves the model's strong perception of facial features, while unimportant parameters may change by larger amounts to learn expression-specific information. We call this approach parameter-importance regularization. Results: The method was evaluated on three datasets: RAF-DB (real-world affective faces database), CK+ (the extended Cohn-Kanade database), and Oulu-CASIA. On the mainstream RAF-DB dataset it reaches 88.04% accuracy, 1.83% higher than directly fine-tuning the pre-trained network. Results on the other datasets also demonstrate the method's effectiveness. Conclusion: The proposed parameter-importance regularization exploits the connection between face recognition and expression recognition to make full use of the pre-trained face recognition model, yielding a more robust expression recognition model.
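A hedged PyTorch sketch of the two-stage recipe described above, in the style of importance-weighted continual learning: estimate a per-parameter importance score from the face recognition loss, then penalize drift of important parameters while fine-tuning for expressions. The squared-gradient importance, the toy model, and the random data are all stand-ins; the paper's exact measure of "contribution to the loss decrease" may differ, and a real setting would swap the identity head for an expression head.

```python
import torch
import torch.nn as nn

def estimate_importance(model, face_loader, loss_fn):
    """Accumulate a per-parameter importance score from gradients of the
    face recognition loss (squared gradients here, as an assumption)."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in face_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach() ** 2
    return {n: v / max(len(face_loader), 1) for n, v in importance.items()}

def importance_penalty(model, anchor, importance, lam=100.0):
    """Quadratic penalty keeping important parameters near the values learned
    during face recognition; unimportant parameters move freely."""
    reg = 0.0
    for n, p in model.named_parameters():
        reg = reg + (importance[n] * (p - anchor[n]) ** 2).sum()
    return lam * reg

# --- usage sketch (toy network; random tensors stand in for real data) ---
model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 128), nn.ReLU(), nn.Linear(128, 7))
face_loader = [(torch.randn(8, 1, 48, 48), torch.randint(0, 7, (8,))) for _ in range(4)]
expr_loader = [(torch.randn(8, 1, 48, 48), torch.randint(0, 7, (8,))) for _ in range(4)]
ce = nn.CrossEntropyLoss()

importance = estimate_importance(model, face_loader, ce)           # stage 1
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

opt = torch.optim.SGD(model.parameters(), lr=0.01)                  # stage 2
for x, y in expr_loader:
    opt.zero_grad()
    loss = ce(model(x), y) + importance_penalty(model, anchor, importance)
    loss.backward()
    opt.step()
```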

4.
As facial expression recognition has gradually moved from controlled laboratory settings to challenging real-world environments, and with the rapid development of deep learning, deep neural networks that learn discriminative features have increasingly been applied to automatic facial expression recognition. Current deep facial expression recognition systems aim to solve two problems: 1) overfitting caused by the lack of sufficient training data, and 2) interference from expression-irrelevant factors in real-world environments, such as illumination, head pose, and identity. This paper first summarizes the research on deep facial expression recognition over the past decade and the development of related facial expression databases. Current deep-learning-based methods are then divided into two categories, static and dynamic facial expression recognition, which are reviewed separately. For state-of-the-art deep expression recognition algorithms, their performance on common expression databases is compared and their respective advantages and disadvantages are analyzed in detail. Finally, future research directions, opportunities, and challenges are summarized: since expressions are essentially dynamic activities of facial muscle movement, deep expression recognition networks based on dynamic sequences often achieve better results than static ones. In addition, combining other expression models, such as facial action units, and other multimedia modalities, such as audio and human physiological signals, can extend expression recognition to scenarios of greater practical value.

5.
Facial expression and emotion recognition from thermal infrared images has attracted increasing attention in recent years. However, the features adopted in current work are either temperature statistics extracted from facial regions of interest or hand-crafted features commonly used in the visible spectrum; to date, no image features have been designed specifically for thermal infrared images. In this paper, we propose using a deep Boltzmann machine to learn thermal features for emotion recognition from thermal infrared facial images. First, the face is located and normalized in the thermal infrared images. Then, a deep Boltzmann machine model composed of two layers is trained, and its parameters are further fine-tuned for emotion recognition after the feature-learning pre-training. Comparative experiments on the NVIE database demonstrate that our approach outperforms approaches using temperature statistics or hand-crafted features borrowed from the visible domain. The features learned from the forehead, eye, and mouth regions are more effective for discriminating the valence dimension of emotion than those from other facial areas. In addition, our study shows that adding unlabeled data from another database during training can further improve feature learning.
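The paper trains a two-layer deep Boltzmann machine; full DBM training (mean-field inference plus persistent chains) is involved, so the sketch below shows only a single binary RBM layer trained with 1-step contrastive divergence, the building block typically stacked to pre-train such models. The random binarized patches standing in for normalized NVIE thermal crops, and all hyperparameters, are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden=64, lr=0.05, epochs=10, batch=32, seed=0):
    """Train one binary RBM with CD-1; a stand-in for one layer of the DBM."""
    rng = np.random.default_rng(seed)
    n_visible = V.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        for i in range(0, len(V), batch):
            v0 = V[i:i + batch]
            ph0 = sigmoid(v0 @ W + b_h)                  # positive phase
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b_v)                # reconstruction
            ph1 = sigmoid(pv1 @ W + b_h)                 # negative phase
            W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
            b_v += lr * (v0 - pv1).mean(axis=0)
            b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h

# Toy usage: random binarized "thermal patches"; the hidden activations would
# become the features consumed by an emotion classifier or a fine-tuned head.
rng = np.random.default_rng(1)
patches = (rng.random((200, 16 * 16)) > 0.5).astype(float)
W, b_v, b_h = train_rbm(patches, n_hidden=64)
features = sigmoid(patches @ W + b_h)
print(features.shape)          # (200, 64)
```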

6.
奚琰 (Xi Yan). 《计算机系统应用》 (Computer Systems & Applications), 2022, 31(11): 175-183
Unlike laboratory environments, facial expression images in real life occur in complex scenes, where partial occlusion, the most common problem, causes significant changes in facial appearance; the global features extracted by a model then contain emotion-irrelevant redundant information and lose discriminability. To address this, this paper proposes a facial expression recognition method that combines contrastive learning with a channel-spatial attention mechanism, learning salient local emotion features while attending to the relationship between local and global features. First, contrastive learning is introduced: a new positive/negative sample selection strategy is designed through specific data augmentation, and a large amount of easily obtained unlabeled emotion data is used for pre-training to learn occlusion-aware representations, which are then transferred to the downstream facial expression recognition task to improve performance. In the downstream task, expression analysis of each face image is recast as emotion detection over multiple local regions; a channel-spatial attention mechanism learns fine-grained attention maps for different local facial regions, and the weighted features are fused to weaken the noise introduced by occluded content. Finally, a constraint loss is proposed for joint training to optimize the fused features used for classification. Experimental results show that the proposed method achieves results comparable to existing state-of-the-art methods on both public non-occluded facial expression datasets (RAF-DB and FER2013) and synthetically occluded facial expression datasets.
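A small PyTorch sketch of a generic channel-spatial attention block (CBAM-style) of the kind that could weight local face-region features before fusion; the paper's exact attention module, its contrastive pre-training strategy, and the constraint loss are not reproduced here.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style block: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=8, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                   # x: (B, C, H, W)
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        # Spatial attention from per-position channel statistics.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# Usage: weight the feature map of one local face region before fusion.
feat = torch.randn(4, 64, 12, 12)           # hypothetical backbone features
att = ChannelSpatialAttention(64)
print(att(feat).shape)                       # torch.Size([4, 64, 12, 12])
```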

7.
Facial expression analysis is a technique by which computers attempt to understand human emotion from facial information, and it has become a hot topic in computer vision. Its challenges include the difficulty of data annotation, poor label consistency across annotators, and the large head poses and occlusions of faces in natural environments. To promote the development of facial expression analysis, this paper surveys the related tasks, progress, challenges, and future trends. First, common tasks, basic algorithmic frameworks, and databases for facial expression analysis are briefly described. Second, facial expression recognition methods are reviewed, including traditional hand-crafted feature design and deep learning methods. Next, the open problems and challenges in facial expression recognition are summarized. Finally, future trends are discussed. From this review, the following observations are drawn: 1) Given the small scale of reliable facial expression databases, transfer learning from face recognition models and semi-supervised learning with unlabeled data are two important strategies. 2) Affected by ambiguous expressions, low-quality images, and annotator subjectivity, the labels of facial expression data collected in unconstrained natural scenes carry some uncertainty; suppressing these factors lets deep networks learn genuine expression features. 3) For occlusion and large pose, fusing local patches is an effective strategy; another worth considering is to first learn a model robust to occlusion and pose on a large-scale face recognition database and then transfer it to facial expression recognition. 4) Because deep-learning-based expression recognition methods are affected by many hyperparameters, current methods are not easily comparable; different expression recognition methods should be evaluated against different simple baselines. Although expression analysis in unconstrained natural environments has developed rapidly, the above problems and challenges remain to be solved. Facial expression analysis is a practical task, and future work should consider not only accuracy but also runtime and storage cost; inferring expression categories from high-accuracy facial action unit detection in unconstrained environments is also worth considering.

8.
Learning effectiveness is normally analyzed through tests or questionnaires, but instant feedback is usually not available. Learners' facial emotion and learning motivation are positively related, so a system that identifies learners' facial emotions can provide feedback with which teachers can understand students' learning situation and offer help or improve their teaching strategy. Studies have found that convolutional neural networks perform well in basic facial emotion recognition; unlike traditional machine learning, they do not require manually designed features but automatically learn the necessary features from the entire image. This article improves the FaceLiveNet network for basic emotion recognition and proposes the Dense_FaceLiveNet framework, which we use for two phases of transfer learning. First, a basic emotion recognition model trained on the relatively simple JAFFE and KDEF data is transferred to the FER2013 basic emotion dataset, reaching an accuracy of 70.02%. Second, the FER2013 basic emotion model is transferred to a learning emotion recognition model, whose test accuracy reaches 91.93%, 12.9% higher than the 79.03% obtained without transfer learning, which shows that transfer learning can effectively improve the accuracy of the learning emotion recognition model. In addition, to test the generalization ability of the model, videos recorded by students at a national university in Taiwan during class were used as test data. The original learning emotion database did not consider exceptional poses such as hands over the eyebrows, closed eyes, and a hand holding the chin; after adding images of these exceptions to the learning emotion database, the model was rebuilt and its recognition accuracy reached 92.42%. Comparison of the feature maps shows that the rebuilt model does capture such characteristics as covered eyebrows, supported chins, and closed eyes. Furthermore, after combining all the students' image data with the original learning emotion database, the rebuilt model obtained an accuracy of 84.59%. The results prove that the learning emotion recognition model can achieve high recognition accuracy on previously unseen images through transfer learning. The main contribution is the design of two-phase transfer learning for building the learning emotion recognition model, which overcomes the problem of small amounts of learning emotion data; our experimental results show the performance improvement of two-phase transfer learning.
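A hedged PyTorch sketch of the two-phase transfer-learning recipe: fine-tune a basic-emotion backbone on FER2013, then reuse the same backbone with a fresh head for the learning-emotion classes. The Dense_FaceLiveNet architecture is not reproduced; a tiny CNN, the 4 learning-emotion classes, and the random loaders are stand-ins.

```python
import torch
import torch.nn as nn

def make_backbone():
    """Small CNN stand-in for Dense_FaceLiveNet; the real architecture and the
    JAFFE/KDEF pre-training that precedes phase 1 are not shown."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten())

def finetune(backbone, head, loader, epochs=1, lr=1e-3):
    model = nn.Sequential(backbone, head)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            ce(model(x), y).backward()
            opt.step()
    return model

# Hypothetical loaders: 48x48 grayscale faces, 7 basic emotions (FER2013) and,
# say, 4 learning-emotion classes for the classroom videos.
fer_loader = [(torch.randn(8, 1, 48, 48), torch.randint(0, 7, (8,))) for _ in range(4)]
learn_loader = [(torch.randn(8, 1, 48, 48), torch.randint(0, 4, (8,))) for _ in range(4)]

backbone = make_backbone()
# Phase 1: adapt the basic-emotion backbone to FER2013.
finetune(backbone, nn.Linear(32, 7), fer_loader)
# Phase 2: transfer the same backbone to learning-emotion recognition with a
# new classification head (early layers could optionally be frozen).
learning_model = finetune(backbone, nn.Linear(32, 4), learn_loader)
print(learning_model(torch.randn(1, 1, 48, 48)).shape)    # torch.Size([1, 4])
```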

9.
Objective: Face images collected in real environments are usually accompanied by occlusion, illumination, and expression changes that interfere with recognition. In many special environments the number of training samples cannot be guaranteed, easily producing the unfavorable condition that training samples are far fewer than test samples. How to eliminate the combined influence of complex environmental changes and few training samples has therefore become a difficult problem in face recognition. Method: Based on low-rank matrix decomposition, two low-rank decompositions are performed, using a non-convex rank approximation norm and the nuclear norm respectively, to remove occlusion interference. First, non-convex robust principal component analysis yields a low-rank dictionary with illumination and occlusion variation removed. To eliminate the influence of components common to all face classes (such as the facial organs) and speed up convergence, this low-rank dictionary is used as the initialization of a second, nuclear-norm-based rank-approximation decomposition, which yields a low-rank dictionary with the non-discriminative inter-class components removed, to be used for classification. Finally, to cope with few training samples and a large proportion of occluded samples, auxiliary data from the same database that are used neither for training nor for testing serve as an auxiliary dictionary simulating possible occlusion and illumination effects, and classification is performed by minimizing the sparse-representation reconstruction error. Results: Experiments were run on the AR and CK+ databases. On AR, performance was tested by adjusting the proportions of occluded, illumination-varied, and expression-varied samples in the training set. When occluded images made up 1/7 and 3/7 of the training set, the non-occluded images consisted of undisturbed images together with illumination/expression-varied images; when occluded images made up 2/7, the non-occluded images were all illumination/expression-varied. High recognition rates were obtained in all settings: for the different occlusion proportions, 97.75%, 92%, 95.25% and 97.75%, 90%, 95.25%, which is 3%-5% higher than comparable algorithms. As the external auxiliary data were increased from 10 to 40 face classes, recognition rates of 96.75%-98% were obtained, 2%-3% higher than comparable algorithms. On the CK+ expression database, using the homotopy algorithm for the classification solver, the recognition rate was 95.25%. Conclusion: This paper proposes a face recognition algorithm that is efficient and robust against both complex environmental changes and insufficient training samples, and the experimental results confirm its effectiveness on different databases. Future work includes applying the algorithm to joint face and expression recognition and simulating more complex noise conditions in pursuit of better results.
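The core building block here is a robust PCA split of the training matrix into a low-rank dictionary plus sparse error. Below is a minimal numpy sketch of standard nuclear-norm robust PCA solved by the inexact augmented Lagrange multiplier method; the paper's first decomposition actually uses a non-convex rank surrogate, which is not reproduced, and the toy data are synthetic.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, mu=None, iters=200, tol=1e-7):
    """Nuclear-norm robust PCA by inexact ALM: D ~= L (low rank) + S (sparse)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / (np.linalg.norm(D, 2) + 1e-12)
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S

# Toy usage: columns are vectorized face images; occlusion is simulated as
# sparse corruption, and L would serve as the cleaned low-rank dictionary.
rng = np.random.default_rng(0)
clean = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 40))    # rank-3 "faces"
corrupt = clean.copy()
mask = rng.random(clean.shape) < 0.1
corrupt[mask] += rng.normal(scale=5.0, size=mask.sum())
L, S = rpca(corrupt)
print(np.linalg.norm(L - clean) / np.linalg.norm(clean))         # small residual
```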

10.
Face recognition has attracted extensive interest due to its wide applications. However, there are many challenges in real-world scenarios: relatively few samples are available for training, and face images collected from surveillance cameras may contain complex variations (e.g., illumination, expression, occlusion and pose). To address these challenges, in this paper we propose learning class-specific and intra-class variation dictionaries separately. Specifically, we first develop a discriminative class-specific dictionary that amplifies the differences between training classes. We impose a constraint on the sparse coefficients which guarantees that the sparse representation coefficients have small within-class scatter and large between-class scatter. Moreover, we introduce a new intra-class variation dictionary based on the assumption that similar variations from different classes may share common features. The intra-class variation dictionary not only captures the interrelationship of variations, but also addresses the limitation of manually designed, person-specific dictionaries. Finally, we apply the combined dictionary to adaptively represent face images. Experiments conducted on the AR, CMU-PIE, FERET and Extended Yale B databases show the effectiveness of the proposed method.

11.
To address the poor robustness of facial expression recognition and its susceptibility to interference from identity information, a deep neural network recognition algorithm with a locally parallel structure is proposed. First, convolution kernels of different scales are obtained by training a sparse autoencoder; convolutional features are then extracted and pooled so that they acquire a degree of translation invariance; finally, seven expression-specific parallel four-layer networks produce the final classification result. Experimental results show that, in independent tests on standard facial expression databases where the test subjects do not appear in the training set, the proposed locally parallel deep neural network performs well and is more practical than other algorithms.

12.
Objective: Face images collected in the real world are usually affected by environmental factors such as illumination and occlusion, so images of the same class differ to varying degrees while images of different classes are similar to varying degrees, which greatly affects recognition accuracy. To solve this problem, a face recognition algorithm based on discriminative structured low-rank dictionary learning is proposed on the basis of low-rank matrix recovery theory. Method: Using the label information of the training samples, low-rank regularization and structured sparsity are simultaneously imposed on the learned discriminative dictionary. During dictionary learning, the reconstruction error of the samples first constrains the relationship between samples and dictionary; the Fisher criterion is then applied to the sparse coding process so that the coding coefficients become discriminative; because noise in the training samples degrades the dictionary's discriminability, low-rank regularization based on low-rank matrix recovery is applied during dictionary learning; structured sparsity is added so that structural information is preserved and samples can be classified optimally; finally, test samples are classified by reconstruction error. Results: Experiments were conducted on the AR and ORL face databases. On AR, to analyze the effect of sample dimensionality, six images per person from the first session were selected for training, including one with a scarf, two with sunglasses, and three with expression and illumination changes (unoccluded), with the same combination used for testing; for every method, higher image dimensionality gives a higher recognition rate. Comparing SRC (sparse representation based classification) with DKSVD (discriminative K-means singular value decomposition) shows that DKSVD's dictionary learning mitigates the influence of uncertainty in the training samples; comparing DLRD_SR (discriminative low-rank dictionary learning for sparse representation) with FDDL (Fisher discriminative dictionary learning) shows that when occlusion or other noise is present, a low-rank dictionary improves the recognition rate by at least 5.8%; comparing the proposed algorithm with DLRD_SR shows that adding the Fisher criterion during dictionary learning significantly improves the recognition rate, while the ideal sparse values guarantee optimal classification. At 500 dimensions, the recognition rate reaches 85.2% under scarf and sunglasses occlusion, which cover roughly 40% and 20% of the face respectively. To verify the effectiveness of the algorithm under different expression, illumination, and occlusion conditions, experiments were run with different training-sample combinations; under every combination, the proposed algorithm has a clear advantage when occlusion is present. When the training samples contain only expression changes, illumination changes, and sunglasses occlusion, its recognition rate exceeds the other algorithms by at least 2.7%; with only expression changes, illumination changes, and scarf occlusion, by at least 3.6%; with expression changes, illumination changes, scarf occlusion, and sunglasses occlusion, by at least 1.9%. On ORL, the recognition rate reaches 95.2% without occlusion, slightly lower than FDDL; at 20% random block occlusion, the proposed algorithm has the highest recognition rate among SRC, DKSVD, FDDL, and DLRD_SR; at 50% random block occlusion, none of these algorithms achieves a high rate, but the proposed algorithm remains the highest. Conclusion: The proposed algorithm has a degree of robustness when face images are affected by occlusion and other factors, and the experimental results show that it is a feasible approach to face recognition.

13.
By analyzing the biological background and mathematical properties of Gabor wavelets and sparse representation, a facial expression recognition method based on Gabor wavelets and sparse representation is proposed. A Gabor wavelet transform is used to extract features from expression images, an over-complete dictionary of the training samples' Gabor features is built, the feature vectors of facial expression images are optimized by a sparse representation model, and multiple classifiers are fused for recognition. Experimental results show that the method effectively extracts feature information from expression images and improves the expression recognition rate.
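A minimal sketch of the two ingredients described above: a small Gabor filter-bank descriptor (via skimage) and sparse-representation classification by class-wise reconstruction residual. An l1 solver from sklearn (Lasso) stands in for the paper's optimizer, the filter-bank parameters are assumptions, and the multi-classifier fusion step is omitted.

```python
import numpy as np
from skimage.filters import gabor_kernel
from scipy.signal import fftconvolve
from sklearn.linear_model import Lasso

def gabor_features(img, n_orient=4, freqs=(0.2, 0.3)):
    """Concatenate magnitude statistics of a small Gabor filter bank."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(frequency=f, theta=np.pi * k / n_orient)
            resp = fftconvolve(img, np.real(kern), mode="same")
            feats += [resp.mean(), resp.std()]
    return np.asarray(feats)

def src_predict(x, D, labels, alpha=0.01):
    """Sparse-representation classification: code x over the training
    dictionary D (columns = training features), then assign the class whose
    atoms alone give the smallest reconstruction residual."""
    code = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x).coef_
    classes = np.unique(labels)
    residuals = [np.linalg.norm(x - D @ np.where(labels == c, code, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# Toy usage with random images standing in for expression samples.
rng = np.random.default_rng(0)
train_imgs = [rng.random((32, 32)) for _ in range(30)]
train_y = np.repeat(np.arange(3), 10)
D = np.stack([gabor_features(im) for im in train_imgs], axis=1)   # d x n dictionary
D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
x = gabor_features(rng.random((32, 32)))
print(src_predict(x, D, train_y))
```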

14.
In facial expression recognition, training deep networks usually requires large amounts of data, yet labeled data are often scarce in practical applications, so domain-adaptive transfer learning for facial expressions is an important research topic. Existing domain-adaptation-based facial expression recognition mostly uses shallow networks or standard deep learning methods. This paper therefore proposes applying conditional adversarial domain adaptation to facial expression transfer learning, using an entropy function to ensure the transferability of expression images whose predictions are uncertain, and embedding an attention-mechanism module to improve the deep network's feature extraction from expression images. Experiments show that the conditional adversarial network improved by the attention module effectively increases recognition accuracy on both lab-controlled and real-life facial expression data.
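A hedged PyTorch sketch of a conditional adversarial domain loss with entropy-based example weighting, in the spirit of CDAN+E: the domain discriminator sees the outer product of features and classifier predictions, and each example is weighted by its prediction entropy. The attention module, backbone, and full training loop from the paper are omitted; shapes and the tiny discriminator are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal so the feature extractor is trained adversarially."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

def cdan_entropy_loss(feat, probs, domain_disc, domain_labels):
    """Conditional adversarial domain loss: condition the discriminator on the
    feature/prediction outer product; weight examples by 1 + exp(-entropy)."""
    cond = torch.bmm(probs.unsqueeze(2), feat.unsqueeze(1))        # (B, C, d)
    cond = GradReverse.apply(cond.flatten(1))
    d_out = domain_disc(cond).squeeze(1)
    entropy = -(probs * torch.log(probs + 1e-6)).sum(dim=1)
    w = 1.0 + torch.exp(-entropy)
    w = w / w.sum()
    return (w * F.binary_cross_entropy_with_logits(
        d_out, domain_labels, reduction="none")).sum()

# Toy usage: 7 expression classes, 64-d features from a (not shown) backbone
# with attention; source examples get domain label 1, target examples 0.
B, C, d = 8, 7, 64
feat = torch.randn(2 * B, d, requires_grad=True)
probs = F.softmax(torch.randn(2 * B, C), dim=1)
disc = nn.Sequential(nn.Linear(C * d, 128), nn.ReLU(), nn.Linear(128, 1))
dom = torch.cat([torch.ones(B), torch.zeros(B)])
loss = cdan_entropy_loss(feat, probs, disc, dom)
loss.backward()
print(float(loss))
```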

15.
Existing learning-based face super-resolution algorithms assume manifold consistency between high- and low-resolution features (coupled dictionary learning). However, the degradation process of low-resolution images introduces a "one-to-many" deviation into the mapping between high- and low-resolution features, which reduces the discriminative information in the features of very low-resolution images and lowers the recognition rate of super-resolved images. To address this, a semi-coupled sparse dictionary learning model is introduced that relaxes the high/low-resolution manifold-consistency assumption and simultaneously learns the sparse representation dictionaries and the mapping function between sparse coefficients, improving the consistency of the discriminative high- and low-resolution features. On this basis, a collaborative classification model is introduced to classify the semi-coupled features efficiently. Experiments show that, compared with traditional sparse representation classification, the algorithm not only improves the recognition rate but also greatly reduces the time cost, verifying the effectiveness of semi-coupled sparse dictionary learning for face recognition.

16.
To address the low utilization of expression texture features in subject-independent facial expression recognition, a recognition method combining an improved weighted local binary pattern (LBP) with sparse representation is proposed. To make effective use of the local texture of facial organs, an improved weighted LBP operator extracts local facial texture features; the obtained feature values form the training samples, and expressions are then classified according to sparse representation theory. Experimental results on the JAFFE and CK face databases show that the method markedly improves recognition of subject-independent facial expressions.
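A minimal sketch of region-weighted LBP histogram features using skimage; the block weights emphasizing eye and mouth rows are purely illustrative assumptions, and the paper's improved weighting scheme is not reproduced. The resulting feature vector would feed a sparse-representation classifier like the one sketched under entry 13.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def weighted_lbp_histogram(face, grid=(4, 4), weights=None, P=8, R=1):
    """Block-wise uniform-LBP histograms, each scaled by a region weight."""
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2                                   # uniform patterns + "other"
    gh, gw = grid
    h, w = lbp.shape
    weights = np.ones(grid) if weights is None else np.asarray(weights, float)
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = lbp[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hist = hist / (hist.sum() + 1e-12)
            feats.append(weights[i, j] * hist)
    return np.concatenate(feats)

# Toy usage: heavier weights on the rows that roughly cover eyes and mouth.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
w = np.array([[1, 1, 1, 1],
              [2, 2, 2, 2],      # eye region (assumed)
              [1, 1, 1, 1],
              [2, 2, 2, 2]])     # mouth region (assumed)
print(weighted_lbp_histogram(face, weights=w).shape)     # (160,)
```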

17.
Spontaneous facial expression recognition is significantly more challenging than recognizing posed expressions. We focus on two issues that are still under-addressed in this area. First, due to their inherent subtlety, the geometric and appearance features of spontaneous expressions tend to overlap with each other, making it hard for classifiers to find effective separation boundaries. Second, the training set usually contains dubious class labels, which can hurt recognition performance if no countermeasure is taken. In this paper, we propose a spontaneous expression recognition method based on robust metric learning to alleviate these two problems. In particular, to increase the discrimination of different facial expressions, we learn a new metric space in which spatially close data points have a higher probability of belonging to the same class. In addition, instead of using the noisy labels directly for metric learning, we define sensitivity and specificity to characterize the annotation reliability of each annotator. The distance metric and the annotators' reliability are then jointly estimated by maximizing the likelihood of the observed class labels. With the introduction of latent variables representing the true class labels, the distance metric and annotators' reliability can be iteratively solved under the Expectation Maximization framework. Comparative experiments show that our method achieves better recognition accuracy on spontaneous expression recognition, and the learned metric can be reliably transferred to recognize posed expressions.
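A minimal numpy sketch of the annotator-reliability half of such a model only: EM estimation of each annotator's sensitivity and specificity together with the posterior over the true binary labels (Dawid-Skene style). The joint estimation with the distance metric, and the multi-class case, are not reproduced; the simulated annotators are assumptions.

```python
import numpy as np

def em_annotator_reliability(votes, iters=50):
    """EM over noisy binary labels from several annotators.
    votes: (n_samples, n_annotators) array of 0/1 labels.
    Returns posterior P(true label = 1) per sample, plus per-annotator
    sensitivity (a) and specificity (b)."""
    mu = votes.mean(axis=1)                      # initial posterior = soft majority vote
    for _ in range(iters):
        # M-step: sensitivity a_j = P(vote 1 | true 1), specificity b_j = P(vote 0 | true 0).
        a = (mu[:, None] * votes).sum(axis=0) / (mu.sum() + 1e-12)
        b = ((1 - mu)[:, None] * (1 - votes)).sum(axis=0) / ((1 - mu).sum() + 1e-12)
        p = mu.mean()                            # class prior
        # E-step: posterior over the true label of each sample.
        like1 = p * np.prod(a ** votes * (1 - a) ** (1 - votes), axis=1)
        like0 = (1 - p) * np.prod(b ** (1 - votes) * (1 - b) ** votes, axis=1)
        mu = like1 / (like1 + like0 + 1e-12)
    return mu, a, b

# Toy usage: annotator 0 is reliable, annotator 2 is nearly random.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 300)
flip = np.array([0.05, 0.15, 0.45])              # per-annotator error rates
votes = np.array([np.where(rng.random(300) < f, 1 - truth, truth) for f in flip]).T
mu, sens, spec = em_annotator_reliability(votes)
print(np.round(sens, 2), np.round(spec, 2))       # annotator 2 ends up near 0.5
```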

18.
Ergonomics, 2012, 55(7): 777-784
This experiment compared the value of real-world training, virtual reality training, and no training for the transfer of learning to the same task performed under real-world conditions. The results provide no evidence of transfer from the virtual reality training environment to the real-world task: there was no significant difference between the virtual reality training group and the group that received no training, while the group that received real-world training performed significantly better than both of the other groups. The results question the utility of virtual training and suggest that, in the present configuration, individuals learn performance characteristics specific to the virtual reality context. Improvements needed for virtual reality to enable the transfer of training are indicated.

19.
Sparse representation based classification (SRC) has recently been proposed for robust face recognition. To deal with occlusion, SRC introduces an identity matrix as an occlusion dictionary, on the assumption that occlusion has a sparse representation in this dictionary. However, results show that SRC's use of this occlusion dictionary is far less robust to large occlusion than to random pixel corruption. In addition, the identity matrix makes the expanded dictionary large, which results in expensive computation. In this paper, we present a novel method, structured sparse representation based classification (SSRC), for face recognition with occlusion. A novel structured dictionary learning method is proposed to learn an occlusion dictionary from the data instead of using an identity matrix. Specifically, a mutual-incoherence regularization term is incorporated into the dictionary learning objective, encouraging the occlusion dictionary to be as independent as possible of the training sample dictionary, so that occlusion can be sparsely represented by a linear combination of atoms from the learned occlusion dictionary and effectively separated from the occluded face image. Classification can then be carried out efficiently on the recovered non-occluded face images, and the expanded dictionary is also much smaller than that used in SRC. Extensive experiments demonstrate that the proposed method achieves better results than existing sparse representation based face recognition methods, especially in dealing with large contiguous occlusion and severe illumination variation, while the computational cost is much lower.

20.
Objective: Because of illumination changes, expression changes, and occlusion, face images collected from different people can be similar, which poses a great challenge for face recognition. If each class has enough training samples, sparse representation based classification (SRC) achieves good recognition results; in practice, however, sufficiently many large face images are often unavailable as training samples. To solve this problem, based on sparse representation theory, a face recognition algorithm is proposed that jointly uses a discriminative low-rank class dictionary and a sparse error dictionary: the low-rank dictionary of each class captures that class's discriminative features, while the sparse error dictionary reflects intra-class variation such as illumination and expression changes. Method: First, initial low-rank and sparse dictionaries are obtained by low-rank decomposition; then, combining low-rank decomposition with structural incoherence, a discriminative low-rank class dictionary and a sparse error dictionary are trained and combined as the dictionary used at test time. The method removes noise from the training samples and, on that basis, increases the incoherence between the low-rank dictionaries, which improves their discriminability. Sparse coefficients are then obtained by the l1-norm (homotopy) method, and classification is based on the reconstruction error. Results: Experiments were conducted on the Extended Yale B and AR databases; to reduce runtime, the training samples were projected to lower dimensions with random matrices. On Extended Yale B, with 504 dimensions and 32 training samples per class, the recognition rate is 96.9%. On the unoccluded AR set with 540 dimensions and 4 training samples per class, the result is 83.3%, and 87.6% at 1,760 dimensions; on the occluded AR set with 540 dimensions and 8 training samples per class, 94.1%, and 94.8% at 1,760 dimensions. The results are better than the best of SRC, DKSVD (discriminative K-SVD), LRSI (low-rank matrix decomposition with structural incoherence), and LRSE+SC (low-rank and sparse error matrix + sparse coding), especially when training samples are few. Conclusion: The proposed face recognition algorithm is robust and effective, achieving good recognition especially with few training samples and strong interference, and is suitable for practical application.
