Similar Literature
20 similar documents found (search time: 15 ms)
1.
Facial feature point localization is a key component of automatic face recognition and facial expression recognition systems, and the wavelet transform is an effective signal analysis tool developed in recent years. Building on the wavelet transform, this paper proposes a symmetry transform based on multi-scale gradient vectors and applies it to facial feature point localization. The method not only reduces the computational cost but also improves localization accuracy, and it is highly robust to changes in illumination and facial expression.

2.
Facial expression recognition is an important task in security monitoring for complex environments such as subways, railway stations, and airports, where recognizing the facial expressions of pedestrians in surveillance images can help screen for suspicious individuals. To address the low recognition accuracy caused by blurred surveillance images and partially captured faces, an improved InceptionV4 facial expression recognition algorithm is proposed: the InceptionV4 network structure is modified to better suit the facial expression recognition task. Facial expression data are trained on the TensorFlow deep learning platform and tested on a facial expression validation set; with a 299×299 input image, the recognition accuracy reaches 97.9%. While maintaining recognition accuracy, the improved algorithm reduces the misrecognition rate under large intra-class variation, image blur, and partially captured faces, improving system robustness.

3.
陈佳昌  肖飒  周伟松 《电讯技术》2022,62(3):288-291
To address the poor generalization of traditional convolutional neural networks (CNNs) in facial micro-expression recognition, a CNN method combining an improved Inception structure with residual structures is proposed. First, on top of the improved Inception structure, the input features are mapped directly to the output to form a residual structure, and to address shortcomings such as complex and blurred local expression features...

4.
In this paper, we propose an automatic facial expression exaggeration system, which consists of face detection, facial expression recognition, and facial expression exaggeration components, for generating exaggerated views of different expressions for an input face video. In addition, the parallelized algorithms for the automatic facial expression exaggeration system are developed to reduce the execution time on a multi-core embedded system. The experimental results show satisfactory expression exaggeration results and computational efficiency of the automatic facial expression exaggeration system under cluttered environments. The quantitative experimental comparisons show that the proposed parallelization strategies provide significant computational speedup compared to the single-processor implementation on a multi-core embedded platform.

5.

Face recognition has become an accessible issue for experts as well as ordinary people, as it is a central non-intrusive biometric modality. In this paper, we introduce a new approach to face recognition under varying facial expressions. The proposed approach consists of two main steps: facial expression recognition and face recognition. These are complementary steps that together improve face recognition across facial expression variation. In the first step, we select the most expressive regions responsible for facial expression appearance using the mutual information technique. This process not only improves facial expression classification accuracy but also reduces the feature vector size. In the second step, we use Principal Component Analysis (PCA) to build eigenfaces for each facial expression class. Face recognition is then performed by projecting the face onto the corresponding facial expression eigenfaces. The PCA technique significantly reduces the dimensionality of the original space, since face recognition is carried out in the reduced eigenfaces space. An experimental study was conducted to evaluate the performance of the proposed approach in terms of face recognition accuracy and spatio-temporal complexity.
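The per-class eigenfaces step described above can be sketched with plain NumPy. This is a minimal illustration, not the authors' implementation; the function names and the reconstruction-error matching rule are assumptions:

```python
import numpy as np

def fit_eigenfaces(faces, n_components):
    """Build a PCA (eigenfaces) subspace from faces of shape (n_samples, n_pixels)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data: rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Project a face into the reduced eigenfaces space."""
    return eigenfaces @ (face - mean)

def reconstruction_error(face, mean, eigenfaces):
    """Distance between a face and its reconstruction from the subspace;
    a probe can be matched to the expression class whose subspace fits best."""
    coeffs = project(face, mean, eigenfaces)
    recon = mean + eigenfaces.T @ coeffs
    return float(np.linalg.norm(face - recon))
```

One such subspace would be fit per expression class, and a probe face projected onto the subspace of its recognized expression.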


6.
7.
In this paper, two novel methods for facial expression recognition in facial image sequences are presented. The user has to manually place some of Candide grid nodes to face landmarks depicted at the first frame of the image sequence under examination. The grid-tracking and deformation system used, based on deformable models, tracks the grid in consecutive video frames over time, as the facial expression evolves, until the frame that corresponds to the greatest facial expression intensity. The geometrical displacement of certain selected Candide nodes, defined as the difference of the node coordinates between the first and the greatest facial expression intensity frame, is used as an input to a novel multiclass Support Vector Machine (SVM) system of classifiers that are used to recognize either the six basic facial expressions or a set of chosen Facial Action Units (FAUs). The results on the Cohn-Kanade database show a recognition accuracy of 99.7% for facial expression recognition using the proposed multiclass SVMs and 95.1% for facial expression recognition based on FAU detection.
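The displacement features and the classification step can be illustrated as follows. A nearest-centroid classifier stands in for the paper's multiclass SVM, and all names are illustrative:

```python
import numpy as np

def displacement_features(first_nodes, apex_nodes):
    """Node-coordinate difference between the neutral first frame and the
    apex (greatest expression intensity) frame, flattened to one vector."""
    return (np.asarray(apex_nodes, float) - np.asarray(first_nodes, float)).ravel()

class NearestCentroid:
    """Stand-in for the paper's multiclass SVM: classify a displacement
    vector by the nearest per-expression mean displacement."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.labels_}
        return self

    def predict(self, x):
        return min(self.labels_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```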

8.
To address non-rigid variations in face images, such as rotation, pose, and expression changes, this paper proposes a sparse-representation face recognition algorithm based on Dense SIFT Feature Alignment (DSFA), where SIFT is the Scale-Invariant Feature Transform. The algorithm consists of two steps: first, training and test samples are aligned with the DSFA method; then an improved sparse representation model is designed for face recognition. To speed up the DSFA step, a coarse-to-fine hierarchical alignment mechanism is also designed. Experimental results show that the method achieves the highest recognition accuracy on three typical datasets: ORL, AR, and LFW. Compared with traditional sparse representation methods, it improves recognition accuracy by 4.3% on average while running about 6 times faster.

9.
王春峰  李军 《光电子.激光》2020,31(11):1197-1203
Facial emotion recognition has become an important part of visible-light face recognition applications and is one of the most important areas of optical pattern recognition research. To further automate facial emotion recognition under visible light, this paper proposes an automatic facial emotion recognition algorithm combining Viola-Jones, adaptive histogram equalization (AHE), the discrete wavelet transform (DWT), and a deep convolutional neural network (CNN). The algorithm uses Viola-Jones to locate the face and facial features, AHE to enhance the face image, and the DWT to extract facial features; finally, the extracted features are fed directly into deep CNN training to achieve automatic emotion recognition. Simulation experiments were conducted on the CK+ dataset and on visible-light face images, achieving an average accuracy of 97% on CK+ and 95% on the visible-light face image test. The results show that, for different facial features and emotions, the algorithm can accurately locate visible-light facial features, equalize visible-light image information, and automatically recognize emotion categories; it can also recognize multiple facial emotions in the same frame simultaneously, with a high recognition rate and strong robustness.
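The enhancement and feature-extraction stages of this pipeline can be sketched as follows. Global histogram equalization stands in for the AHE step, and a single-level Haar transform stands in for the DWT used in the paper; both are simplified illustrations:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image
    (a simplification of the adaptive variant described in the abstract)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns (LL, LH, HL, HH) subbands.
    The LL (approximation) band is a typical choice of CNN input feature."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise detail
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh
```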

10.
License Plate Localization and Character Segmentation in Complex Vehicle Images
License plate localization and character segmentation are key steps in automatic license plate recognition systems. This paper proposes a license plate localization algorithm that combines multiple features and a character segmentation algorithm based on template matching, effectively solving plate localization and character segmentation in color images with complex backgrounds. Combining these algorithms with a character recognition core, a complete automatic license plate recognition system was implemented. Extensive experiments on vehicle images captured under different backgrounds and lighting conditions show that the algorithms are accurate and robust.
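The template-matching idea behind the segmentation step can be illustrated with a minimal sum-of-squared-differences matcher. This is an illustrative sketch, not the paper's algorithm:

```python
import numpy as np

def match_template(image, template):
    """Exhaustive template matching by sum of squared differences (SSD);
    returns the (row, col) of the best-matching window."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((image[r:r + h, c:c + w].astype(float) - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```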

11.
End-to-end deep learning models have been widely applied to automatic modulation recognition. Most existing deep learning schemes rely on a rich sample distribution, while large labeled training sets are usually hard to obtain. This paper proposes an automatic modulation recognition framework based on data augmentation and a selective-kernel convolutional neural network (CNN). First, a deep dense generative adversarial network is developed to augment the original dataset of five modulation signals. Next, the smoothed pseudo Wigner-Ville distribution is chosen as the time-frequency representation of the signals, and an attention module is used to focus on discriminative regions in time-frequency image classification. Finally, the raw signals are fed into a lightweight CNN for temporal correlation extraction, and the time-frequency features of the signals are fused to complete classification. Experimental results show that the proposed algorithm improves recognition accuracy at low signal-to-noise ratios and exhibits strong robustness.

12.
陈咸志  罗镇宝  李艺强  陈陶 《红外与激光工程》2022,51(8):20220391-1-20220391-11
Engineering application of automatic target recognition is the key technology for achieving fire-and-forget image-based terminal guidance and long-range precision strike. This paper surveys the development history, recognition methods, technical level, and application results of automatic target recognition in precision-guided weapons at home and abroad; analyzes recognition methods based on target features and template matching and their application scenarios; identifies two classes of engineering-validated automatic target recognition methods; and reviews the automatic target recognition workflow, including mission planning, its main tasks, and the impact of planning quality on different recognition methods. To meet the demand for intelligent precision-guided weapons in the future, engineering application of deep learning recognition has become a new trend. To balance algorithm efficiency against application accuracy, the paper analyzes key real-time inference acceleration techniques, including network pruning, weight quantization, low-rank approximation, and knowledge distillation; for network model training, it proposes approaches to effectively address insufficient training samples and the difficulty of acquiring military target samples. With the wide application of multi-band, multi-mode composite guidance, information fusion provides a new technical path for the engineering application of target recognition. Adapting to complex scenes and active man-made jamming is a major challenge for image-based terminal guidance; the paper discusses target recognition robustness under jamming conditions, an urgent engineering problem for automatic target recognition in image-based terminal guidance applications.
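Of the acceleration techniques listed, network pruning is the simplest to illustrate. The sketch below shows unstructured magnitude pruning, an assumed generic variant; the survey does not commit to a specific scheme:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the `sparsity` fraction of
    weights with the smallest absolute value, a common inference-acceleration
    step alongside quantization, low-rank approximation, and distillation.
    Ties at the threshold may prune slightly more than the requested fraction."""
    w = np.asarray(weights, dtype=float)
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```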

13.
陈昊  郭文普  康凯 《电讯技术》2023,63(12):1869-1875
To address the low accuracy of automatic modulation recognition at low signal-to-noise ratios, a channel-gated Res2Net convolutional neural network automatic modulation recognition model is proposed. The model mainly consists of a two-dimensional convolutional neural network (2D-CNN), a multi-scale residual network (Res2Net), a squeeze-and-excitation network (SENet), and a long short-term memory (LSTM) network. Multi-scale features are extracted from raw I/Q data by convolution, feature channels are reweighted via a gating mechanism, and the convolutional features are sequence-modeled with the LSTM to ensure that data features are effectively mined, thereby improving recognition accuracy. Modulation recognition experiments on the RML2016.10a benchmark dataset show that the proposed model achieves 92.68% accuracy at 12 dB SNR and an average accuracy above 91% for SNRs above 2 dB, outperforming the classic CLDNN and LSTM models as well as the comparable PET-CGDNN and CGDNet models.
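The squeeze-and-excitation channel gating used in this model can be sketched in NumPy. The shapes and the ReLU bottleneck follow the standard SENet design; the weights and the reduction ratio here are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_gate(features, w1, w2):
    """Squeeze-and-Excitation channel gating on a (channels, length) feature
    map: global-average-pool each channel, pass the pooled vector through a
    two-layer bottleneck, and rescale every channel by its sigmoid gate."""
    squeezed = features.mean(axis=1)          # (C,)    squeeze
    hidden = np.maximum(w1 @ squeezed, 0.0)   # (C//r,) ReLU bottleneck
    gates = sigmoid(w2 @ hidden)              # (C,)    excitation
    return features * gates[:, None]          # channel-wise reweighting
```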

14.
A Local Discriminant for Expression Recognition
蒋斌  贾克斌 《电子学报》2014,42(1):155-159
Building on discriminant component analysis, this paper proposes a local discriminant component analysis algorithm for the facial expression recognition task. The algorithm first selects a set of nearest-neighbor training samples for each test sample, capturing the local structure of the training set. It then maximizes the covariance of the discriminant sample subset while minimizing the covariance of all data within the subset, effectively extracting the expression features of the test sample. Experimental results on several facial expression databases show that the algorithm not only improves the expression recognition rate of discriminant component analysis but is also highly robust.

15.
Enhancing facial images captured under different lighting conditions is an important challenge and a crucial component in automatic face recognition systems. This work tackles the illumination variation challenge by proposing a new face image enhancement approach based on fuzzy theory. The proposed fuzzy reasoning model (FRM) generates an adaptive enhancement which corrects and improves non-uniform illumination and low contrast. The FRM approach has been assessed using four blind-reference image quality metrics supported by visual assessment. A comparison to six state-of-the-art methods has also been provided. Experiments are performed on four public data sets, namely Extended Yale-B, Mobio, FERET, and Carnegie Mellon University Pose, Illumination, and Expression, showing very promising results achieved by our approach.

16.
李宏菲  李庆  周莉 《电子学报》2019,47(8):1643-1653
Applications of facial expression recognition are spreading into many fields, such as safe driving, retail, and clinical medicine. This paper studies facial expression recognition techniques; the main contributions are as follows. For dynamic facial expression recognition under unconstrained conditions, a dynamic expression recognition algorithm is proposed based on multiple visual descriptors and an audio feature fusion strategy. Spatio-temporal local features from multiple visual descriptors are used to extract dynamic expression features, and the fusion of video and audio features improves recognition performance. Dynamic warping based on covariance matrices and time-axis segmentation effectively handles the description of dynamic expression sequences of different durations. To further improve the generalization of the expression recognition model, an ensemble model based on weighted voting over multiple individual recognition models is introduced. For learning the voting weights, two methods are proposed: weight learning based on random resampling and weight learning based on the relative strengths of the individual classification models. Ensemble decision-making further improves recognition performance. Experiments on the AFEW5.0 dynamic expression database verify the effectiveness of the algorithm.

17.
To address imprecise facial expression recognition, a network model based on ResNet50 fused with a bilinear mixed attention mechanism is proposed. To address problems such as incomplete or blurred image feature extraction caused by traditional pooling algorithms, an adaptive pooling weight algorithm based on average pooling is proposed, and particle swarm optimization is used to adaptively tune the hyperparameters of the convolutional neural network model, further improving recognition accuracy. Based on the improved network model, a real-time facial expression recognition system is designed. Experiments show that the improved model achieves test-set accuracies of 73.51% on the Fer2013 dataset and 99.86% on the CK+ dataset.
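The abstract does not specify how the adaptive pooling weights are computed. One plausible reading, softmax-weighted pooling that interpolates between average and max pooling, can be sketched as follows (the temperature parameter and the softmax form are assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_weighted_pool(window, temperature=1.0):
    """Hypothetical adaptive pooling: instead of the uniform weights of
    average pooling, weight each activation in the window by a softmax of
    its magnitude. A high temperature recovers plain average pooling;
    a low temperature approaches max pooling."""
    w = np.asarray(window, dtype=float).ravel()
    weights = softmax(w / temperature)
    return float(weights @ w)
```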

18.
Automatic facial expression recognition (FER) is an important technique in human-computer interfaces and surveillance systems. It classifies the input facial image into one of the basic expressions (anger, sadness, surprise, happiness, disgust, fear, and neutral). There are two types of FER algorithms: feature-based and convolutional neural network (CNN)-based algorithms. The CNN is a powerful classifier; however, without proper auxiliary techniques, its performance may be limited. In this study, we improve the CNN-based FER system by utilizing face frontalization and a hierarchical architecture. The frontalization algorithm aligns the face by in-plane or out-of-plane rotation, matches landmark points, and removes background noise. The proposed adaptive exponentially weighted average ensemble rule can determine the optimal weight according to the accuracy of the classifiers to improve robustness. Experiments on several popular databases are performed and the results show that the proposed system has a very high accuracy and outperforms state-of-the-art FER systems.
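The adaptive exponentially weighted average ensemble rule can be sketched as follows. The exponential form and the `alpha` sharpness parameter are assumptions, since the abstract only states that the weights depend on classifier accuracy:

```python
import numpy as np

def exp_weighted_ensemble(prob_list, accuracies, alpha=10.0):
    """Exponentially weighted average ensemble: each classifier's class
    probabilities are weighted by exp(alpha * accuracy), so more accurate
    classifiers dominate the fused prediction. Returns the fused label
    and the fused probability vector."""
    weights = np.exp(alpha * np.asarray(accuracies, dtype=float))
    weights /= weights.sum()
    fused = sum(w * np.asarray(p, float) for w, p in zip(weights, prob_list))
    return int(np.argmax(fused)), fused
```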

19.
To overcome the interaction drawbacks of a single input modality, a hybrid gesture interaction technique based on "facial expression + hand" is proposed, combining the interaction characteristics of hand movement and facial expression input. The hybrid technique combines seven facial expressions with hand movement, controlling the computer to perform a series of target selection tasks through hand movement and facial expression recognition. In the designed experiments, hand movement controls the mouse cursor, while facial expression recognition replaces mouse clicks for selecting target buttons. The recognition error rate and average recognition time of the hybrid gesture interaction technique are analyzed in detail across multiple target selection tasks. The results show that the "facial expression + hand" hybrid gesture interaction technique achieves a recognition accuracy of 93.81% and an average recognition time of 2921 ms, fully meeting everyday human-computer interaction needs.

20.
Automatic facial expression recognition has received considerable attention in the research areas of computer vision and pattern recognition. To achieve satisfactory accuracy, deriving a robust facial expression representation is especially important. In this paper, we present an adaptive weighted fusion model (AWFM), aiming to automatically determine optimal weighted values. The AWFM integrates two subspaces, i.e., unsupervised and supervised subspaces, to represent and classify query samples. The unsupervised subspace is formed by differentiated expression samples generated via an auxiliary neutral training set. The supervised subspace is obtained through the reconstruction of intra-class singular value decomposition based on low-rank decomposition from raw training data. Our experiments using three public facial expression datasets confirm that the proposed model can obtain better performance compared to conventional fusion methods as well as state-of-the-art methods from the literature.
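The weighted fusion of the two subspaces can be sketched as a convex combination of per-class scores. The grid search stands in for the paper's automatic weight determination and is purely illustrative:

```python
import numpy as np

def fuse_scores(unsup_scores, sup_scores, weight):
    """Weighted fusion of per-class scores from two subspaces; `weight` in
    [0, 1] balances the unsupervised and supervised representations."""
    u = np.asarray(unsup_scores, float)
    s = np.asarray(sup_scores, float)
    return weight * u + (1.0 - weight) * s

def pick_weight(candidates, val_fn):
    """Grid-search stand-in for automatic weight determination: return the
    candidate weight that maximizes a validation score function."""
    return max(candidates, key=val_fn)
```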


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号