Similar Documents
20 similar documents retrieved.
1.
To address the complex training, long training time and poor real-time performance of ordinary convolutional neural networks (CNNs) in facial expression and gender recognition, a real-time expression and gender recognition model based on a depthwise separable CNN is proposed. First, a Multi-Task Cascaded Convolutional Network (MTCNN) detects faces in input images at different scales, and a Kernelized Correlation Filter (KCF) tracks the detected face positions to speed up detection. Bottleneck layers with convolution kernels of different scales are then built, and their outputs are fused by channel concatenation to form a kernel-convolution unit; a depthwise separable CNN with residual blocks and separable convolution units extracts diverse features while reducing the number of parameters and keeping the model lightweight. Real-time back-propagation visualization is used to reveal the dynamics of the weights and to evaluate the learned features. Finally, the expression-recognition and gender-recognition networks are fused in parallel to recognize expression and gender in real time. Experimental results show that the proposed model achieves a recognition rate of 73.8% on FER-2013, 96% on CK+, and 96% gender-classification accuracy on IMDB; the overall processing rate reaches 80 frame/s, a 1.5-fold improvement over a fully connected CNN combined with a support vector machine. For datasets that differ widely in size and resolution, the model therefore detects quickly, trains in a short time, extracts features simply, and offers high recognition accuracy with real-time performance.
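As a rough illustration of the kind of multi-scale, depthwise-separable kernel-convolution unit described above, the following PyTorch sketch builds bottleneck branches with different kernel sizes and fuses them by channel concatenation. The branch channel counts and kernel sizes are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise + pointwise convolution: the basic lightweight unit."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

class KernelConvUnit(nn.Module):
    """Bottleneck branches with different kernel sizes, fused by channel concatenation."""
    def __init__(self, in_ch, branch_ch=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [SeparableConv(in_ch, branch_ch, k) for k in kernel_sizes])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: a 48x48 grayscale face crop (size is an assumption)
x = torch.randn(1, 1, 48, 48)
print(KernelConvUnit(in_ch=1)(x).shape)  # torch.Size([1, 96, 48, 48])
```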

2.
To address the difficulty of training deep convolutional neural networks and the performance degradation that appears as the number of convolutional layers grows, a facial expression recognition method based on a deep residual network is proposed. Residual learning units are used to ease the optimization of the deep CNN and to reduce the time needed for the model to converge. To improve generalization, expression images from the KDEF and CK+ datasets are combined into a mixed dataset for training. Ten-fold cross-validation experiments on the mixed dataset compare the expression-recognition accuracy of residual networks of different depths (with residual learning units) against conventional CNNs without them. A 74-layer deep residual network achieves an average recognition accuracy of 90.79%. The results show that deep residual networks built from residual learning units resolve the conflict between network depth and model convergence and improve expression-recognition accuracy.
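For reference, a residual learning unit of the kind used here follows the standard identity-shortcut formulation y = F(x) + x. A minimal PyTorch sketch (channel counts and strides are illustrative, not the paper's exact 74-layer configuration) is:

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Standard residual learning unit: output = F(x) + shortcut(x)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut only when the output shape changes
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

x = torch.randn(2, 64, 56, 56)
print(ResidualUnit(64, 128, stride=2)(x).shape)  # torch.Size([2, 128, 28, 28])
```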

3.
In recent years interest in artificial intelligence has remained high, and facial expression recognition, an important form of human-computer interaction, has become a focus of computer vision research. From traditional machine learning algorithms to today's deep learning, recognition performance has kept improving. To raise the expression recognition rate further, an expression recognition method based on an improved ResNet convolutional neural network is proposed on top of the traditional CNN. The method builds on the ResNet network...

4.
In ResNet50, the 1×1 dimensionality-reduction convolution in the Bottleneck causes the main branch to lose part of the feature information, which lowers expression-recognition accuracy. To address this, a Ghost module and depthwise separable convolutions are introduced to replace the 1×1 and 3×3 convolutions in the Bottleneck respectively, preserving more of the original feature information and strengthening the feature-extraction ability of the main branch; the Mish activation function replaces the ReLU activation in the Bottleneck, improving recognition accuracy. On this basis, an asymmetric residual attention block (ARABlock) is added between the improved Bottlenecks to strengthen the model's representation of important information, yielding a ghost asymmetric residual attention network (GARAN) for expression recognition. Comparative experiments show that the method achieves high recognition accuracy on the FER2013 and CK+ expression datasets.
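A minimal sketch of the kind of modified Bottleneck this describes, with a Ghost module in place of the 1×1 convolutions, a depthwise 3×3 convolution in the middle, and Mish activations. The channel split inside the Ghost module and the channel counts are assumptions; the attention block (ARABlock) is not shown.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Generates half the output channels with a regular 1x1 conv and the rest
    with a cheap depthwise conv, then concatenates them."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        init_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.Mish())
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, out_ch - init_ch, 3, padding=1,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.Mish())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class GhostBottleneck(nn.Module):
    """Bottleneck variant: Ghost modules replace the 1x1 convs and a depthwise
    3x3 conv replaces the standard 3x3 conv, with Mish activations throughout."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.reduce = GhostModule(in_ch, mid_ch)
        self.dwconv = nn.Sequential(
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch), nn.Mish())
        self.expand = GhostModule(mid_ch, out_ch)
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, bias=False))

    def forward(self, x):
        return self.expand(self.dwconv(self.reduce(x))) + self.shortcut(x)

x = torch.randn(1, 256, 14, 14)
print(GhostBottleneck(256, 64, 256)(x).shape)  # torch.Size([1, 256, 14, 14])
```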

5.
Because 3D convolutional neural networks cannot extract spatio-temporal features efficiently, a human action recognition algorithm based on an SR3D network is proposed. First, the BN layer and ReLU activation of the 3D residual module are placed before the 3D convolutional layer to extract spatio-temporal features better; the improved 3D residual block is then combined with an SE module to form the SR3D module, which raises the utilization of important channels and improves the recognition rate. Extensive experiments on UCF-101 and a self-built abnormal-behaviour dataset show that the SR3D algorithm reaches recognition rates (top-1 accuracy) of 47.7% and 83.6% respectively, improvements of 4.6 and 17.3 percentage points over the 3D convolutional network (C3D).
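The following sketch shows the general idea described here: a pre-activation 3D residual block (BN and ReLU moved before the 3D convolutions) whose output is reweighted channel-wise by a squeeze-and-excitation module. Layer sizes and the SE reduction ratio are assumptions rather than the SR3D paper's exact settings.

```python
import torch
import torch.nn as nn

class SE3D(nn.Module):
    """Squeeze-and-excitation over the channels of an (N, C, T, H, W) tensor."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(self.pool(x).flatten(1)).view(x.size(0), -1, 1, 1, 1)
        return x * w

class PreActResBlock3D(nn.Module):
    """Pre-activation 3D residual block: BN and ReLU are applied before each conv."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1, bias=False))
        self.se = SE3D(channels)

    def forward(self, x):
        return x + self.se(self.body(x))

clip = torch.randn(1, 64, 16, 56, 56)    # (batch, channels, frames, H, W)
print(PreActResBlock3D(64)(clip).shape)  # torch.Size([1, 64, 16, 56, 56])
```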

6.
The deep residual network, a variant of the convolutional neural network, performs well and is applied in many fields. Although a deep residual network gains accuracy by increasing its depth, accuracy can still be improved at the same depth in other ways. This paper applies three optimizations to the deep residual network: (1) dimension filling through a convolutional mapping; (2) a residual module built on the SELU activation function; (3) learning-rate decay over the training iterations. The improved network was tested on the Fashion-MNIST dataset, and the experimental results show that the proposed model is more accurate than the traditional deep residual network.
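A brief sketch of what optimizations (1)-(3) might look like in PyTorch: a SELU-activated residual module with a 1×1 convolutional projection on the shortcut when the channel dimension must be filled, plus a step learning-rate schedule. The specific hyperparameters (step size, decay factor, channel counts) are assumptions.

```python
import torch
import torch.nn as nn

class SELUResBlock(nn.Module):
    """Residual module using SELU instead of ReLU; a 1x1 conv projects the
    shortcut when the channel dimension has to be expanded (dimension filling)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.SELU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch))
        self.project = (nn.Identity() if in_ch == out_ch
                        else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.SELU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.project(x))

# Learning rate decayed as the iterations/epochs progress
model = SELUResBlock(1, 32)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```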

7.
倪春晓 《信息与电脑》2023,(11):208-210
To address the low accuracy of traditional facial-expression recognition models, this study proposes an improved model based on a deep convolutional neural network (DCNN). Compared with the traditional model, the core convolutional layers are replaced with depthwise separable convolution layers and combined with convolutional residual blocks, so the network extracts multi-scale feature information and preserves detail features while markedly reducing the number of parameters. Simulation comparisons show that the proposed DCNN performs well and is suitable for facial expression recognition.

8.
Traditional multi-class clothing classification relies mainly on hand-crafted colour, texture and edge features; the process is cumbersome and the classification accuracy is low. Deep residual networks obtain high recognition accuracy by increasing network depth and are widely used in many fields. To improve clothing-image recognition accuracy, an improved deep residual network model is proposed: the convolutional layers in the residual block are modified and the order of the batch-normalization and activation layers is adjusted; an attention mechanism is introduced; and the convolution-kernel structure of the network is adjusted. The network was tested on the standard Fashion-MNIST dataset and on DeepFashion, the large multi-category clothing dataset provided by the Multimedia Laboratory of The Chinese University of Hong Kong. The experimental results show that the proposed model outperforms the traditional deep residual network in clothing-image classification accuracy.

9.
Classification of maize leaf disease images with an improved residual network
To address the low accuracy and slow speed of traditional maize-leaf-disease image recognition, an image recognition algorithm based on an improved deep residual network is proposed. The improvements are: replacing the 7×7 kernel in the first convolutional layer of the traditional ResNet-50 with three 3×3 kernels; replacing the ReLU activation function with LeakyReLU; and changing the order of the batch-normalization layer, activation function and convolutional layer in the residual block. The data are preprocessed, the training and test sets are split 4:1, the training set is expanded by data augmentation, and the improved ResNet-50 is initialized by transfer learning with weights pre-trained on ImageNet. Experiments show that the improved network reaches 98.3% accuracy in maize-leaf-disease classification, a large improvement over other network models with stronger robustness, providing a reference for maize leaf disease recognition.
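As a hedged illustration of two of these changes, the sketch below loads an ImageNet-pretrained ResNet-50 from torchvision and replaces the single 7×7 stem convolution with three stacked 3×3 convolutions using LeakyReLU, then resizes the classifier head. The intermediate channel widths, the LeakyReLU slope, and the assumed four disease classes are illustrative; the paper's block reordering is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-50 (transfer learning)
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the single 7x7 stem convolution with three stacked 3x3 convolutions,
# covering a similar receptive field with fewer parameters.
net.conv1 = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(32), nn.LeakyReLU(0.1, inplace=True),
    nn.Conv2d(32, 32, 3, padding=1, bias=False),
    nn.BatchNorm2d(32), nn.LeakyReLU(0.1, inplace=True),
    nn.Conv2d(32, 64, 3, padding=1, bias=False))  # net.bn1 still follows this

# Resize the classifier head to the number of disease classes (assumed 4 here)
net.fc = nn.Linear(net.fc.in_features, 4)

print(net(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 4])
```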

10.
To address the poor timeliness, low accuracy and weak self-learning ability of current fault-diagnosis methods for electric power communication networks, a fault-diagnosis method based on an improved convolutional neural network is proposed. Drawing on the characteristics of the ReLU and Softplus activation functions, the original activation function of the CNN is improved so that it is both smooth and sparse; ReLU is used as the activation function of the convolutional and pooling layers, while the improved activation function is used in the fully connected layers. A wavelet neural network model weights the alarm information, yielding the weights with which different alarm types and messages influence fault diagnosis and judgment, further raising diagnostic accuracy. Simulation experiments show that the improved CNN is more accurate and more stable than the Bayesian classification algorithm and the plain CNN, reaching a fault-diagnosis accuracy of 99.1% with a standard deviation of 0.915%, providing a reference for intelligent fault diagnosis of power communication networks.
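One plausible way to combine ReLU's sparsity with Softplus's smoothness, and to use ReLU in the convolutional/pooling stages while the hybrid activation serves the fully connected layers, is sketched below. The exact functional form of the paper's improved activation is not given in the abstract, so the form used here (zero for negative inputs, shifted Softplus for positive inputs) is purely an assumption, as are the layer sizes and the five fault classes.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLUSoftplus(nn.Module):
    """Illustrative hybrid activation: zero (sparse) for negative inputs like ReLU,
    smooth Softplus-shaped response for positive inputs, shifted so f(0) = 0.
    This exact form is an assumption, not the paper's published definition."""
    def forward(self, x):
        return torch.where(x > 0, F.softplus(x) - math.log(2.0), torch.zeros_like(x))

class FaultClassifier(nn.Module):
    """ReLU in the conv/pool stages, the hybrid activation in the fully connected stage."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), ReLUSoftplus(),
            nn.Linear(64, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

print(FaultClassifier()(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 5])
```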

11.
In facial-expression feature extraction, a convolutional neural network (CNN) or local binary patterns (LBP) alone extract only a single kind of feature from the face image and struggle to capture precise features highly correlated with facial changes. A feature-fusion expression recognition method based on deep learning is therefore proposed. LBP features and the features extracted by the CNN convolutional layers are combined, with weights, in the fully connected stage of an improved VGG-16 network; the fused features are then fed to a Softmax classifier to obtain the class probabilities and classify the six basic expressions. Experiments show average recognition accuracies of 97.5% and 97.62% on the CK+ and JAFFE datasets respectively, clearly better than recognition with either feature alone. Compared with other methods, the approach effectively improves expression recognition accuracy and is more robust to illumination changes.
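A minimal sketch of this kind of weighted fusion in the fully connected stage: a CNN feature vector and a precomputed LBP histogram are each projected, weighted, concatenated and classified with a softmax. The small backbone stands in for the modified VGG-16, and the 59-dimensional uniform-LBP histogram, fusion weights and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FusionExpressionNet(nn.Module):
    """Weighted fusion of CNN features and a hand-crafted LBP histogram in the
    fully connected stage, followed by a softmax over 6 basic expressions."""
    def __init__(self, lbp_dim=59, num_classes=6, cnn_weight=0.6, lbp_weight=0.4):
        super().__init__()
        self.cnn_weight, self.lbp_weight = cnn_weight, lbp_weight
        self.backbone = nn.Sequential(   # small stand-in for the modified VGG-16
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cnn_fc = nn.Linear(64, 128)
        self.lbp_fc = nn.Linear(lbp_dim, 128)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, image, lbp_hist):
        cnn_feat = self.cnn_weight * self.cnn_fc(self.backbone(image))
        lbp_feat = self.lbp_weight * self.lbp_fc(lbp_hist)
        fused = torch.cat([cnn_feat, lbp_feat], dim=1)  # weighted features joined in the FC stage
        return torch.softmax(self.classifier(fused), dim=1)

probs = FusionExpressionNet()(torch.randn(2, 1, 48, 48), torch.rand(2, 59))
print(probs.shape)  # torch.Size([2, 6])
```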

12.
Automatic Target Recognition (ATR) based on Synthetic Aperture Radar (SAR) images plays a key role in military applications. However, the traditional recognition approach has difficulties; principally, it is a challenge to design robust features and classifiers for different SAR images. Although Convolutional Neural Networks (CNNs) are very successful in many image classification tasks, building a deep network with limited labeled data remains a problem, and CNN topologies such as the fully connected structure lead to redundant parameters and neglect channel-wise information flow. A novel CNN approach, called Group Squeeze Excitation Sparsely Connected Convolutional Networks (GSESCNNs), is therefore proposed as a solution. The group squeeze excitation performs dynamic channel-wise feature recalibration with fewer parameters than squeeze excitation, and sparsely connected convolutional networks are a more efficient way to concatenate feature maps from different layers. Experimental results on Moving and Stationary Target Acquisition and Recognition (MSTAR) SAR images demonstrate that this approach achieves the best prediction accuracy, at 99.79%, outperforming the most common skip-connection models, such as Residual Networks and Densely Connected Convolutional Networks, as well as other methods reported on the MSTAR dataset.
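A rough sketch of the channel-recalibration idea in the spirit of group squeeze excitation: global average pooling followed by grouped 1×1 convolutions for the excitation, so the excitation weights are split across groups and use fewer parameters than a plain squeeze-excitation block. The group count and reduction ratio are assumptions, and the sparsely connected concatenation scheme is not shown.

```python
import torch
import torch.nn as nn

class GroupSqueezeExcitation(nn.Module):
    """Channel recalibration with grouped excitation layers: the fully connected
    excitation weights are split across groups, reducing the parameter count
    relative to plain squeeze-excitation."""
    def __init__(self, channels, groups=4, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, groups=groups, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, groups=groups, bias=False),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.excite(self.pool(x))

x = torch.randn(1, 64, 16, 16)
print(GroupSqueezeExcitation(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```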

13.
Convolutional Neural Networks (CNNs) have a broad range of applications, such as image processing and natural language processing. Inspired by the mammalian visual cortex, CNNs have been shown to achieve impressive results on a number of computer vision challenges, but often by relying on large amounts of processing power and without timing restrictions. This paper presents a design methodology for accelerating CNNs using Hardware/Software Co-design techniques in order to balance performance and flexibility, particularly for resource-constrained systems. The methodology is applied to a gender recognition case study, using an ARM processor and FPGA fabric to create an embedded system that can process facial images in real time.

14.
A siamese neural network consists of two branches that share parameters; it maps high-dimensional, non-linear data to a low-dimensional space in which the data become separable. Exploiting its excellent similarity-computation performance, an efficient classifier based on the siamese structure is proposed and designed for classification problems with complex environmental conditions, such as traffic-sign recognition. Convolutional neural networks serve as its basic components, and techniques such as max-pooling and dropout form the multi-scale CNN needed for feature extraction, with a spatial transformer network used to further improve recognition accuracy. Tested on the GTSRB traffic-sign dataset, the classifier reaches 99.40% accuracy while combining a simple structure, short training time, high accuracy and fast recognition.
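A minimal sketch of the weight-sharing structure described above: both inputs pass through the same CNN branch (with max-pooling and dropout), and similarity is the distance between the two low-dimensional embeddings. The branch architecture, embedding size and input resolution are assumptions; the spatial transformer component is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBranch(nn.Module):
    """One of the two weight-sharing branches: a small CNN with max pooling and
    dropout that maps a traffic-sign image to a low-dimensional embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Dropout(0.3), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim))

    def forward(self, x):
        return self.features(x)

class SiameseNet(nn.Module):
    """Both inputs go through the same branch (shared parameters); similarity is
    the distance between the two embeddings."""
    def __init__(self):
        super().__init__()
        self.branch = SiameseBranch()

    def forward(self, a, b):
        return F.pairwise_distance(self.branch(a), self.branch(b))

dist = SiameseNet()(torch.randn(4, 3, 48, 48), torch.randn(4, 3, 48, 48))
print(dist.shape)  # torch.Size([4])
```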

15.
Objective: Convolutional neural networks are widely used in image recognition algorithms, but the features learned by a traditional CNN often lack discriminative power, which hurts recognition performance. A loss function incorporating the idea of linear discriminant analysis, LDloss (linear discriminant loss), is therefore proposed and used for deep feature extraction in image recognition to increase the discriminative power of the features and improve recognition performance. Method: A deep feature-extraction network is first built with a CNN; then, on top of minimizing the classification error, the LDA (linear discriminant analysis) idea is introduced for multi-class image problems to construct a new loss function that participates in training the CNN, minimizing within-class feature distance and maximizing between-class feature distance so as to increase discriminability and further improve recognition. Analysis shows that the algorithm yields features more useful for classification. During learning, class means are updated smoothly using a batched running-update strategy. Results: The algorithm achieves average recognition rates of 99.53% on the MNIST dataset and 94.73% on the CK+ database, an improvement over existing algorithms. Compared with the traditional Softmax loss and Hinge loss, the deep network with LDloss gains 0.2% and 0.3% on MNIST, and 9.21% and 24.28% on CK+, respectively. Conclusion: The proposed discriminative deep-feature learning algorithm effectively increases the discriminative power of deep networks and thereby recognition accuracy, and at test time it requires no extra computation compared with Softmax loss.
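An illustrative auxiliary loss in this LDA-flavoured spirit is sketched below: each deep feature is pulled toward its class mean (small within-class scatter) and kept at least a margin away from the means of other classes (large between-class scatter), with class means updated batch by batch as a running average. The margin, momentum and exact formulation are assumptions; the paper's LDloss may differ.

```python
import torch
import torch.nn as nn

class DiscriminativeLoss(nn.Module):
    """Pull features towards their own class mean, push them away from other
    classes' means. Class means live in a buffer updated with a running average."""
    def __init__(self, num_classes, feat_dim, momentum=0.9, margin=10.0):
        super().__init__()
        self.register_buffer("means", torch.zeros(num_classes, feat_dim))
        self.momentum, self.margin = momentum, margin

    def forward(self, features, labels):
        with torch.no_grad():  # batched running update of the per-class means
            for c in labels.unique():
                m = features[labels == c].mean(dim=0)
                self.means[c] = self.momentum * self.means[c] + (1 - self.momentum) * m
        # Within-class: distance of each feature to its own class mean
        intra = ((features - self.means[labels]) ** 2).sum(dim=1).mean()
        # Between-class: keep each feature at least `margin` from other classes' means
        dists = torch.cdist(features, self.means)
        mask = torch.ones_like(dists, dtype=torch.bool)
        mask[torch.arange(len(labels)), labels] = False  # ignore the own-class mean
        inter = torch.clamp(self.margin - dists[mask], min=0).mean()
        return intra + inter

# Typically combined with softmax cross-entropy: total = ce + lambda * discriminative term
feats, labels = torch.randn(8, 128), torch.randint(0, 10, (8,))
print(DiscriminativeLoss(10, 128)(feats, labels))
```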

16.
Recent studies show that the performance of convolutional neural networks can be improved with cross-layer connections; the classic residual network (ResNet) achieves very good image recognition results through identity mappings. Theoretical analysis, however, suggests that the layout of the cross-layer connections in the residual module is not optimal, causing information redundancy and wasted layers. To further improve CNN performance, this paper designs two new network structures, named C-FnetO and C-FnetT, which optimize the residual module and use fewer convolutional layers. Comparative experiments on the MNIST, CIFAR-10, CIFAR-100 and SVHN public datasets show that, compared with state-of-the-art CNNs, C-FnetO and C-FnetT obtain relatively better image recognition results, with C-FnetT performing best and achieving the highest accuracy on all four datasets.

17.

With new architectures providing astonishing performance on many vision tasks, interest in Convolutional Neural Networks (CNNs) has grown exponentially in the recent past. Such architectures, however, are not problem-free: among other issues, they require a huge amount of labeled data and are not able to encode pose and deformation information. Capsule Networks (CapsNets) have recently been proposed as a solution to these issues, achieving interesting results in image recognition by addressing the pose and deformation encoding challenges. Despite their success, CapsNets remain an under-investigated architecture with respect to the more classical CNNs. Following the ideas of CapsNet, we propose the Residual Capsule Network (ResNetCaps) and Dense Capsule Network (DenseNetCaps) to tackle the image recognition problem. With these two architectures, we expand the encoding phase of CapsNet by adding residual convolutional and densely connected convolutional blocks. In addition, we investigate feature interaction methods between capsules to promote their cooperation when dealing with complex data. Experiments on four benchmark datasets demonstrate that the proposed approach performs better than existing solutions.


18.
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and show that our method performs better.

19.
马佳良  陈斌  孙晓飞 《计算机应用》2021,41(9):2712-2719
Current deep-learning-based detectors cannot effectively detect objects with irregular shapes or extreme aspect ratios. Building on the traditional Faster R-CNN algorithm, an improved two-stage object detection framework, Accurate R-CNN, is proposed. First, a new Intersection-over-Union (IoU) metric, Effective IoU (EIoU), is introduced, using a proposed centerness weight to reduce the proportion of redundant bounding boxes in the training data. Then a context-aware feature reallocation module (FRM) re-encodes features by modeling the long-range dependencies and local contextual relations of the object, compensating for the shape information lost during pooling. Experimental results on the Microsoft COCO (MS COCO) dataset show that, for the bounding-box detection task with Residual Network (ResNet) backbones of depth 50 and 101, Accurate R-CNN improves Average Precision (AP) over the Faster R-CNN baseline by 1.7 and 1.1 percentage points respectively, surpassing mask-based detectors with the same backbones. After adding a mask branch, for the instance segmentation task with the two ResNet backbones, Accurate R-CNN improves mask AP over Mask R-CNN by 1.2 and 1.1 percentage points. The results show that, compared with the baseline models, Accurate R-CNN achieves better detection on different datasets and tasks.

20.
This paper proposes using Deep Neural Network (DNN) models for recognizing construction workers' postures from motion data captured by wearable Inertial Measurement Unit (IMU) sensors. The recognized awkward postures can be linked to known risks of musculoskeletal disorders among workers. Conventional Machine Learning (ML)-based models have shown promising results in recognizing workers' postures, but they are limited: they rely on heuristic feature engineering to construct discriminative features for characterizing postures, which makes further improving recognition accuracy challenging. In this paper, the authors investigate the feasibility of addressing this problem with a DNN model that, by integrating Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) layers, automates feature engineering and sequential pattern detection. The model's recognition performance was evaluated using datasets collected from four workers on construction sites. The DNN model integrating one convolutional and two LSTM layers achieved the best performance (measured by F1 score). The proposed model outperformed baseline CNN and LSTM models, suggesting that it leverages the advantages of the two baseline models for effective feature learning. It improved benchmark ML models' recognition performance by an average of 11% under personalized modelling, and recognition performance also improved by 3% when the proposed model was applied to 8 types of postures across three subjects. These results indicate that the proposed DNN model has high potential for addressing the recognition-performance limitations observed when using ML models.
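A minimal sketch of the one-convolutional-layer / two-LSTM-layer configuration reported to work best: a 1-D convolution learns local features over the IMU signal window and two LSTM layers capture the sequential pattern, with a linear head over the last time step. Channel counts, window length and the number of posture classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CNNLSTMPosture(nn.Module):
    """One 1-D convolutional layer over the IMU time series followed by two LSTM
    layers, then a classifier over the final time step."""
    def __init__(self, num_channels=6, num_classes=8, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True), nn.MaxPool1d(2))
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.conv(x)           # (batch, 32, time/2)
        feats = feats.transpose(1, 2)  # (batch, time/2, 32) for the LSTM
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])     # classify from the last time step

window = torch.randn(4, 6, 128)        # 4 windows of 128 samples from a 6-axis IMU
print(CNNLSTMPosture()(window).shape)  # torch.Size([4, 8])
```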
