Similar Documents
20 similar documents found (search time: 62 ms)
1.
To better apply existing deep convolutional neural networks to facial expression recognition, a method is proposed that combines pre-training on a purpose-built natural-expression image set with multi-task deep learning. First, a spontaneous facial-expression dataset is constructed from social-network images and used to pre-train an existing deep convolutional neural network; then the flat softmax classifier in the output layer is replaced with a two-level tree classifier to build a deep multi-task facial expression recognition model. Experimental results show that the proposed method effectively improves facial expression recognition accuracy.
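The two-level tree classifier mentioned above can be sketched as a coarse-then-fine decision. This is a hypothetical illustration: the grouping, probabilities, and function names below are assumptions, not the paper's actual tree structure.

```python
import numpy as np

def tree_classify(coarse_probs, fine_probs_per_group):
    """Two-level tree classifier: pick the most likely coarse group,
    then the most likely fine class within that group.
    coarse_probs: (G,) probabilities over G coarse groups.
    fine_probs_per_group: list of per-group probability arrays.
    Returns (group_index, class_index_within_group)."""
    g = int(np.argmax(coarse_probs))
    c = int(np.argmax(fine_probs_per_group[g]))
    return g, c

# Example: 2 hypothetical coarse groups, with fine classes inside each.
coarse = np.array([0.3, 0.7])
fine = [np.array([0.5, 0.5]), np.array([0.1, 0.8, 0.1])]
print(tree_classify(coarse, fine))  # -> (1, 1)
```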

2.
Deep learning has become a research hotspot in image recognition. Unlike traditional image recognition methods, deep learning learns features automatically from large amounts of data and offers strong self-learning ability and efficient feature representation. Under small-sample conditions, however, conventional deep learning methods such as convolutional neural networks struggle to learn effective features, which lowers recognition accuracy. A new small-sample image recognition algorithm is therefore proposed for SAR image classification. Built on a convolutional neural network combined with an autoencoder, it forms a deep convolutional autoencoder network. The images are first preprocessed and enhanced with 2D Gabor filtering, the model is then trained on this basis, and finally an image classification model is constructed. The network structure automatically learns and extracts effective features from small-sample images, improving recognition accuracy. In the 10-class target classification task on the MSTAR dataset, 10% of the training set was used as the new training data and the rest as validation data; on the test data, a convolutional neural network achieved 76.38% recognition accuracy, versus 88.09% for the proposed convolutional autoencoder structure. The results show that the proposed algorithm is more effective than a convolutional neural network model for small-sample image recognition.
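The 2D Gabor enhancement step can be illustrated with a minimal NumPy sketch. The kernel parameters below are arbitrary illustrative choices, not the paper's settings, and the naive convolution is for clarity only.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real part of a 2D Gabor kernel: a Gaussian envelope times a
    cosine carrier, oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def filter2d(img, kern):
    """Naive 'valid'-mode 2D filtering, for illustration."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

img = np.random.rand(16, 16)
resp = filter2d(img, gabor_kernel())
print(resp.shape)  # -> (8, 8)
```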

3.
To improve recognition accuracy on image sets with only a few training samples, a small-sample image recognition method is proposed that uses a convolutional neural network as the feature extractor. Data-augmentation transforms are first applied to the original small dataset to broaden the sample range. On this basis, a model pre-trained on a large-scale source dataset is transfer-trained on the small target dataset, and the model weights and image features of all layers except the final fully connected layer are extracted. Finally, combining the features extracted by the pre-trained source model, the convolutional model is fine-tuned on the small target dataset with layer freezing to obtain the final classification results. Experimental results show that the method achieves high accuracy and robustness on small-scale image recognition problems.
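Layer-freezing fine-tuning reduces to updating only part of the parameter set during gradient steps. A minimal sketch, with hypothetical layer names (the actual architecture and learning rate are not given in the abstract):

```python
import numpy as np

def finetune_step(params, grads, frozen, lr=0.01):
    """One SGD step that updates only the unfrozen layers.
    params: dict name -> array; frozen: set of layer names kept fixed."""
    return {name: (p if name in frozen else p - lr * grads[name])
            for name, p in params.items()}

# Hypothetical model: two frozen backbone layers, one trainable head.
params = {"conv1": np.ones(3), "conv2": np.ones(3), "head": np.ones(3)}
grads = {name: np.full(3, 0.5) for name in params}
new = finetune_step(params, grads, frozen={"conv1", "conv2"})
print(new["conv1"][0], new["head"][0])  # -> 1.0 0.995
```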

4.
To address the difficulty of training deep convolutional neural networks and the performance degradation that come with stacking more convolutional layers, a facial expression recognition method based on a deep residual network is proposed. Residual learning units are used to ease the optimization of the deep model and reduce the time needed for convergence. In addition, to improve the model's generalization ability, expression images from the KDEF and CK+ datasets are combined into a mixed dataset for training. Ten-fold cross-validation experiments on the mixed dataset compared the expression recognition accuracy of residual networks of various depths against plain convolutional networks without residual units. A 74-layer deep residual network achieved an average recognition accuracy of 90.79%. The results show that a deep residual network built from residual learning units resolves the tension between network depth and model convergence and raises expression recognition accuracy.
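The residual learning unit computes y = F(x) + x, so an all-zero residual branch reduces to the identity and gradients can flow through the shortcut. A tiny NumPy sketch (weights and shapes are illustrative, not the paper's 74-layer model):

```python
import numpy as np

def residual_unit(x, w1, w2):
    """y = relu(F(x) + x) with F a two-layer transform.
    The identity shortcut keeps the mapping easy to optimize even
    when the residual branch F contributes little."""
    relu = lambda v: np.maximum(v, 0.0)
    f = relu(x @ w1) @ w2          # residual branch F(x)
    return relu(f + x)             # shortcut addition, then activation

x = np.array([1.0, 2.0])
zero = np.zeros((2, 2))
# With an all-zero residual branch the unit reduces to the identity:
print(residual_unit(x, zero, zero))  # -> [1. 2.]
```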

5.
To address the long training time and limited accuracy of deep convolutional neural networks for detecting surface structural cracks, the classifier of an Xception network was adaptively restructured; after the dataset was enlarged with image augmentation, the Xception network was trained with transfer learning. It was also compared against ResNet50, InceptionV3 and VGG19 deep convolutional models to re-verify its performance. Experiments show that transfer learning not only improves overall model performance but also shortens the time needed to train a deep convolutional network; the trained model reached 96.24% recognition accuracy on its dataset and 96.50% in the comparison experiments.

6.
To improve handwriting recognition accuracy across different writing styles, a deep convolutional neural network was combined with an autoencoder, and the number of convolutional autoencoder layers was designed to form a deep convolutional autoencoder network. Bilinear interpolation was first used to preprocess both the MNIST dataset and 10,000 self-collected images of digits handwritten by Chinese university students. The network was first trained and tested on MNIST alone; it was then retrained on MNIST mixed with 5,000 of the self-collected images and tested on the remaining 5,000. Experimental data show that the proposed deep convolutional autoencoder network reaches 99.37% accuracy on the MNIST test set, an effective improvement, and 99.33% on the 5,000 self-collected test images, indicating that the algorithm is practical and achieves high recognition accuracy on digits in different writing styles.
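The bilinear-interpolation preprocessing can be sketched directly. This naive resize is illustrative only, not the paper's pipeline:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation:
    each output pixel blends its four nearest input pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    out = np.zeros((out_h, out_w))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
big = bilinear_resize(img, 7, 7)
print(big[0, 0], big[-1, -1])  # -> 0.0 15.0  (corners preserved)
```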

7.
Han Bin, Zeng Songwei. Computer Science, 2021, 48(z1):113-117
Plant leaf recognition is an important branch and hotspot of automatic plant classification research, and image classification with convolutional neural networks has become the mainstream approach. To improve leaf recognition accuracy, a plant leaf image recognition method based on multi-feature fusion and a convolutional neural network is proposed. Leaf images are first preprocessed; LBP features and Gabor features are extracted, fused by addition, and fed to the network for training, with an AlexNet architecture as the classifier and fully connected layers performing the recognition. To avoid overfitting, the network is trained with dropout, and the model is tuned via the learning rate, dropout value and iteration count. Experiments show the method classifies the 32 leaf species of the Flavia database and the 189 species of the MEW2014 database well, with average recognition rates of 93.25% and 96.37% respectively; compared with an ordinary convolutional network, it raises leaf recognition accuracy and is more robust.
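The LBP feature named above can be computed with the basic 3x3 operator. The exact LBP variant used in the paper is not specified, so this minimal sketch is an assumption:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 3x3 LBP code for each interior pixel: threshold the 8
    neighbours against the centre and pack the bits into 0..255."""
    h, w = img.shape
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[i + dy, j + dx] >= c:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

img = np.zeros((3, 3))
img[1, 1] = 5.0  # centre brighter than every neighbour -> code 0
print(lbp_8neighbor(img))  # -> [[0]]
```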

8.
Given the diversity of fish data and the breadth of its applications, to improve the efficiency of fish detection and to extract higher-dimensional features from fish images for better detection accuracy, a convolutional neural network was combined with federated learning. Fish image data are distributed to users in a non-IID fashion; each user trains a model locally and uploads the trained parameters to the cloud, which aggregates and updates the model parameters and returns the updated parameters to the users' terminals, after which each user begins the next round of training. This process trains the network while simulating federated learning. Finally, a federated convolutional neural network, plain federated learning and a convolutional neural network were each used for image detection and recognition on a wild-fish dataset and the results compared. The federated convolutional network reached a final classification accuracy of 33.3%, traditional federated learning 26.67%, and ResNet50 87.97%, so the federated convolutional network's accuracy is far higher than that of traditional federated learning, and it obtains good results after relatively few training rounds. As an important component of distributed computing, federated learning's fast anonymization and data independence provide strong guarantees for the efficiency of fish classification and for data protection, while the convolutional network also raises the learning efficiency of federated learning. This makes the proposed federated convolutional classification system, compared with traditional federated learning, ...
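The cloud-side aggregation step described above is commonly implemented as federated averaging. A minimal sketch; weighting by each client's local sample count is an assumption, since the abstract does not give the exact aggregation rule:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Server-side FedAvg: weighted average of client model parameters,
    with weights proportional to each client's local sample count."""
    total = sum(client_sizes)
    return sum(n / total * p for p, n in zip(client_params, client_sizes))

# Three simulated clients with non-IID data sizes upload local weights.
clients = [np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([5.0, 5.0])]
sizes = [10, 10, 20]
print(fed_avg(clients, sizes))  # -> [3.5 3.5]
```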

9.
To further improve the accuracy of network anomaly detection, and building on an analysis of existing intrusion detection models, this paper proposes a network packet intrusion detection method based on a convolutional neural network and a support vector machine. The data are first preprocessed into a two-dimensional matrix; to prevent the model from overfitting, a permutation function randomly shuffles the data; a convolutional neural network (CNN) then learns effective features from the preprocessed data, and finally a support vector machine (SVM) classifier classifies the resulting vectors. The Kyoto University honeypot dataset, an authoritative benchmark commonly used for network intrusion detection, was chosen for evaluation. Compared with GRU-Softmax and GRU-SVM, existing models with high detection rates, the proposed model improves accuracy by up to 19.39% and 12.83% respectively, further raising network anomaly detection accuracy, and it also trains and tests considerably faster.
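The permutation-shuffle preprocessing step amounts to applying one shared permutation to samples and labels so the pairs stay aligned. A minimal sketch (the data and seed are illustrative):

```python
import numpy as np

def shuffle_together(X, y, seed=0):
    """Shuffle samples and labels with a single permutation so each
    sample keeps its label, a common guard against order effects."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return X[idx], y[idx]

X = np.arange(10).reshape(5, 2).astype(float)  # row i is [2i, 2i+1]
y = np.array([0, 1, 2, 3, 4])
Xs, ys = shuffle_together(X, y)
# Each shuffled row still matches its label:
print(all(Xs[i, 0] == 2 * ys[i] for i in range(5)))  # -> True
```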

10.
Training a convolutional neural network on a small dataset easily yields a model with insufficient generalization ability. The traditional remedy is to increase the total number of training samples through data augmentation. This paper runs comparison experiments across several network models to verify how much random data augmentation during training improves recognition accuracy for different neural networks, providing a reference for training neural networks on small datasets.
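Random augmentation of the kind discussed can be sketched with two cheap, label-preserving transforms. Flip and shift are illustrative choices; the paper's actual augmentation set is not specified:

```python
import numpy as np

def random_augment(img, rng):
    """Random horizontal flip plus a small random column shift:
    two label-preserving transforms that enlarge a small training set."""
    if rng.random() < 0.5:
        img = img[:, ::-1]            # horizontal flip
    shift = rng.integers(-2, 3)       # shift by -2..2 columns
    return np.roll(img, shift, axis=1)

rng = np.random.default_rng(42)
img = np.arange(36, dtype=float).reshape(6, 6)
aug = random_augment(img, rng)
print(aug.shape, float(aug.sum()) == float(img.sum()))  # shape and mass preserved
```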

11.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3):255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

12.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3):187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

13.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

16.
Liu Xiao, Mao Ning. Journal of Data Acquisition and Processing, 2015, 30(6):1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns to select the optimal action from an allowed action set through continual interaction with a random environment. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly distributed over that interval. The learning algorithm uses binary feedback signals from the environment to adaptively update the interval's endpoints. Simulation on a multimodal learning problem demonstrates the new algorithm's advantages over three existing CALA algorithms.
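The interval-endpoint update can be illustrated with a toy rule: draw an action uniformly from the current interval, contract the interval toward the action on success, and widen it slightly on failure. This is a hypothetical sketch for intuition only, not the paper's actual CALA update; the shrink and grow factors are invented parameters.

```python
import numpy as np

def cala_step(lo, hi, reward_fn, rng, shrink=0.9, grow=1.05):
    """One step of an interval-based continuous-action learning
    automaton (illustrative only): sample uniformly from [lo, hi];
    on binary reward 1, recentre and shrink the interval around the
    action; on reward 0, widen it around its midpoint to keep exploring."""
    a = rng.uniform(lo, hi)
    r = reward_fn(a)
    if r == 1:
        half = shrink * (hi - lo) / 2
        lo, hi = a - half, a + half
    else:
        half = grow * (hi - lo) / 2
        mid = (lo + hi) / 2
        lo, hi = mid - half, mid + half
    return lo, hi

rng = np.random.default_rng(0)
new_lo, new_hi = cala_step(0.0, 1.0, lambda a: 1, rng)
print(round(new_hi - new_lo, 6))  # -> 0.9  (interval contracts on success)
```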

17.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

18.
Applying supervision mechanisms of varying degrees to automatic text classification
Automatic text classification involves information retrieval, pattern recognition, machine learning and related fields. Organized by degree of supervision, this paper surveys several methods belonging to supervised, unsupervised and semi-supervised learning strategies -- NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means) and gSOM (guided Self-Organizing Map) -- and applies them to text classification. Among these, gSOM is a semi-supervised form we developed from SOM. Using the Reuters-21578 corpus, the effect of the degree of supervision on classification performance is studied, leading to recommendations for practical text classification work.

19.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

20.
Ram, Ashwin. Machine Learning, 1993, 10(3):201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号