Similar Documents
20 similar documents found (search time: 78 ms)
1.
Smoke recognition method based on deep transfer learning   Cited: 1 (self-citations: 0, citations by others: 1)
王文朋, 毛文涛, 何建樑, 窦智 《计算机应用》2017,37(11):3176-3181
Traditional smoke recognition methods based on sensors and image features are easily disturbed by the external environment and restricted to a single recognition scene, which lowers recognition accuracy; deep-learning-based methods, in turn, demand large amounts of data and recognize poorly when smoke data are missing or data sources are limited. To address these problems, a smoke recognition method based on deep transfer learning is proposed. The ImageNet dataset serves as the source data, and the VGG-16 model performs feature transfer over homogeneous data. First, all image data are preprocessed, applying random transformations (random rotation, shearing, flipping, etc.) to each image. Next, the VGG-16 network is introduced, its convolutional-layer features are transferred, and fully connected layers previously trained on smoke data within the VGG-16 network are attached; this yields a transfer-learning-based deep network from which the smoke recognition model is trained. Validation on public datasets and real-scene smoke images shows that, compared with mainstream smoke image recognition methods, the proposed method achieves a higher smoke recognition rate, with experimental accuracy above 96%.
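The recipe this abstract describes, a frozen transferred feature extractor with a newly trained fully connected head, can be sketched minimally. Here a fixed random projection stands in for the VGG-16 convolutional stack; all names and shapes are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the transferred VGG-16 convolutional stack: a frozen,
# pre-trained feature extractor (here just a fixed random projection).
W_frozen = rng.normal(size=(32, 16)) / 4.0

def features(x):
    return np.maximum(W_frozen @ x, 0.0)  # frozen ReLU features

# New fully connected head, the only part trained on the smoke data.
W_head = np.zeros((2, 32))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, label, lr=0.1):
    """One SGD step on cross-entropy; only W_head is updated."""
    global W_head
    h = features(x)
    p = softmax(W_head @ h)
    W_head -= lr * np.outer(p - np.eye(2)[label], h)
    return float(-np.log(p[label]))

x = rng.normal(size=16)
losses = [train_step(x, 1) for _ in range(20)]
```

Only `W_head` receives gradient updates, which is the essence of freezing the transferred convolutional features.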

2.
Traffic sign recognition is a key technology in intelligent driving and must meet demands for both high accuracy and fast recognition. To improve both, a traffic sign recognition algorithm based on a convolutional neural network is proposed, with a model designed for high accuracy and speed. The model uses an improved Inception module and multi-scale feature fusion to strengthen feature extraction, batch normalization to accelerate training, and global average pooling to reduce the parameter count. Trained and tested on the GTSRB dataset, the model reaches 99.6% accuracy and recognizes each image in 0.22 ms, confirming both high accuracy and high speed. Self-comparison experiments verify the structural advantages of the model, and comparison experiments with other traffic sign recognition methods on GTSRB show that its recognition performance is superior.
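The parameter saving from global average pooling mentioned above is easy to make concrete. The channel count and spatial size below are assumptions (GTSRB does have 43 classes), not the paper's exact architecture:

```python
import numpy as np

def global_avg_pool(fmap):
    """Collapse each channel of a (C, H, W) feature map to a single value."""
    return fmap.mean(axis=(1, 2))

C, H, W, n_classes = 128, 7, 7, 43          # GTSRB has 43 sign classes
fmap = np.random.default_rng(1).normal(size=(C, H, W))
pooled = global_avg_pool(fmap)

# Final-classifier parameter counts: GAP feeds C values into the class
# layer, while flatten + dense would feed C*H*W values.
params_gap = C * n_classes
params_flat = C * H * W * n_classes
```

With a 7x7 final feature map the dense classifier after GAP needs 49 times fewer weights than one after flattening.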

3.
To improve image recognition accuracy when only a few training samples are available, a small-sample image recognition method is proposed that uses a convolutional neural network as the image feature extractor. Data augmentation transforms are applied to the original small dataset to widen the range of samples; on this basis, a source model pre-trained on a large-scale dataset is transfer-trained on the small target dataset, extracting the model weights and image features of all layers except the final fully connected layer; finally, combining the features extracted by the pre-trained source model, the convolutional model is fine-tuned on the small target dataset with layer freezing to obtain the final classification results. Experiments show that the method achieves high accuracy and robustness on small-scale image recognition problems.

4.
Traffic sign recognition is one of the indispensable key technologies of driverless and intelligent driving systems. To raise the recognition accuracy of traffic signs and further improve the safety of driverless vehicles on the road, a traffic sign recognition method based on a deep residual network is proposed, stacking residual modules of different sizes to build a network model with 100 convolutional layers. With the Belgian traffic sign dataset BTSC as experimental data, after optimizing the network model the obtained recognition…

5.
Flower recognition has important practical value in daily life, but traditional flower recognition methods suffer from low accuracy and weak generalization. To address these problems, this paper proposes a ResNet34 model augmented with attention: channel and spatial attention mechanisms are inserted after ResNet34's first convolutional layer and after each residual block, and the network is trained with transfer learning. Experiments show that on a flower dataset ResNet34 achieves higher accuracy than AlexNet, VGG-16, and GoogLeNet; the ResNet34 model with attention and transfer learning improves accuracy by 6.1 percentage points over the original model, and by 1.1 percentage points over the original model using transfer learning alone. Compared with traditional deep learning models, the proposed model significantly improves recognition accuracy.
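A minimal sketch of the channel plus spatial attention idea, with both gates heavily simplified (no learned MLP or convolution, just pooled statistics through a sigmoid); this illustrates the mechanism, not the paper's exact blocks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap):
    """Gate each channel by its global-average statistic
    (simplified channel attention: pool, then sigmoid)."""
    w = sigmoid(fmap.mean(axis=(1, 2)))      # (C,)
    return fmap * w[:, None, None]

def spatial_attention(fmap):
    """Gate each spatial location by the channel-wise mean map."""
    w = sigmoid(fmap.mean(axis=0))           # (H, W)
    return fmap * w[None, :, :]

x = np.random.default_rng(2).normal(size=(8, 5, 5))
y = spatial_attention(channel_attention(x))
```

Both gates preserve the feature-map shape, so such blocks can be dropped after any residual block without changing the rest of the network.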

6.
To address the outdated methods and the lack of systematic theoretical research on fish recognition in the complex scenarios of marine fishery supervision, a recognition method based on transfer-learning model fusion is proposed. The InceptionV3 model pre-trained on the ImageNet dataset provides the feature extractor; an AveragePooling layer and a Softmax classification layer are attached after it to form a new training network. Ten-fold cross-validation of the new network on the NCFM dataset yields ten fish recognition models; after model fusion, recognition accuracy reaches 97.368%, an improvement of 29.868% over the new network model alone. Experiments show that the method outperforms existing related methods in accuracy and generalization for fish recognition in complex scenes, and can provide reliable technical support for the intelligent upgrading of fishery catch supervision systems.
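Fusing the ten fold models can be sketched as simple probability averaging; the abstract does not specify the exact fusion rule, so averaging is an assumption here, and the probability vectors are invented:

```python
import numpy as np

def fuse(prob_list):
    """Score-average fusion: mean of the fold models' class probabilities."""
    return np.mean(prob_list, axis=0)

# Three hypothetical fold models scoring one image over three classes:
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.2, 0.7, 0.1])
p3 = np.array([0.5, 0.4, 0.1])
fused = fuse([p1, p2, p3])
pred = int(np.argmax(fused))
```

Averaging valid probability vectors yields another valid probability vector, so the fused output can be thresholded or ranked exactly like a single model's.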

7.
Person re-identification matches pedestrian targets across non-overlapping multi-camera surveillance videos. A person re-identification method based on transfer learning is proposed. In the training stage, the parameters of an existing deep-convolutional-network image recognition model are fine-tuned, transferring the network into a re-identification model. In the test stage, the learned network extracts features from pedestrian images, and cosine distance describes the similarity between pedestrian pairs. In-depth experiments on the CUHK03, Market-1501, and DukeMTMC-reID datasets show that the method attains high cumulative match scores; in particular, its rank-1 match rate far exceeds that of non-deep-learning methods, and its accuracy also improves on other deep-learning-based re-identification methods.
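The test-stage matching step, ranking gallery identities by cosine similarity between deep features, can be sketched directly (the feature vectors here are toy values):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_gallery(query, gallery):
    """Sort gallery identities by descending cosine similarity to the query."""
    return sorted(gallery,
                  key=lambda gid: cosine_similarity(query, gallery[gid]),
                  reverse=True)

query = np.array([1.0, 0.0, 1.0])            # feature of the probe image
gallery = {"personA": np.array([0.9, 0.1, 1.1]),
           "personB": np.array([-1.0, 0.5, 0.0])}
ranking = rank_gallery(query, gallery)
```

The rank-1 match rate reported in the abstract is simply how often the correct identity appears first in this ranking.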

8.
Finger vein and finger knuckle print data samples are small, their recognition accuracy is limited by their respective inherent properties, and non-enrolled users strongly affect system accuracy. To address these problems, a transfer-learning-based bimodal score-level fusion method with a rejection threshold is proposed, in which finger vein and finger knuckle print jointly decide on the same subject. Both datasets are augmented and resized; Vgg19, Inceptionv3, Xception, and Resnet, each pre-trained on the massive ImageNet dataset, are fine-tuned on the two datasets; the tuned models then classify and produce matching scores for each modality, the scores are fused at score level, and the fused score is compared with the rejection threshold for the final decision. On public datasets the method reaches 99% accuracy, an improvement ranging from 0.33% to 15% over each single modality on the individual networks. Experiments show that score-level fusion of finger vein and finger knuckle print via transfer learning effectively improves system accuracy.
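The score-level fusion with a rejection threshold can be sketched as a weighted sum followed by a threshold test; the weights and the threshold below are illustrative, since the abstract does not give the actual fusion coefficients:

```python
def decide(vein_score, knuckle_score, threshold, w=0.5):
    """Weighted-sum score-level fusion with a reject option: a fused
    score below the threshold is treated as a non-enrolled user."""
    fused = w * vein_score + (1.0 - w) * knuckle_score
    return ("accept" if fused >= threshold else "reject"), fused

enrolled = decide(0.92, 0.88, threshold=0.80)   # genuine, both scores high
imposter = decide(0.40, 0.35, threshold=0.80)   # non-enrolled probe
```

The reject branch is what lets the system handle non-enrolled users instead of forcing every probe onto the closest enrolled identity.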

9.
Mongolian speech recognition model training suffers from scarce corpus resources: the low-resource corpus cannot train a deep network model sufficiently. Based on transfer learning, this paper proposes layer transfer, designs several transfer strategies to build a Mongolian layer-transfer speech recognition model on CNN-CTC (convolutional neural network with connectionist temporal classification), and explores the different strategies to obtain the optimal model. On a 10,000-sentence English corpus and a 5,000-sentence Mongolian corpus, experiments were carried out on learning-rate selection for layer-transfer training, layer-transfer effectiveness, transfer-layer selection strategies, and the influence of the high-resource model's training data volume on the layer-transfer model. Results show that the layer-transfer model speeds up training and effectively lowers the model's WER; a bottom-up transfer-layer selection strategy yields the best layer-transfer model; and under limited Mongolian corpus resources, the CNN-CTC Mongolian layer-transfer model lowers WER by 10.18% compared with an ordinary CNN-CTC Mongolian speech recognition model.
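Bottom-up layer transfer, copying the lowest layers of the high-resource English model into the Mongolian model, can be sketched with parameter dictionaries (layer names and weight values are placeholders):

```python
def transfer_layers(source_params, target_params, n_layers):
    """Bottom-up layer transfer: copy the lowest n_layers of the
    high-resource model into the low-resource model; the remaining
    layers keep their initialisation and are retrained on target data."""
    merged = dict(target_params)
    for name in sorted(source_params)[:n_layers]:
        merged[name] = source_params[name]
    return merged

english = {"conv1": "en-w1", "conv2": "en-w2", "conv3": "en-w3"}
mongolian = {"conv1": "rand1", "conv2": "rand2", "conv3": "rand3"}
merged = transfer_layers(english, mongolian, n_layers=2)
```

Transferring from the bottom up matches the intuition that lower convolutional layers capture language-independent acoustic patterns, while upper layers stay trainable for Mongolian.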

10.
During downsampling, a model continually loses the image's high-level information, leading to insufficient feature extraction. To address this problem, this paper improves the ResNet network structure and proposes a traffic sign recognition method based on multi-scale features and an attention mechanism. First, multi-scale features from each level of the model are fused to enrich the semantic information of the features and strengthen the network's feature extraction ability. Then, an attention mechanism reweights the features of different channels to raise the overall expressiveness of the features. Combining these two techniques improves the model's traffic sign recognition accuracy. Experiments on the GTSRB and BelgiumTS traffic sign datasets show that the proposed method reaches 99.31% and 98.96% accuracy respectively, outperforming state-of-the-art traffic sign recognition algorithms.
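The multi-scale fusion step, combining a shallow high-resolution map with an upsampled deep map, can be sketched in NumPy; the channel counts and the nearest-neighbour upsampling are assumptions, not the paper's exact design:

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse_scales(shallow, deep):
    """Concatenate a shallow high-resolution map with the upsampled
    deep low-resolution map along the channel axis."""
    return np.concatenate([shallow, upsample2x(deep)], axis=0)

shallow = np.zeros((16, 8, 8))               # early-layer features
deep = np.ones((32, 4, 4))                   # late-layer features
fused = fuse_scales(shallow, deep)
```

The fused map keeps the shallow map's spatial resolution while adding the deep map's semantics, which is the information the plain downsampling path would have lost.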

11.
Kearns, Michael; Seung, H. Sebastian 《Machine Learning》1995,18(2-3):255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

12.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. 《Machine Learning》1995,18(2-3):187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

13.
Transfer in variable-reward hierarchical reinforcement learning   Cited: 2 (self-citations: 1, citations by others: 1)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
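Because the rewards are linear in a set of reward features, a stored value function can itself be decomposed over those features and re-weighted for a new task. A sketch of this initialization step, with invented arrays:

```python
import numpy as np

def init_value_function(stored_vfs, w_new):
    """Initialise each state's value for new reward weights w_new as the
    best value any stored SMDP's feature-decomposed value function gives
    (a sketch of the value-function reuse described in the abstract)."""
    return np.max([vf @ w_new for vf in stored_vfs], axis=0)

# Hypothetical stored value functions: rows are states, columns are
# reward features, so vf @ w recovers a scalar value per state.
v1 = np.array([[1.0, 0.0], [0.5, 0.5]])
v2 = np.array([[0.0, 1.0], [0.2, 0.9]])
w_new = np.array([0.3, 0.7])
v_init = init_value_function([v1, v2], w_new)
```

Starting value iteration from this optimistic initialization, rather than from zero, is what produces the transfer speed-up the abstract reports.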

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

16.
Applying supervision mechanisms of varying degrees to automatic text categorization   Cited: 1 (self-citations: 0, citations by others: 1)
Automatic text categorization involves information retrieval, pattern recognition, and machine learning. Organized by degree of supervision, this paper surveys several methods drawn from fully supervised, unsupervised, and semi-supervised learning strategies: NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text categorization. Among these, gSOM is a semi-supervised form we developed from SOM. Using the Reuters-21578 corpus, the influence of the degree of supervision on classification performance is studied, leading to recommendations for practical text categorization work.

17.
刘晓, 毛宁 《数据采集与处理》2015,30(6):1310-1317
A learning automaton (LA) is an adaptive decision-maker that, through continual interaction with a random environment, learns to select the optimal action from an allowed action set. In most traditional LA models the action set is finite, so continuous-parameter learning problems require discretizing the action space, and learning precision then depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly over that interval. The learning algorithm adaptively updates the interval's endpoints using binary feedback signals from the environment. A simulation experiment on a multimodal learning problem demonstrates the new algorithm's superiority over three existing CALA algorithms.
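The interval-endpoint update driven by binary feedback can be sketched as follows; the specific update rule (pulling both endpoints toward a rewarded action) is a hypothetical stand-in, since the abstract does not give the exact formula:

```python
import random

def cala_step(lo, hi, feedback, shrink=0.5):
    """One interaction of a sketched interval CALA: draw an action
    uniformly from [lo, hi]; on a positive binary environment signal,
    pull both endpoints toward the sampled action."""
    a = random.uniform(lo, hi)
    if feedback(a):
        lo += shrink * (a - lo)
        hi -= shrink * (hi - a)
    return lo, hi

random.seed(0)
lo, hi = 0.0, 10.0
reward = lambda a: abs(a - 7.0) < 2.0        # toy environment, optimum near 7
for _ in range(200):
    lo, hi = cala_step(lo, hi, reward)
```

Each rewarded step shrinks the interval around a good action, so the automaton's uniform sampling concentrates near the rewarded region without any discretization of the action space.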

18.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

19.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

20.
Ram, Ashwin 《Machine Learning》1993,10(3):201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
