Similar Documents
20 similar documents found
1.
The deep decision tree transfer learning Boosting method (DTrBoost) can only accommodate training data from a single source domain and a single target domain; it cannot handle samples drawn from multiple source domains with different distributions. In addition, DTrBoost learns from the source domain into the target model synchronously, without weighting the transferred knowledge by importance. In practice, the subsets of a dataset partitioned by one or more features often follow different distributions, and these differently distributed subsets matter unequally to the final model, so the knowledge-transfer weights should be unequal as well. To address this problem, a multi-source transfer learning method with optimized weights is proposed. Its main idea is to compute the KL distance from each differently distributed source domain to the target domain and to use the ratios of these KL distances to derive learning-weight proportions for the source-domain samples, thereby optimizing the overall gradient function so that learning proceeds along the direction of steepest gradient descent. Gradient descent lets the model converge quickly, ensuring learning speed while preserving the transfer learning effect. Experimental results show that the proposed algorithm achieves better overall performance and adapts to different training data, reducing the classification error rate by 0.013 on average, and by 0.030 on the OCR dataset, where it performs best.
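The weighting step described above can be made concrete. Below is a minimal Python sketch, assuming diagonal-Gaussian approximations of each domain's feature distribution and inverse-KL weight normalization; the function names and this particular weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL(P || Q) between diagonal Gaussians fitted to domain features."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def source_weights(source_feats, target_feats):
    """Weight each source domain inversely to its KL distance to the target."""
    mu_t, var_t = target_feats.mean(0), target_feats.var(0) + 1e-8
    kls = np.array([
        gaussian_kl(s.mean(0), s.var(0) + 1e-8, mu_t, var_t)
        for s in source_feats
    ])
    inv = 1.0 / (kls + 1e-8)   # closer domains get larger weights
    return inv / inv.sum()     # normalize into weight proportions

# Toy example: three source domains at increasing distance from the target.
rng = np.random.default_rng(0)
sources = [rng.normal(loc=m, size=(200, 5)) for m in (0.0, 0.5, 2.0)]
target = rng.normal(loc=0.1, size=(100, 5))
print(source_weights(sources, target))  # the closest domain dominates
```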

2.
The wide application of deep learning has strongly advanced the field of medical image analysis. However, most deep learning methods assume that the training and test sets are independent and identically distributed, an assumption that is hard to guarantee when models are deployed clinically, often leading to degraded performance and poor generalization across scenarios. Deep-learning-based domain adaptation is the mainstream approach to improving model transferability: its goal is to make a model trained on one dataset also perform well on another dataset with few or no labels. Medical images pose many practical challenges for domain adaptation, including difficulties in sample acquisition and annotation, the special nature of the images, and modality differences. This survey first introduces the definition of domain adaptation and its main challenges; it then categorizes and summarizes recent algorithms from a technical perspective and compares their strengths and weaknesses; next, it details the medical image datasets commonly used for domain adaptation and the results of the relevant algorithms; finally, it discusses future research directions for domain adaptation in medical image analysis in terms of development bottlenecks, technical approaches, and cross-disciplinary areas.

3.
In recent years deep learning has achieved remarkable results on image classification tasks, but it usually requires large amounts of manually labeled data, making model training expensive. Few-label approaches such as domain adaptation have therefore become a research focus. Typically, domain adaptation exploits prior knowledge from the source domain yet only partly reduces the dependence on labeled target-domain data, so active learning can be introduced to assess sample value and select samples, further lowering labeling cost. This paper brings a representative sample-value estimation model into domain adaptation and, combined with feature transfer, proposes the dual active domain adaptation algorithm D_AcT (Dual active domain adaptation). The algorithm measures the value of both source-domain and target-domain data and selects the samples most valuable for training, greatly reducing the model's demand for labeled data while maintaining accuracy. Specifically, it first selects target-domain samples with an active-learning value model built on minimax entropy and core-set sampling, yielding the single active domain adaptation algorithm S_AcT (Single active domain adaptation); it then uses a loss-prediction strategy to adapt the value-estimation strategy to the source domain, further improving the effectiveness of knowledge reuse in transfer learning and lowering training cost. The two proposed algorithms were compared experimentally with traditional active transfer learning and semi-supervised transfer learning algorithms on four widely used image transfer datasets. The results show that the dual active domain adaptation…
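Of the two target-side selection criteria the abstract names, core-set sampling is the easier to sketch. Below is a minimal k-center-greedy version in Python, assuming raw feature-space Euclidean distances; the function names are illustrative, not from the paper.

```python
import numpy as np

def k_center_greedy(unlabeled_feats, labeled_feats, budget):
    """Greedily pick unlabeled points farthest from the labeled set."""
    picked = []
    # distance of each unlabeled point to its nearest labeled point
    dists = np.min(
        np.linalg.norm(unlabeled_feats[:, None] - labeled_feats[None], axis=2),
        axis=1,
    )
    for _ in range(budget):
        idx = int(np.argmax(dists))  # farthest point = least covered region
        picked.append(idx)
        new_d = np.linalg.norm(unlabeled_feats - unlabeled_feats[idx], axis=1)
        dists = np.minimum(dists, new_d)
    return picked

rng = np.random.default_rng(1)
pool = rng.normal(size=(500, 16))  # unlabeled target-domain features
seed = rng.normal(size=(10, 16))   # already-labeled features
print(k_center_greedy(pool, seed, budget=5))
```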

4.
The success of deep learning depends on massive training data, yet obtaining large-scale annotated data is costly and time-consuming; moreover, because data distributions differ across scenarios, a model trained on data from one particular scenario often performs poorly in others. Transfer learning, which moves knowledge from one domain to another, can solve these problems, and deep transfer learning implements it within a deep learning framework. This paper proposes a pseudo-label-based deep transfer learning algorithm. With ResNet-50 as the backbone, the algorithm assigns pseudo-labels to target-domain samples through a sample-selection mechanism that balances confidence and class balance, then performs self-training to classify the target-domain samples accurately. On three transfer tasks from the Office-31 dataset, the average accuracy improves by 5.0% over traditional algorithms. The algorithm introduces no additional network parameters, respects source-domain data privacy, and is highly portable, giving it practical value.
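The selection mechanism that balances confidence and class balance can be sketched as follows; the per-class quota and the confidence threshold below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def select_pseudo_labels(probs, per_class_quota, min_conf=0.9):
    """probs: (N, C) softmax outputs on unlabeled target samples.
    Keep at most `per_class_quota` highest-confidence samples per class,
    all above `min_conf`, so no single class floods the pseudo-label set."""
    preds, confs = probs.argmax(1), probs.max(1)
    chosen_idx, chosen_lbl = [], []
    for c in range(probs.shape[1]):
        cand = np.where((preds == c) & (confs >= min_conf))[0]
        cand = cand[np.argsort(-confs[cand])][:per_class_quota]
        chosen_idx.extend(cand.tolist())
        chosen_lbl.extend([c] * len(cand))
    return np.array(chosen_idx, dtype=int), np.array(chosen_lbl, dtype=int)

rng = np.random.default_rng(2)
logits = rng.normal(size=(1000, 31))  # e.g. the 31 Office-31 classes
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
# threshold relaxed for this toy demo with random logits
idx, lbl = select_pseudo_labels(probs, per_class_quota=10, min_conf=0.15)
print(len(idx), "samples pseudo-labeled for self-training")
```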

5.
The deep decision tree transfer learning Boosting method (DTrBoost) can effectively transfer from a single labeled source domain to a target domain, but it cannot handle the unsupervised setting with multiple source domains. To address this problem, an unsupervised transfer learning Boosting method with optimized weights over multiple source distributions is proposed. Its main idea is to compute a KL value for each source domain against the target-domain distribution, select a suitable number of samples from the different source domains by comparing these values, train classifiers on them, and assign pseudo-labels to the target-domain samples. Finally, learning weights are allocated according to each source domain's KL distance, and the labeled source-domain samples are trained jointly with the pseudo-labeled target domain in an ensemble to obtain the final result. Comparative experiments show that the proposed algorithm achieves better classification accuracy and adapts to different datasets, reducing the classification error rate by 2.4% on average and by more than 6% on the marketing dataset, where it performs best.
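The final ensemble step can be sketched as below: per-source classifiers fused with weights derived from KL distances to the target. The inverse-KL weighting and probability averaging are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def weighted_ensemble_predict(per_source_probs, kl_distances):
    """per_source_probs: list of (N, C) class-probability arrays, one per
    source-domain classifier; kl_distances: one KL value per source."""
    w = 1.0 / (np.asarray(kl_distances) + 1e-8)
    w /= w.sum()                              # closer source, larger weight
    stacked = np.stack(per_source_probs)      # (S, N, C)
    fused = np.tensordot(w, stacked, axes=1)  # weighted average over sources
    return fused.argmax(1)                    # labels for the target samples

rng = np.random.default_rng(3)
probs = [rng.dirichlet(np.ones(4), size=50) for _ in range(3)]
print(weighted_ensemble_predict(probs, kl_distances=[0.2, 1.5, 3.0])[:10])
```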

6.
Federated learning emerged to address the data-privacy and data-silo problems of traditional machine learning: existing methods let multiple participants jointly train a better global model without sharing private data. Research shows, however, that federated learning still has many security problems; typically, attacks by malicious participants during the training phase can invalidate the global model and leak participants' private information. This paper studies how effectively adversarial examples can poison a federated learning system during training, in order to uncover such latent security problems. Although adversarial examples are usually used to attack machine learning models at test time, here a malicious participant uses them for local model training, so that the local model learns confused classification features and generates malicious local parameters. To let the malicious participant dominate the federated training process, a "learning rate amplification" strategy is further employed. Experiments show that, compared with the Fed-Deepconfuse attack method, the proposed attack achieves better attack performance on both the CIFAR10 and MNIST datasets.
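The "learning rate amplification" strategy amounts to scaling the malicious client's contribution before aggregation. A minimal sketch, assuming the attacker reports an amplified model delta to a plain FedAvg server; this delta-scaling formulation is an illustrative assumption.

```python
import numpy as np

def malicious_update(global_w, local_w, amplification=10.0):
    """Amplify the local delta so it dominates the server's average."""
    return global_w + amplification * (local_w - global_w)

def fedavg(client_weights):
    """Plain FedAvg aggregation over equally weighted clients."""
    return np.mean(np.stack(client_weights), axis=0)

g = np.zeros(4)  # current global model
honest = [g + np.random.default_rng(i).normal(0, 0.01, 4) for i in range(9)]
bad = malicious_update(g, g + np.array([0.3, -0.3, 0.3, -0.3]))
print(fedavg(honest + [bad]))  # the amplified client dominates the average
```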

7.
Domain adaptation is a transfer learning paradigm for the case where source-domain and target-domain samples are not independent and identically distributed, and it is currently a key research direction. In practice, however, source-domain samples are gathered through more than one channel or method, so the source domain may contain samples with several different distributions. Multi-source domain adaptation is an effective way to handle this distributional diversity: it studies the relationships among the source distributions and strategies for aligning them with the target distribution, further reducing the domain shift between domains, which is both practically meaningful and challenging. With the continuing progress of deep learning, multi-source domain adaptation methods mainly use deep neural networks to extract domain-invariant features as the basis for distribution alignment, combined with metric criteria to measure distribution discrepancy or with adversarial ideas to align the distributions across domains. Theory and experiments show that models trained with multi-source domain adaptation generalize better than those trained with single-source methods and better meet real-world needs. This paper introduces the research status and related concepts of multi-source domain adaptation, summarizes and reviews existing algorithms, classifies them by transfer style, analyzes experimental results on their performance, discusses their shortcomings, and predicts development trends in the field.

8.
A multi-source transfer algorithm based on similarity learning
卞则康, 王士同. 控制与决策, 2017, 32(11): 1941-1948
For scenarios where training data with the same distribution as the test data are insufficient while related domains contain abundant training data whose distribution is close to that of the test data, a similarity-learning-based multi-source transfer learning algorithm (SL-MSTL) is proposed. Building on the classic SVM classification model, the algorithm introduces a new transfer classification model that adds similarity learning between each source domain and the target domain, so that useful information in each source domain can be exploited effectively to improve classification in the target domain. Experimental results demonstrate the effectiveness and practicality of SL-MSTL.
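A minimal sketch in the spirit of SL-MSTL: one SVM per source domain, with each classifier's vote weighted by a measured source-target similarity. Using sklearn's SVC, and accuracy on a small labeled target set as the similarity stand-in, are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.svm import SVC

def make_domain(rng, shift, n=30):
    """Two-class toy domain; `shift` mimics a domain-specific offset."""
    X = np.vstack([rng.normal(0 + shift, 1, (n, 3)),
                   rng.normal(2 + shift, 1, (n, 3))])
    return X, np.repeat([0, 1], n)

def fit_multi_source(sources, target_X, target_y):
    """Train one SVM per source; weight each by its accuracy on the labeled
    target data. Assumes all domains share the same label set."""
    clfs = [SVC(probability=True).fit(X, y) for X, y in sources]
    sims = np.array([c.score(target_X, target_y) for c in clfs])
    return clfs, sims / sims.sum()

def predict(clfs, sims, X):
    probs = sum(w * c.predict_proba(X) for w, c in zip(sims, clfs))
    return probs.argmax(1)

rng = np.random.default_rng(5)
sources = [make_domain(rng, s) for s in (0.0, 0.5, 3.0)]
tX, ty = make_domain(rng, 0.1, n=10)  # small labeled target set
clfs, sims = fit_multi_source(sources, tX, ty)
print(sims, predict(clfs, sims, tX)[:5])  # distant source gets low weight
```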

9.
In traditional federated learning, multiple clients train local models independently on their private data, and a central server aggregates the local models into a shared global model. However, owing to statistical heterogeneity such as non-independent and identically distributed (Non-IID) data, a single global model often fails to fit every client. To solve this problem, this paper proposes APFL, a federated learning aggregation algorithm for Non-IID data based on the AP (Affinity Propagation) clustering algorithm. In APFL, the server computes a similarity matrix between clients from their data characteristics, then uses AP clustering to partition the clients into clusters, building a multi-center framework that computes suitable personalized model weights for each client. Experiments on the FMNIST and CIFAR10 datasets show that, compared with traditional federated learning FedAvg, APFL improves accuracy by 1.88 percentage points on FMNIST and by 6.08 percentage points on CIFAR10, demonstrating that APFL can improve the accuracy of federated learning on Non-IID data.
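The clustering step can be sketched with scikit-learn's AffinityPropagation. Representing each client by its label histogram is an illustrative assumption about the "data characteristics" the server uses.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def cluster_clients(client_label_hists):
    """client_label_hists: (n_clients, n_classes) label distributions."""
    ap = AffinityPropagation(random_state=0).fit(np.asarray(client_label_hists))
    return ap.labels_  # cluster id per client

def per_cluster_average(weights, labels):
    """One aggregated model per cluster instead of a single global model."""
    return {c: np.mean([w for w, l in zip(weights, labels) if l == c], axis=0)
            for c in set(labels)}

rng = np.random.default_rng(4)
hists = rng.dirichlet(np.ones(10) * 0.3, size=8)  # 8 Non-IID clients
labels = cluster_clients(hists)
models = [rng.normal(size=6) for _ in range(8)]   # stand-in local weights
print(labels, list(per_cluster_average(models, labels)))
```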

10.
Objective: Traditional diabetic retinopathy (DR) grading relies on precise detection of early pathological features, but the lack of lesion-annotated regions in DR datasets makes it impossible to build effective supervised classification models, while introducing auxiliary datasets brings cross-domain heterogeneity; in addition, most existing DR diagnosis methods cannot semantically explain the model's predictions. This paper therefore proposes an end-to-end automatic multi-class DR grading method that incorporates domain-adaptive learning, jointly reinforced by an attention mechanism and weakly supervised learning. Method: First, a lesion detection model is trained on auxiliary data with annotated lesion regions, and DR diagnosis on the target dataset is cast as a weakly supervised learning problem in which the multi-class predictions guide a deep cross-domain generative adversarial network that improves the quality of cross-domain sample images; these images are used to fine-tune the lesion detection model, filtering out irrelevant lesion samples in the target domain and improving multi-class grading performance. Finally, an attention mechanism is fused into the overall model to provide interpretability for its classification decisions from the standpoint of medical pathology. Results: In multi-class DR grading experiments on the public Messidor dataset, the proposed method achieves an average accuracy of 71.2% and an AUC (area under curve) of 80.8%, a clear advantage over many other methods, and can assist doctors in clinical fundus screening. Conclusion: The DR grading method combined with domain-adaptive learning, without…

11.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

12.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

13.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
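A minimal sketch of the value-function reuse idea, assuming rewards linear in features as in the abstract: store per-SMDP value functions keyed by reward weights and warm-start a new task from the closest stored task. This nearest-weight rule is a simplification of VRRL's initialization, used here only for illustration.

```python
import numpy as np

class ValueLibrary:
    """Stores (reward_weights, value_table) pairs for solved SMDPs."""
    def __init__(self):
        self.tasks = []

    def add(self, reward_w, value_table):
        self.tasks.append((np.asarray(reward_w, float), value_table))

    def init_for(self, new_w):
        """Warm-start a new task from the stored task whose reward-weight
        vector is closest (None if the library is empty)."""
        if not self.tasks:
            return None
        dists = [np.linalg.norm(w - new_w) for w, _ in self.tasks]
        return self.tasks[int(np.argmin(dists))][1].copy()

lib = ValueLibrary()
lib.add([1.0, 0.0], np.array([0.5, 0.9, 0.1]))  # values over 3 states
lib.add([0.0, 1.0], np.array([0.2, 0.1, 0.8]))
print(lib.init_for(np.array([0.9, 0.1])))  # reuses the closest task's values
```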

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

16.
刘晓, 毛宁. 数据采集与处理, 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns, through continual interaction with a random environment, to choose the optimal action from an allowed action set. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly over that interval. The learning algorithm adaptively updates the interval's endpoints using binary feedback signals from the environment. A simulation experiment on a multimodal learning problem demonstrates the new algorithm's superiority over three existing CALA algorithms.
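A minimal sketch of the interval-action idea: sample an action uniformly from [lo, hi], then move the endpoints using only the environment's binary feedback. The specific contraction/expansion rule below is an illustrative assumption, not the paper's update.

```python
import random

def run_cala(env_feedback, lo=0.0, hi=1.0, step=0.05, iters=2000):
    """env_feedback(action) -> bool is the binary environment signal."""
    for _ in range(iters):
        a = random.uniform(lo, hi)  # uniform selection over the interval
        if env_feedback(a):
            lo += step * (a - lo)   # contract the interval around a reward
            hi -= step * (hi - a)
        else:
            width = hi - lo         # expand slightly to keep exploring
            lo -= step * 0.1 * width
            hi += step * 0.1 * width
    return lo, hi

# Toy environment: reward is more likely near the optimum at 0.3.
random.seed(0)
print(run_cala(lambda a: random.random() < max(0.0, 1 - 4 * abs(a - 0.3))))
```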

17.
Applications of supervision mechanisms of different degrees in automatic text classification
Automatic text classification involves information retrieval, pattern recognition, and machine learning. Taking the degree of supervision as its organizing thread, this paper surveys several methods from fully supervised, unsupervised, and semi-supervised learning strategies, namely NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among them, gSOM is a semi-supervised form we developed on the basis of SOM. Using Reuters-21578 as the corpus, the paper studies how the degree of supervision affects classification performance and offers recommendations for practical text classification work.

18.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

19.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

20.
Ram, Ashwin. Machine Learning, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
