Similar Documents
20 similar documents found.
1.
汪云云  孙顾威  赵国祥  薛晖 《软件学报》2022,33(4):1170-1182
Unsupervised domain adaptation (UDA) aims to use a source domain with abundant labeled data to help learn a target domain without any annotation. UDA usually assumes that the source and target domains have different data distributions but share the same class label space. In real open-world learning scenarios, however, the label spaces of the two domains may well differ; in the extreme case, the class sets of the two domains do not intersect at all, i.e., the classes in the target domain…

2.
Unsupervised domain adaptation (UDA), which aims to use knowledge from a label-rich source domain to help learn an unlabeled target domain, has recently attracted much attention. UDA methods mainly concentrate on source classification and cross-domain distribution alignment in the expectation of correct target predictions. In this paper, we instead attempt to learn the target predictions directly, end to end, and develop a self-corrected unsupervised domain adaptation (SCUDA) method with probabilistic label correction. SCUDA adopts a probabilistic label corrector to learn and correct the target labels directly. Specifically, besides the model parameters, the target pseudo-labels are also updated during learning and corrected by an anchor variable, which preserves the candidate classes for each sample. Experiments on real datasets show the competitiveness of SCUDA.
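The abstract describes jointly updating model parameters and soft target pseudo-labels, with an anchor keeping candidate classes alive. Below is a minimal numpy sketch of that general idea, not SCUDA's actual implementation; the linear model, the mixing coefficient `alpha`, and the anchor construction are illustrative assumptions.

```python
# A minimal sketch (not the paper's implementation) of correcting soft pseudo-labels
# with an anchor distribution while a simple linear model is trained.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

d, C = 10, 3
Xs, ys = rng.normal(size=(200, d)), rng.integers(0, C, 200)   # labeled source (toy)
Xt = rng.normal(size=(150, d))                                # unlabeled target (toy)

W = np.zeros((d, C))                      # classifier parameters
anchor = softmax(Xt @ W)                  # "anchor": preserves candidate classes per sample
pseudo = anchor.copy()                    # soft pseudo-labels to be corrected

lr, alpha = 0.1, 0.7                      # alpha: trust placed in the current prediction
Ys = np.eye(C)[ys]
for epoch in range(50):
    # Gradient step on source cross-entropy plus target pseudo-label cross-entropy.
    Ps, Pt = softmax(Xs @ W), softmax(Xt @ W)
    grad = Xs.T @ (Ps - Ys) / len(Xs) + Xt.T @ (Pt - pseudo) / len(Xt)
    W -= lr * grad
    # Correct the pseudo-labels: mix the new prediction with the anchor, so classes
    # with little anchor mass stay down-weighted instead of being locked in.
    pseudo = alpha * softmax(Xt @ W) + (1 - alpha) * anchor

print("corrected target labels:", softmax(Xt @ W).argmax(axis=1)[:10])
```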

3.
王帆  韩忠义  苏皖  尹义龙 《软件学报》2024,35(4):1651-1666
Unsupervised domain adaptation has achieved some success in addressing the distribution mismatch between the training set (source domain) and the test set (target domain). In low-power scenarios and open, dynamic task environments, however, existing unsupervised domain adaptation methods face severe challenges under resource constraints and the emergence of open classes. Source-free open-set domain adaptation (SF-ODA) aims to transfer the knowledge in a source model to an unlabeled target domain where open classes appear, so as to recognize the common classes and detect the open classes without access to the source data. Existing SF-ODA methods focus on designing source models that accurately detect open classes or on modifying the model architecture, but they require additional storage and training cost and are hard to realize under strict privacy-protection constraints. A more practical scenario is therefore proposed: active source-free open-set domain adaptation (ASF-ODA), whose goal is robust transfer based on an ordinarily trained source model and a small number of valuable target samples labeled by experts. To reach this goal, a local-consistency active learning (LCAL) algorithm is proposed. First, exploiting the property that local features in the target domain tend to share consistent labels, LCAL designs a new active selection strategy, local diversity selection, to pick more valuable threshold-ambiguous samples and promote the separation of open and common classes. Next, LCAL preliminarily screens a potential common-class set and a potential open-class set based on information entropy, and uses the actively labeled samples obtained in the first step to match and correct these two sets…

4.
Unsupervised domain adaptation (UDA) is an emerging machine learning paradigm that improves the training of a target-domain model by transferring and exploiting source-domain knowledge on the unlabeled target domain. To model the distribution discrepancy between the source and target domains, maximum mean discrepancy (MMD) has been widely adopted and has effectively improved UDA performance. However, these methods usually ignore structural information such as the corresponding class sizes and class distributions across domains, since the class sizes and data distributions of the target and source domains are usually not identical. To this end, this paper proposes a sample-weighted and class-weighted unsupervised domain adaptation network (SCUDAN). Specifically, on the one hand, class-level adaptive weighting on the source domain adjusts the source class weights to align the class distributions of the source and target domains; on the other hand, sample-level adaptive weighting on the target domain adjusts the target sample weights to align the target samples with the source class centers. In addition, a CEM (Classification Expectation Maximization) optimization algorithm is proposed to solve SCUDAN. Finally, the effectiveness of the proposed model and algorithm is verified through comparative experiments and analysis.
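To make the weighting idea concrete, here is an illustrative sketch (assumptions of my own, not SCUDAN itself) of an RBF-kernel MMD in which source samples carry class-level weights and target samples carry sample-level weights; the hypothetical `target_class_prior` stands in for whatever estimate of the target class distribution a method of this kind would maintain.

```python
# Weighted MMD between a class-reweighted source and a sample-weighted target.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def weighted_mmd2(Xs, ws, Xt, wt, gamma=1.0):
    """Squared MMD between weighted empirical distributions; ws and wt each sum to 1."""
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return ws @ Kss @ ws + wt @ Ktt @ wt - 2 * ws @ Kst @ wt

rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 5)), rng.integers(0, 3, 100)
Xt = rng.normal(loc=0.5, size=(80, 5))

# Class-level weights on the source: a hypothetical estimate of the target class
# proportions re-weights each source class so the class distributions line up.
target_class_prior = np.array([0.5, 0.3, 0.2])
source_class_count = np.bincount(ys, minlength=3)
ws = target_class_prior[ys] / source_class_count[ys]
ws /= ws.sum()

# Sample-level weights on the target: uniform here; an adaptive method would learn them.
wt = np.full(len(Xt), 1.0 / len(Xt))

print("weighted MMD^2 =", weighted_mmd2(Xs, ws, Xt, wt, gamma=0.1))
```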

5.
The deep decision-tree transfer learning boosting method (DTrBoost) can effectively transfer knowledge from a single supervised source domain to a target domain, but it cannot handle the unsupervised scenario with multiple source domains. To address this problem, an unsupervised transfer learning boosting method with optimized weights under multi-source distributions is proposed. The main idea is to compute the KL divergence between each source domain and the target domain, select an appropriate number of samples from the different source domains to train classifiers, and assign pseudo-labels to the target samples. Finally, different learning weights are allocated according to the KL distances of the source domains, and the labeled source samples are trained together with the pseudo-labeled target samples in an ensemble to obtain the final result. Comparative experiments show that the proposed algorithm achieves better classification accuracy and adapts to different datasets, reducing the classification error rate by 2.4% on average and by more than 6% on the best-performing marketing dataset.
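A rough sketch of the weighting step described above, under my own assumptions about the details: approximate each source domain's feature distribution with a histogram, measure its KL divergence to the target histogram, and turn smaller divergences into larger ensemble weights.

```python
# KL-based weighting of multiple source domains against one target domain.
import numpy as np
from scipy.stats import entropy   # entropy(p, q) computes KL(p || q)

def histogram_kl(source_feat, target_feat, bins=20):
    lo = min(source_feat.min(), target_feat.min())
    hi = max(source_feat.max(), target_feat.max())
    p, _ = np.histogram(source_feat, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(target_feat, bins=bins, range=(lo, hi), density=True)
    eps = 1e-8                                    # avoid division by zero in empty bins
    return entropy(p + eps, q + eps)

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=2000)          # a single feature, for illustration
sources = [rng.normal(mu, 1.0, size=2000) for mu in (0.1, 0.8, 2.0)]

kls = np.array([histogram_kl(s, target) for s in sources])
weights = np.exp(-kls)                            # closer source -> higher learning weight
weights /= weights.sum()
print("KL per source:", np.round(kls, 3), "-> weights:", np.round(weights, 3))
```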

6.
To address network intrusion detection, improving detection accuracy while reducing the false-alarm rate, a network intrusion detection method based on deep transfer learning is proposed. The method performs transfer learning with an unsupervised deep autoencoder to carry out intrusion detection. The deep transfer-learning problem is first modeled, and the deep model is then transferred. The transfer-learning framework performs encoding and decoding through an embedding layer and a label layer, with the encoding and decoding weights shared between the source and target domains for knowledge transfer. In the embedding layer, the distributions of the source and target data are forced to be similar by minimizing the KL divergence between the embedded instances of the two domains; in the label encoding layer, a softmax regression model encodes and classifies the label information of the source domain. Experimental results show that the method achieves network intrusion detection with performance superior to other intrusion detection methods.
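The loose PyTorch sketch below illustrates the described training signal under stated assumptions (it is not the paper's network): one encoder/decoder shared by both domains, a crude KL surrogate that pulls the two embedding distributions together, and a cross-entropy (softmax) head on source labels. The KL term here compares softmax-normalized mean embeddings, which is only one simple proxy for "minimizing the KL divergence between embedded instances."

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_emb, n_cls = 40, 16, 5
enc = nn.Linear(d_in, d_emb)        # encoding weights shared by source and target
dec = nn.Linear(d_emb, d_in)        # decoding weights shared by source and target
head = nn.Linear(d_emb, n_cls)      # label layer: softmax regression on source labels
params = list(enc.parameters()) + list(dec.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

xs = torch.randn(128, d_in)                 # labeled source batch (synthetic)
ys = torch.randint(0, n_cls, (128,))
xt = torch.randn(128, d_in) + 0.5           # unlabeled target batch (synthetic)

for step in range(200):
    zs, zt = enc(xs), enc(xt)
    recon = F.mse_loss(dec(zs), xs) + F.mse_loss(dec(zt), xt)
    # KL surrogate between the softmax-normalized mean embeddings of the two domains.
    ps = F.log_softmax(zs.mean(0), dim=0)
    pt = F.softmax(zt.mean(0), dim=0)
    kl = F.kl_div(ps, pt, reduction="sum")
    cls = F.cross_entropy(head(zs), ys)     # source label information
    loss = recon + 0.1 * kl + cls
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", float(loss))
```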

7.
Transfer learning is a widely investigated learning paradigm that was initially proposed to reuse informative knowledge from related domains when supervised information in the target domain is scarce but sufficiently available in multiple source domains. One of the challenging issues in transfer learning is how to handle the distribution differences between the source domains and the target domain. Most studies in the field implicitly assume that data distributions from the source domains and the target domain are similar in a well-designed feature space. However, it is often the case that label assignments for data in the source domains and the target domain are significantly different. Therefore, in reality, even if the distribution difference between a source domain and the target domain is reduced, the knowledge from multiple source domains is not well transferred to the target domain unless the label information is carefully considered. In addition, noisy data often emerge in real-world applications, and handling noisy data in the transfer learning setting is a challenging problem, as noisy data inevitably cause side effects during knowledge transfer. For these reasons, this paper proposes a framework that is robust against noise in the transfer learning setting and explicitly considers the differences in data distributions and label assignments among the multiple source domains and the target domain. Experimental results on one synthetic data set, three UCI data sets, and one real-world text data set at different noise levels demonstrate the effectiveness of the method.

8.
One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset when no labeled training data are available. Transfer learning and domain adaptation are two outstanding solutions that tackle this challenge by employing available datasets, even ones with significant differences in distribution and properties, and transferring knowledge from a related domain to the target domain. The main difference between these two solutions is their primary assumption about changes in the marginal and conditional distributions: transfer learning emphasizes problems with the same marginal distribution and different conditional distributions, while domain adaptation deals with the opposite conditions. Most prior works have exploited these two learning strategies separately for the domain shift problem, where the training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with domain shift in which the distribution difference is significantly large, particularly across vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distribution discrepancies across domains in an unsupervised manner, where no labels are available in the test set. Moreover, VDA constructs condensed domain-invariant clusters in the embedding representation to separate the various classes alongside the domain transfer. Pseudo target label refinement is employed to iteratively converge to the final solution; this iterative procedure, together with a novel optimization problem, creates a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets of different difficulties verify that VDA significantly outperforms state-of-the-art methods on image classification problems.
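To illustrate the pseudo-label refinement idea mentioned above in the simplest possible form, here is a numpy sketch (my own simplification, not VDA's optimization): target samples are assigned to their nearest class centroid, and the centroids are then re-estimated from source labels and the current target pseudo-labels until the assignment stabilizes.

```python
import numpy as np

rng = np.random.default_rng(1)
C, d = 3, 2
# Synthetic source data with known labels and a slightly shifted target domain.
Xs = np.concatenate([rng.normal(c * 3, 1.0, size=(60, d)) for c in range(C)])
ys = np.repeat(np.arange(C), 60)
Xt = np.concatenate([rng.normal(c * 3 + 0.8, 1.0, size=(60, d)) for c in range(C)])

centroids = np.stack([Xs[ys == c].mean(0) for c in range(C)])    # initialized from source
for it in range(10):
    dist = ((Xt[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    pseudo = dist.argmin(axis=1)                                  # refined pseudo-labels
    # Re-estimate class centroids jointly from source labels and target pseudo-labels.
    centroids = np.stack([
        np.concatenate([Xs[ys == c], Xt[pseudo == c]]).mean(0) for c in range(C)
    ])

print("pseudo-label counts per class:", np.bincount(pseudo, minlength=C))
```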

9.
王帆  韩忠义  尹义龙 《软件学报》2022,33(4):1183-1199
Unsupervised domain adaptation is one of the effective approaches to handling the distribution mismatch between the training set (source domain) and the test set (target domain). Existing theories and methods of unsupervised domain adaptation have achieved some success in relatively closed, static environments. In open, dynamic task environments, however, where constraints such as privacy protection and data silos often make the source data unavailable, the robustness of existing unsupervised domain adaptation methods faces severe challenges. In view of this, this work studies a more challenging yet under-explored…

10.
A Heterogeneous Transductive Transfer Learning Algorithm
杨柳  景丽萍  于剑 《软件学报》2015,26(11):2762-2780
When the target domain has only a small amount of labeled data, learning performance suffers, while related source domains contain some labeled data. Transfer learning addresses this situation by applying knowledge learned from a source domain that is different from, but related to, the target domain. In practical applications such as text-to-image and cross-language transfer learning, the feature spaces of the source and target domains differ; this is heterogeneous transfer learning. The focus here is on using labeled source data to improve the learning performance on unlabeled target data, i.e., heterogeneous transductive transfer learning. Because the feature spaces of the two domains differ, a key problem in heterogeneous transfer learning is learning a mapping function from the source domain to the target domain. This work proposes learning the mapping function by matching the feature spaces of the source and target domains in an unsupervised manner. The learned mapping re-represents the source data in the target domain, so the re-represented labeled source data can be transferred to the target domain. Standard machine learning methods (e.g., support vector machines) can then be used to train a classifier that predicts the classes of unlabeled target data. A probabilistic interpretation is given to show that the method is robust to some noise in the data, and a sample-complexity bound is derived, i.e., the number of samples needed to find the mapping function. Experimental results on four real-world datasets demonstrate the effectiveness of the method.
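A toy sketch of the overall pipeline under assumptions of my own (the unsupervised matching step is simulated with synthetic paired instances, which is not the paper's matching procedure): fit a linear source-to-target map from matched pairs by least squares, re-represent the labeled source data in the target space, and train a standard SVM on the mapped data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dt, ds, n = 10, 16, 300
B = rng.normal(size=(dt, ds))           # hidden map from a latent/target-style space to the source space

Z_s = rng.normal(size=(n, dt))          # latent representation of the source instances
Xs = Z_s @ B                            # labeled source data in its own feature space
ys = (Z_s[:, 0] > 0).astype(int)
Xt = rng.normal(size=(n, dt))           # unlabeled target data (target feature space)
yt_true = (Xt[:, 0] > 0).astype(int)    # held out, used only for evaluation

# Pretend some unsupervised matching produced paired cross-domain instances;
# fit the source->target map from those pairs by least squares.
Z_p = rng.normal(size=(100, dt))
pairs_src, pairs_tgt = Z_p @ B, Z_p
A_hat, *_ = np.linalg.lstsq(pairs_src, pairs_tgt, rcond=None)

Xs_mapped = Xs @ A_hat                  # source data re-represented in the target space
clf = SVC(kernel="rbf").fit(Xs_mapped, ys)   # a standard classifier on the mapped source
print("target accuracy:", (clf.predict(Xt) == yt_true).mean())
```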

11.
Unsupervised cross-domain transfer learning is a very important task in person re-identification. Given a labeled source domain and an unlabeled target domain, the key to unsupervised cross-domain transfer is to transfer as much knowledge as possible from the source domain to the target domain. However, current cross-domain transfer methods ignore the differences among the camera-view distributions within each domain, which degrades transfer performance. To address this shortcoming, this paper formulates a new problem of multi-view asymmetric cross-domain transfer learning. To realize this asymmetric transfer, a transfer method based on a many-to-many generative adversarial network (M2M-GAN) is proposed. The method embeds the specified source-view and target-view labels as guiding information and adds a view classifier to discriminate among the different view distributions, so that the model can automatically adopt different transfer strategies for different combinations of source and target views. Experiments on the person re-identification benchmarks Market1501, DukeMTMC-reID, and MSMT17 verify that the method effectively improves transfer performance and achieves higher unsupervised cross-domain person re-identification accuracy.

12.
To address the problem of missing subjective information in machine learning, a new transfer group-probability learning machine for shared data (TGPLM-CD) is proposed. Based on the structural risk minimization model, the method incorporates the knowledge contained in the source domain and the class-label group-probability information of the target domain, especially the data shared between the domains, into the learning framework. This realizes knowledge transfer between the source and target domains and improves classification accuracy when the data in the domain of interest are insufficient. Experimental results on a large number of datasets verify the effectiveness of the proposed method.

13.
李志恒 《计算机应用研究》2021,38(2):591-594,599
To address the mismatch between the probability distributions of training and test samples in machine learning, a semi-supervised domain adaptation method based on dropout regularization is proposed to transfer the feature representation of a neural network from a label-rich source domain to an unlabeled target domain. From the perspective of semi-supervised learning, a small amount of labeled target-domain data is added to the source-domain data, so that the network learns the feature distribution of the target domain while learning that of the source domain. Guided by this prior knowledge, the network can fit the target-domain data well even without abundant label information. Experimental results show that the algorithm outperforms existing methods on domain adaptation tasks over the typical digit datasets SVHN, MNIST, and USPS, and shows good robustness on domain adaptation tasks over the real-world datasets CIFAR-10 and STL-10, which cover a wide range of natural categories.
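A minimal PyTorch sketch of this training setup as described (the model size, synthetic data, and hyper-parameters are placeholders, not the paper's configuration): a dropout-regularized network is trained on the source data plus a small labeled subset of the target domain.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Dropout(p=0.5),            # dropout regularization
    nn.Linear(128, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Synthetic stand-ins: many labeled source samples, a few labeled target samples.
xs, ys = torch.randn(512, 64), torch.randint(0, 10, (512,))
xt_few, yt_few = torch.randn(32, 64) + 0.3, torch.randint(0, 10, (32,))
x_train = torch.cat([xs, xt_few])
y_train = torch.cat([ys, yt_few])

model.train()
for epoch in range(20):
    logits = model(x_train)
    loss = F.cross_entropy(logits, y_train)
    opt.zero_grad(); loss.backward(); opt.step()

model.eval()                      # dropout is disabled at evaluation time
print("training loss:", float(loss))
```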

14.
In unsupervised domain adaptation, the classifier easily produces confused predictions when predicting the classes of target-domain samples. Existing work has proposed algorithms that extract inter-class correlations among samples to reduce the classifier's class-confusion predictions on the target domain, but this still does not solve the insufficient transferability caused by the sparsity of features shared between the source and target domains. To address this problem, a generative adversarial network is used to perform style transfer on the source domain, expanding the shared features in the feature space of each source class that the target domain can match. This alleviates the classifier's insufficient positive transfer caused by sparse shared features and further reduces class-confusion predictions on the target domain. When the classifier uses the expanded shared features to predict class probabilities for target samples, an uncertainty-based weighting mechanism increases the weights on the predicted probabilities so that they stand out with higher values at a few probability peaks, accurately quantifying class confusion, minimizing cross-domain class-confusion predictions, and suppressing cross-domain negative transfer. In the UDA setting, domain adaptation experiments on the standard ImageCLEF-DA dataset and the three sub-datasets of Office-31 show average accuracy improvements of 1.3 and 1.7 percentage points, respectively, over the RADA algorithm.
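The sketch below shows one way to quantify class confusion with uncertainty-based sample weights, inspired by the description above but not the paper's exact loss: weights derived from prediction entropy re-scale a class-correlation matrix whose off-diagonal mass measures confusion.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(64, 4))          # classifier outputs on a target batch
P = softmax(logits)                        # predicted class probabilities

entropy = -(P * np.log(P + 1e-8)).sum(axis=1)
w = 1.0 + np.exp(-entropy)                 # more certain samples receive larger weights
w = len(w) * w / w.sum()

C = P.T @ (w[:, None] * P)                 # uncertainty-weighted class-correlation matrix
C = C / C.sum(axis=1, keepdims=True)       # row-normalize
confusion = (C.sum() - np.trace(C)) / C.shape[0]   # off-diagonal mass = class confusion
print("class-confusion score:", round(float(confusion), 4))
```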

15.
Instance-Based Transfer Learning for Multiple Source Domains
The defining characteristic of transfer learning is that it uses knowledge from related domains to help complete the learning task in the target domain. It can effectively share and transfer information between similar domains or tasks, turning traditional learning from scratch into cumulative learning, with advantages such as low cost and high efficiency. For the case where the data distributions of the source domains and the target domain are similar, an instance transfer learning method based on multi-source dynamic TrAdaBoost is proposed. The method considers knowledge from multiple source domains, so that learning of the target task can make full use of all source-domain information. Every time a candidate classifier is trained, all source-domain samples participate in learning and provide information useful to the target task, thereby avoiding negative transfer. Theoretical analysis verifies the advantage of the proposed algorithm over single-source transfer and shows that the added dynamic factor alleviates the problem of weight entropy drifting from the source samples to the target samples as the source weights converge. Experimental results verify the advantage of the algorithm in improving recognition accuracy.
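For context, here is a compact sketch of a boosting-style weight update in the spirit of dynamic TrAdaBoost with source and target sample pools; the exact formulas and the dynamic factor below follow common TrAdaBoost variants and are assumptions here, not this paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_tgt, T = 300, 100, 10
err_src = rng.integers(0, 2, size=(T, n_src))   # 0/1 errors of the round-t learner on source
err_tgt = rng.integers(0, 2, size=(T, n_tgt))   # 0/1 errors on (pseudo-labeled) target

w_src = np.full(n_src, 1.0 / (n_src + n_tgt))
w_tgt = np.full(n_tgt, 1.0 / (n_src + n_tgt))
beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / T))

for t in range(T):
    eps = np.clip((w_tgt * err_tgt[t]).sum() / w_tgt.sum(), 1e-6, 0.499)
    beta_t = eps / (1.0 - eps)
    # Source samples that the learner gets wrong are down-weighted...
    w_src *= beta_src ** err_src[t]
    # ...while misclassified target samples are up-weighted, as in AdaBoost.
    w_tgt *= beta_t ** (-err_tgt[t])
    # Dynamic factor: keeps the total source weight from collapsing too quickly.
    w_src *= 2.0 * (1.0 - eps)
    total = w_src.sum() + w_tgt.sum()
    w_src, w_tgt = w_src / total, w_tgt / total

print("final source weight mass:", round(float(w_src.sum()), 3))
```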

16.
17.
田青  孙灿宇  储奕 《软件学报》2024,35(4):1703-1716
As an emerging area of machine learning, multi-source partial domain adaptation (MSPDA) is challenging because of the complexity of the source domains themselves, the discrepancies among domains, and the unsupervised nature of the target domain, so few related works have been proposed. In this scenario, samples of irrelevant classes in the multiple source domains cause large accumulated errors and negative transfer during domain adaptation. In addition, most existing multi-source domain adaptation methods do not consider that different source domains contribute differently to the target task. Therefore, an adaptive-weight multi-source partial domain adaptation method (AW-MSPDA) is proposed. First, a diverse feature extractor is constructed to effectively exploit the prior knowledge of the source domains. Meanwhile, a multi-level distribution alignment strategy is designed to eliminate distribution discrepancies at different levels and promote positive transfer. Moreover, to quantify the contributions of different source domains and to filter out source samples of irrelevant classes, adaptive weights are constructed using a similarity metric and pseudo-label weighting. Finally, extensive experiments verify the generalization ability and superiority of the proposed AW-MSPDA algorithm.
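A speculative numpy sketch of per-source adaptive weighting (the similarity measure and the prototype construction are my assumptions, not the paper's definitions): each source domain's weight grows with the cosine similarity between its class prototypes and pseudo-labeled target prototypes.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
d, C = 8, 4
target_feat = rng.normal(size=(200, d))
target_pseudo = rng.integers(0, C, 200)                      # pseudo-labels on the target
tgt_proto = np.stack([target_feat[target_pseudo == c].mean(0) for c in range(C)])

sims = []
for s in range(3):                                           # three source domains (toy)
    src_feat = rng.normal(loc=0.2 * s, size=(300, d))
    src_label = rng.integers(0, C, 300)
    src_proto = np.stack([src_feat[src_label == c].mean(0) for c in range(C)])
    # Average per-class prototype similarity serves as this source's raw score.
    sims.append(np.mean([cosine(src_proto[c], tgt_proto[c]) for c in range(C)]))

weights = np.exp(np.array(sims))                             # softmax over similarity scores
weights /= weights.sum()
print("per-source weights:", np.round(weights, 3))
```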

18.
Most existing domain adaptation algorithms consider the scenario in which the source and target domains share the same classes, using the label-rich source domain to perform transfer learning on target-domain data that are scarcely labeled but similarly distributed, with many successful results. However, because real-world scenarios are complex and open, the class spaces of the source and target domains are not necessarily identical, and each may contain samples of unknown classes beyond the existing class set. For such challenging open-set scenarios, traditional domain adaptation algorithms are…

19.
In this paper, a novel unsupervised dimensionality reduction algorithm, unsupervised Globality-Locality Preserving Projections in Transfer Learning (UGLPTL), is proposed, based on the conventional Globality-Locality Preserving Projections (GLPP) algorithm, which does not work well in real-world transfer learning (TL) applications. In TL applications, one application (the source domain) contains sufficient labeled data, while the related application (the target domain) contains only unlabeled data. Compared to existing TL methods, the proposed method incorporates all the objectives essential for transfer learning applications, such as minimizing the marginal and conditional distribution discrepancies between the domains, maximizing the variance of the target domain, and performing geometrical diffusion on manifolds. UGLPTL seeks a projection vector that projects the source- and target-domain data into a common subspace where both the labeled source data and the unlabeled target data can be used to perform dimensionality reduction. Comprehensive experiments verify that the proposed method outperforms many state-of-the-art non-transfer-learning and transfer-learning methods on two popular real-world cross-domain visual transfer learning data sets. UGLPTL achieved mean accuracies of 82.18% and 87.14% over all tasks of the PIE Face and Office-Caltech data sets, respectively.
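For intuition, here is a simplified, TCA-flavoured sketch of learning a shared projection that reduces the marginal MMD between domains while preserving data variance. The actual UGLPTL objective also handles conditional distributions and manifold structure; those parts are omitted, and everything below is an approximation rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nt, d, k = 120, 100, 30, 5
Xs = rng.normal(size=(ns, d))
Xt = rng.normal(loc=0.5, size=(nt, d))

X = np.vstack([Xs, Xt]).T                       # d x (ns + nt), columns are samples
n = ns + nt

# MMD coefficient matrix M = e e^T with e_i = 1/ns for source, -1/nt for target samples.
e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
M = np.outer(e, e)
H = np.eye(n) - np.ones((n, n)) / n             # centering matrix for the variance term

mu = 1.0
A = X @ M @ X.T + mu * np.eye(d)                # discrepancy to minimize + regularizer
B = X @ H @ X.T                                 # variance to preserve
# Leading eigenvectors of A^{-1} B give the projection directions.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(A, B))
order = np.argsort(-eigvals.real)
W = eigvecs[:, order[:k]].real                  # d x k projection matrix

Zs, Zt = Xs @ W, Xt @ W                         # both domains in the shared subspace
print("projected shapes:", Zs.shape, Zt.shape)
```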

20.
The application of transfer learning to effectively identify rolling bearing faults has been attracting much attention. Most current studies are based on a single source domain or multiple source domains constructed from different working conditions of the same machine. In practical scenarios, however, it is common to obtain multiple source domains from different machines, which raises new challenges for how to use these source domains to complete fault diagnosis. To solve this issue, a conditional distribution-guided adversarial transfer learning network with multi-source domains (CDGATLN) is developed for fault diagnosis of bearings installed on different machines. First, knowledge from the multi-source domains of different machines is transferred to the single target domain by decreasing the data distribution discrepancy between each source domain and the target domain. Then, a conditional distribution-guided alignment strategy is introduced to decrease the conditional distribution discrepancy and to calculate the importance of each source domain based on that discrepancy, so as to promote knowledge transfer from each source domain. Finally, a monotone importance specification mechanism is constructed to constrain each importance value so that source domains with low importance are not discarded, allowing the knowledge of every source domain to participate in building the model. Extensive experimental results verify the effectiveness and superiority of CDGATLN.
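A rough sketch of the importance idea described above, with terminology and formulas assumed rather than taken from the paper: per-source importances are derived from a class-conditional discrepancy to the target and then floored so that no source domain is discarded entirely.

```python
import numpy as np

def mmd_linear(a, b):
    """Linear-kernel MMD between two sample sets (squared mean difference)."""
    return float(((a.mean(0) - b.mean(0)) ** 2).sum())

rng = np.random.default_rng(0)
C, d = 3, 6
tgt_feat = rng.normal(size=(150, d))
tgt_pseudo = rng.integers(0, C, 150)               # pseudo-labels on the target domain

importances = []
for s in range(4):                                  # four source machines/domains (toy)
    src_feat = rng.normal(loc=0.3 * s, size=(200, d))
    src_label = rng.integers(0, C, 200)
    # Conditional discrepancy: average class-wise MMD between source and target.
    disc = np.mean([mmd_linear(src_feat[src_label == c], tgt_feat[tgt_pseudo == c])
                    for c in range(C)])
    importances.append(np.exp(-disc))               # smaller discrepancy -> larger importance

imp = np.array(importances) / np.sum(importances)
imp = np.maximum(imp, 0.05)                         # floor: keep every source domain in play
imp /= imp.sum()
print("source importances:", np.round(imp, 3))
```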
