Similar Documents (20 found)
1.
Abstract: Cross-domain classification aims to use labeled source-domain information to train an accurate classifier for an unlabeled target domain whose probability distribution differs from the source. Most existing work represents text by its topics and builds a mapping between the domains' specific topics through a single set of shared topics. In reality, however, two domains can be connected from multiple angles, and a mapping built on a single set of shared topics suffers from incomplete and biased semantic representation, which hurts cross-domain classification accuracy. To address this, a cross-domain classification method based on multi-bridge mapping is proposed: it extracts multiple shared topics and multiple domain-specific topics, then uses the shared topics as bridges to build multiple mappings between the domain-specific topics, achieving cross-domain classification. Experiments on the 20Newsgroups and Reuters-21578 datasets show that the proposed algorithm outperforms comparable algorithms in classification accuracy.
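The abstract gives no implementation details; below is a minimal sketch of one way to separate shared from domain-specific topics, assuming per-domain NMF topic models over a shared vocabulary and a cosine-similarity matching rule. The function name, threshold, and matching heuristic are illustrative assumptions, not the paper's method.

```python
# Sketch: fit a topic model per domain, then match topics across domains.
# Matched pairs play the role of "bridges"; unmatched topics are treated
# as domain-specific.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

def split_topics(src_docs, tgt_docs, n_topics=20, threshold=0.5):
    vec = TfidfVectorizer(max_features=5000)
    vec.fit(src_docs + tgt_docs)                        # shared vocabulary
    H_s = NMF(n_components=n_topics, init="nndsvda", random_state=0) \
        .fit(vec.transform(src_docs)).components_       # source topic-word matrix
    H_t = NMF(n_components=n_topics, init="nndsvda", random_state=0) \
        .fit(vec.transform(tgt_docs)).components_       # target topic-word matrix
    sim = cosine_similarity(H_s, H_t)                   # topic-to-topic similarity
    shared = [(i, int(sim[i].argmax())) for i in range(n_topics)
              if sim[i].max() >= threshold]             # matched pairs = bridges
    src_specific = [i for i in range(n_topics) if sim[i].max() < threshold]
    tgt_specific = [j for j in range(n_topics) if sim[:, j].max() < threshold]
    return shared, src_specific, tgt_specific
```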

2.
黄贤立. 《计算机工程》(Computer Engineering), 2010, 36(24): 186-188.
Cross-domain text classification uses knowledge from a labeled domain to help classify an unlabeled domain with a different probability distribution. This paper proposes a new cross-domain text classification method, the MTV algorithm, from a multi-view learning perspective. By introducing label-related information into kernel canonical correlation analysis, MTV obtains a common subspace with better discriminative power. Experiments on several sentiment text datasets show that MTV greatly improves the performance of traditional supervised learning algorithms under domain shift, and that the discriminative kernel CCA improves performance further.
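To illustrate the multi-view idea, here is a minimal sketch using plain linear CCA from scikit-learn. Note this is not the paper's algorithm: MTV uses a kernelized, label-informed variant of CCA, which scikit-learn does not provide; the feature views and shapes below are made up for the example.

```python
# Project two feature views of the same documents into a common subspace
# with CCA; a source-trained classifier on the projection can then be
# applied to projected target data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_view1 = rng.normal(size=(200, 50))   # e.g., bag-of-words features
X_view2 = rng.normal(size=(200, 30))   # e.g., a second feature view

cca = CCA(n_components=10)
Z1, Z2 = cca.fit_transform(X_view1, X_view2)  # coordinates in the common subspace
```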

3.
In unsupervised domain adaptation, the classifier easily produces confused predictions on target-domain samples. Prior work extracts inter-class correlations to reduce the classifier's class confusion on the target domain, but it does not solve the weak transfer caused by sparse shared features between the source and target domains. To address this, a generative adversarial network is used to perform style transfer on the source domain, expanding the shared features of each source class that the target domain can match; this alleviates the weak positive transfer caused by sparse shared features and further reduces class confusion on the target domain. When the classifier predicts class probabilities for target samples from the expanded shared features, an uncertainty-based weighting mechanism amplifies the prediction-probability weights so that a few probability peaks stand out with higher values, quantifying class confusion accurately, minimizing cross-domain class-confusion predictions, and suppressing negative transfer. In UDA experiments on the standard ImageCLEF-DA dataset and the three sub-datasets of Office-31, average recognition accuracy improves over the RADA algorithm by 1.3 and 1.7 percentage points, respectively.
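For intuition, here is a sketch of an uncertainty-weighted class-confusion score in the spirit of the abstract, modeled on the published Minimum Class Confusion loss; the temperature, weighting rule, and normalization are illustrative assumptions, not the paper's exact formulation.

```python
# Class confusion on a target batch: build an uncertainty-weighted
# class-correlation matrix from softmax outputs; off-diagonal mass
# measures how much the classifier confuses pairs of classes.
import numpy as np

def class_confusion(probs, T=2.0):
    """probs: (n, c) softmax outputs on a target batch."""
    logits = np.log(probs + 1e-12)
    p = np.exp(logits / T)
    p /= p.sum(axis=1, keepdims=True)            # temperature-rescaled probabilities
    ent = -(p * np.log(p + 1e-12)).sum(axis=1)   # per-sample prediction entropy
    w = 1.0 + np.exp(-ent)                       # confident samples weigh more
    w = len(w) * w / w.sum()
    C = (p * w[:, None]).T @ p                   # weighted class-correlation matrix
    C /= C.sum(axis=1, keepdims=True)            # row-normalize
    return (C.sum() - np.trace(C)) / C.shape[0]  # confusion = off-diagonal mass

probs = np.random.default_rng(0).dirichlet(np.ones(5), size=32)
print(round(class_confusion(probs), 4))
```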

4.
In practical applications, sentiment analysis provides effective decisions for automatically judging the polarity of text, but it depends on large numbers of labeled samples. To reduce reliance on manual annotation, researchers have proposed cross-domain sentiment analysis based on domain adaptation, which transfers a source-domain model trained on labeled samples to an unlabeled target domain. Current domain adaptation techniques, however, transfer from a single angle only: either reducing the discrepancy of domain-specific features or extracting domain-invariant features. Considering that cross-domain text data contains both domain-specific and domain-invariant features, an unsupervised cross-domain sentiment analysis algorithm with domain-aligned adversarial learning is proposed. Through a progressive transfer strategy it reduces domain discrepancy layer by layer across semantic levels, and in the high-level semantic subspace it transfers domain knowledge via a jointly optimized domain adaptation algorithm. Across 24 cross-domain sentiment classification experiments on two public datasets, compared with representative and state-of-the-art methods from four families of domain adaptation algorithms, the proposed algorithm achieves the highest average classification accuracy in all 24 experiments; transfer-performance analysis and feature-distribution visualization further show that it improves both the classification and the transfer performance of existing unsupervised cross-domain sentiment analysis algorithms.
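Adversarial domain alignment of the kind the abstract describes is commonly implemented with a gradient reversal layer (DANN-style). A minimal PyTorch sketch follows; the paper's progressive, layer-wise variant is more elaborate, and the layer sizes here are arbitrary.

```python
# Gradient reversal: the domain head learns to tell domains apart, while
# reversed gradients push the feature extractor toward domain-invariant
# representations.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # flip gradients on the way back

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

domain_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
features = torch.randn(32, 128, requires_grad=True)   # stand-in for encoder output
domain_logits = domain_head(grad_reverse(features))   # domain classification branch
```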

5.
To improve the cross-domain robustness of authorship identification and to address transferring an author's writing habits across domains, this paper first shows through analysis and experiments that nouns are highly domain-dependent. A text-deformation algorithm then masks out the nouns, lowering the weight of domain-related features and forcing the learning algorithm to fit samples with features less tied to the domain, thereby improving generalization. In cross-domain authorship experiments on 21,953 samples, three typical identification methods (character n-gram based, BERT based, and ensemble based) were compared without masking and with masking of nouns, adjectives, verbs, adverbs, and function words; masking nouns achieved the best evaluation scores. The results show that masking nouns improves the cross-domain robustness of authorship identification.
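A minimal sketch of noun masking using NLTK part-of-speech tags; the paper's own text-deformation procedure may differ, and the resource names downloaded below vary across NLTK versions (hence the guarded loop).

```python
# Replace every noun token with a placeholder so the classifier cannot
# latch onto topic words when fitting authorship features.
import nltk

for res in ("punkt", "punkt_tab",
            "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    try:
        nltk.download(res, quiet=True)   # resource names differ by NLTK version
    except Exception:
        pass

def mask_nouns(text, placeholder="[N]"):
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    # Penn Treebank noun tags all start with "NN" (NN, NNS, NNP, NNPS).
    return " ".join(placeholder if tag.startswith("NN") else tok
                    for tok, tag in tagged)

print(mask_nouns("The detective examined the letter in the garden."))
```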

6.
Neural network models are widely used for cross-domain classification. The marginalized stacked denoising autoencoder (mSDA) learns a common, robust feature representation space by applying marginalized corruption noise to source- and target-domain data, thereby addressing domain adaptation. However, mSDA applies the same marginalized corruption to every feature, ignoring that different features affect the classification result differently. This paper therefore applies feature-specific noise levels and proposes the multi-marginalized denoising autoencoder (M-MDA). First, an improved weighted log-likelihood ratio (weighted log-likelihood ratio update, WLLRU) distinguishes shared from domain-specific features. Then, based on each feature's distance between the two domains, shared and domain-specific features receive different marginalized corruption, and more robust features are learned with a single-layer marginalized denoising autoencoder (MDA). Finally, the new feature space is corrupted a second time to strengthen the proportion of shared features. Experimental results show that the method outperforms baseline algorithms on cross-domain sentiment classification.
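The single-layer marginalized denoising autoencoder it builds on has a well-known closed-form solution (Chen et al., 2012). Below is a numpy sketch with a single uniform corruption probability p; M-MDA's feature-specific noise levels would replace the constant q vector. The ridge term is an implementation convenience, not part of the derivation.

```python
# One MDA layer: W minimizes the expected reconstruction error over all
# corruptions, marginalized in closed form; h = tanh(W x) is the new
# representation.
import numpy as np

def mda(X, p=0.5):
    """X: (d, n) data matrix; p: feature corruption probability."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])            # append bias row
    q = np.ones((d + 1, 1)) * (1 - p)
    q[-1] = 1.0                                     # bias is never corrupted
    S = Xb @ Xb.T                                   # scatter matrix
    Q = S * (q @ q.T)                               # E[corrupt x corrupt^T], off-diag
    np.fill_diagonal(Q, q.ravel() * np.diag(S))     # ... and its diagonal
    P = S[:d, :] * q.T                              # E[clean x corrupt^T]
    W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T   # solve W Q = P
    return np.tanh(W @ Xb)                          # (d, n) robust features

H = mda(np.random.default_rng(0).normal(size=(20, 100)))
```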

7.
A Subspace Transfer Learning Algorithm Integrating Heterogeneous Features
Feature-based transfer focuses on learning over features shared between domains, but it ignores the discriminative information in domain-specific features, which limits the algorithm's adaptability. To address this, a subspace transfer learning algorithm integrating heterogeneous features (STL-IHF) is proposed. The algorithm treats the data's feature space as the combination of a shared subspace and a domain-specific subspace, and embeds both shared and specific features into the training of a support vector machine (SVM) under the empirical risk minimization framework. It transfers knowledge in the shared feature subspace while also exploiting the domains' specific heterogeneous information, improving adaptability. Experimental results on simulated and real datasets demonstrate the effectiveness of the proposed method.
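A sketch of the "shared plus specific subspaces" idea. Here the subspaces are simply PCA bases fit on pooled versus per-domain data, an illustrative stand-in for the subspaces STL-IHF learns jointly with the SVM; all data and shapes are made up.

```python
# Train an SVM on the concatenation of shared-subspace and
# source-specific-subspace coordinates.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(300, 40)), rng.integers(0, 2, 300)
X_tgt = rng.normal(loc=0.5, size=(200, 40))

shared = PCA(n_components=5).fit(np.vstack([X_src, X_tgt]))  # shared subspace
specific = PCA(n_components=5).fit(X_src)                    # source-specific subspace

Z_src = np.hstack([shared.transform(X_src), specific.transform(X_src)])
clf = LinearSVC().fit(Z_src, y_src)   # SVM sees both kinds of features
```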

8.
《计算机工程》(Computer Engineering), 2018, (4): 310-316.
In image classification tasks, changes in background, illumination, and shooting angle mean that a classification model trained on a source domain often does not fit the image data of a related target domain. This paper proposes a transfer learning method based on deep convolutional neural networks, the sparse discriminative transfer model, which adaptively learns the discriminative feature distribution of the target domain to optimize the classification function, and combines well with feature preprocessing methods for complementary gains. Experimental results show that, compared with existing baseline and deep transfer methods, the method achieves better classification performance on two standard cross-domain datasets, Office-Caltech and Office-31.
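For orientation, here is the common deep-transfer baseline implied by this setting: fine-tuning an ImageNet-pretrained CNN for the target classes. This is a generic sketch, not the paper's sparse discriminative model.

```python
# Freeze a pretrained backbone and retrain only the classification head
# on labeled source images; evaluate on the target domain.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 31)  # Office-31 has 31 classes
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```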

9.
Advances in Cross-Domain Text Sentiment Classification
赵传君, 王素格, 李德玉. 《软件学报》(Journal of Software), 2020, 31(6): 1723-1746.
As an important research topic in sentiment analysis of social media text, cross-domain text sentiment classification uses source-domain resources or models, via transfer, to serve the sentiment classification task in a target domain, effectively alleviating the shortage of labeled data there. This survey organizes cross-domain sentiment classification methods from three angles: (1) by whether the target domain contains labeled data, into transductive and inductive sentiment transfer methods; (2) by sentiment adaptation strategy, into instance transfer, feature transfer, model transfer, lexicon-based, joint sentiment-topic, and graph-model methods; (3) by the number of available source domains, into single-source and multi-source methods. It also introduces deep transfer learning and its latest applications to cross-domain sentiment classification. Finally, it discusses the key technical problems the field faces and looks ahead to possible breakthroughs.

10.
In recent years, cross-domain sentiment-orientation analysis of text has become a research hotspot in natural language processing. It uses source-domain text with labeled polarity to predict the polarity of target-domain text. However, because data from different domains often follow different distributions, traditional supervised classification models usually perform poorly. To solve this, an analysis model based on weighted SimRank is proposed. Building on the weighted SimRank algorithm, the model constructs a latent feature space, learns a mapping function in that space, and remaps every sample, thereby reducing the distribution gap between domains and achieving cross-domain sentiment classification. Experiments verify the effectiveness of the method.
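As background, a sketch of the plain (unweighted) SimRank iteration that the paper's weighted variant builds on; the feature weighting is the paper's extension and is not shown here.

```python
# SimRank: two nodes are similar if their in-neighbors are similar.
# Matrix form of the iteration: S <- C * W^T S W, with s(a, a) = 1.
import numpy as np

def simrank(A, C=0.8, iters=20):
    """A: (n, n) adjacency matrix (A[i, j] = 1 for edge i -> j)."""
    n = A.shape[0]
    cols = A.sum(axis=0, keepdims=True)
    W = np.divide(A, cols, out=np.zeros_like(A, dtype=float), where=cols > 0)
    S = np.eye(n)
    for _ in range(iters):
        S = C * (W.T @ S @ W)
        np.fill_diagonal(S, 1.0)   # a node is maximally similar to itself
    return S

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(simrank(A).round(3))
```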

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230.
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276.
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
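A standard calculation that illustrates why independent mediocre hypotheses help (this is the classical majority-vote bound, not the paper's model, which targets arbitrary-accuracy approximation): if each of k independent hypotheses is correct with probability 1/2 + γ, Hoeffding's inequality bounds the majority vote's error by

```latex
\Pr\bigl[\text{majority vote errs}\bigr] \;\le\; \exp\!\left(-2\gamma^{2} k\right),
```

so the error decays exponentially in the number of independent hypotheses.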

13.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.
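For reference, the now-standard formalization of the agnostic goal (a common textbook statement consistent with, though not quoted from, the paper): given a distribution D over X × Y and a hypothesis class H, the learner must output a hypothesis that competes with the best one in H,

```latex
\operatorname{err}_D(h) \;=\; \Pr_{(x,y)\sim D}\bigl[h(x)\neq y\bigr],
\qquad
\operatorname{err}_D(\hat h) \;\le\; \min_{h\in\mathcal H}\operatorname{err}_D(h) \;+\; \epsilon,
```

with no assumption that the labels are generated by any member of H.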

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
刘晓, 毛宁. 《数据采集与处理》(Journal of Data Acquisition and Processing), 2015, 30(6): 1310-1317.
A learning automaton (LA) is an adaptive decision-maker that learns, through continual interaction with a random environment, to select the optimal action from an allowed action set. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly from that interval. The learning algorithm adaptively updates the interval endpoints using binary feedback signals from the environment. A simulation experiment on a multimodal learning problem demonstrates the advantage of the new algorithm over three existing CALA algorithms.
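A minimal sketch of the interval idea: sample an action uniformly from [lo, hi], then contract the interval around rewarded actions. The specific update rule below (recenter and shrink on reward, relax otherwise) is an illustrative assumption, not the paper's algorithm.

```python
# Interval-based continuous-action learning automaton (toy version).
import random

def cala(env_reward, lo=0.0, hi=1.0, shrink=0.99, steps=2000, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        a = rng.uniform(lo, hi)                 # action drawn uniformly
        if env_reward(a):                       # binary feedback from environment
            center, width = a, (hi - lo) * shrink          # contract toward a
        else:
            center, width = (lo + hi) / 2, min(1.0, (hi - lo) / shrink)  # relax
        lo, hi = center - width / 2, center + width / 2
    return (lo + hi) / 2

# Example: a noisy environment that rewards actions near 0.3 more often.
est = cala(lambda a: random.random() > abs(a - 0.3))
print(round(est, 3))
```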

16.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
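A sketch of the core trick: since rewards are linear in features, a task's value function is w·Ψ(s), where Ψ(s) stores expected discounted reward-feature totals, so a new task's value function can be initialized from stored ones. Shapes, names, and the random data below are illustrative.

```python
# Initialize the value function for a new reward-weight vector as the
# best achievable value over the stored solved tasks.
import numpy as np

n_states, n_features = 10, 3
rng = np.random.default_rng(0)

# One feature value function per previously solved task: (n_states, n_features).
stored_psi = [rng.uniform(size=(n_states, n_features)) for _ in range(4)]

def init_value_function(w_new):
    """Optimistic start: max over stored policies' values under w_new."""
    candidates = np.stack([psi @ w_new for psi in stored_psi])  # (tasks, states)
    return candidates.max(axis=0)

V0 = init_value_function(np.array([1.0, -0.5, 2.0]))
print(V0.round(2))   # starting point for value iteration on the new SMDP
```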

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
19.
Applying Varying Degrees of Supervision in Automatic Text Classification
Automatic text classification involves information retrieval, pattern recognition, and machine learning. Organized by degree of supervision, this paper surveys several methods belonging to fully supervised, unsupervised, and semi-supervised learning strategies: NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among these, gSOM is a semi-supervised form we developed from SOM. Using the Reuters-21578 corpus, the paper studies how the degree of supervision affects classification performance and offers recommendations for practical text classification work.
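To ground the fully supervised end of the survey's spectrum, here is the standard scikit-learn recipe for a Naive Bayes text classifier on a tiny Reuters-style toy corpus (illustrative data, not code from the paper).

```python
# Bag-of-words counts feeding a multinomial Naive Bayes classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["crude oil prices rise", "wheat harvest forecast",
        "oil pipeline shipment", "grain wheat exports"]
labels = ["oil", "grain", "oil", "grain"]   # illustrative Reuters-style topics

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["oil shipment forecast"]))   # expected: ['oil']
```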

20.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.
