Similar Articles
20 similar articles found.
1.
A stopping criterion for active learning
Active learning (AL) is a framework that attempts to reduce the cost of annotating training material for statistical learning methods. While many papers have reported impressive annotation savings from applying AL to natural language processing tasks, little work has been done on defining a stopping criterion. In this work, we present a stopping criterion for active learning based on the way instances are selected during uncertainty-based sampling and verify its applicability in a variety of settings. The statistical learning models used in our study are support vector machines (SVMs), maximum entropy models and Bayesian logistic regression, and the tasks performed are text classification, named entity recognition and shallow parsing. In addition, we present a method for multiclass mutually exclusive SVM active learning.
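The abstract does not spell out the exact rule, but a minimal sketch of a confidence-based stopping criterion for uncertainty sampling (the specific rule and threshold here are illustrative assumptions, not the paper's method) could look like:

```python
import numpy as np

def should_stop(unlabeled_probs, threshold=0.95):
    """Stop querying once even the least confident unlabeled instance
    (the one uncertainty sampling would pick next) is already classified
    with confidence above `threshold`."""
    confidences = unlabeled_probs.max(axis=1)  # max class probability per instance
    return confidences.min() >= threshold
```

The intuition matches uncertainty-based sampling: when the pool no longer contains any instance the sampler considers informative, further annotation buys little.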

2.
The support vector machine (SVM) is a general and powerful supervised learning machine. In many practical machine learning and data mining applications, however, unlabeled training examples are readily available while labeled ones are expensive to obtain, and semi-supervised learning has emerged to meet this need. The combination of SVMs with semi-supervised principles such as transductive learning has attracted increasing attention. The transductive support vector machine (TSVM) learns a large-margin hyperplane classifier from labeled training data while simultaneously forcing this hyperplane to stay far away from the unlabeled data. TSVM might seem to be the perfect semi-supervised algorithm, since it combines the powerful regularization of SVMs with a direct implementation of the clustering assumption; nevertheless, its objective function is non-convex and therefore difficult to optimize. This paper aims to solve this problem. We apply the least squares support vector machine to implement TSVM, which ensures that the objective function is convex and that the optimal solution can be found by solving a set of linear equations. Simulation results demonstrate that the proposed method can effectively exploit unlabeled data to yield good performance.
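The key step the abstract describes — replacing the SVM's quadratic program with a least-squares formulation solved as one linear system — can be sketched for the plain supervised LS-SVM (the kernel choice and the `gamma`/`sigma` values are illustrative assumptions):

```python
import numpy as np

def lssvm_train(X, y, gamma=100.0, sigma=1.0):
    """Least squares SVM: the dual reduces to a single linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))  # RBF kernel matrix
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]  # alpha, b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return np.sign(K @ alpha + b)
```

Because the solution comes from `np.linalg.solve` rather than an iterative QP solver, extending it transductively (as the paper does) keeps each inner optimization cheap.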

3.
Personalized transductive learning (PTL) builds a unique local model for the classification of each individual test sample and is therefore practically neighborhood dependent; i.e., a specific model is built in a subspace spanned by a set of samples adjacent to the test sample. While existing PTL methods usually define the neighborhood by a predefined (dis)similarity measure, this paper introduces the new concept of a knowledgeable neighborhood and a transductive support vector machine (SVM) classification tree (t-SVMT) for PTL. The neighborhood of a test sample is constructed over the classification knowledge modelled by regional SVMs, and a set of such SVMs adjacent to the test sample is systematically aggregated into a t-SVMT. Compared to a regular SVM and other SVMTs, a t-SVMT, by virtue of the aggregation of SVMs, has an inherent superiority in classifying class-imbalanced datasets. The t-SVMT also solves the over-fitting problem of previous SVMTs, since it aggregates neighborhood knowledge and thus significantly reduces the size of the SVM tree. The properties of the t-SVMT are evaluated through experiments on a synthetic dataset, eight benchmark cancer diagnosis datasets, and a case study of face membership authentication.

4.
李远航  刘波  唐侨 《计算机科学》2014,41(11):260-264
Active learning has been widely applied to graph data, but its application to the classification of multi-label graph data is comparatively rare. Combining active learning based on error-bound minimization, this paper presents a classification method for multi-label graph data: a series of objective functions is obtained through multi-label classification and learning with local and global consistency (LLGC), and these are used to minimize the transductive Rademacher complexity, yielding a minimal upper bound on the generalization error and thereby selecting a small number of highly informative nodes on the graph. Experiments show that a multi-label classifier using this method achieves high accuracy.
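The LLGC component this abstract builds on has a standard iterative form, F_{t+1} = αSF_t + (1−α)Y, where S is the symmetrically normalized adjacency matrix; a minimal sketch (the toy graph and parameter values are illustrative):

```python
import numpy as np

def llgc(W, Y, alpha=0.9, iters=100):
    """Learning with local and global consistency on a graph.
    W: symmetric adjacency matrix; Y: one-hot labels (zero rows = unlabeled)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt  # normalized graph operator
    F = Y.astype(float).copy()
    for _ in range(iters):
        # propagate label mass along edges while anchoring to the seeds
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)
```

On a 4-node chain with the two endpoints labeled, the two interior nodes inherit the label of their nearer seed.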

5.
刁树民  王永利 《计算机应用》2009,29(6):1578-1581
When making combined decisions, existing ensemble classification methods require common labeled training samples that are valid for all of the component classifiers. To solve the problem of ensemble classification decisions over data streams when no labeled samples are available, a fusion strategy for data-stream ensemble classifiers based on constraint learning is proposed. When deciding on a test sample, a method satisfying the constraint measure of each local classifier is designed according to transductive learning theory, which guarantees the feasibility of the constraints and solves the maximum-entropy transductive extension problem in distributed classification aggregation. Experiments on test datasets show that, compared with existing transductive learning methods, this method achieves better decision accuracy and can be applied to the fusion of data-stream ensemble classification.

6.
肖建鹏  张来顺  任星 《计算机应用》2008,28(7):1642-1644
To address the low accuracy, slow learning, and extensive backtracking of transductive support vector machines when classifying large datasets, a transductive support vector machine classification algorithm based on incremental learning is proposed. Incremental learning is introduced into the transductive SVM so that only useful samples are retained, and useless ones discarded, during training, which reduces learning time and improves classification speed. Experimental results show that the algorithm achieves faster classification with higher accuracy.

7.
A progressive transductive classification learning algorithm based on support vector machines
The support vector machine is a pattern recognition method developed in recent years on the basis of statistical learning theory, and it exhibits distinctive advantages in solving small-sample, nonlinear, and high-dimensional pattern recognition problems. Transductive inference attempts to establish methods and criteria for recognizing specific unknown samples from known samples. Compared with traditional inductive learning, transductive learning is often more general and of greater practical significance. This paper proposes a progressive transductive classification learning algorithm based on support vector machines, which achieves good learning results on mixed training sets consisting of a small number of labeled samples and a large number of unlabeled samples.
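The progressive idea — repeatedly letting the current model label the unlabeled point it is most confident about and absorbing it into the training set — can be sketched with a nearest-centroid base learner standing in for the SVM (the base learner and the distance-margin confidence measure are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def progressive_transduction(X_l, y_l, X_u, max_iter=20):
    """Each round, label the single most confidently classified unlabeled
    point and move it into the labeled set."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(max_iter):
        if len(X_u) == 0:
            break
        classes = np.unique(y_l)
        cents = np.array([X_l[y_l == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X_u[:, None, :] - cents[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        ds = np.sort(d, axis=1)
        margin = ds[:, 1] - ds[:, 0]  # confidence: gap between nearest centroids
        best = margin.argmax()
        X_l = np.vstack([X_l, X_u[best]])
        y_l = np.append(y_l, pred[best])
        X_u = np.delete(X_u, best, axis=0)
    return X_l, y_l
```

Absorbing points one at a time lets later rounds benefit from earlier, easier decisions, which is the essence of the progressive scheme.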

8.
There has recently been a growing interest in the use of transductive inference for learning. We expand here the scope of transductive inference to active learning in a stream-based setting. Towards that end, this paper proposes Query-by-Transduction (QBT) as a novel active learning algorithm. QBT queries the label of an example based on the p-values obtained using transduction. We show that QBT is closely related to Query-by-Committee (QBC) using relations between transduction, Bayesian statistical testing, Kullback-Leibler divergence, and Shannon information. The feasibility and utility of QBT are shown on both binary and multi-class classification tasks using SVM as the classifier of choice. Our experimental results show that QBT compares favorably, in terms of mean generalization, against random sampling, committee-based active learning, margin-based active learning, and QBC in the stream-based setting.
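The transductive p-values QBT builds on are straightforward to compute; the sketch below shows the conformal p-value and one plausible query rule (the margin-based rule and its threshold are illustrative assumptions, not the paper's exact criterion):

```python
import numpy as np

def p_value(cal_nonconformity, candidate_score):
    """Transductive (conformal) p-value: the fraction of calibration
    nonconformity scores at least as large as the candidate's."""
    n = len(cal_nonconformity)
    return (np.sum(cal_nonconformity >= candidate_score) + 1) / (n + 1)

def qbt_should_query(p_values, margin=0.2):
    """Query the label when the two most plausible labels have close
    p-values, i.e. transduction cannot separate them confidently."""
    top = sorted(p_values, reverse=True)
    return (top[0] - top[1]) < margin
```

Each candidate label for a streaming example gets its own p-value; a small gap between the top two signals an informative example.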

9.
冀中  孙涛  于云龙 《软件学报》2017,28(11):2961-2970
The goal of zero-shot classification is to recognize and classify samples from categories that did not appear during training; the main idea is to transfer knowledge from seen classes to unseen classes with the help of class semantic information. This paper proposes a transductive dictionary learning method comprising two steps: first, a discriminative dictionary learning model builds a mapping between the visual features of labeled seen-class samples and class semantic features; second, to address the domain shift caused by the difference between seen and unseen classes, a correction model based on transductive learning is proposed. Experimental results on three benchmark datasets (AwA, CUB, and SUN) demonstrate the effectiveness and superiority of the method.
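The paper's method is dictionary learning, but the underlying visual-to-semantic mapping step can be illustrated with a much simpler ridge-regression baseline (a stand-in sketch under that assumption, not the proposed model):

```python
import numpy as np

def learn_mapping(X, S, lam=1e-3):
    """Ridge map from visual features X (n x d) to the semantic vectors
    S (n x k) of each sample's class: W = (X^T X + lam*I)^-1 X^T S."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

def zero_shot_predict(W, X_test, unseen_prototypes):
    """Project test samples into semantic space and assign each to the
    nearest unseen-class semantic prototype."""
    S_hat = X_test @ W
    d = np.linalg.norm(S_hat[:, None, :] - unseen_prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)
```

The domain-shift correction the paper adds would adjust either `W` or the prototypes using the unlabeled unseen-class data; that step is omitted here.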

10.
张新  何苯  罗铁坚  李东星 《软件学报》2014,25(12):2865-2876
In recent years, Twitter search has attracted increasing attention from researchers in the social-network field. Although learning to rank can fuse Twitter's rich features, the scarcity of training data degrades its performance. Transductive learning, a common semi-supervised method, plays an important role in addressing this scarcity. Because noise is generated during the iterations of transductive learning, clustering-based transductive learning methods have been proposed; they have two important parameters, the clustering threshold and the number of clustered documents. Building on previous work, this paper applies a different clustering algorithm. Extensive experiments on the standard TREC Tweets11 dataset show that both the clustering threshold and the number of documents used in clustering affect retrieval performance. The robustness of the clustering-based transductive model across different query sets is also analyzed. Finally, a quality-control factor called cluster cohesion is introduced, and a clustering-based adaptive transductive method for Twitter retrieval is proposed. Experimental results show that the adaptive algorithm is more robust.

11.
The transductive support vector machine (TSVM) is a well-known algorithm that realizes transductive learning in the field of support vector classification. This paper constructs a bi-fuzzy progressive transductive support vector machine (BFPTSVM) algorithm by combining the proposed notion of bi-fuzzy memberships for the temporarily labeled samples that appear during progressive learning with a sample-pruning strategy, which decreases the computational complexity and memory requirements of the algorithm. Simulation experiments show that the BFPTSVM algorithm achieves better classification performance and converges rapidly with better stability than the other learning algorithms compared.

12.
13.
A semantic SVM for text classification and its online learning algorithm
Exploiting the SVM's high generalization ability even with small training sets, together with the fact that the features of same-class texts form clusters in feature space, this paper proposes a semantic SVM: an SVM that uses a set of semantic centers, in place of the original training set, as its training samples and support vectors. The steps for generating the semantic-center set are given, followed by a framework for the semantic SVM's online learning (online accumulation of classification knowledge) and an implementation of the online learning algorithm based on SMO. Experimental results suggest great application potential for the semantic SVM and its online algorithm: online learning and classification speeds improve by an order of magnitude over the standard SVM and its simple incremental variant, while classification accuracy also holds a certain advantage.

14.
黄晟  杨万里  张译  张小洪  杨丹 《软件学报》2022,33(11):4268-4284
In recent years, zero-shot learning has attracted much attention in machine learning and computer vision. Traditional inductive zero-shot methods achieve knowledge transfer between classes by building a mapping between semantics and vision, but they suffer from a projection domain shift between seen and unseen classes. Transductive zero-shot methods introduce unlabeled unseen-class data for domain adaptation during training, which effectively mitigates this problem and improves zero-shot accuracy. However, experimental analysis shows that performing semantic-mapping learning and domain adaptation simultaneously in the visual space leads to a "mutual restraint" problem, preventing either step from reaching its best performance. To address this, a transductive zero-shot method based on feature generation with indirect domain adaptation (FG-IDA) is proposed. By serializing the semantic-mapping and domain-adaptation steps, the two core stages of transductive zero-shot learning are optimized separately in different feature spaces, unlocking their potential and improving recognition accuracy. FG-IDA is evaluated on four standard datasets (CUB, AWA1, AWA2, SUN); the results show its superiority over other transductive methods, with state-of-the-art performance on AWA1, AWA2, and CUB. Detailed ablation studies, including a comparison with direct domain adaptation, verify the mutual-restraint problem and the advantage of the indirect domain adaptation idea.

15.
Linear programming support vector machines
Weida  Li  Licheng 《Pattern recognition》2002,35(12):2927-2936
Based on an analysis of conclusions from statistical learning theory, especially the VC dimension of linear functions, linear programming support vector machines (SVMs) are presented, including linear and nonlinear linear-programming SVMs. In linear programming SVMs, the bound on the VC dimension is loosened appropriately in order to reduce training time. Simulation results on both artificial and real data show that the generalization performance of the method is a good approximation of that of standard SVMs, while the computational complexity is greatly reduced.
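An L1-norm SVM of this kind reduces to a single linear program; the sketch below uses `scipy.optimize.linprog` and splits each signed variable into nonnegative parts (the exact formulation in the paper may differ — this shows the generic L1-norm soft-margin LP):

```python
import numpy as np
from scipy.optimize import linprog

def lp_svm(X, y, C=1.0):
    """L1-norm SVM:  min ||w||_1 + C * sum(xi)
    s.t. y_i (w . x_i + b) >= 1 - xi_i,  xi_i >= 0.
    Split w = u - v and b = bp - bn so every variable is nonnegative,
    matching linprog's default bounds."""
    n, d = X.shape
    # variable order: u (d), v (d), bp, bn, xi (n)
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    Yx = y[:, None] * X
    # -y_i x_i.(u - v) - y_i (bp - bn) - xi_i <= -1
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b
```

Swapping the quadratic margin objective for the L1 norm is what turns the usual QP into an LP, which is the source of the speed-up the abstract reports.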

16.
Confidence-based active learning
This paper proposes a new active learning approach, confidence-based active learning, for training a wide range of classifiers. The approach is based on identifying and annotating uncertain samples, where the uncertainty value of each sample is measured by its conditional error. It takes advantage of classifiers' probability-preserving and ordering properties, calibrating the output scores of a classifier to conditional error. It can thus estimate the uncertainty value of each input sample from its output score and select only samples whose uncertainty exceeds a user-defined threshold. Even though we cannot guarantee the optimality of the proposed approach, we find that it provides good performance; compared with existing methods, it is robust without additional computational effort. A new active learning method for support vector machines (SVMs) is implemented following this approach, and a dynamic bin-width allocation method is proposed to accurately estimate sample conditional error, adapting to the underlying probabilities. The effectiveness of the proposed approach is demonstrated on synthetic and real data sets, and its performance is compared with the widely used least-certain active learning method.
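The calibrate-then-threshold pipeline the abstract describes can be sketched with simple fixed-width histogram binning standing in for the paper's dynamic bin-width allocation (the bin count and threshold are illustrative assumptions):

```python
import numpy as np

def calibrate_conditional_error(cal_scores, cal_correct, n_bins=4):
    """Histogram-binning calibration: map a classifier score to the
    empirical error rate of its bin on a calibration set."""
    edges = np.linspace(cal_scores.min(), cal_scores.max(), n_bins + 1)
    def cond_error(score):
        i = np.clip(np.searchsorted(edges, score, side="right") - 1, 0, n_bins - 1)
        in_bin = (cal_scores >= edges[i]) & (cal_scores <= edges[i + 1])
        return 1.0 - cal_correct[in_bin].mean()
    return cond_error

def select_uncertain(scores, cond_error, threshold=0.2):
    """Annotate only samples whose estimated conditional error exceeds
    the user-defined threshold."""
    return [i for i, s in enumerate(scores) if cond_error(s) > threshold]
```

The paper's dynamic bin widths would adapt `edges` so each bin holds enough calibration samples for a stable error estimate; fixed widths keep the sketch short.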

17.
A heterogeneous transductive transfer learning algorithm
杨柳  景丽萍  于剑 《软件学报》2015,26(11):2762-2780
When the target domain has few labeled data, learning performance suffers, while related source domains may contain labeled data. Transfer learning addresses this situation by applying knowledge learned in a source domain that is different from, but related to, the target domain. In practical applications such as text-to-image or cross-language transfer learning, the feature spaces of the source and target domains differ; this is heterogeneous transfer learning. The focus here is on using labeled source-domain data to improve learning on unlabeled target-domain data, i.e., heterogeneous transductive transfer learning. Because the feature spaces differ, a key problem is learning a mapping function from the source domain to the target domain. This paper proposes learning the mapping by matching the source and target feature spaces in an unsupervised manner. The learned mapping re-represents source-domain data in the target domain, so that the re-represented labeled source data can be transferred to the target domain; standard machine learning methods (e.g., support vector machines) can then train classifiers to predict labels for the unlabeled target-domain data. A probabilistic interpretation is given to show robustness to noise in the data, and a sample-complexity bound is derived, i.e., the number of samples needed to find the mapping function. Experimental results on four real-world datasets demonstrate the effectiveness of the method.

18.
Large margin vs. large volume in transductive learning
We consider a large volume principle for transductive learning that prioritizes the transductive equivalence classes according to the volume they occupy in hypothesis space. We approximate volume maximization using a geometric interpretation of the hypothesis space. The resulting algorithm is defined via a non-convex optimization problem that can still be solved exactly and efficiently. We provide a bound on the test error of the algorithm and compare it to transductive SVM (TSVM) using 31 datasets.

19.
Although machine learning algorithms have been used successfully in many problems, their serious practical use is affected by the fact that they often cannot produce reliable and unbiased assessments of the quality of their predictions. In the last few years, several approaches for estimating the reliability or confidence of individual classifications have emerged, many of them building upon the algorithmic theory of randomness, such as (historically ordered) transduction-based confidence estimation, typicalness-based confidence estimation, and transductive reliability estimation. Unfortunately, they all have weaknesses: either they are tightly bound to particular learning algorithms, or the interpretation of their reliability estimates is not always consistent with statistical confidence levels. In this paper we describe the typicalness and transductive reliability estimation frameworks and propose a joint approach that compensates for the above-mentioned weaknesses by integrating typicalness-based confidence estimation and transductive reliability estimation into a joint confidence machine. The resulting confidence machine produces confidence values in the statistical sense. We perform a series of tests with several different machine learning algorithms in several problem domains, comparing our results with those of a proprietary method as well as with kernel density estimation. We show that the proposed method performs as well as proprietary methods and significantly outperforms density estimation methods.
Matjaž Kukar is currently Assistant Professor in the Faculty of Computer and Information Science at the University of Ljubljana. His research interests include machine learning, data mining and intelligent data analysis, ROC analysis, cost-sensitive learning, reliability estimation, and latent structure analysis, as well as applications of data mining to medical and business problems.

20.
Generally, collecting a large quantity of unlabeled examples is feasible, but labeling them all is not. Active learning can reduce the number of labeled examples needed to train a good classifier. Existing active learning algorithms can be roughly divided into three categories: single-view single-learner (SVSL), multiple-view single-learner (MVSL), and single-view multiple-learner (SVML) active learning. In this paper, a new approach that incorporates multiple views and multiple learners (MVML) into active learning is proposed. Multiple artificial neural networks are used as learners in each view, set with different numbers of hidden neurons and different initial weights to ensure that each has a different bias. The selective sampling of the proposed method is implemented in three different ways. For comparison, the traditional MVSL and SVML active learning methods, as well as bagging and AdaBoost active learning, are implemented alongside MVML active learning in our experiments. The empirical results indicate that MVML active learning outperforms the other methods.
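One common way to implement selective sampling over a committee of learners, in the spirit of the MVML setup, is vote entropy (an illustrative sketch; the paper's three selective-sampling variants are not specified in the abstract):

```python
import numpy as np

def vote_entropy(votes, n_classes):
    """Disagreement of a committee's predicted labels for one sample:
    the entropy of the empirical label distribution over learners."""
    counts = np.bincount(votes, minlength=n_classes)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_queries(committee_votes, n_classes, k=1):
    """Pick the k unlabeled samples the committee disagrees on most.
    committee_votes: (n_samples, n_learners) array of predicted labels."""
    ent = np.array([vote_entropy(v, n_classes) for v in committee_votes])
    return np.argsort(-ent)[:k]
```

Samples on which learners with different biases (different views, different hidden-layer sizes) split their votes are exactly the ones worth sending to the annotator.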


Copyright©北京勤云科技发展有限公司  京ICP备09084417号