Similar Literature
 Found 19 similar articles; search took 156 ms
1.
Active learning reduces the sample complexity of a learning algorithm by actively selecting which examples to label. In place of the version-space bisection strategy commonly adopted by current active learning algorithms, this paper proposes a strategy that shrinks the version space by more than half, avoiding the strong assumptions that bisection requires. Based on this strategy, the paper implements a heuristic active learning algorithm (CBMPMS) that selects as training examples those most likely to be misclassified. As its selection criterion, the algorithm computes the entropy of the difference between the class probabilities predicted for an example by a committee of hypotheses randomly drawn from the version space and those predicted by the current learner. Experiments on UCI datasets show that the algorithm outperforms related work on most datasets.
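The committee-based selection idea above can be sketched in a few lines. This is a minimal illustration, not the CBMPMS algorithm itself: it scores each unlabeled example by the entropy of the committee's averaged class-probability prediction and queries the highest-entropy one; the callable-per-member interface is an assumption for the sketch.

```python
import math

def predict_proba_committee(committee, x):
    """Average the class-probability predictions of the committee members.
    Each member is a hypothetical callable x -> list of class probabilities."""
    probs = [member(x) for member in committee]
    n_classes = len(probs[0])
    return [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]

def entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def select_most_uncertain(committee, pool):
    """Pick the unlabeled example whose averaged committee prediction
    has maximum entropy, i.e. the one the committee is least sure about."""
    return max(pool, key=lambda x: entropy(predict_proba_committee(committee, x)))
```

A real implementation would draw the committee from the (reduced) version space and combine this score with the learner's own prediction, as the paper describes.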

2.
Advances in Sampling-Strategy-Based Active Learning Algorithms   Cited: 2 (self: 0, others: 2)
Active learning algorithms select highly informative unlabeled examples for expert labeling and iterate to gradually raise classifier accuracy, so that a classifier with strong generalization ability is obtained at minimal total labeling cost; the technique has attracted wide research attention. Focusing on the sampling strategy, this survey describes in detail how the learning engine and the sampling engine in active learning work, summarizes the theoretical results on active learning algorithms, and reviews the current state and development trends of the field. Active learning algorithms are first categorized by how their sampling strategies select examples; algorithms based on the different strategies are then analyzed and compared in depth, and the application domains, advantages, and drawbacks of each are discussed. Finally, open problems and directions for further research are pointed out.

3.
To address the difficulty of selecting representative samples when constructing the initial classifier in active learning, a fuzzy kernel clustering sampling algorithm is proposed. The algorithm first partitions the sample set by cluster analysis, then selects samples for labeling from both the cluster centers and the cluster boundary regions, and finally builds the initial classifier from them. A Gaussian kernel maps points from the original sample space nonlinearly into a high-dimensional feature space so that they become linearly clusterable, and a local-density-based method for choosing the initial cluster centers is introduced to improve the clustering. To raise sampling quality, a sampling-ratio allocation scheme based on the size of each cluster is designed, and a supplementary sampling step at the end guarantees that the required number of samples is reached. Experimental results show that the algorithm effectively reduces the manual labeling effort needed to build the initial classifier while achieving high classification accuracy.

4.
Zhang Xiaoyu. Computer Science (计算机科学), 2012, 39(7): 175-177
To overcome the shortcomings of batch sampling in traditional SVM active learning, a dynamic feasible-region partitioning algorithm is proposed. Starting from the duality between the feature space and the parameter space, the essence of SVM active learning is analyzed: labeling a sample in feature space corresponds to partitioning the feasible region in parameter space. By jointly exploiting the current classification model and the previously labeled samples, the partitioning of the feasible region is optimized dynamically to ensure that the selected samples are valuable for improving the model, yielding more efficient selective sampling. Experimental results show that SVM active learning based on dynamic feasible-region partitioning significantly increases the information content of the selected samples and thus greatly improves classification performance under a limited labeling budget.

5.
Multi-Class Image Classification Based on Active Learning and Semi-Supervised Learning   Cited: 5 (self: 0, others: 5)
Chen Rong, Cao Yongfeng, Sun Hong. Acta Automatica Sinica (自动化学报), 2011, 37(8): 954-962
Most image classification algorithms require a large number of training samples to train the classifier model, yet in practice labeling large numbers of samples is tedious and time-consuming. For some special imagery, such as synthetic aperture radar (SAR) images, interpreting the content is very difficult, so only a very limited number of labeled samples can be obtained. This paper introduces best-vs-second-best (BvSB) active learning and constrained self-training (CST) into an image classification algorithm based on a support vector machine (SVM) classifier, yielding a new image classification method. BvSB active learning mines the samples most valuable to the current classifier model for manual labeling, while CST semi-supervised learning further exploits the large number of unlabeled samples in the sample set, so that good classification performance is obtained at a small labeling cost. The new method is compared with random sample selection, entropy-based uncertainty sampling, and BvSB active learning alone. Experiments on three optical image sets and one SAR image set show that the new method effectively reduces the number of manually labeled samples needed to train the classifier while achieving high accuracy and good robustness.
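The BvSB criterion mentioned above is simple to state: an example is informative when the classifier's two most probable labels have nearly equal probability. A minimal sketch of the criterion (only the selection rule, not the full SVM + CST pipeline):

```python
def bvsb_margin(probs):
    """Best-vs-second-best margin: the gap between the two largest class
    probabilities. A small margin means the classifier is torn between
    two labels, so the example is informative."""
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def select_by_bvsb(pool_probs, n):
    """Return the indices of the n pool examples with the smallest BvSB margin.
    pool_probs: per-example class-probability lists (e.g. from an SVM with
    probability outputs); a sketch of the criterion, not the paper's method."""
    ranked = sorted(range(len(pool_probs)), key=lambda i: bvsb_margin(pool_probs[i]))
    return ranked[:n]
```

Compared with full-entropy uncertainty, BvSB ignores the probability mass on unlikely classes, which is why it scales well to multi-class problems.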

6.
Objective: In the multi-label supervised learning framework, building a classifier with strong generalization requires many labeled training samples, but in practice labeled samples are scarce and very expensive to obtain. To address the shortage of labeled samples and the low retraining efficiency of classifiers in multi-label image classification, an online multi-label image classification algorithm combined with active learning is proposed. Method: Based on min-max theory, the most representative and most informative samples are actively queried for labeling, and the multi-label image classifier is updated online based on the KKT (Karush-Kuhn-Tucker) conditions. Results: The algorithm is evaluated on four public datasets with four multi-label classification metrics. The experiments show that the proposed sample selection method clearly outperforms both random selection and margin-based sampling; when the classifiers reach the same or similar accuracy, the proposed strategy needs to query markedly fewer samples than either alternative. Conclusion: The algorithm both reduces the manual cost of obtaining labeled samples and avoids the inefficiency of retraining a traditional classifier on all the data, so the classifier can be updated in real time as new data arrive.

7.
Semi-Supervised Active Learning for Video Annotation Based on Adaptive SVM   Cited: 1 (self: 0, others: 1)
Videos with different distribution characteristics may contain the same semantic concept yet exhibit different visual features, which lowers annotation accuracy. To solve this problem, a semi-supervised active learning video annotation algorithm based on an adaptive support vector machine (SVM) is proposed. An existing classifier is converted into an adaptive SVM (A-SVM) classifier by introducing a delta function and optimizing the model parameters; semi-supervised learning based on Gaussian harmonic functions is then fused into A-SVM-based active learning to derive a relevance evaluation function, which is used to annotate the video data. Experimental results show that on cross-domain video concept detection the algorithm achieves an average precision of 68.1% and an average recall of 60%, an improvement over SVM-based semi-supervised active learning and transductive-SVM-based semi-supervised active learning.

8.
A Random Semi-Supervised Sampling Method for Image Classification   Cited: 1 (self: 1, others: 0)
To better exploit the information in large numbers of unlabeled image samples to improve classifier performance, a semi-supervised image classification algorithm, random semi-supervised sampling (RSSS), is proposed. The algorithm uses iterative random sampling: in each round, spectral clustering estimates the class labels of the unlabeled samples and an SVM learns the model, which is refined step by step. Local spatial histogram features effectively combine the statistical and spatial information of the images to raise classification accuracy. Experimental results show that RSSS makes full use of the unlabeled-sample information to improve classifier performance and effectively removes the influence of geometric transformations.

9.
Zhai Junhai, Zhang Sufang, Wang Cong, Shen Chu, Liu Xiaomeng. Journal of Computer Applications (计算机应用), 2018, 38(10): 2759-2763
Traditional active learning algorithms can only handle small and medium-sized datasets. To address this, a MapReduce-based active learning algorithm for big data is proposed. First, an extreme learning machine (ELM) is trained as a classifier on the labeled initial training set, and its outputs are converted into a posterior probability distribution with a softmax function. The unlabeled big dataset is then divided into l subsets deployed on l cloud-computing nodes. On each node, the trained classifier computes in parallel the information entropy of the examples in its subset, and the top q highest-entropy examples are selected for labeling; the resulting l×q labeled examples are added to the labeled training set. These steps repeat until a predefined stopping condition is met. Comparisons with ELM-based active learning on four datasets (Artificial, Skin, Statlog, and Poker) show that the proposed algorithm completes active example selection on all four, whereas ELM-based active learning completes it only on the smallest. The experimental results indicate that the proposed algorithm outperforms ELM-based active learning.
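The per-node selection step described above can be sketched as follows. This is an illustrative simulation, assuming a classifier exposed as a callable returning posterior probabilities; a real deployment would run the map phase on separate MapReduce workers rather than in a Python loop.

```python
import math
from heapq import nlargest

def entropy(p):
    """Shannon entropy (natural log) of a posterior distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def map_phase(classifier, subset, q):
    """On one node: score each unlabeled example by the entropy of the
    classifier's posterior and keep the q highest-entropy examples.
    classifier is a hypothetical callable x -> class-probability list
    (e.g. a softmax over ELM outputs, as in the paper)."""
    return nlargest(q, subset, key=lambda x: entropy(classifier(x)))

def select_for_labeling(classifier, subsets, q):
    """Simulate the l nodes sequentially and gather the union of the
    per-subset selections, i.e. l*q examples to send for labeling."""
    selected = []
    for subset in subsets:
        selected.extend(map_phase(classifier, subset, q))
    return selected
```

After the l×q examples are labeled and added to the training set, the classifier is retrained and the loop repeats until the stopping condition holds.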

10.
To reduce the long sampling time and heavy computation of training dynamic neural network classifiers, an active learning algorithm for dynamic neural network classifiers is proposed. Using an improved uncertainty sampling strategy from active learning (AL), it jointly considers a sample's posterior probability and its similarity to the already-labeled samples, and labels the samples with the lowest combined evaluation score for use in training the network classifier. Guided by Sobol' sensitivity analysis, the network adds hidden neurons with large sensitivity values or prunes those with small ones as appropriate, raising its learning rate and reducing output error. Classifier training simulations show that, compared with passive learning, the algorithm greatly shortens training time and lowers output error. Applied to a hydraulic AGC system, it enables online tuning of the PID controller parameters and improves thickness-control accuracy, verifying its applicability.

11.
In real applications of inductive learning for classification, labeled instances are often deficient, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying, improving classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures for selecting queries are commonly based on initial hypotheses sampled from only a few labeled instances. In such a circumstance, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task. Consequently, the informative measures will possibly select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. However, a significant challenge in transfer learning is how to measure the similarity between the source and the target tasks. One needs to be aware of different distributions or label assignments from unrelated source tasks; otherwise, they will lead to degenerate performance while transferring. Also, how to design an effective strategy to avoid selecting irrelevant samples to query is still an open question. To tackle these issues, we propose a hybrid algorithm for active learning with the help of transfer learning, adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy that can eliminate unnecessary instances in the input space and models in the model space. Extensive experiments on both synthetic and real data sets show that the proposed algorithm queries fewer instances with a higher accuracy and converges faster than state-of-the-art methods.
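One common way to realize the divergence-based guard against negative transfer described above is to down-weight source tasks that diverge from the target. The sketch below uses KL divergence between discrete class distributions as a stand-in for the paper's (unspecified) divergence measure; the exponential weighting and the `beta` decay parameter are assumptions of this illustration.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions; eps avoids log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def transfer_weight(source_dist, target_dist, beta=1.0):
    """Weight a source task by how close its distribution is to the target's:
    identical distributions get weight 1.0, divergent sources decay toward 0,
    so dissimilar sources contribute little to the transferred model."""
    return math.exp(-beta * kl_divergence(source_dist, target_dist))
```

Knowledge transferred from each source would then be scaled by its weight before being combined with the target-task learner.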

12.
Liang Xitao, Gu Lei. Computer Science (计算机科学), 2015, 42(6): 228-232, 261
Word segmentation is a key foundational technology in Chinese natural language processing. To address the shortage of training samples and the cost in time and labor of obtaining large numbers of labeled samples, an active learning segmentation method based on the nearest-neighbor rule is proposed. A newly proposed selection strategy picks the most valuable samples from a large pool of unlabeled samples for labeling; the labeled samples are then added to the training set, which is used to train the segmenter. Tests on the PKU, MSR, and Shanxi University datasets, compared against a traditional uncertainty-based selection strategy, show that the proposed nearest-neighbor active learning method selects more valuable samples, effectively reducing the cost of manual labeling while also improving segmentation accuracy.

13.
This paper generalizes the version space learning strategy to manage noisy and uncertain training data. A new learning algorithm is proposed that consists of two main phases: searching and pruning. The searching phase generates and collects possible candidates into a large set; the pruning phase then prunes this set according to various criteria to find a maximally consistent version space. When the training instances cannot be completely classified, the proposed learning algorithm can make a trade-off between including positive training instances and excluding negative ones according to the requirements of different application domains. Furthermore, suitable pruning parameters are chosen according to a given time limit, so the algorithm can also make a trade-off between time complexity and accuracy. The proposed learning algorithm is thus a flexible and efficient induction method that makes the version space learning strategy more practical.

14.
Active learning methods select informative instances to effectively learn a suitable classifier. Uncertainty sampling, a frequently utilized active learning strategy, selects instances about which the model is uncertain, but it does not consider the reasons why the model is uncertain. In this article, we present an evidence-based framework that can uncover the reasons why a model is uncertain on a given instance. Using the evidence-based framework, we discuss two reasons for the uncertainty of a model: a model can be uncertain about an instance because it has strong but conflicting evidence for both classes, or it can be uncertain because it does not have enough evidence for either class. Our empirical evaluations on several real-world datasets show that distinguishing between these two types of uncertainty has a drastic impact on learning efficiency. We further provide empirical and analytical justifications as to why distinguishing between the two uncertainties matters.
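The two kinds of uncertainty described above can be made concrete for a linear classifier by splitting its score into evidence for and against the positive class. This is a minimal sketch of the idea under assumed names (`evidence`, `uncertainty_type`, threshold `tau`), not the authors' exact framework:

```python
def evidence(weights, x):
    """Split a linear model's score into positive and negative evidence.
    Hypothetical linear classifier: score = sum(w_i * x_i); positive
    contributions support one class, negative contributions the other."""
    contrib = [w * xi for w, xi in zip(weights, x)]
    pos = sum(c for c in contrib if c > 0)
    neg = -sum(c for c in contrib if c < 0)
    return pos, neg

def uncertainty_type(weights, x, tau):
    """For an instance the model is uncertain about (pos ~= neg), call it
    'conflicting' when both sides carry strong evidence and 'insufficient'
    when neither does; tau is an assumed strength threshold."""
    pos, neg = evidence(weights, x)
    return "conflicting" if pos + neg >= tau else "insufficient"
```

An active learner could then, for example, prefer conflicting-evidence instances, which the article's evaluation suggests behave very differently from insufficient-evidence ones.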

15.
Multiple instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. In previous work, most existing algorithms pay attention mainly to the 'most positive' instance in each positive bag and ignore the other instances. To utilize these unlabeled instances in positive bags, we present a new multiple instance learning algorithm via semi-supervised Laplacian twin support vector machines (called Miss-LTSVM). In Miss-LTSVM, all instances in positive bags are used in the manifold regularization terms to improve the performance of the classifier. To verify the effectiveness of the presented method, a series of comparative experiments are performed on seven multiple instance data sets. Experimental results show that the proposed method has better classification accuracy than other methods in most cases.

16.
Guided by the idea that the computational effort and knowledge gained while solving one problem instance should be used to solve others, we present a new strategy that takes advantage of both. The strategy is based on a set of operators and a basic learning process that is fed with the information obtained while solving several instances; the output of the learning process is an adjustment of the operators. The instances can be managed sequentially or simultaneously by the strategy, varying the information available to the learning process. The method has been tested on different SAT instance classes, and the results confirm (a) the usefulness of the learning process and (b) that by embedding problem-specific algorithms into our strategy, instances can be solved faster than by applying those algorithms instance by instance.

17.
The Euclidean distance used in traditional active-learning-based support vector machine (SVM) methods cannot effectively measure the correlation between high-dimensional samples, which degrades the learner's generalization ability. To address this, an SVM active learning strategy based on vector cosine, called COS_SVMactive, is proposed. The method introduces the vector cosine into the active learning process to measure the redundancy of the sample information in the training set, so that the most valuable samples, those carrying important classification information, are picked out for manual labeling by experts; during the iterative labeling process, the balance of the training set is adjusted step by step so that the learner achieves better generalization. Experimental results show that, compared with traditional SVM active learning based on random sampling (RS_SVMactive) and distance-based SVM active learning (DIS_SVMactive), COS_SVMactive not only improves classification accuracy but also reduces expert labeling cost.
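The cosine-redundancy idea above can be sketched as follows: among candidate samples, prefer the one least similar (by cosine) to anything already labeled. This is only the redundancy filter under assumed names; the full COS_SVMactive method also weighs SVM uncertainty and training-set balance.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (assumed nonzero)."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def least_redundant(candidates, labeled):
    """Pick the candidate whose maximum cosine similarity to the already
    labeled set is smallest, i.e. the sample adding the least redundant
    information to the training set."""
    return min(candidates, key=lambda c: max(cosine(c, x) for x in labeled))
```

Unlike Euclidean distance, cosine similarity depends only on direction, which is one reason it can behave better for the high-dimensional samples the paper targets.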

18.
This paper tackles the issue of objective performance evaluation of machine learning classifiers, and the impact of the choice of test instances. Given that statistical properties or features of a dataset affect the difficulty of an instance for particular classification algorithms, we examine the diversity and quality of the UCI repository of test instances used by most machine learning researchers. We show how an instance space can be visualized, with each classification dataset represented as a point in the space. The instance space is constructed to reveal pockets of hard and easy instances, and enables the strengths and weaknesses of individual classifiers to be identified. Finally, we propose a methodology to generate new test instances with the aim of enriching the diversity of the instance space, enabling potentially greater insights than can be afforded by the current UCI repository.

19.
In multi-label learning, it is rather expensive to label instances since they are simultaneously associated with multiple labels. Therefore, active learning, which reduces the labeling cost by actively querying the labels of the most valuable data, becomes particularly important for multi-label learning. A good multi-label active learning algorithm usually consists of two crucial elements: a reasonable criterion to evaluate the gain of querying the label for an instance, and an effective classification model, based on whose prediction the criterion can be accurately computed. In this paper, we first introduce an effective multi-label classification model by combining label ranking with threshold learning, which is incrementally trained to avoid retraining from scratch after every query. Based on this model, we then propose to exploit both uncertainty and diversity in the instance space as well as the label space, and actively query the instance-label pairs which can improve the classification model most. Extensive experiments on 20 datasets demonstrate the superiority of the proposed approach to state-of-the-art methods.

