Similar documents
20 similar documents found (search time: 15 ms)
1.
Disagreement-based semi-supervised learning  (Cited by: 9; self-citations: 0; others: 9)
周志华 《自动化学报》2013,39(11):1871-1878
Traditional supervised learning usually requires a large number of labeled samples as training examples. In many real-world problems, however, large amounts of data can be collected easily while labeling them demands considerable human effort. When only a small amount of labeled data is available, can learning performance be improved by exploiting large amounts of unlabeled data? Driven by this question, semi-supervised learning has become a major research focus in machine learning over the past decade or so. Disagreement-based semi-supervised learning is one of the mainstream paradigms in this field: it exploits unlabeled data through multiple learners, and the "disagreement" among these learners is crucial to the success of learning. This paper briefly surveys recent research progress in this direction.

2.
盛高斌  姚明海 《计算机仿真》2009,26(10):198-201,318
To improve learner performance when only a small amount of labeled data is available, this paper combines semi-supervised learning with selective ensemble learning and proposes SSRES, a selective ensemble algorithm based on semi-supervised regression. Following the basic idea of semi-supervised learning, the algorithm trains learners on both labeled and unlabeled samples to reduce the demand for labeled data; it then uses the selective ensemble algorithm GRES to choose suitable learners and combines the selected ones to improve generalization. Experimental results show that the algorithm effectively improves learner performance on problems with few labeled samples.

3.
Some recent successful semi-supervised learning methods construct more than one learner from both labeled and unlabeled data for inductive learning. This paper proposes a novel multiple-view multiple-learner (MVML) framework for semi-supervised learning, which differs from previous methods in possessing both multiple views and multiple learners. The method adopts a co-training-style learning paradigm to enlarge the labeled data from a much larger set of unlabeled data. To the best of our knowledge, it is the first attempt to combine the advantages of multiple-view learning and ensemble learning for semi-supervised learning. Using multiple views promises better performance than single-view learning because the available information is exploited more effectively. At the same time, since an ensemble of classifiers is learned from each view, higher prediction accuracy can be obtained than by adopting a single classifier from the same view. Experiments on applications involving both multiple-view and single-view data sets show encouraging results for the proposed MVML method.

4.
Graph-based semi-supervised learning is an important semi-supervised learning paradigm. Although graph-based semi-supervised learning methods have been shown to be helpful in various situations, they may adversely affect performance when using unlabeled data. In this paper, we propose a new graph-based semi-supervised learning method based on instance selection in order to reduce the chances of performance degeneration. Our basic idea is that given a set of unlabeled instances, it is not the best approach to exploit all the unlabeled instances; instead, we should exploit the unlabeled instances that are highly likely to help improve the performance, while not taking into account the ones with high risk. We develop both transductive and inductive variants of our method. Experiments on a broad range of data sets show that the chances of performance degeneration of our proposed method are much smaller than those of many state-of-the-art graph-based semi-supervised learning methods.
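The underlying machinery in this line of work is label propagation over a similarity graph; the paper's contribution is deciding which unlabeled instances are safe to exploit. Below is a minimal sketch of closed-form label propagation plus a naive confidence-based instance filter; the RBF affinity, the `alpha`/`sigma` values, and the threshold-based selection rule are illustrative assumptions, not the authors' actual criterion.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Dense RBF similarity graph; diagonal zeroed so nodes do not self-reinforce."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def label_propagation(X, y, labeled_mask, n_classes, alpha=0.9, sigma=1.0):
    """Closed-form label propagation: F = (1 - alpha) * (I - alpha * S)^-1 * Y."""
    W = rbf_affinity(X, sigma)
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)          # symmetric normalization D^-1/2 W D^-1/2
    Y = np.zeros((len(X), n_classes))
    Y[labeled_mask, y[labeled_mask]] = 1.0           # seed the labeled rows
    F = np.linalg.solve(np.eye(len(X)) - alpha * S, (1 - alpha) * Y)
    return F / F.sum(1, keepdims=True).clip(1e-12)   # row-normalize to class posteriors

def select_confident_unlabeled(F, labeled_mask, threshold=0.8):
    """Toy stand-in for instance selection: keep only unlabeled nodes whose
    propagated posterior is confident; low-confidence nodes are treated as risky."""
    conf = F.max(1)
    return (~labeled_mask) & (conf >= threshold)
```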

5.
Ensemble learning learns from the training data by generating an ensemble of multiple base learners. It is well known that to construct a good ensemble with strong generalization ability, the base learners should be both accurate and diverse. In this paper, unlabeled data is exploited to facilitate ensemble learning by helping augment the diversity among the base learners. Specifically, a semi-supervised ensemble method named UDEED (Unlabeled Data to Enhance Ensemble Diversity) is proposed. In contrast to existing semi-supervised ensemble methods, which utilize unlabeled data by estimating error-prone pseudo-labels on them to enlarge the labeled data and improve base learners' accuracies, UDEED works by maximizing the accuracies of base learners on labeled data while maximizing the diversity among them on unlabeled data. Extensive experiments on 20 regular-scale and five large-scale data sets are conducted under the setting of either few or abundant labeled data. Experimental results show that UDEED can effectively utilize unlabeled data for ensemble learning via diversity augmentation, and is highly competitive with well-established semi-supervised ensemble methods.
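UDEED's stated objective, maximizing base-learner accuracy on labeled data while maximizing diversity on unlabeled data, can be illustrated with a much simpler surrogate: greedily pick learners from a bootstrapped pool, scoring each candidate by labeled accuracy plus average disagreement (on unlabeled data) with the learners already chosen. This is only a sketch of the trade-off; the paper optimizes a joint objective over logistic regression base learners rather than doing subset selection, and the pool size, base learner, and `lam` weight below are assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def disagreement(preds_a, preds_b):
    """Pairwise diversity: fraction of unlabeled points the two learners label differently."""
    return float(np.mean(preds_a != preds_b))

def udeed_style_selection(X_l, y_l, X_u, pool_size=10, ensemble_size=5, lam=1.0,
                          base=DecisionTreeClassifier(max_depth=3),
                          rng=np.random.default_rng(0)):
    """Greedy stand-in for UDEED's objective: favor base learners that are accurate
    on labeled data and disagree with the already-chosen learners on unlabeled data."""
    pool = []
    for _ in range(pool_size):                       # bootstrap a pool of candidate learners
        idx = rng.integers(0, len(X_l), len(X_l))
        pool.append(clone(base).fit(X_l[idx], y_l[idx]))
    preds_l = [m.predict(X_l) for m in pool]
    preds_u = [m.predict(X_u) for m in pool]
    acc = [np.mean(p == y_l) for p in preds_l]

    chosen = [int(np.argmax(acc))]                   # start from the most accurate learner
    while len(chosen) < ensemble_size:
        best, best_score = None, -np.inf
        for j in range(pool_size):
            if j in chosen:
                continue
            div = np.mean([disagreement(preds_u[j], preds_u[k]) for k in chosen])
            score = acc[j] + lam * div               # accuracy on labeled + diversity on unlabeled
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return [pool[j] for j in chosen]
```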

6.
Tri-Training is a semi-supervised learning algorithm: with only a small amount of labeled data, it uses three different classifiers to sample and label new training data from the unlabeled set, effectively supplementing each classifier's training data. However, mislabeled samples introduce noise and degrade classification performance. This paper incorporates three data-editing techniques, DE-KNN, DE-BKNN, and DE-NED, into the Tri-Training algorithm to identify and remove mislabeled data. Experiments on six UCI data sets show that introducing data editing is effective: all three variants improve the classification performance of Tri-Training to some extent, with DE-NED being the most notable.
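DE-KNN, DE-BKNN, and DE-NED are the paper's specific editing rules; the sketch below shows the generic edited-nearest-neighbor idea they build on: discard a newly pseudo-labeled example whose label disagrees with the majority label of its k nearest trusted neighbors. The value of k and the Euclidean distance are illustrative choices.

```python
import numpy as np
from collections import Counter

def knn_edit(X_new, y_new, X_ref, y_ref, k=5):
    """Generic k-NN data editing (in the spirit of DE-KNN, not the paper's exact rule):
    keep a pseudo-labeled example only if its label matches the majority label of its
    k nearest neighbors among already-trusted (labeled) examples."""
    keep = np.zeros(len(X_new), dtype=bool)
    for i, (x, y) in enumerate(zip(X_new, y_new)):
        d = np.linalg.norm(X_ref - x, axis=1)        # distances to trusted examples
        nn = np.argsort(d)[:k]                       # k nearest trusted neighbors
        majority, _ = Counter(y_ref[nn]).most_common(1)[0]
        keep[i] = (majority == y)
    return X_new[keep], y_new[keep]
```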

7.
Human action recognition based on a multi-learner co-training model  (Cited by: 1; self-citations: 0; others: 1)
唐超  王文剑  李伟  李国斌  曹峰 《软件学报》2015,26(11):2939-2950
Human action recognition is a hot topic in computer vision, and existing recognition methods are built on the supervised learning framework. To achieve good recognition results, they usually require a large number of labeled samples for modeling, yet obtaining labels is time-consuming and laborious. To address this problem, this paper improves the co-training algorithm from semi-supervised learning and proposes a human action recognition method based on a multi-learner co-training model, i.e., a recognition algorithm under the semi-supervised learning framework. The method first selects the set of base learners for co-training with a learner-diversity selection procedure based on the Q statistic; during co-training, these base learners label the unlabeled samples. It then evaluates the confidence of unlabeled samples with a labeled-neighbor confidence formula based on a committee of classifiers, adds a proportion of high-confidence unlabeled samples to the labeled training set, and updates the learners to improve the model's generalization ability. To evaluate the effectiveness of the algorithm, hybrid features are used to represent human actions so that recognition can be completed quickly. Experimental results show that the proposed semi-supervised action recognition system can effectively identify human actions in video.
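The base-learner selection step above relies on a Q-statistic diversity measure. Below is a sketch of the standard pairwise Q statistic (Yule's Q) between two classifiers' correctness patterns, which is what such a selection step typically computes; the paper's full selection procedure is not reproduced.

```python
import numpy as np

def q_statistic(pred_a, pred_b, y_true):
    """Pairwise Q statistic (Yule's Q) between two classifiers.
    Values near 0 mean the classifiers err on different examples (more diverse);
    values near 1 mean they tend to be right or wrong together."""
    a_ok = (pred_a == y_true)
    b_ok = (pred_b == y_true)
    n11 = np.sum(a_ok & b_ok)        # both correct
    n00 = np.sum(~a_ok & ~b_ok)      # both wrong
    n10 = np.sum(a_ok & ~b_ok)       # only A correct
    n01 = np.sum(~a_ok & b_ok)       # only B correct
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0
```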

8.
Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from existing approaches. We design a meta semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploitation of both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to state-of-the-art semi-supervised learning algorithms.
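A loose sketch of the SemiBoost wrapper idea follows: pseudo-label unlabeled points from their similarity to labeled points combined with the current ensemble output, promote the most confident ones, and fit another base learner on the enlarged set. The similarity kernel, the selection fraction, and the fixed combination weight are simplifications; SemiBoost derives its confidence values and classifier weights from an explicit objective that is not reproduced here.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def semiboost_like(X_l, y_l, X_u, base=DecisionTreeClassifier(max_depth=1),
                   rounds=10, frac=0.1, sigma=1.0):
    """Simplified SemiBoost-style wrapper for binary labels in {-1, +1}.
    Each round: score unlabeled points by similarity-weighted labeled votes plus the
    current ensemble output, pseudo-label the most confident unused ones, and fit a
    new base learner on labeled + pseudo-labeled data."""
    members, alphas = [], []
    H_u = np.zeros(len(X_u))                         # current ensemble score on unlabeled data
    sim = np.exp(-((X_u[:, None, :] - X_l[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    used = np.zeros(len(X_u), dtype=bool)
    for _ in range(rounds):
        guidance = sim @ y_l / sim.sum(1)            # similarity-weighted votes from labeled data
        score = guidance + H_u
        conf = np.abs(score)
        conf[used] = -np.inf                         # do not re-select already-used points
        take = np.argsort(conf)[::-1][:max(1, int(frac * len(X_u)))]
        X_t = np.vstack([X_l, X_u[take]])
        y_t = np.concatenate([y_l, np.where(score[take] >= 0, 1, -1)])
        clf = clone(base).fit(X_t, y_t)
        alpha = 1.0                                  # fixed weight; SemiBoost derives it analytically
        members.append(clf); alphas.append(alpha)
        H_u += alpha * clf.predict(X_u)
        used[take] = True
    return members, alphas
```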

9.
Many data mining applications have a large amount of data, but labeling data is usually difficult, expensive, or time consuming, as it requires human experts for annotation. Semi-supervised learning addresses this problem by using unlabeled data together with labeled data in the training process. Co-Training is a popular semi-supervised learning algorithm that assumes each example is represented by multiple sets of features (views) and that these views are sufficient for learning and independent given the class. However, these assumptions are strong and are not satisfied in many real-world domains. In this paper, a single-view variant of Co-Training, called Co-Training by Committee (CoBC), is proposed, in which an ensemble of diverse classifiers is used instead of redundant and independent views. We introduce a new labeling confidence measure for unlabeled examples based on estimating the local accuracy of the committee members on their neighborhood. We then introduce two new learning algorithms, QBC-then-CoBC and QBC-with-CoBC, which combine the merits of committee-based semi-supervised learning and active learning. The random subspace method is applied to both C4.5 decision trees and 1-nearest neighbor classifiers to construct the diverse ensembles used for semi-supervised learning and active learning. Experiments show that these two combinations can outperform other non-committee-based ones.
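The labeling confidence described above, based on the committee members' local accuracy around an unlabeled example, can be sketched roughly as follows; the neighborhood size and the way votes are weighted are illustrative assumptions rather than the paper's exact measure.

```python
import numpy as np

def local_accuracy_confidence(x, committee, X_l, y_l, k=10):
    """CoBC-style confidence sketch for one unlabeled example x: estimate each
    committee member's accuracy on the k nearest labeled neighbors of x, then weight
    that member's vote for its predicted label by this local accuracy."""
    nn = np.argsort(np.linalg.norm(X_l - x, axis=1))[:k]   # k nearest labeled neighbors
    votes = {}
    for clf in committee:
        local_acc = np.mean(clf.predict(X_l[nn]) == y_l[nn])
        label = clf.predict(x.reshape(1, -1))[0]
        votes[label] = votes.get(label, 0.0) + local_acc
    label = max(votes, key=votes.get)
    confidence = votes[label] / max(sum(votes.values()), 1e-12)
    return label, confidence
```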

10.
Previous partially supervised classification methods can partition unlabeled data into positive and negative examples for a given class by learning from positive labeled examples and unlabeled examples, but they cannot further group the negative examples into meaningful clusters even if the negative examples contain many different classes. Here we propose an automatic method to obtain a natural partitioning of mixed data (labeled data + unlabeled data) by maximizing a stability criterion defined on classification results from an extended label propagation algorithm over all possible values of the model order (the number of classes) in the mixed data. Our experimental results on benchmark corpora for the word sense disambiguation task indicate that this model order identification algorithm, with the extended label propagation algorithm as the base classifier, outperforms SVM, a one-class partially supervised classification algorithm, and the model order identification algorithm with semi-supervised k-means clustering as the base classifier when labeled data is incomplete.

11.
Traditional supervised classifiers use only labeled data (feature/label pairs) as the training set, while the unlabeled data is used as the testing set. In practice, labeled data is often hard to obtain, and the unlabeled data may contain instances that belong to predefined classes not present among the labeled categories. This problem has been widely studied in recent years, and semi-supervised PU learning is an effective way to learn from positive and unlabeled examples. Among all semi-supervised PU learning methods, it is hard to choose a single approach that fits every unlabeled data distribution. In this paper, a new framework is designed to integrate different semi-supervised PU learning algorithms in order to take advantage of existing methods. In essence, we propose an automatic KL-divergence learning method that utilizes knowledge of the unlabeled data distribution. The experimental results show that (1) data distribution information is very helpful for semi-supervised PU learning methods, and (2) the proposed framework can achieve higher precision than the state-of-the-art method.

12.
To address the difficulty of obtaining class-labeled samples when building a dynamic Bayesian network (DBN) classification model, a semi-supervised active DBN learning algorithm based on EM and classification loss is proposed. In semi-supervised learning, the EM algorithm can effectively exploit unlabeled samples to learn a DBN classifier, but erroneous label information easily enters the iterations and harms model accuracy. Active learning based on classification loss is therefore incorporated into EM learning: the algorithm autonomously selects useful unlabeled samples and asks the user to label them, so that adding these samples to the training set maximally reduces the model's uncertainty when classifying unlabeled samples. Experiments show that the algorithm significantly improves the efficiency and performance of the DBN learner and converges quickly to a target classification accuracy.

13.
李延超  肖甫  陈志  李博 《软件学报》2020,31(12):3808-3822
Active learning selects samples from a large pool of unlabeled data and hands them to experts for labeling. Existing batch-mode active learning algorithms face three main limitations: (1) some methods rely on a single selection criterion or on assumptions about the data or model, making it hard to find unlabeled samples that are both uncertain and representative; (2) the performance of existing batch-mode methods depends heavily on the accuracy of the similarity measure between samples, e.g., a predefined function or a diversity measure; (3) noisy labels continue to affect batch-mode active learning performance. This paper proposes a batch-mode active learning method based on deep learning. A deep neural network generates learned representations of labeled and unlabeled samples, and a label-cycling scheme links unlabeled samples to labeled samples and back to labeled samples of the same class, so that both uncertainty and representativeness are considered and the algorithm is robust to noisy labels. A submodular function ensures that the selected batch is diverse, and adaptive parameter optimization lets the algorithm automatically balance uncertainty and representativeness. The proposed active learning method is applied to semi-supervised classification and semi-supervised clustering; experimental results show that it outperforms several state-of-the-art methods.

14.
Sentiment classification is currently a hot research problem in natural language processing. This paper focuses on semi-supervised learning for sentiment classification (i.e., learning from a small amount of labeled data and a large amount of unlabeled data) and proposes a new semi-supervised method based on dynamic random feature subspaces. First, multiple random feature subspaces are generated dynamically; then, co-training is used to select high-confidence unlabeled samples within each feature subspace; finally, the selected samples are used to update the training model. Experimental results show that the method clearly outperforms the traditional static subspace generation scheme and other existing semi-supervised methods. The paper also explores how many feature subspaces to use.
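One round of the dynamic random-subspace co-training loop might look like the sketch below: draw several random feature subspaces, train a classifier per subspace on the labeled data, and let each classifier nominate its most confident unlabeled documents. The base classifier, subspace fraction, and per-view quota are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import MultinomialNB

def random_subspace_round(X_l, y_l, X_u, n_views=4, subspace_frac=0.5, top_k=20,
                          base=MultinomialNB(), rng=np.random.default_rng(0)):
    """One co-training round over dynamically drawn random feature subspaces.
    Returns indices of unlabeled documents to promote and their predicted labels."""
    n_feat = X_l.shape[1]
    picked_idx, picked_lab = [], []
    for _ in range(n_views):
        feats = rng.choice(n_feat, size=int(subspace_frac * n_feat), replace=False)
        clf = clone(base).fit(X_l[:, feats], y_l)
        proba = clf.predict_proba(X_u[:, feats])
        conf = proba.max(1)
        top = np.argsort(conf)[::-1][:top_k]         # most confident unlabeled docs in this view
        picked_idx.append(top)
        picked_lab.append(clf.classes_[proba[top].argmax(1)])
    return np.concatenate(picked_idx), np.concatenate(picked_lab)
```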

15.
Tri-training: exploiting unlabeled data using three classifiers  (Cited by: 24; self-citations: 0; others: 24)
In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. In detail, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor does it put any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance the learning performance.
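The labeling mechanism described above is easy to sketch: in each round, an unlabeled example is pseudo-labeled for one classifier whenever the other two classifiers agree on it. The sketch below omits the error-rate conditions the original algorithm uses to decide whether and how many newly labeled examples to accept, so it only illustrates the agreement-based labeling step; the bootstrap initialization and the base learner are assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def tri_training(X_l, y_l, X_u, base=DecisionTreeClassifier(), rounds=5,
                 rng=np.random.default_rng(0)):
    """Bare-bones tri-training loop: an unlabeled example is pseudo-labeled for
    classifier i whenever the other two classifiers agree on its label."""
    clfs = []
    for _ in range(3):                               # bootstrap-sample three initial classifiers
        idx = rng.integers(0, len(X_l), len(X_l))
        clfs.append(clone(base).fit(X_l[idx], y_l[idx]))
    for _ in range(rounds):
        preds = [clf.predict(X_u) for clf in clfs]
        new_clfs = []
        for i in range(3):
            j, k = [t for t in range(3) if t != i]
            agree = preds[j] == preds[k]             # the other two classifiers agree
            X_i = np.vstack([X_l, X_u[agree]])
            y_i = np.concatenate([y_l, preds[j][agree]])
            new_clfs.append(clone(base).fit(X_i, y_i))
        clfs = new_clfs
    return clfs

def tri_predict(clfs, X):
    """Majority vote of the three classifiers (assumes non-negative integer labels)."""
    votes = np.stack([clf.predict(X) for clf in clfs])
    return np.apply_along_axis(lambda c: np.bincount(c.astype(int)).argmax(), 0, votes)
```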

16.
Semi-supervised model-based document clustering: A comparative study  (Cited by: 4; self-citations: 0; others: 4)
Semi-supervised learning has become an attractive methodology for improving classification models and is often viewed as using unlabeled data to aid supervised learning. However, it can also be viewed as using labeled data to help clustering, namely, semi-supervised clustering. Viewing semi-supervised learning from a clustering angle is useful in practical situations when the set of labels available in the labeled data is not complete, i.e., the unlabeled data contains new classes that are not present in the labeled data. This paper analyzes several multinomial model-based semi-supervised document clustering methods under a principled model-based clustering framework. The framework naturally leads to a deterministic annealing extension of existing semi-supervised clustering approaches. We compare three (slightly) different semi-supervised approaches for clustering documents: Seeded damnl, Constrained damnl, and Feedback-based damnl, where damnl stands for the multinomial model-based deterministic annealing algorithm. The first two are extensions of the seeded k-means and constrained k-means algorithms studied by Basu et al. (2002); the last one is motivated by Cohn et al. (2003). Through empirical experiments on text datasets, we show that: (a) deterministic annealing can often significantly improve the performance of semi-supervised clustering; (b) the constrained approach is best when the available labels are complete, whereas the feedback-based approach excels when the available labels are incomplete.
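The "seeded" variant referenced above goes back to the seeded k-means of Basu et al. (2002), where cluster centers are initialized from the labeled documents of each class. The sketch below shows only that seeding idea on plain k-means; the paper itself studies multinomial model-based deterministic annealing (damnl), which is not reproduced here. It assumes every class has at least one seed document.

```python
import numpy as np
from sklearn.cluster import KMeans

def seeded_kmeans(X, y_seed, seed_mask, n_clusters):
    """Seeded k-means: initialize each cluster center as the mean of the labeled
    ('seed') documents of that class, then run standard k-means from that start.
    y_seed is only meaningful where seed_mask is True."""
    centers = np.vstack([X[seed_mask & (y_seed == c)].mean(0) for c in range(n_clusters)])
    km = KMeans(n_clusters=n_clusters, init=centers, n_init=1).fit(X)
    return km.labels_, km.cluster_centers_
```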

17.
For learning from incomplete, weakly labeled data, a semi-supervised co-learning model based on rough set theory is proposed. A semi-supervised discernibility matrix for incomplete weakly labeled data is first defined, and an algorithm is given for obtaining sufficient and diverse reduct subspaces. Two base classifiers are then trained on the labeled data within these reduct subspaces. On the unlabeled data, following the idea of co-training, each classifier labels the unlabeled samples it is most confident about and passes them to the other classifier for learning, iterating until no usable unlabeled data remain. Comparative experiments on UCI data sets show that the model achieves better classification performance on incomplete weakly labeled data and is effective.

18.
Combining semi-supervised learning and ensemble learning, a SemiBoost-CR classification model based on confidence resampling is proposed. Confidence formulas based on labeled neighbors and unlabeled neighbors are given; following these confidence values, the method resamples not only a proportion of high-confidence unlabeled samples but also a proportion of low-confidence unlabeled samples, adding them to the labeled training set under different strategies, with the high-confidence unlabeled samples introduced to improve the base classif…

19.
A Tri-training algorithm based on an adaptive data-editing strategy  (Cited by: 1; self-citations: 0; others: 1)
邓超  郭茂祖 《计算机学报》2007,30(8):1213-1226
Tri-training can effectively exploit unlabeled examples to improve generalization. However, unlabeled examples are often mislabeled during Tri-training iterations, introducing noise into the training set and making performance unstable. This paper proposes a new algorithm, ADE-Tri-training (Tri-training with Adaptive Data Editing). It not only uses the RemoveOnly editing operation to identify and remove examples that may have been mislabeled in each iteration, but, more importantly, adopts an adaptive strategy to determine the proper moments at which RemoveOnly should be triggered or suppressed. The paper proves, under the PAC framework, that a series of sufficient conditions in the adaptive strategy simultaneously guarantee that the new training set grows across iterations and that the classification error of the new hypothesis decreases further. Experiments on UCI data sets show that ADE-Tri-training achieves better classification generalization and robustness.

20.
Support vector machine (SVM) is a general and powerful learning machine that operates in a supervised manner. However, for many practical machine learning and data mining applications, unlabeled training examples are readily available while labeled ones are very expensive to obtain. Semi-supervised learning has therefore emerged to address this need. At present, the combination of SVM with semi-supervised learning principles such as transductive learning has attracted more and more attention. The transductive support vector machine (TSVM) learns a large-margin hyperplane classifier using labeled training data while simultaneously forcing the hyperplane to stay far away from the unlabeled data. TSVM might seem to be the perfect semi-supervised algorithm, since it combines the powerful regularization of SVMs with a direct implementation of the clustering assumption, but its objective function is non-convex and therefore difficult to optimize. This paper aims to solve this problem. We apply the least squares support vector machine to implement TSVM, which ensures that the objective function is convex and that the optimization solution can be found easily by solving a set of linear equations. Simulation results demonstrate that the proposed method can effectively exploit unlabeled data to yield good performance.
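The computational point of the paper, replacing the SVM quadratic program by a least-squares formulation whose solution is a single linear system, can be sketched for the purely supervised LS-SVM step as follows. The transductive iteration over pseudo-labels of the unlabeled data is not shown, and the RBF kernel and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM classifier (labels in {-1, +1}): the dual reduces to one
    linear system instead of a QP, solving
        [0    y^T        ] [b    ]   [0]
        [y    Omega + I/g] [alpha] = [1]
    with Omega_ij = y_i * y_j * K(x_i, x_j)."""
    n = len(X)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                           # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_test, sigma=1.0):
    """Decision function f(x) = sign(sum_i alpha_i * y_i * K(x, x_i) + b)."""
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)
```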
