Similar Documents
A total of 19 similar documents were retrieved.
1.
A Transductive Multi-Label Classification Method for Weakly Labeled Data   Total citations: 2 (self-citations: 1, citations by others: 1)
Multi-label learning addresses the problem of a single sample belonging to several categories at the same time, and is widely applicable to tasks such as image scene classification and text classification. In traditional multi-label learning, the classifier usually needs a large number of completely labeled training samples to achieve good classification performance; in many real applications, however, only a small number of incompletely labeled training samples are available. To make better use of these weakly labeled training samples, a transductive multi-label classification method for weak labels is proposed. It completes missing sample labels through label-error weighting and, at the same time, exploits the weakly labeled samples to improve classification performance. Experimental results show that the method yields a clear performance gain on image scene classification under weak labeling.

2.
吕佳 《计算机应用》2012,32(12):3308-3310
Semi-supervised multi-label classification is usually decomposed into several single-label semi-supervised binary classification problems, which ignores the intrinsic correlations among categories. To address this, a semi-supervised multi-label classification method based on local learning is proposed. Instead of solving multiple single-label semi-supervised binary problems, the method takes a holistic approach: using a graph-based formulation, it introduces a sample-based local learning regularization term and a label-based Laplacian regularization term to build a regularization framework for the problem. Experimental results show that the proposed algorithm achieves high recall and precision.

3.
Constructive machine learning (CML) algorithms require a large number of labeled samples to train a classifier, and such labeled samples are hard to obtain. To address this, a constructive learning method based on the Tri-training algorithm is proposed. From the labeled samples, three substantially different initial covering classification networks are constructed with different strategies and used to label the unlabeled data; the newly labeled data are then added to the training set and the parameters of each classification network are adjusted. This process is repeated until a stable classifier is obtained. Experimental results show that the method achieves higher classification accuracy than the CML algorithm and a semi-supervised learning algorithm based on a naive Bayes (NB) classifier.

4.
Sample labeling is important but time-consuming. Building a multi-label classifier requires a large number of training samples, and manually assigning multiple labels to every sample is difficult. To minimize the labeling effort, an active learning method with a weighted decision function is proposed. The method considers both the number of training samples and the confidence on unknown samples, so that the classifier reaches a satisfactory classification accuracy as quickly as possible at minimal cost.

5.
This paper introduces a co-training classification algorithm based on semi-supervised learning. When few training samples are available, traditional methods such as decision-tree classification cannot deliver satisfactory results, and they require a large number of labeled samples; in practice, obtaining labeled samples is quite expensive. Semi-supervised learning that co-trains on a small set of labeled samples together with a large amount of unlabeled data therefore becomes the preferred choice for researchers.
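A minimal sketch of the co-training idea summarized above, assuming two pre-split feature views, scikit-learn Gaussian naive Bayes base learners, and illustrative confidence/round parameters; none of these choices come from the cited paper.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_training(X1_l, X2_l, y_l, X1_u, X2_u, rounds=10, per_round=5):
    """Two classifiers trained on separate views teach each other.

    Assumes the X arrays are 2-D numpy arrays and y_l holds class labels."""
    for _ in range(rounds):
        c1 = GaussianNB().fit(X1_l, y_l)
        c2 = GaussianNB().fit(X2_l, y_l)
        if len(X1_u) == 0:
            break
        # Each view scores the unlabeled pool; keep the most confident points.
        p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
        conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
        pick = np.argsort(-conf)[:per_round]
        lab1 = c1.classes_[p1[pick].argmax(axis=1)]
        lab2 = c2.classes_[p2[pick].argmax(axis=1)]
        pseudo = np.where(p1.max(axis=1)[pick] >= p2.max(axis=1)[pick], lab1, lab2)
        # Move the pseudo-labeled points from the unlabeled pool to the labeled set.
        X1_l = np.vstack([X1_l, X1_u[pick]])
        X2_l = np.vstack([X2_l, X2_u[pick]])
        y_l = np.concatenate([y_l, pseudo])
        keep = np.setdiff1d(np.arange(len(X1_u)), pick)
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return c1, c2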

6.
How to learn efficiently from large amounts of unlabeled samples in a streaming-data environment is an important problem. Disagreement-based active learning is an effective solution, but such algorithms usually consider only the boundary samples with maximal disagreement and do not correct low-disagreement samples that were misclassified in the early stages of training. To address this, an efficient learning method that fuses active learning and ensemble learning based on disagreement evaluation is proposed. The method selects unlabeled samples in different ways depending on the sample's disagreement level and the current training stage. Experiments on synthetic stream data and HEp-2 cell image data show that, compared with the existing Qboost method, the proposed method needs fewer training samples and achieves higher classification accuracy.

7.
Multi-label learning mainly addresses the ambiguity caused by a single sample being associated with multiple concept labels, and semi-supervised multi-label learning is a recent research direction that tries to combine a small number of labeled samples with a large number of unlabeled samples to improve learning performance. To further exploit the information in unlabeled samples for multi-label document classification, this paper proposes a semi-supervised multi-label learning algorithm based on Tri-training (MKSMLT). The algorithm first expands the labeled sample set with the k-nearest-neighbor algorithm, then trains classifiers with Tri-training, turning the multi-label learning problem into a label-ranking problem. Experiments show that the algorithm effectively improves document classification performance.

8.
Supervised machine learning algorithms need labeled samples to train a classification model, and collecting training samples and performing classification consume considerable manpower, resources, and time, so efficient image classification has long been a research hotspot. This paper proposes an image classification algorithm based on Hough forests and semi-supervised learning that can train the classifier with fewer samples and continuously acquire new training samples during classification; part of the training results are labeled manually, which effectively improves labeling efficiency. Experiments on the COREL data show that the algorithm achieves satisfactory labeling accuracy with a small number of training samples and improves manual efficiency.

9.
To classify multi-label data, a multi-label learning algorithm based on sparse representation is proposed. A test sample is first represented as a sparse linear combination of the training samples, and the sparsest coefficient solution is obtained by l1-minimization. The discriminative information in the sparse coefficients is then used to compute the test sample's membership degree for each label. Finally, the labels are ranked by membership degree to complete the classification. Experiments on Yeast gene function analysis, natural scene classification, and web page classification show that the algorithm effectively solves multi-label classification and achieves better results than other methods.
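A minimal sketch of the sparse-representation step described above, assuming scikit-learn's Lasso as the l1 solver and coefficient magnitudes as the membership evidence; the regularization strength and normalization are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

def label_membership(X_train, Y_train, x_test, alpha=0.01):
    """X_train: (n, d) samples; Y_train: (n, q) binary label matrix; x_test: (d,)."""
    # Code x_test as an l1-sparse combination of the training samples
    # (the training samples act as dictionary atoms, i.e. columns of X_train.T).
    coder = Lasso(alpha=alpha, max_iter=5000)
    coder.fit(X_train.T, x_test)
    w = np.abs(coder.coef_)            # sparse coefficient magnitudes, shape (n,)
    membership = w @ Y_train           # accumulate evidence for each label
    return membership / (membership.sum() + 1e-12)

Labels are then ranked by the returned membership scores; a threshold or top-k cut on the normalized scores yields the predicted label set.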

10.
A Human Action Recognition Method Based on a Multi-Learner Co-Training Model   Total citations: 1 (self-citations: 0, citations by others: 1)
唐超  王文剑  李伟  李国斌  曹峰 《软件学报》2015,26(11):2939-2950
Human action recognition is a hot topic in computer vision, and existing recognition methods are built on supervised learning frameworks: to achieve good recognition performance they usually need a large number of labeled samples for modeling, yet obtaining labeled samples is time-consuming and laborious. To address this, the co-training algorithm from semi-supervised learning is improved and a human action recognition method based on a multi-learner co-training model is proposed; it is a recognition algorithm built on the semi-supervised learning framework. The method first selects the set of base learners for co-training with a learner-diversity measure based on the Q statistic; during co-training, these base learners label the unlabeled samples. A labeled-nearest-neighbor confidence formula based on a committee of classifiers is then used to evaluate the confidence of unlabeled samples, and a proportion of the high-confidence unlabeled samples is added to the labeled training set to update the learners and improve the model's generalization ability. To evaluate the algorithm's effectiveness, mixed features are used to represent human actions, so that recognition can be completed quickly. Experimental results show that the proposed semi-supervised action recognition system effectively identifies human actions in videos.

11.
张晨光  张燕  张夏欢 《自动化学报》2015,41(9):1577-1588
Most existing multi-label learning methods are supervised and cannot effectively exploit the large amounts of unlabeled samples that are relatively cheap and easy to obtain. This paper proposes a new multi-label semi-supervised learning method, the normalized dependence maximization multi-label semi-supervised learning method. Taking the existing labels as constraints, the method uses all samples, both labeled and unlabeled, to estimate the normalized dependence between the feature set and the label set, takes maximizing this estimate as its objective, and finally labels the unlabeled samples by solving a bounded trace-ratio problem. Comparative experiments with classic multi-label learning methods on several real multi-label data sets show that the method learns effectively from labeled and unlabeled samples, and the improvement is especially significant when labeled samples are relatively scarce.

12.
The support vector machine (SVM) is a general and powerful learning machine that operates in a supervised manner. However, in many practical machine learning and data mining applications, unlabeled training examples are readily available while labeled ones are very expensive to obtain, which motivates semi-supervised learning. The combination of SVMs with semi-supervised principles such as transductive learning has attracted increasing attention. The transductive support vector machine (TSVM) learns a large-margin hyperplane classifier from labeled training data while simultaneously forcing the hyperplane to stay far from the unlabeled data. TSVM might seem to be the ideal semi-supervised algorithm, since it combines the powerful regularization of SVMs with a direct implementation of the clustering assumption; nevertheless, its objective function is non-convex and therefore hard to optimize. This paper addresses that difficulty by implementing TSVM with the least squares support vector machine, which makes the objective function convex so that the optimal solution can be found by solving a set of linear equations. Simulation results demonstrate that the proposed method effectively exploits unlabeled data to yield good performance.
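To make the "solve a set of linear equations" point concrete, here is a minimal least squares SVM sketch (supervised part only, regression-on-±1-targets formulation with an RBF kernel); the kernel, hyperparameters, and the omission of the unlabeled-data terms are assumptions, not the paper's exact algorithm.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """y in {-1, +1}. Returns dual coefficients alpha and bias b."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # LS-SVM linear system:  [0, 1^T; 1, K + I/gamma] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y.astype(float)])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

The single call to np.linalg.solve replaces the quadratic program of a standard SVM, which is the property the paper exploits to keep the transductive objective tractable.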

13.
Word Sense Disambiguation by Learning Decision Trees from Unlabeled Data   Total citations: 1 (self-citations: 0, citations by others: 1)
In this paper we describe a machine learning approach to word sense disambiguation that uses unlabeled data. Our method is based on selective sampling with committees of decision trees. The committee members are trained on a small set of labeled examples, which is then augmented by a large number of unlabeled examples. Using unlabeled examples is important because obtaining labeled data is expensive and time-consuming, while it is easy and inexpensive to collect a large number of unlabeled examples. The idea behind this approach is that the labels of unlabeled examples can be estimated by the committee. Using additional unlabeled examples therefore improves the performance of word sense disambiguation and minimizes the cost of manual labeling. The effectiveness of this approach was examined on a raw corpus of one million words; using unlabeled data, we achieved an accuracy improvement of up to 20.2%.
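An illustrative sketch of committee-based selective sampling with decision trees, assuming scikit-learn trees, bootstrap-resampled committee members, integer class labels, and vote entropy as the disagreement measure; these are hedged stand-ins for the paper's exact procedure.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def committee_query(X_l, y_l, X_u, n_members=5, seed=0):
    """Returns majority votes and vote entropy for the unlabeled pool."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X_l), len(X_l))            # bootstrap resample
        member = DecisionTreeClassifier().fit(X_l[idx], y_l[idx])
        votes.append(member.predict(X_u))
    votes = np.stack(votes)                                   # (members, n_unlabeled)
    # Vote entropy: zero when the committee agrees, large when it is split.
    entropy = np.zeros(votes.shape[1])
    for c in np.unique(y_l):
        p = (votes == c).mean(axis=0)
        entropy -= p * np.log(np.clip(p, 1e-12, None))
    majority = np.array([np.bincount(col).argmax() for col in votes.T])
    return majority, entropy

Low-entropy examples can be added with their majority label (cheap committee labeling), while high-entropy examples are the ones worth sending to a human annotator.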

14.
Objective: In the supervised multi-label learning framework, building a classifier with strong generalization requires many labeled training samples, yet in practice labeled samples are scarce and very expensive to obtain. To address the shortage of labeled samples and the low efficiency of classifier retraining in multi-label image classification, an online multi-label image classification algorithm combined with active learning is proposed. Method: Based on min-max theory, the algorithm actively selects samples to be labeled with a strategy that queries the most representative and most informative samples, and updates the multi-label image classifier online based on the KKT (Karush-Kuhn-Tucker) conditions. Results: The algorithm is evaluated on four public data sets with four multi-label classification metrics. The results show that the proposed sample-selection method clearly outperforms random sampling and margin-based sampling; to reach the same or similar classification accuracy, the proposed selection strategy requires notably fewer labeled samples than either of those methods. Conclusion: The algorithm reduces the manual annotation cost of obtaining labeled samples and avoids the inefficiency of retraining a traditional classifier on all the data, so that the classifier can be updated in real time as new data arrive.

15.
For multi-label classification with insufficient labeled data, a new semi-supervised boosting algorithm is proposed: a framework for semi-supervised boosting of multi-label classification based on functional gradient descent, in which the conditional entropy of the unlabeled data is introduced into the classification model as a regularization term. Experimental results show that, for multi-label classification, the new semi-supervised boosting algorithm improves significantly as the amount of unlabeled data grows and outperforms traditional supervised boosting in all respects.
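A minimal sketch of the kind of objective described above: a labeled multi-label loss plus the conditional entropy of the predictions on unlabeled data as a regularizer, which boosting would then minimize by functional gradient steps. The logistic loss and the weight lam are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ss_boost_objective(F_l, Y_l, F_u, lam=0.1, eps=1e-12):
    """F_l: (n_l, q) scores on labeled data; Y_l: (n_l, q) 0/1 labels;
    F_u: (n_u, q) scores on unlabeled data."""
    p_l = sigmoid(F_l)
    labeled = -np.mean(Y_l * np.log(p_l + eps) + (1 - Y_l) * np.log(1 - p_l + eps))
    p_u = sigmoid(F_u)
    # Conditional entropy regularizer: pushes unlabeled predictions to be confident.
    entropy = -np.mean(p_u * np.log(p_u + eps) + (1 - p_u) * np.log(1 - p_u + eps))
    return labeled + lam * entropy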

16.
李延超  肖甫  陈志  李博 《软件学报》2020,31(12):3808-3822
Active learning selects samples from a large pool of unlabeled data for expert labeling. Existing batch-mode active learning algorithms suffer from three main limitations: (1) some active learning methods rely on a single selection criterion or on assumptions about the data or the model, which makes it hard to find unlabeled samples that are both uncertain and representative; (2) the performance of existing batch-mode methods depends heavily on the accuracy of the similarity measure between samples, such as a predefined function or a diversity measure; (3) noisy labels keep degrading the performance of batch-mode active learning. This paper proposes a deep-learning-based batch-mode active learning method. A deep neural network produces learned representations of labeled and unlabeled samples, and a label-cycle scheme links labeled samples to unlabeled ones and back to labeled samples of the same label. In this way both the uncertainty and the representativeness of samples are considered, and the algorithm is robust to noisy labels. In the proposed batch-mode method, a submodular function ensures that the selected sample set is diverse, and adaptive parameter optimization lets the algorithm automatically balance uncertainty and representativeness. The method is applied to semi-supervised classification and semi-supervised clustering, and experiments show that it outperforms several existing state-of-the-art methods.

17.
Text Classification from Labeled and Unlabeled Documents using EM   Total citations: 51 (self-citations: 0, citations by others: 51)
This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However, these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.
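A minimal sketch of the EM-plus-naive-Bayes loop described above, using scikit-learn's MultinomialNB and a hardened E-step (confidence-weighted pseudo-labels instead of full soft assignments); the weighting factor w mirrors the paper's first extension, but the vectorization, w, and iteration count are assumptions.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def em_naive_bayes(X_l, y_l, X_u, n_iters=10, w=0.5):
    """X_l, X_u: dense document-term count matrices; y_l: labels for X_l."""
    clf = MultinomialNB().fit(X_l, y_l)
    for _ in range(n_iters):
        # E-step: estimate labels of the unlabeled documents.
        post = clf.predict_proba(X_u)
        pseudo = clf.classes_[post.argmax(axis=1)]
        conf = post.max(axis=1)
        # M-step: retrain on all documents, down-weighting the unlabeled part by w.
        X_all = np.vstack([X_l, X_u])
        y_all = np.concatenate([y_l, pseudo])
        weights = np.concatenate([np.ones(X_l.shape[0]), w * conf])
        clf = MultinomialNB().fit(X_all, y_all, sample_weight=weights)
    return clf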

18.
In this paper we study multi-label learning with weakly labeled data, i.e., the labels of training examples are incomplete, which commonly occurs in real applications such as image classification and document categorization. This setting includes, e.g., (i) semi-supervised multi-label learning, where completely labeled examples are partially known; (ii) weak label learning, where the relevant labels of examples are partially known; and (iii) extended weak label learning, where the relevant and irrelevant labels of examples are partially known. Previous studies often expect that learning with weakly labeled data will improve performance because more data are employed. This, however, is not always the case in reality: weakly labeled data may sometimes degrade learning performance. It is desirable to learn safe multi-label predictors that will not hurt performance when weakly labeled data are involved in the learning procedure. In this work we optimize multi-label evaluation metrics (F1 score and Top-k precision) given that the ground-truth label assignment is realized by a convex combination of base multi-label learners. To cope with the infinite number of possible ground-truth label assignments, a cutting-plane strategy is adopted to iteratively generate the most helpful label assignments, and the whole optimization is cast as a series of simple linear programs in an efficient manner. Extensive experiments on three weakly labeled learning tasks, namely (i) semi-supervised multi-label learning, (ii) weak label learning, and (iii) extended weak label learning, clearly show that our proposal improves the safeness of using weakly labeled data compared with many state-of-the-art methods.

19.
We study the problem of image retrieval based on semi-supervised learning, which has attracted a lot of attention in recent years. Unlike traditional supervised learning, semi-supervised learning makes use of both labeled and unlabeled data. In image retrieval, collecting labeled examples costs human effort, while vast amounts of unlabeled data are often readily available and offer additional information. In this paper, based on the support vector machine (SVM), we introduce a semi-supervised learning method for image retrieval. The basic consideration of the method is that if two data points are close to each other, they should share the same label; therefore, it is reasonable to search for a projection with maximal margin and a locality-preserving property. We compare our method to the standard SVM and the transductive SVM. Experimental results show the efficiency and effectiveness of our method.
