Similar Documents
18 similar documents found.
1.
吕佳. 《计算机应用》 (Journal of Computer Applications), 2012, 32(3): 643-645
In semi-supervised classification, global learning alone rarely yields a good decision function over the entire input space, whereas local learning alone can learn good decision functions only within specific local regions. To address this, a semi-supervised binary classification algorithm combining global and local regularization is proposed. The algorithm combines the strengths of the two regularizers: the global regularizer, built from prior knowledge, smooths the class labels of the samples to compensate for cases where local learning is insufficient, while the local regularizer, built from sample information within local neighborhoods, gives each sample's class label the desired properties; together they form the objective function of the semi-supervised binary classification problem. Experiments on standard two-class datasets show that the proposed algorithm outperforms methods based on the Laplacian regularizer, the normalized Laplacian regularizer, and the local-learning regularizer in both average classification accuracy and standard error.

2.
Multi-label learning addresses the problem where a single example belongs to multiple classes simultaneously. Traditional multi-label learning usually assumes that the training set contains a large number of labeled examples; in many real-world tasks, however, only a small fraction of the abundant training examples are labeled. To better exploit the plentiful unlabeled examples and improve classification performance, a regularization-based inductive semi-supervised multi-label learning method, MASS, is proposed. Specifically, on top of empirical risk minimization, MASS introduces two regularizers, one constraining classifier complexity and the other requiring similar examples to have similar structured multi-label outputs, and then derives a fast solution via alternating optimization. Experimental results on web page classification and gene function analysis verify the effectiveness of MASS.

3.
Traditional supervised metric learning algorithms ignore the abundant unlabeled examples, and the metric matrices they learn are complex, making it hard to tell how important the individual raw features are. To address this, a semi-supervised sparse metric learning algorithm based on semi-supervised assumptions is proposed. A margin loss function is built from triplet constraints; a semi-supervised regularizer is built from three semi-supervised assumptions, the smoothness, cluster, and manifold assumptions; a sparsity regularizer is built with the L1 norm; and the objective function is solved by gradient descent. Experimental results show that the learned metric effectively enlarges the distances between samples of different classes, the metric matrix is sparse, the decision boundary passes through low-density regions, and the algorithm achieves good classification accuracy on UCI datasets.

4.
The real world contains a great number of heterogeneous information networks with many types of objects and links, and mining them for knowledge has become a current research hotspot. Semi-supervised learning based on graph regularization has been widely studied in recent years, yet most existing semi-supervised algorithms apply only to homogeneous networks. Based on a consistency assumption over homogeneous and heterogeneous nodes, a regularized classification function for semi-supervised learning on heterogeneous information networks of arbitrary structure is proposed, and its closed-form solution is derived to predict the classes of unlabeled nodes. An iterative framework for semi-supervised learning on heterogeneous information networks is also proposed, in which the information of labeled nodes propagates iteratively to neighboring nodes until a stable state is reached, and the iterative algorithm is proved to converge to the closed-form solution of the regularized classification function. Experiments on the DBLP dataset show that the method outperforms classical semi-supervised learning algorithms.
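For the simpler homogeneous-graph special case, the iterative propagation and its closed-form fixed point described above can be sketched as follows (a minimal illustration under standard graph-regularization assumptions, not the paper's heterogeneous-network formulation; all function and variable names are ours):

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=500):
    """Iteratively spread label information to neighboring nodes
    until a stable state is reached.
    W: (n, n) symmetric affinity matrix; Y: (n, c) one-hot labels,
    with all-zero rows for unlabeled nodes."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D @ W @ D                             # symmetrically normalized affinity
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # propagate, then clamp to seeds
    return F

def closed_form(W, Y, alpha=0.9):
    """Closed-form fixed point of the iteration above:
    F* = (1 - alpha) * (I - alpha * S)^(-1) @ Y."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D @ W @ D
    I = np.eye(W.shape[0])
    return (1 - alpha) * np.linalg.inv(I - alpha * S) @ Y
```

Each unlabeled node is then assigned the class with the largest score in its row of the returned matrix.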

5.
For multi-label classification with insufficient labeled data, a new semi-supervised Boosting algorithm is proposed: a framework for semi-supervised multi-label Boosting is derived from the functional gradient descent view, and the conditional entropy of the unlabeled data is introduced into the classification model as a regularization term. Experimental results show that, for multi-label classification, the performance of the new semi-supervised Boosting algorithm improves significantly as the amount of unlabeled data increases, and it outperforms traditional supervised Boosting in every respect.

6.
Traditional machine learning mainly addresses single-label learning, where each example has exactly one label. In bioinformatics, a gene usually has at least one function, i.e., at least one label; compared with traditional methods, multi-label learning can identify the functions of biologically related gene groups more effectively. Current research focuses mostly on supervised multi-label learning algorithms, while semi-supervised multi-label learning, which learns from both labeled and unlabeled gene expression data, remains an open problem. An effective semi-supervised multi-label learning algorithm for gene function analysis, SML_SVM, is proposed. SML_SVM first transforms the semi-supervised multi-label problem into a semi-supervised single-label problem via the PT4 method, then estimates the labels of unlabeled examples by the maximum a posteriori (MAP) principle combined with the k-nearest-neighbor method, and finally solves the single-label problem with an SVM. Experiments on the yeast gene data and the genbase protein data show that SML_SVM outperforms both PT4-based MLSVM and self-training MLSVM.
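The kNN/MAP label-estimation step can be illustrated with a minimal sketch: assuming uniform class priors, the MAP estimate reduces to a majority vote among the k nearest labeled neighbors (function and variable names here are ours, not the paper's):

```python
from collections import Counter
import math

def knn_map_label(x, labeled, k=3):
    """Estimate the label of an unlabeled example as the majority label
    (the MAP choice under uniform priors and a kNN likelihood) among its
    k nearest labeled neighbors.
    labeled: list of (feature_vector, label) pairs."""
    # sort labeled examples by Euclidean distance to x, keep the k closest
    neighbors = sorted(labeled, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(lab for _, lab in neighbors)
    return votes.most_common(1)[0][0]
```

In the full algorithm these estimated labels would then be fed, together with the genuinely labeled examples, into an SVM for the single-label problem.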

7.
Multi-Instance Multi-Label learning (MIML) is a new machine learning framework in which an example is composed of multiple instances and associated with multiple class labels. Owing to its expressive power for polysemous objects, the framework has become a research hotspot in the machine learning community. The most direct way to solve MIML classification is a degeneration strategy: by degenerating to multi-instance learning or multi-label learning, the MIML classification problem is reduced to a series of binary classification problems. However, the correlation information among labels is lost during the degeneration, lowering classification accuracy. To address this, this paper proposes the MIMLSVM-LOC algorithm, which combines an improved MIMLSVM with ML-LOC, a method exploiting local label correlations, so that label correlation information is used during training. The algorithm first improves the K-medoids clustering in MIMLSVM by adopting a hybrid Hausdorff distance, mapping each bag of instances to a single instance and thereby degenerating the MIML problem; it then carries out the subsequent classification with the single-instance multi-label algorithm ML-LOC. Experimental comparisons with other MIML algorithms show that the proposed algorithm achieves better classification results.

8.
Multi-instance multi-label learning is a new machine learning framework in which an object is represented by multiple instances and simultaneously associated with multiple class labels. One way to design a MIML algorithm is a degeneration strategy that reduces the problem to multi-instance learning or multi-label learning and finally to traditional supervised learning, after which some algorithm is used for training and modeling; but information is lost during the degeneration, which hurts classification accuracy. The MIMLSVM algorithm uses multi-label learning as the bridge to degenerate the MIML problem into a traditional supervised learning problem, but it does not consider the correlation information among labels in the process. This paper improves MIMLSVM with GLOCAL, a multi-label algorithm that considers both global and local label correlations; experimental results show that the improved algorithm achieves good classification performance.

9.
Multi-instance multi-label learning is a new machine learning framework in which an object is represented by multiple instances and associated with multiple class labels. The MIMLSVM+ algorithm transforms the MIML problem into a series of independent binary classification problems, but the correlation information among labels is lost during the degeneration; the E-MIMLSVM+ algorithm improves on MIMLSVM+ by introducing multi-task learning techniques. To make full use of unlabeled examples and improve classification accuracy, E-MIMLSVM+ is further improved here with the semi-supervised support vector machine TSVM. Experimental comparisons with other MIML algorithms show that the improved algorithm achieves good classification results.

10.
丁赛赛, 吕佳. 《计算机应用研究》 (Application Research of Computers), 2020, 37(12): 3607-3611
In generative adversarial networks, the discriminator classifies poorly with only a few labeled examples and is insufficiently robust to local perturbations on the manifold. To address this, a generative adversarial network algorithm based on a variable loss and manifold regularization is proposed. When labeled examples are scarce, the algorithm replaces the original adversarial loss in the discriminator with a variable loss, removing the harm that a discriminator with poor early-stage classification performance does to the semi-supervised classification task. In addition, a manifold regularization term is added on top of the discriminator's variable loss, improving the discriminator's robustness to local perturbations by penalizing changes in its classification decisions along the manifold. Using the quality of generated samples and semi-supervised classification accuracy as evaluation criteria, numerical experiments were conducted on the SVHN and CIFAR-10 datasets. Comparisons with other semi-supervised algorithms show that, with only a small amount of labeled data, the algorithm produces higher-quality generated samples and more accurate classification results.

11.
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods: the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the label set, and the final labels for each example are determined by aggregating the predictions of all the binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to predict label combinations accurately, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependencies by themselves. An experimental study using decision trees, a kernel method and Naïve Bayes as base-learning techniques shows the potential of the proposed approach to improve multi-label classification performance.
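The binary relevance decomposition described above can be sketched as follows; a toy nearest-centroid learner stands in for the decision trees, kernel method, or Naïve Bayes used as base learners in the paper (all class and function names here are illustrative):

```python
class NearestCentroid:
    """Toy binary base learner: predicts 1 if x is closer to the
    positive-class centroid than to the negative-class centroid."""
    def fit(self, X, y):
        pos = [x for x, t in zip(X, y) if t == 1]
        neg = [x for x, t in zip(X, y) if t == 0]
        mean = lambda vs: tuple(sum(c) / len(vs) for c in zip(*vs))
        self.pos_c, self.neg_c = mean(pos), mean(neg)
        return self

    def predict(self, x):
        d = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
        return 1 if d(x, self.pos_c) <= d(x, self.neg_c) else 0

def binary_relevance_fit(X, Y):
    """Train one independent binary classifier per label column of Y."""
    n_labels = len(Y[0])
    return [NearestCentroid().fit(X, [row[j] for row in Y])
            for j in range(n_labels)]

def binary_relevance_predict(models, x):
    """Aggregate the per-label predictions into one label vector."""
    return [m.predict(x) for m in models]
```

Each classifier sees only its own label column, which is exactly why plain binary relevance cannot model label dependencies.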

12.
In classification problems with hierarchical label structures, the target function must assign labels that are hierarchically organized, and it can be used either for single-label problems (one label per instance) or multi-label problems (more than one label per instance). In parallel to these developments, the idea of semi-supervised learning has emerged as a solution to the problems found in the standard supervised learning procedure used in most classification algorithms: it combines labelled and unlabelled data during the training phase. Some semi-supervised methods have been proposed for single-label classification. However, very little work has been done in the context of hierarchical multi-label classification. Therefore, this paper proposes a new method for supervised hierarchical multi-label classification, called HMC-RAkEL. Additionally, we propose the use of semi-supervised learning via self-training in hierarchical multi-label classification, leading to three new methods, called HMC-SSBR, HMC-SSLP and HMC-SSRAkEL. To validate the feasibility of these methods, an empirical analysis is conducted, comparing the proposed methods with their corresponding supervised versions. The main aim of this analysis is to observe whether the semi-supervised methods proposed in this paper have performance similar to that of the corresponding supervised versions.

13.

In multi-label classification problems, every instance is associated with multiple labels at the same time. Binary classification, multi-class classification and ordinal regression problems can be seen as special cases of multi-label classification in which each instance is assigned only one label. Text classification is the main application area of multi-label classification techniques; however, relevant work is also found in areas like bioinformatics, medical diagnosis, scene classification and music categorization. There are two approaches to multi-label classification. The first is the algorithm-independent approach, or problem transformation, in which the multi-label problem is handled by transforming the original problem into a set of single-label problems; the second is algorithm adaptation, in which specific algorithms are proposed to solve the multi-label classification problem directly. In this work we not only survey research conducted under the algorithm adaptation approach but also perform a comparative study of two proposed algorithms. The first proposed algorithm, fuzzy PSO-based ML-RBF, is a hybridization of fuzzy PSO and ML-RBF. The second proposed algorithm, FSVD-MLRBF, hybridizes fuzzy c-means clustering with singular value decomposition. Both proposed algorithms are applied to real-world datasets, the yeast and scene datasets. The experimental results show that both proposed algorithms meet or beat ML-RBF and ML-KNN when applied to the test datasets.


14.
In multi-label learning, each example can be tagged with multiple labels, which gives it broader applicability and more attention than single-label learning, and the correlations among labels can be exploited to improve algorithm performance. Traditional feature selection algorithms no longer apply in the multi-label setting: on the one hand, they rely on single-label evaluation criteria, whereas multi-label learning optimizes multiple labels simultaneously; on the other hand, correlation information exists among the different labels. A feature selection algorithm for multi-label problems should therefore be designed so that this inter-label correlation information can be extracted and exploited. By designing an optimal objective loss function, a multi-label feature selection algorithm based on the exponential margin loss is proposed. Using sample similarity, the algorithm fuses information from the feature space and the label space, and it is independent of any specific classification algorithm or transformation strategy. Experiments on real-world datasets verify the correctness of the proposed algorithm and its good classification performance, which surpasses that of other feature selection algorithms.

15.
A Multi-label Classification Algorithm Based on an Ensemble of Floating-Threshold Classifiers
For multi-label classification problems, in which an object can belong to multiple classes simultaneously, a multi-label classification algorithm based on an ensemble of floating-threshold classifiers is proposed. First, the principle and error-rate estimation of the AdaBoost algorithm with floating-threshold classifiers (AdaBoost.FT) are analyzed, and it is proved that the algorithm overcomes the instability of fixed-threshold classifiers on points near the decision boundary, thereby improving classification accuracy. The single-label algorithm is then applied to multi-label classification via the binary relevance (BR) method, yielding a multi-label classification method based on an ensemble of floating-threshold classifiers, i.e., multi-label AdaBoost.FT. Experimental results show that the average classification accuracy of the proposed algorithm on the Emotions dataset is about 4%, 8% and 11% higher than that of AdaBoost.MH, ML-kNN and RankSVM respectively, and only about 3% and 1% lower than RankSVM on the Scene and Yeast datasets. The experimental analysis indicates that the algorithm performs well on datasets whose class labels are largely uncorrelated or whose number of labels is small.

16.
The hybrid strategy, which generalizes a specific single-label algorithm while applying one or two data decomposition tricks implicitly or explicitly, has become an effective and efficient tool for designing and implementing various multi-label classification algorithms. In this paper, we extend the traditional binary support vector machine by introducing an approximate ranking loss as its empirical loss term, building a novel support vector machine for multi-label classification and resulting in a quadratic programming problem with different upper bounds on the variables to characterize the label correlations of individual instances. Further, our optimization problem can be solved by combining the one-versus-rest data decomposition trick with a modified binary support vector machine, which dramatically reduces computational cost. An experimental study on ten multi-label data sets illustrates that our method is a powerful candidate for multi-label classification, compared with four state-of-the-art multi-label classification approaches.

17.
Multi-label learning addresses the ambiguity that arises when a single example is associated with multiple concept labels. Semi-supervised multi-label learning is a recent research direction within multi-label learning that tries to improve learning performance by jointly exploiting a small number of labeled examples and a large number of unlabeled ones. To further mine the information and value of unlabeled examples and apply them to multi-label document classification, this paper proposes a Tri-training-based semi-supervised multi-label learning algorithm, MKSMLT. The algorithm first enlarges the labeled sample set with the k-nearest-neighbor algorithm, then trains classifiers with Tri-training, casting the multi-label learning problem as a label ranking problem. Experiments show that the algorithm effectively improves document classification performance.

18.
Directly applying single-label classification methods to multi-label learning problems substantially limits both performance and speed due to the imbalance, dependence and high dimensionality of the given label matrix. Existing methods either ignore these three problems or reduce one at the price of aggravating another. In this paper, we propose a {0,1} label matrix compression and recovery method termed "compressed labeling (CL)" to simultaneously solve, or at least reduce, these three problems. CL first compresses the original label matrix to improve balance and independence by preserving the signs of its Gaussian random projections. Afterward, we directly apply popular binary classification methods (e.g., support vector machines) to each new label. A fast recovery algorithm is developed to recover the original labels from the predicted new labels. In the recovery algorithm, a "labelset distilling method" is designed to extract distilled labelsets (DLs), i.e., the frequently appearing label subsets, from the original labels via recursive clustering and subtraction. Given a distilled and an original label vector, we discover that the signs of their random projections have an explicit joint distribution that can be quickly computed from a geometric inference. Based on this observation, the original label vector is exactly determined after performing a series of Kullback-Leibler divergence based hypothesis tests on the distribution of the new labels. CL significantly improves the balance of the training samples and reduces the dependence between different labels. Moreover, it accelerates the learning process by training fewer binary classifiers for the compressed labels, and makes use of label dependence via DL-based tests. Theoretically, we prove recovery bounds for CL, which verify the effectiveness of CL for label compression and the multi-label classification performance improvement brought by the label correlations preserved in DLs. We show the effectiveness, efficiency and robustness of CL via 5 groups of experiments on 21 datasets from text classification, image annotation, scene classification, music categorization, genomics and web page classification.
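The compression step of CL, keeping only the signs of Gaussian random projections of each {0,1} label row, can be sketched as follows (an illustration of the projection idea only; the labelset-distilling recovery algorithm is omitted, and all names are ours):

```python
import numpy as np

def compress_labels(Y, k, rng):
    """Compress an (n, L) {0,1} label matrix into an (n, k) sign matrix
    by keeping the signs of k Gaussian random projections of each label
    row; zero projections are mapped to +1.  Returns the compressed
    matrix and the shared projection matrix A (needed for recovery)."""
    L = Y.shape[1]
    A = rng.standard_normal((L, k))   # one Gaussian projection shared by all rows
    Z = Y @ A                          # project each label row
    return np.where(Z >= 0, 1, -1), A
```

One binary classifier is then trained per compressed column; since the projection matrix is shared, identical label rows always receive identical compressed rows.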


Copyright©北京勤云科技发展有限公司  京ICP备09084417号