Similar Documents
17 similar documents found.
1.
Previous semi-supervised multi-instance learning algorithms typically decompose unlabeled bags into sets of instances and use traditional semi-supervised single-instance learning algorithms to infer the latent labels of those instances so that they can be exploited. Such methods, however, assume that the classification of a multi-instance sample is closely tied to its probability density distribution, and they ignore the influence of bag structure on the bag's class label. This paper proposes a bag-level semi-supervised multi-instance kernel learning method that trains the semi-supervised learner directly on unlabeled bags. Bags are first converted into concept-vector representations by clustering the instance space; the Hamming distances between concept vectors are then computed, from which a graph Laplacian matrix describing smoothness over bags is built, and from that a bag-level semi-supervised kernel. Tests on standard multi-instance learning data sets and on image data sets show a clear improvement.
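To make the bag-level graph construction above concrete, here is a minimal sketch (an illustrative assumption, not the paper's code): it takes each bag's binary concept vector, one entry per instance-space cluster set to 1 if the bag has an instance in that cluster, computes pairwise Hamming distances, and forms the graph Laplacian behind the bag-level semi-supervised kernel. The exponential weighting and the bandwidth `sigma` are assumed choices.

```python
import numpy as np

def bag_graph_laplacian(concept_vectors, sigma=1.0):
    """concept_vectors: (num_bags, k) binary matrix, one row per bag."""
    V = np.asarray(concept_vectors, dtype=float)
    hamming = np.abs(V[:, None, :] - V[None, :, :]).sum(axis=-1)  # pairwise Hamming distances
    W = np.exp(-hamming / sigma)                                  # similarity weights
    np.fill_diagonal(W, 0.0)                                      # no self-loops
    return np.diag(W.sum(axis=1)) - W                             # L = D - W
```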

2.
Introducing a mechanism for exploiting unlabeled instances into multi-instance learning reduces training cost and improves the generalization ability of the learner. Most existing semi-supervised multi-instance learning algorithms label every instance inside a bag, turning multi-instance learning into a single-instance semi-supervised learning problem. Considering that a bag's class label is determined by the instances it contains and by the bag's structure, this paper proposes a multi-instance learning algorithm that performs semi-supervised learning directly at the bag level. By defining a multi-instance kernel, all bags (labeled and unlabeled) are used to compute a bag-level graph Laplacian matrix, which serves as the smoothness penalty term in the optimization objective. Finding the optimal solution in the RKHS spanned by the multi-instance kernel reduces to determining a multi-instance kernel function modified by the unlabeled data, which can be plugged directly into classical kernel learning methods. The algorithm is tested on experimental data sets and compared with existing algorithms. The results show that the semi-supervised multi-instance kernel reaches the same accuracy as supervised algorithms with less training data, and that, given the same labeled data, exploiting unlabeled data effectively improves the generalization ability of the learner.
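The "multi-instance kernel function modified by the unlabeled data" can be sketched with the standard Laplacian-deformed kernel construction; this is an assumed realization for illustration, not the paper's exact derivation. Here `K` is a precomputed bag-level multi-instance kernel Gram matrix over all labeled and unlabeled bags, `L` the bag-level graph Laplacian, and `gamma` the smoothness trade-off. The resulting Gram matrix can then be handed to any kernel method that accepts a precomputed kernel.

```python
import numpy as np

def laplacian_modified_kernel(K, L, gamma=1.0):
    """Deform a base bag kernel K with graph Laplacian L: K_new = K - K (I + gamma*L*K)^(-1) gamma*L*K."""
    n = K.shape[0]
    M = gamma * L
    correction = K @ np.linalg.solve(np.eye(n) + M @ K, M @ K)  # K (I + M K)^(-1) M K
    return K - correction
```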

3.
Pumping-unit operating-condition data can be described from several aspects, such as displacement, load, and current. Describing such data with a single feature vector oversimplifies it and discards useful information, and the data are also inherently ambiguous in their labels. This paper therefore proposes a multi-instance multi-label approach to pumping-unit fault diagnosis. In this method, the displacement, load, and current data serve as the multiple instances of a pumping-unit sample bag. The k-medoids clustering algorithm clusters the sample bags and converts each bag into a new instance whose i-th dimension is the distance from the bag to the i-th cluster center; the resulting multi-label problem is then solved with the MLSVM algorithm. Experimental results show that multi-instance multi-label learning diagnoses pumping-unit faults promptly and accurately.
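A rough sketch of the bag-to-vector step described above, under illustrative assumptions: bags are clustered with a simple k-medoids loop using an average-minimal-distance bag metric (the paper's exact distance is not specified here), and each bag is then re-represented by its distances to the k medoid bags.

```python
import numpy as np

def bag_distance(A, B):
    """Average minimal instance distance between two bags (an illustrative choice)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def kmedoids_bags(bags, k=4, iters=20, seed=0):
    """Cluster bags with a simple k-medoids loop over a precomputed bag-distance matrix."""
    rng = np.random.default_rng(seed)
    n = len(bags)
    D = np.array([[bag_distance(bags[i], bags[j]) for j in range(n)] for i in range(n)])
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        assign = D[:, medoids].argmin(axis=1)              # nearest medoid per bag
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(assign == c)
            if members.size:                               # medoid = member closest to its cluster
                new_medoids[c] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, D

def bags_to_medoid_features(medoids, D):
    """Each bag becomes a k-dimensional vector of its distances to the medoid bags."""
    return D[:, medoids]
```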

4.
Multi-instance multi-label learning is a recent machine learning framework in which each sample is a bag composed of multiple instances and is annotated with multiple labels. Previous work on multi-instance multi-label learning usually assumes that the instances within a bag are independent and identically distributed, an assumption that is hard to guarantee in practice. To exploit the correlations among the instances in a bag, this paper proposes a multi-instance multi-label classification algorithm based on non-i.i.d. instances. The algorithm first expresses the correlations among a bag's instances with a correlation matrix, so that each multi-instance bag is represented by one such matrix; it then builds kernel functions over correlation matrices at different scales; finally, since different labels call for different kernels, multiple kernel learning is introduced to construct and train a multi-kernel SVM classifier for each label's prediction. Experimental results on image and text data sets show that the algorithm substantially improves multi-label classification accuracy.
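The following is a minimal sketch, not the paper's implementation, of the non-i.i.d. bag representation: each bag is summarized by the correlation matrix of its instances, and a toy RBF kernel between zero-padded correlation matrices stands in for the paper's multi-scale kernels.

```python
import numpy as np

def bag_correlation_matrix(bag, eps=1e-8):
    """bag: (n_i, d) instances -> (n_i, n_i) matrix of instance-to-instance correlations."""
    X = bag - bag.mean(axis=1, keepdims=True)            # center each instance over its features
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    return X @ X.T

def correlation_kernel(C1, C2, gamma=0.1):
    """Toy RBF kernel between two bags' correlation matrices, zero-padded to a common size."""
    m = max(C1.shape[0], C2.shape[0])
    P1 = np.zeros((m, m)); P1[:C1.shape[0], :C1.shape[0]] = C1
    P2 = np.zeros((m, m)); P2[:C2.shape[0], :C2.shape[0]] = C2
    return float(np.exp(-gamma * np.linalg.norm(P1 - P2) ** 2))
```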

5.
环天  郝宁  牛强 《计算机科学》2017,44(12):48-51, 63
The multi-instance multi-label learning algorithm MIMLSVM builds clusters only at the bag level and ignores the effect that the distribution of instances within a bag has on classification. To address this, an improved MIMLSVM algorithm based on concept weight vectors, I-MIMLSVM, is proposed. Clusters are first constructed at the instance level to uncover the latent concept clusters among the instances, and the concept weight of each cluster is computed with the R-PATTERN algorithm; the importance of each concept cluster within each bag is then computed with the TF-IDF algorithm; finally, each bag is represented as a concept weight vector whose i-th dimension is the product of the i-th concept cluster's concept weight and its importance within that bag. Experiments on a natural data set of 2000 images show that the improved algorithm outperforms the original in overall classification performance, most noticeably on the Hamming loss, Coverage, and Average precision metrics.
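A small sketch of the TF-IDF weighting step, assuming each bag has already been reduced to counts of how many of its instances fall into each concept cluster; the exact TF-IDF variant and naming are assumptions for illustration.

```python
import numpy as np

def concept_tfidf(bag_concept_counts):
    """bag_concept_counts: (num_bags, num_concepts) counts of each concept cluster per bag."""
    counts = np.asarray(bag_concept_counts, dtype=float)
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)   # within-bag frequency
    df = (counts > 0).sum(axis=0)                                      # bags containing each concept
    idf = np.log((1.0 + counts.shape[0]) / (1.0 + df)) + 1.0           # smoothed IDF
    return tf * idf                                                    # per-bag concept importance
```

A bag's final description would then multiply these importances element-wise with the concept weights learned at the instance level.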

6.
郝宁  夏士雄  牛强  赵志军 《计算机应用》2015,35(11):3122-3125
The degeneration step of the multi-instance multi-label learning algorithm MIMLBoost causes class imbalance. Drawing on the idea of manual under-sampling and introducing a measure of label importance, an improved degeneration method based on class label evaluation is proposed. The method clusters the bags in the instance space and quantizes the labels in the label space onto the clusters; then, taking each cluster as a unit, the TF-IDF algorithm evaluates and filters the importance of each class label, removes labels of low importance, and pairs the bags in the cluster with the remaining class labels. This reduces the number of majority-class samples produced and completes the conversion from multi-instance multi-label samples to multi-instance single-label samples. Experiments on natural data sets show that the improved algorithm outperforms the original overall, most notably on the Hamming loss, coverage, and ranking loss metrics, indicating that it effectively lowers the classification error rate and improves accuracy and classification efficiency.

7.
Mobile game providers earn revenue by selling virtual items within their games. By describing each event in a player's log data as an instance and representing the player's purchase status for multiple game items as multiple labels, the game item recommendation problem is cast as a multi-instance multi-label learning problem. On this basis, a fast multi-instance multi-label learning algorithm is applied to item recommendation in mobile online games, and semi-supervised learning is used to further improve recommendation performance. Experiments on offline data sets and in a live mobile online game show that recommendation based on multi-instance multi-label learning brings a significant increase in game revenue.

8.
Traditional text classification algorithms no longer meet the needs of texts with special structure, so a text classification algorithm based on the multi-instance learning framework is proposed. Each text is treated as a bag, with its title and body viewed as the bag's two instances; a multi-class classification SVM built on one-class classification maps the bags into a high-dimensional feature space; a Gaussian kernel is then introduced to train the classifier and predict the classes of unlabeled texts. Experimental results show that the algorithm achieves higher classification accuracy than traditional machine learning classifiers and offers a new angle for text mining research on texts with special structure.
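To illustrate the bag construction only (the one-class/multi-class SVM step is omitted), here is a hypothetical sketch that turns each document into a two-instance bag from its title and body using a shared TF-IDF vectorizer; the field names and featurization are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def documents_to_bags(docs):
    """docs: list of dicts with 'title' and 'body' keys -> list of (2, d) instance bags."""
    vectorizer = TfidfVectorizer()
    texts = [doc["title"] for doc in docs] + [doc["body"] for doc in docs]
    X = vectorizer.fit_transform(texts).toarray()
    n = len(docs)
    # Instance 0 of each bag comes from the title, instance 1 from the body.
    return [np.vstack([X[i], X[n + i]]) for i in range(n)]
```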

9.
In multi-instance learning (MIL), the core instances play a crucial role in predicting a bag's class. If two instances are surrounded by different numbers of instances of the same class, the two instances are representative to different degrees. To select the most representative instances from each bag to form the core instance set and thereby improve classification accuracy, this paper proposes an instance-level covering algorithm for multi-instance learning (MILICA). The algorithm first builds an initial core instance set using the maximal Hausdorff distance and the covering algorithm, then obtains the final core instance set and the number of instances contained in each cover through the covering algorithm and reverse validation, and finally converts each bag into a single instance using a similarity function. Experiments on binary data sets and multi-class image data sets show that MILICA achieves good classification performance.

10.
To solve automatic classification of multi-instance images effectively, a method is proposed that converts multi-instance images into single-instance descriptions in a bag space. The method treats an image as a bag and the regions in the image as the instances in the bag. Since samples containing the same visual regions cluster together, a clustering algorithm determines a characteristic "visual vocabulary" for each image class, and the definite labels of instances in negative bags guide the selection of representative "visual words". A new space, the bag space, is then constructed from the resulting vocabulary, and a nonlinear function defined on the visual words maps each image, originally described by multiple instances, to a single point in the bag space, yielding a single-instance description. Finally, a standard support vector machine performs supervised learning to achieve automatic image classification. Comparative experiments on image data sets from the Corel image library show that the algorithm delivers good image classification performance.

11.
Many multi-instance algorithms make assumptions about the instances in positive bags. To address this, a multi-instance ensemble algorithm combined with fuzzy clustering (ISFC) is proposed. Combining fuzzy clustering with the characteristics of negative bags in multi-instance learning, the concept of a "positive score" is introduced to measure how likely an instance's label is to be positive, reducing the ambiguity of instance labels in multi-instance learning. Considering that misclassifying negative instances carries a higher cost in multi-instance learning, a strategy for selecting representative instances of bags is designed; the selected representative...

12.
In multi-instance learning, the training set is composed of labeled bags, each of which consists of many unlabeled instances; that is, an object is represented by a set of feature vectors instead of only one feature vector. Most current multi-instance learning algorithms work by adapting single-instance learning algorithms to the multi-instance representation, while this paper proposes a new solution that goes the opposite way, that is, adapting the multi-instance representation to single-instance learning algorithms. In detail, the instances of all the bags are collected together and clustered into d groups first. Each bag is then re-represented by d binary features, where the value of the i-th feature is set to one if the concerned bag has instances falling into the i-th group and zero otherwise. Thus, each bag is represented by one feature vector, so that single-instance classifiers can be used to distinguish different classes of bags. By repeating the above process with different values of d, many classifiers can be generated and then combined into an ensemble for prediction. Experiments show that the proposed method works well on standard as well as generalized multi-instance problems. Zhi-Hua Zhou is currently Professor in the Department of Computer Science & Technology and head of the LAMDA group at Nanjing University. His main research interests include machine learning, data mining, information retrieval, and pattern recognition. He is an associate editor of Knowledge and Information Systems and serves on the editorial boards of Artificial Intelligence in Medicine, International Journal of Data Warehousing and Mining, Journal of Computer Science & Technology, and Journal of Software. He has also been involved in various conferences. Min-Ling Zhang received his B.Sc. and M.Sc. degrees in computer science from Nanjing University, China, in 2001 and 2004, respectively. Currently he is a Ph.D. candidate in the Department of Computer Science & Technology at Nanjing University and a member of the LAMDA group. His main research interests include machine learning and data mining, especially multi-instance learning and multi-label learning.
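A short sketch of the re-representation step this abstract describes: pool all instances, cluster them into d groups, and set a bag's i-th feature to one if the bag has an instance in the i-th group. k-means is used here as an assumed clustering choice; the ensemble is obtained by repeating the procedure for several values of d and combining the resulting single-instance classifiers.

```python
import numpy as np
from sklearn.cluster import KMeans

def binary_bag_features(bags, d, seed=0):
    """bags: list of (n_i, num_features) arrays -> (len(bags), d) binary feature matrix."""
    km = KMeans(n_clusters=d, n_init=10, random_state=seed).fit(np.vstack(bags))
    feats = np.zeros((len(bags), d))
    for i, bag in enumerate(bags):
        feats[i, np.unique(km.predict(bag))] = 1.0   # groups this bag's instances fall into
    return feats
```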

13.
Multi-instance clustering with applications to multi-instance prediction
In the setting of multi-instance learning, each object is represented by a bag composed of multiple instances instead of by a single instance as in a traditional learning setting. Previous works in this area only concern multi-instance prediction problems where each bag is associated with a binary (classification) or real-valued (regression) label, whereas unsupervised multi-instance learning, where bags are without labels, has not been studied. In this paper, the problem of unsupervised multi-instance learning is addressed and a multi-instance clustering algorithm named Bamic is proposed. Briefly, by regarding bags as atomic data items and using some form of distance metric to measure distances between bags, Bamic adapts the popular k-Medoids algorithm to partition the unlabeled training bags into k disjoint groups of bags. Furthermore, based on the clustering results, a novel multi-instance prediction algorithm named Bartmip is developed. Firstly, each bag is re-represented by a k-dimensional feature vector, where the value of the i-th feature is set to be the distance between the bag and the medoid of the i-th group. After that, bags are transformed into feature vectors so that common supervised learners can be used to learn from the transformed feature vectors, each associated with the original bag's label. Extensive experiments show that Bamic can effectively discover the underlying structure of the data set and that Bartmip works quite well on various kinds of multi-instance prediction problems.

14.
In multi-instance learning, the training set comprises labeled bags that are composed of unlabeled instances, and the task is to predict the labels of unseen bags. This paper studies multi-instance learning from the viewpoint of supervised learning. First, by analyzing some representative learning algorithms, it shows that multi-instance learners can be derived from supervised learners by shifting their focus from discrimination on the instances to discrimination on the bags. Second, considering that ensemble learning paradigms can effectively enhance supervised learners, it proposes building multi-instance ensembles to solve multi-instance problems. Experiments on a real-world benchmark show that ensemble learning paradigms can significantly enhance multi-instance learners.

15.
As a variant of supervised learning, multi-instance learning (MIL) tries to learn a classifier from the instances inside bags. In multi-instance learning, labels are associated with bags rather than with individual instances: the labels of bags are known while the labels of instances are unknown. MIL can cope with label ambiguity, but problems with weak labels are not easy to solve. In the weak-label setting, the labels of both bags and instances are unknown, but they exist as latent variables. Given multiple labels and instances, the labels of bags and instances can be approximately estimated by weighting the different labels. This paper proposes a new multi-instance learning framework based on transfer learning to solve the weak-label problem. A transfer learning model built on the multi-instance approach is first constructed; it transfers knowledge from a source task to the target task, thereby converting the weak-label problem into a multi-instance learning problem. On this basis, an iterative framework for solving the multi-instance transfer learning model is proposed. Experimental results show that the method outperforms existing multi-instance learning methods.

16.
In multi-instance learning, the training examples are bags composed of instances without labels, and the task is to predict the labels of unseen bags by analyzing the training bags with known labels. A bag is positive if it contains at least one positive instance, and negative if it contains no positive instance. In this paper, a neural-network-based multi-instance learning algorithm named RBF-MIP is presented, which is derived from the popular radial basis function (RBF) methods. Briefly, the first layer of an RBF-MIP neural network is composed of clusters of bags formed by merging training bags agglomeratively, where the Hausdorff metric is utilized to measure distances between bags and between clusters. The weights of the second layer of the RBF-MIP neural network are optimized by minimizing a sum-of-squares error function and worked out through singular value decomposition (SVD). Experiments on real-world multi-instance benchmark data, artificial multi-instance benchmark data, and natural scene image database retrieval are carried out. The experimental results show that RBF-MIP is among the several best learning algorithms on multi-instance problems.
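A condensed sketch of the RBF-MIP idea under simplifying assumptions: instead of agglomeratively merged bag clusters, the training bags themselves serve as centers; hidden activations are RBFs of the maximal Hausdorff distance to each center, and the output weights are the SVD-based least-squares solution (via the pseudo-inverse). This is an illustrative reduction, not the published implementation.

```python
import numpy as np

def max_hausdorff(A, B):
    """Maximal Hausdorff distance between two bags of instances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def rbf_mip_fit(bags, labels, sigma=1.0):
    """Fit output weights by least squares (SVD-based pseudo-inverse); returns (centers, weights)."""
    H = np.array([[np.exp(-max_hausdorff(b, c) ** 2 / (2 * sigma ** 2)) for c in bags]
                  for b in bags])                         # hidden-layer activations
    w = np.linalg.pinv(H) @ np.asarray(labels, dtype=float)
    return bags, w

def rbf_mip_predict(bag, centers, w, sigma=1.0):
    h = np.array([np.exp(-max_hausdorff(bag, c) ** 2 / (2 * sigma ** 2)) for c in centers])
    return float(h @ w)
```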

17.
For the unsupervised multi-instance problem in which training bags carry no labels, this paper proposes a multi-instance prediction algorithm that combines clustering and classification. A multi-instance clustering algorithm first completes the clustering task of unsupervised multi-instance learning; based on the clustering results, every bag in each cluster is then converted into a corresponding k-dimensional feature vector. Experiments under both the standard multi-instance prediction model and the generalized multi-instance prediction model yield high prediction accuracy, and the algorithm performs well compared with other multi-instance prediction algorithms.
