Similar Documents
20 similar documents retrieved (search time: 368 ms)
1.
Multi-instance multi-label (MIML) learning is a new machine learning framework in which an object is represented by multiple instances and is simultaneously associated with multiple class labels. The MIMLSVM+ algorithm transforms a MIML problem into a series of independent binary classification problems, but the correlation information among labels is lost in this degeneration; the E-MIMLSVM+ algorithm improves on MIMLSVM+ by introducing multi-task learning. To make full use of unlabeled samples and raise classification accuracy, this paper further improves E-MIMLSVM+ with the semi-supervised support vector machine TSVM. Experimental comparisons with other MIML algorithms show that the improved algorithm achieves good classification performance.
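A minimal sketch of how unlabeled bags might be folded into the per-label classifiers, assuming the bags have already been degenerated into fixed-length vectors. scikit-learn has no TSVM, so SelfTrainingClassifier stands in for the semi-supervised step; all function names and parameters below are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

def fit_semi_supervised_labels(X_lab, Y_lab, X_unlab):
    # X_lab: (N, d) degenerated labeled bags; Y_lab: (N, L) binary label matrix;
    # X_unlab: (M, d) degenerated unlabeled bags.
    X = np.vstack([X_lab, X_unlab])
    models = []
    for l in range(Y_lab.shape[1]):
        # -1 marks the unlabeled bags for the semi-supervised wrapper.
        y = np.concatenate([Y_lab[:, l], -np.ones(len(X_unlab), dtype=int)])
        base = SVC(kernel="rbf", probability=True)
        models.append(SelfTrainingClassifier(base).fit(X, y))
    return models
```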

2.
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
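A compact sketch of the degeneration idea behind MimlSvm: cluster the training bags with k-medoids under the Hausdorff distance, re-represent each bag by its distances to the medoid bags, and train one SVM per label. Cluster count, kernel choice, and function names are illustrative rather than the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

def hausdorff(A, B):
    # Max-Hausdorff distance between two bags of instances (rows are instances).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def k_medoids(D, k, iters=20, seed=0):
    # Plain k-medoids on a precomputed distance matrix D.
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(iters):
        assign = D[:, medoids].argmin(axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(assign == j)
            if len(members):
                new[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids

def mimlsvm_fit(bags, Y, k=10):
    # bags: list of (n_i, d) arrays; Y: (N, L) binary label matrix.
    N = len(bags)
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            D[i, j] = D[j, i] = hausdorff(bags[i], bags[j])
    medoids = k_medoids(D, k)
    X = D[:, medoids]                              # each bag -> distances to medoid bags
    clfs = [SVC(kernel="rbf").fit(X, Y[:, l]) for l in range(Y.shape[1])]
    return [bags[m] for m in medoids], clfs

def mimlsvm_predict(bag, medoid_bags, clfs):
    x = np.array([[hausdorff(bag, m) for m in medoid_bags]])
    return np.array([c.predict(x)[0] for c in clfs])
```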

3.
Multi-Instance Multi-Label learning (MIML) is a new machine learning framework in which each sample consists of multiple instances and is associated with multiple class labels. Because of its strong ability to represent ambiguous objects, the framework has become a research hotspot in machine learning. The most direct way to solve MIML classification problems is the degeneration strategy: by degenerating to multi-instance learning or multi-label learning, the MIML classification problem is reduced to a series of binary classification problems. However, the correlation information among labels is lost during degeneration, which lowers classification accuracy. To address this problem, this paper proposes the MIMLSVM-LOC algorithm, which combines an improved MIMLSVM algorithm with ML-LOC, a method that exploits local label correlations, so that label correlation information is used during training. The algorithm first improves the k-medoids clustering step of MIMLSVM by adopting a mixed Hausdorff distance to convert each instance bag into a single instance, thereby degenerating the MIML problem; the single-instance multi-label algorithm ML-LOC then carries out the remaining classification. Experimental comparisons with other MIML algorithms show that the proposed algorithm achieves better classification performance than the other classifiers.
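The "mixed Hausdorff distance" mentioned above is illustrated here only by the commonly used average-Hausdorff variant; the paper's exact definition may differ.

```python
import numpy as np

def average_hausdorff(A, B):
    # Average-Hausdorff bag distance: mean of each instance's distance to its
    # nearest instance in the other bag. One common alternative to the max- and
    # min-Hausdorff variants; shown only as an illustrative stand-in.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return (d.min(axis=1).sum() + d.min(axis=0).sum()) / (len(A) + len(B))
```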

4.
Multi-instance multi-label learning (MIML) is a newly proposed framework in which multi-label problems are investigated by representing each sample with multiple feature vectors named instances. In this framework, the multi-label learning task becomes one of learning a many-to-many relationship, and it also offers a way to explain why a given sample has certain class labels. The connections between instances and labels, as well as the correlations among labels, are equally crucial information for MIML; however, existing MIML algorithms can rarely exploit them simultaneously. In this paper, a new MIML algorithm is proposed based on Gaussian processes. The basic idea is to assume a latent function with a Gaussian process prior in the instance space for each label and to infer the predictive probability of labels by integrating over the uncertainties in these functions using the Bayesian approach, so that the connection between instances and every label can be exploited by defining a likelihood function, and the correlations among labels can be identified by the covariance matrix of the latent functions. Moreover, since different relationships between instances and labels can be captured by defining different likelihood functions, the algorithm may be used to deal with problems under various multi-instance assumptions. Experimental results on several benchmark data sets show that the proposed algorithm is valid and can achieve superior performance to the existing ones.
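A heavily simplified sketch: one independent Gaussian process classifier per label over a crude bag-level summary (the mean instance). It ignores the paper's multi-instance likelihood and the covariance that ties labels together, so it only illustrates the per-label GP idea; names and kernel settings are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def gp_per_label_fit(bags, Y):
    # bags: list of (n_i, d) arrays; Y: (N, L) binary label matrix.
    X = np.vstack([b.mean(axis=0) for b in bags])   # crude bag-level summary
    return [GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, Y[:, l])
            for l in range(Y.shape[1])]

def gp_per_label_proba(bags, models):
    X = np.vstack([b.mean(axis=0) for b in bags])
    # Column l is the predictive probability that label l is relevant.
    return np.column_stack([m.predict_proba(X)[:, 1] for m in models])
```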

5.
Multi-instance multi-label learning (MIML) is an innovative learning framework in which each sample is represented by multiple instances and associated with multiple class labels. In several learning situations, multi-instance multi-label RBF neural networks (MIMLRBF) can directly exploit the connections between the instances and the labels of an MIML example, which most other algorithms cannot. However, the singular value decomposition (SVD) method used to compute the weights of the output layer increases the overall error of the network when the training data are noisy or not easily discernible. This paper presents an improved learning algorithm for training MIMLRBF: the steepest descent (SD) method is used to optimize the weights after they are initialized by the SVD method. Comparisons among the different learning strategies show promising results.
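The two-stage treatment of the output-layer weights can be sketched in a few lines: an SVD-based least-squares initialization followed by steepest-descent refinement of the sum-of-squares error. Learning rate and step count are illustrative.

```python
import numpy as np

def fit_output_weights(Phi, Y, lr=1e-3, steps=500):
    # Phi: (N, H) hidden-layer RBF activations; Y: (N, L) label targets.
    W = np.linalg.pinv(Phi) @ Y                      # SVD-based least-squares initialization
    for _ in range(steps):                           # steepest-descent refinement
        grad = Phi.T @ (Phi @ W - Y) / len(Phi)      # gradient of the mean squared error
        W -= lr * grad
    return W
```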

6.
Multi-instance multi-label learning is a new machine learning framework in which samples exist as bags: a bag consists of multiple instances and is tagged with multiple labels. Previous MIML studies usually assume that the instances in a bag are independent and identically distributed, an assumption that is hard to guarantee in practice. To exploit the correlations among the instances within a bag, this paper proposes a MIML classification algorithm based on non-i.i.d. instances. The algorithm first represents the relationships among the instances of each bag with a correlation matrix, so that every bag is described by one such matrix; it then builds kernel functions over correlation matrices at different scales; finally, since predicting different labels corresponds to different kernels, multiple-kernel learning is introduced to construct and train a multi-kernel SVM classifier for each label. Experimental results on image and text data sets show that the algorithm greatly improves multi-label classification accuracy.
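A rough sketch of one possible reading of this pipeline: each bag is summarized by the correlation matrix of its instance features, RBF kernels at several scales are computed between these matrices, and a fixed convex combination feeds a precomputed-kernel SVM per label. True multiple-kernel learning would learn the combination weights, and the paper's exact construction may differ.

```python
import numpy as np
from sklearn.svm import SVC

def bag_correlation(bag):
    # bag: (n_i, d) instances; return the flattened d x d feature-correlation
    # matrix, a fixed-size summary of how the instances in the bag co-vary.
    c = np.corrcoef(bag, rowvar=False)
    return np.nan_to_num(c).ravel()

def multi_scale_kernel(R, gammas=(0.01, 0.1, 1.0), weights=None):
    # R: (N, d*d) flattened correlation matrices. RBF kernels at several scales,
    # combined with fixed weights (true MKL would learn these weights).
    weights = weights or [1.0 / len(gammas)] * len(gammas)
    d2 = ((R[:, None] - R[None]) ** 2).sum(-1)
    return sum(w * np.exp(-g * d2) for w, g in zip(weights, gammas))

def fit_per_label_svms(bags, Y):
    R = np.vstack([bag_correlation(b) for b in bags])
    K = multi_scale_kernel(R)                        # (N, N) combined Gram matrix
    return [SVC(kernel="precomputed").fit(K, Y[:, l]) for l in range(Y.shape[1])]
```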

7.
Multi-instance multi-label learning is a new machine learning framework in which an object is represented by multiple instances and is simultaneously associated with multiple class labels. One way to design MIML algorithms is the degeneration strategy: the problem is converted to multi-instance learning or multi-label learning, then further degenerated to traditional supervised learning, and some algorithm is used for training and modeling; however, information is lost during degeneration, which harms classification accuracy. The MIMLSVM algorithm uses multi-label learning as a bridge to degenerate the MIML problem into a traditional supervised learning problem, but it ignores the correlation information among labels during degeneration. This paper improves MIMLSVM with GLOCAL, a multi-label algorithm that considers both global and local label correlations. Experimental results show that the improved algorithm achieves good classification performance.

8.
Multi-instance multi-label (MIML) learning plays an important role in artificial intelligence research. MIML introduces a framework in which each datum is described by a bag of instances associated with a set of labels, and modeling the connection between instances and labels is the challenging problem in this framework. In MIMLRBF, an RBF neural network captures the complex relations between instances and labels, but estimating the parameters of the RBF network is a difficult task. In this paper, the computational convergence and the modeling accuracy of the RBF network are improved. The study investigates a novel hybrid algorithm for multi-instance multi-label learning that combines the Gases Brownian Motion Optimization (GBMO) algorithm with a gradient-based, fast-converging parameter estimation method. The hybrid algorithm estimates the RBF network parameters (the weights, widths and centers of the hidden units) simultaneously, combining the robustness of GBMO in searching the parameter space with the efficiency of the gradient. For this purpose, two real-world MIML tasks and a Corel dataset were used within a two-step experimental design. In the first step, the GBMO algorithm determines the widths and centers of the network nodes. In the second step, with the inputs and the number of hidden nodes fixed, the parameters are optimized by a structured nonlinear parameter optimization method (SNPOM). The findings demonstrate the superior performance of the hybrid method: results on training and test data show that it enhances RBF network learning more efficiently than conventional RBF approaches and achieves better modeling accuracy than several other algorithms.
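GBMO itself is not reproduced here; plain random restarts stand in for the global search stage, followed by gradient refinement of the output weights only (the paper refines all parameters with SNPOM). Every name and value below is illustrative.

```python
import numpy as np

def rbf_design(X, centers, widths):
    # Hidden-layer activations of an RBF network.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths[None, :] ** 2))

def fit_rbf_hybrid(X, Y, n_hidden=10, n_restarts=200, gd_steps=300, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):                      # global stage (GBMO stand-in)
        centers = X[rng.choice(len(X), n_hidden, replace=False)]
        widths = rng.uniform(0.5, 2.0, n_hidden) * X.std()
        Phi = rbf_design(X, centers, widths)
        W = np.linalg.pinv(Phi) @ Y
        err = ((Phi @ W - Y) ** 2).mean()
        if best is None or err < best[0]:
            best = (err, centers, widths, W)
    _, centers, widths, W = best
    Phi = rbf_design(X, centers, widths)
    for _ in range(gd_steps):                        # local stage: gradient refinement
        W -= lr * Phi.T @ (Phi @ W - Y) / len(X)     # (only of the output weights here)
    return centers, widths, W
```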

9.
Most existing multi-instance multi-label (MIML) algorithms do not consider how to represent object features better. To address this, a topic-model-based MIML learning method is proposed that combines probabilistic latent semantic analysis (PLSA) with a neural network (NN). The algorithm learns the latent topic distributions of all training samples with the PLSA model; this is a feature-learning process that yields a better feature representation, and the learned latent topic distribution of each sample is then used as input to train the neural network. Given a test sample, its latent topic distribution is learned and fed into the trained neural network to obtain the test sample's label set. Compared with two classic decomposition-based MIML algorithms, experimental results show that the new method achieves superior performance on two real-world MIML learning tasks.
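A compact EM sketch of PLSA that produces per-document topic distributions p(z|d), which can then be fed to a multi-label neural network (scikit-learn's MLPClassifier accepts a binary indicator matrix as the target). The dense posterior array is memory-hungry and folding-in for test documents is omitted; this is a sketch of the idea, not the paper's code.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def plsa_topic_features(N, K, iters=50, seed=0):
    # N: (D, W) document-word count matrix; returns p(z|d), the per-document
    # topic distributions used as the learned feature representation.
    rng = np.random.default_rng(seed)
    p_w_z = rng.random((K, N.shape[1])); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((N.shape[0], K)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(iters):
        post = p_z_d[:, None, :] * p_w_z.T[None, :, :]        # E-step: p(z|d,w)
        post /= post.sum(2, keepdims=True) + 1e-12
        expected = N[:, :, None] * post                        # expected counts
        p_w_z = expected.sum(0).T
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12           # M-step: p(w|z)
        p_z_d = expected.sum(1)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12           # M-step: p(z|d)
    return p_z_d

# Usage sketch: Z = plsa_topic_features(counts, K=20)
#               net = MLPClassifier(hidden_layer_sizes=(64,)).fit(Z, Y)  # Y: (D, L) 0/1
```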

10.
The main aim of this paper is to propose an efficient and novel Markov chain-based multi-instance multi-label (Markov-Miml) learning algorithm to evaluate the importance of a set of labels associated with objects of multiple instances. The algorithm computes ranking of labels to indicate the importance of a set of labels to an object. Our approach is to exploit the relationships between instances and labels of objects. The rank of a class label to an object depends on (i) the affinity metric between the bag of instances of this object and the bag of instances of the other objects, and (ii) the rank of a class label of similar objects. An object, which contains a bag of instances that are highly similar to bags of instances of the other objects with a high rank of a particular class label, receives a high rank of this class label. Experimental results on benchmark data have shown that the proposed algorithm is computationally efficient and effective in label ranking for MIML data. In the comparison, we find that the classification performance of the Markov-Miml algorithm is competitive with those of the three popular MIML algorithms based on boosting, support vector machine, and regularization, but the computational time required by the proposed algorithm is less than those by the other three algorithms.
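A simplified, hedged sketch of the label-ranking idea: build a row-stochastic affinity matrix over bags, then iterate a PageRank-style update so that a label's rank for one bag is pulled toward its rank on similar bags. The affinity measure, damping factor, and iteration count are assumptions.

```python
import numpy as np

def markov_label_ranking(D, Y, alpha=0.85, iters=100, sigma=None):
    # D: (N, N) pairwise bag distances (e.g. Hausdorff); Y: (N, L) 0/1 labels,
    # with all-zero rows for unlabeled/test bags.
    sigma = sigma or np.median(D[D > 0])
    A = np.exp(-(D / sigma) ** 2)                    # affinity between bags
    np.fill_diagonal(A, 0.0)
    P = A / A.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
    R = Y.astype(float).copy()
    for _ in range(iters):
        R = alpha * P @ R + (1 - alpha) * Y          # rank flows from similar bags
    return R                                         # higher score = more relevant label
```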

11.
Multi-label learning differs from traditional supervised learning: it is a learning framework proposed to model ambiguous objects in the real world, in which one instance can belong to multiple labels simultaneously. Most existing multi-label learning algorithms assume that the label set of every sample is complete, but in practice some labels of certain instances may be missing. To deal with this problem, this paper proposes a classification method for weakly labeled documents that constructs an optimization problem, based on the varying correlations among labels and the assumption that similar instances have similar labels, to recover the missing labels as far as possible. Experimental results show that the method effectively improves the generalization performance of the learning system.
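An illustrative one-shot approximation rather than the paper's optimization problem: observed labels are propagated along an instance-similarity graph and along a label-correlation matrix, and high-scoring entries are added to the label sets. The weights and threshold are assumptions.

```python
import numpy as np

def complete_weak_labels(X, Y, alpha=0.5, sigma=None):
    # X: (N, d) features; Y: (N, L) observed labels (1 = present, 0 = unobserved).
    D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    sigma = sigma or np.sqrt(np.median(D2[D2 > 0]))
    S = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(S, 0.0)
    S /= S.sum(1, keepdims=True)                       # similar instances, similar labels
    norm = np.linalg.norm(Y, axis=0, keepdims=True) + 1e-12
    C = (Y.T @ Y) / (norm.T @ norm)                    # cosine correlation between labels
    C /= C.sum(1, keepdims=True) + 1e-12
    scores = alpha * S @ Y + (1 - alpha) * Y @ C       # propagate along both structures
    return np.maximum(Y, (scores >= 0.5).astype(int))  # keep all observed positives
```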

12.
As a variant of supervised learning, multi-instance learning (MIL) tries to learn a classifier from the instances in bags. In MIL, labels are associated with bags rather than with individual instances: the labels of the bags are known, while the labels of the instances are unknown. MIL can handle label ambiguity, but problems with weak labels are not easy to solve. In the weak-label setting, the labels of both bags and instances are unknown, but they can be treated as latent variables; with multiple labels and instances available, the labels of bags and instances can be approximated by weighting the different labels. This paper proposes a new transfer-learning-based multi-instance learning framework to solve the weak-label problem. It first constructs a multi-instance transfer learning model that transfers knowledge from a source task to the target task, thereby converting the weak-label problem into a multi-instance learning problem, and then proposes an iterative framework for solving the model. Experimental results show that the method outperforms existing multi-instance learning methods.

13.
Partial label learning is an important weakly supervised learning framework in which each instance is associated with a set of candidate labels; its ground-truth label is hidden in the candidate set and is not accessible during learning. To eliminate the influence of false candidate labels on the learning process, this paper proposes a partial label learning method that combines instance semantic difference maximization and manifold learning (partial label learning by semantic difference and manifold learning, PL-SDML). The method has two stages: in the training stage, label confidences are generated for the training instances based on the semantic-difference-maximization criterion and manifold learning; in the prediction stage, nearest-neighbor voting is used to predict the labels of unseen instances. On four artificially corrupted UCI data sets, the method outperforms the comparison algorithms in 70% of the cases on average; on four real-world partial-label data sets, it achieves performance gains of 0.3% to 13.8% over the comparison algorithms.
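Only the generic nearest-neighbor voting of the prediction stage is sketched here, assuming the training stage has already produced a label-confidence matrix; the semantic-difference-maximization and manifold-learning parts are not reproduced.

```python
import numpy as np

def nn_vote_predict(X_train, F_train, X_test, k=10):
    # X_train: (N, d); F_train: (N, L) label confidences from the training stage
    # (assumed given); X_test: (M, d). Returns one predicted label per test instance.
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] + 1e-12)                    # closer neighbours vote harder
        preds.append(int((w[:, None] * F_train[nn]).sum(axis=0).argmax()))
    return np.array(preds)
```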

14.
An Incremental Bayesian Classification Model
Classification has always been a core problem in machine learning, pattern recognition, and data mining. When learning classification knowledge from massive data, especially when obtaining a large number of class-labeled samples is costly, incremental learning is an effective way to solve the problem. This paper applies the naive Bayes method to incremental classification, proposes an incremental Bayesian learning model, and gives the incremental Bayesian inference process, including incrementally revising the classifier parameters and incrementally classifying test samples. Experimental results show that the algorithm is feasible and effective.
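A minimal sketch of the incremental idea for discrete features: the model is kept as Laplace-smoothed counts, so each newly labeled sample updates the classifier in place without retraining. This is the generic mechanism, not the paper's exact model.

```python
import numpy as np

class IncrementalNB:
    # Naive Bayes over discrete features, stored as Laplace-smoothed counts so that
    # new labeled samples can be absorbed one at a time.
    def __init__(self, n_features, n_values, n_classes):
        self.class_count = np.zeros(n_classes)
        self.feat_count = np.zeros((n_classes, n_features, n_values))

    def partial_fit(self, x, y):
        # x: integer-coded feature vector; y: class index.
        self.class_count[y] += 1
        self.feat_count[y, np.arange(len(x)), x] += 1

    def predict(self, x):
        C, F, V = self.feat_count.shape
        log_prior = np.log(self.class_count + 1) - np.log(self.class_count.sum() + C)
        counts = self.feat_count[:, np.arange(F), x]              # (C, F)
        totals = self.feat_count.sum(axis=2)                      # (C, F)
        log_like = np.log(counts + 1) - np.log(totals + V)
        return int(np.argmax(log_prior + log_like.sum(axis=1)))
```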

15.
Multi-label learning is an effective framework for learning with objects that have multiple semantic labels, and it has been successfully applied to many real-world tasks. In contrast with traditional single-label learning, the cost of labeling a multi-label example is rather high, so it becomes an important task to train an effective multi-label learning model with as few labeled examples as possible. Active learning, which actively selects the most valuable data and queries their labels, is the most important approach to reducing labeling cost. In this paper, we propose a novel approach, MADM, for batch-mode multi-label active learning. On one hand, MADM exploits representativeness and diversity in both the feature and label space by matching the distribution between labeled and unlabeled data. On the other hand, it tends to query predicted positive instances, which are expected to be more informative than negative ones. Experiments on benchmark datasets demonstrate that the proposed approach can reduce the labeling cost significantly.

16.
Multi-label learning deals with the problem where each instance is associated with multiple labels simultaneously. The task of this learning paradigm is to predict the label set for each unseen instance by analyzing training instances with known label sets. In this paper, a neural network based multi-label learning algorithm named Ml-rbf is proposed, which is derived from the traditional radial basis function (RBF) methods. Briefly, the first layer of an Ml-rbf neural network is formed by conducting clustering analysis on the instances of each possible class, where the centroid of each clustered group is regarded as the prototype vector of a basis function. After that, the second-layer weights of the Ml-rbf neural network are learned by minimizing a sum-of-squares error function. Specifically, the information encoded in the prototype vectors corresponding to all classes is fully exploited to optimize the weights corresponding to each specific class. Experiments on three real-world multi-label data sets show that Ml-rbf achieves highly competitive performance compared to other well-established multi-label learning algorithms.
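A condensed sketch of the two-layer construction: k-means on the instances of each class supplies the prototype vectors used as basis-function centers, and the second-layer weights are obtained by minimizing the sum-of-squares error via the pseudoinverse. Width, cluster counts, and threshold are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_layer(X, C, sigma):
    # First-layer activations plus a bias unit.
    Phi = np.exp(-np.linalg.norm(X[:, None] - C[None], axis=2) ** 2 / (2 * sigma ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

def ml_rbf_fit(X, Y, k_per_class=4, sigma=1.0):
    # X: (N, d) instances; Y: (N, L) binary label matrix.
    centers = []
    for l in range(Y.shape[1]):                       # cluster the instances of each class
        pos = X[Y[:, l] == 1]
        k = min(k_per_class, len(pos))
        if k:
            centers.append(KMeans(n_clusters=k, n_init=10).fit(pos).cluster_centers_)
    C = np.vstack(centers)                            # prototype vectors = RBF centers
    W = np.linalg.pinv(rbf_layer(X, C, sigma)) @ Y    # minimize sum-of-squares error
    return C, W

def ml_rbf_predict(X, C, W, sigma=1.0, thresh=0.5):
    return (rbf_layer(X, C, sigma) @ W >= thresh).astype(int)
```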

17.
赵海峰, 余强, 曹俞旦. 《计算机科学》, 2014, 41(12): 160-163
Multi-label learning handles problems where one sample carries multiple labels simultaneously. The existing multi-label lazy learning algorithm IMLLA does not sufficiently take the sample distribution into account: when constructing a sample's neighbor set, it uses a fixed number of neighbors, which may exclude highly similar points from the neighbor set or include points of low similarity, degrading classification performance. To remedy this deficiency of IMLLA, the idea of granular computing is incorporated into the construction of neighbor sets, yielding a granular-computing-based multi-label lazy learning algorithm (GMLLA). The method determines each sample's neighbor set through granularity control, so that the samples within the neighbor set have high similarity. Experimental results show that the algorithm outperforms IMLLA.

18.
In multi-instance learning, the training examples are bags composed of instances without labels, and the task is to predict the labels of unseen bags by analyzing the training bags with known labels. A bag is positive if it contains at least one positive instance, while it is negative if it contains no positive instance. In this paper, a neural network based multi-instance learning algorithm named RBF-MIP is presented, which is derived from the popular radial basis function (RBF) methods. Briefly, the first layer of an RBF-MIP neural network is composed of clusters of bags formed by merging training bags agglomeratively, where the Hausdorff metric is used to measure distances between bags and between clusters. The weights of the second layer of the RBF-MIP neural network are optimized by minimizing a sum-of-squares error function and are computed through singular value decomposition (SVD). Experiments on real-world multi-instance benchmark data, artificial multi-instance benchmark data and natural scene image database retrieval are carried out. The experimental results show that RBF-MIP is among the best-performing learning algorithms on multi-instance problems.

19.
Feature selection for multi-label naive Bayes classification
In multi-label learning, the training set is made up of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances. In this paper, this learning problem is addressed by using a method called Mlnb which adapts the traditional naive Bayes classifiers to deal with multi-label instances. Feature selection mechanisms are incorporated into Mlnb to improve its performance. Firstly, feature extraction techniques based on principal component analysis are applied to remove irrelevant and redundant features. After that, feature subset selection techniques based on genetic algorithms are used to choose the most appropriate subset of features for prediction. Experiments on synthetic and real-world data show that Mlnb achieves comparable performance to other well-established multi-label learning algorithms.
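A rough sketch of the first stage only: PCA removes redundant directions and one Gaussian naive Bayes classifier is trained per label. The genetic-algorithm feature-subset selection stage is omitted, and the function names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

def mlnb_fit(X, Y, var_keep=0.95):
    # X: (N, d) features; Y: (N, L) binary label matrix.
    pca = PCA(n_components=var_keep).fit(X)          # drop irrelevant/redundant directions
    Z = pca.transform(X)
    models = [GaussianNB().fit(Z, Y[:, l]) for l in range(Y.shape[1])]
    return pca, models

def mlnb_predict(X, pca, models):
    Z = pca.transform(X)
    return np.column_stack([m.predict(Z) for m in models])
```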

20.
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method as well as Naïve Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.
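A minimal sketch of the idea, often called stacked binary relevance: each label's binary classifier is retrained with the other labels appended to the features, and at test time the first-stage predictions of those labels are fed in. Decision trees are used here as one of the base learners mentioned above; function names are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def br_stacked_fit(X, Y):
    # Stage 1: plain binary relevance, one classifier per label.
    base = [DecisionTreeClassifier().fit(X, Y[:, l]) for l in range(Y.shape[1])]
    # Stage 2: retrain each label's classifier with the *other* labels appended
    # to the features, so it can exploit label dependency.
    meta = []
    for l in range(Y.shape[1]):
        others = np.delete(Y, l, axis=1)
        meta.append(DecisionTreeClassifier().fit(np.hstack([X, others]), Y[:, l]))
    return base, meta

def br_stacked_predict(X, base, meta):
    # Use stage-1 predictions of the other labels as extra inputs at test time.
    Y0 = np.column_stack([c.predict(X) for c in base])
    out = []
    for l, clf in enumerate(meta):
        others = np.delete(Y0, l, axis=1)
        out.append(clf.predict(np.hstack([X, others])))
    return np.column_stack(out)
```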
