Similar Literature
20 similar articles found.
1.
MILES: multiple-instance learning via embedded instance selection
Multiple-instance problems arise from situations where training class labels are attached to sets of samples (named bags) instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often produces a large number of redundant or irrelevant features, so a 1-norm SVM is applied to select important features and construct the classifier simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computational efficiency, and robustness to labeling uncertainty.
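A minimal sketch of the MILES-style embedding described above, assuming an RBF instance similarity with a hand-picked sigma and using scikit-learn's L1-regularized LinearSVC as a stand-in for the paper's 1-norm SVM (both choices are assumptions, not the authors' exact formulation):

```python
import numpy as np
from sklearn.svm import LinearSVC

def miles_embedding(bags, concept_pool, sigma=1.0):
    """Map each bag to similarities against a pool of concept instances:
    s(x_k, B) = max_{x in B} exp(-||x - x_k||^2 / sigma^2)."""
    features = np.zeros((len(bags), len(concept_pool)))
    for i, bag in enumerate(bags):
        d2 = ((bag[:, None, :] - concept_pool[None, :, :]) ** 2).sum(-1)  # |B| x K
        features[i] = np.exp(-d2 / sigma ** 2).max(axis=0)
    return features

# toy data: each bag is an (n_i, d) array; labels are bag labels
rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(3, 8), 2)) + (1.5 if y else 0.0)
        for y in (0, 1) * 10]
labels = np.array([0, 1] * 10)

concepts = np.vstack(bags)                     # all training instances as candidate concepts
X = miles_embedding(bags, concepts, sigma=2.0)

# 1-norm SVM stand-in: L1-regularized linear SVM picks a sparse subset of concepts
clf = LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000).fit(X, labels)
print("selected concepts:", int((clf.coef_ != 0).sum()))
```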

2.
Multiple instance learning (MIL) is concerned with learning from sets (bags) of objects (instances), where the individual instance labels are ambiguous. In this setting, supervised learning cannot be applied directly. Often, specialized MIL methods learn by making additional assumptions about the relationship of the bag labels and instance labels. Such assumptions may fit a particular dataset, but do not generalize to the whole range of MIL problems. Other MIL methods shift the focus of assumptions from the labels to the overall (dis)similarity of bags, and therefore learn from bags directly. We propose to represent each bag by a vector of its dissimilarities to other bags in the training set, and treat these dissimilarities as a feature representation. We show several alternatives to define a dissimilarity between bags and discuss which definitions are more suitable for particular MIL problems. The experimental results show that the proposed approach is computationally inexpensive, yet very competitive with state-of-the-art algorithms on a wide range of MIL datasets.
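A small sketch of the dissimilarity representation, assuming a symmetrized mean-min instance distance as the bag dissimilarity; the paper discusses several alternative definitions, so this is just one plausible choice:

```python
import numpy as np
from scipy.spatial.distance import cdist

def bag_dissimilarity(A, B):
    """Symmetrized mean-min instance distance: average nearest-neighbour distance
    from A to B and from B to A (one common bag dissimilarity)."""
    D = cdist(A, B)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

def dissimilarity_representation(bags, prototypes):
    """Each bag becomes a vector of its dissimilarities to the prototype bags."""
    return np.array([[bag_dissimilarity(b, p) for p in prototypes] for b in bags])

# toy usage: represent every bag by its dissimilarities to all training bags,
# then feed the resulting vectors to any standard classifier
rng = np.random.default_rng(1)
bags = [rng.normal(size=(5, 3)) for _ in range(8)]
X = dissimilarity_representation(bags, bags)
print(X.shape)   # (8, 8)
```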

3.
4.
Multiple instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. In previous works, most existing algorithms mainly pay attention to the 'most positive' instance in each positive bag but ignore the other instances. To utilize these unlabeled instances in positive bags, we present a new multiple instance learning algorithm via semi-supervised Laplacian twin support vector machines (called Miss-LTSVM). In Miss-LTSVM, all instances in positive bags are used in the manifold regularization terms to improve the performance of the classifier. To verify the effectiveness of the presented method, a series of comparative experiments is performed on seven multiple instance data sets. Experimental results show that the proposed method has better classification accuracy than other methods in most cases.
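The twin-SVM optimization itself is not reproduced here; the sketch below only illustrates the manifold-regularization ingredient, assuming a k-NN RBF affinity graph over the instances pooled from positive bags (the graph-construction details are assumptions):

```python
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian(X, k=5, sigma=1.0):
    """Build a k-NN RBF affinity graph over instances and return L = D - W.
    In Miss-LTSVM-style methods, f^T L f penalizes decision values that vary
    across nearby (unlabeled) instances from the positive bags."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(D2)
    nn = np.argsort(D2, axis=1)[:, 1:k + 1]           # k nearest neighbours (skip self)
    for i, idx in enumerate(nn):
        W[i, idx] = np.exp(-D2[i, idx] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                             # symmetrise
    return np.diag(W.sum(axis=1)) - W

# toy usage: the regularizer value for some candidate decision values f
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))                           # instances pooled from positive bags
L = graph_laplacian(X)
f = rng.normal(size=30)                                # decision values on the instances
print("manifold penalty:", float(f @ L @ f))
```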

5.
As a variant of supervised learning, multiple-instance learning (MIL) tries to learn a classifier from the instances contained in bags. In MIL, labels are associated with bags rather than with individual instances: the bag labels are known, while the instance labels are not. MIL can handle label ambiguity, but problems with weak labels are not easy to solve: in the weak-label setting both bag labels and instance labels are unknown and must be treated as latent variables. Given multiple candidate labels and instances, the bag and instance labels can be approximately estimated by weighting the different labels. A new transfer-learning-based multiple-instance learning framework is proposed to solve the weak-label problem. First, a transfer-learning model based on multiple-instance methods is constructed, which can transfer knowledge from a source task to the target task and thus converts the weak-label problem into a multiple-instance learning problem. On this basis, an iterative framework for solving the multiple-instance transfer-learning model is proposed. Experimental results show that the method outperforms existing multiple-instance learning methods.

6.
7.
To address the problem that many multiple-instance algorithms make assumptions about the instances inside positive bags, a multiple-instance ensemble algorithm combined with fuzzy clustering (ISFC) is proposed. Combining fuzzy clustering with the characteristics of negative bags in multiple-instance learning, the concept of a "positive score" is introduced to measure how likely an instance's label is to be positive, reducing the ambiguity of instance labels in multiple-instance learning; considering that misclassifying negative instances carries a higher cost in multiple-instance learning, a representative-instance selection strategy for bags is designed, and the selected representative…

8.
Automatic content-based image categorization is a challenging research topic and has many practical applications. Images are usually represented as bags of feature vectors, and the categorization problem is studied in the Multiple-Instance Learning (MIL) framework. In this paper, we propose a novel learning technique which transforms the MIL problem into a standard supervised learning problem by defining a feature vector for each image bag. Specifically, the feature vectors of the image bags are grouped into clusters and each cluster is given a label. Using these labels, each instance of an image bag can be replaced by a corresponding label to obtain a bag of cluster labels. Data mining can then be employed to uncover common label patterns for each image category. These label patterns are converted into bags of feature vectors, and they are used to transform each image bag in the data set into a feature vector such that each vector element is the distance of the image bag to a distinct pattern bag. With this new image representation, standard supervised learning algorithms can be applied to classify the images into the pre-defined categories. Our experimental results demonstrate the superiority of the proposed technique in categorization accuracy as compared to state-of-the-art methods.
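A hedged sketch of the first stage only: quantizing instances into cluster labels so each image bag becomes a bag of cluster labels, plus a simple histogram distance to a (pretend) pattern bag. The paper's frequent-pattern mining step is not reproduced, and the cluster count is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
bags = [rng.normal(size=(rng.integers(4, 9), 8)) for _ in range(12)]   # toy image bags

# quantise all instances into clusters; each image becomes a bag of cluster labels
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(np.vstack(bags))
label_bags = [kmeans.predict(b) for b in bags]
print(label_bags[0])

def label_histogram(labels, k=5):
    """Normalised histogram of cluster labels in one bag."""
    h = np.bincount(labels, minlength=k).astype(float)
    return h / h.sum()

# a simple bag-to-pattern distance: L1 distance between normalised label histograms
pattern = label_bags[1]                                # pretend this is a mined pattern bag
dist = np.linalg.norm(label_histogram(label_bags[0]) - label_histogram(pattern), ord=1)
print("L1 distance to pattern bag:", dist)
```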

9.
Web image clustering based on multiple-instance learning
Multiple-instance learning (MIL) is a hot research topic in image classification and automatic annotation systems, and most existing MIL algorithms are supervised. For the unsupervised setting, six multiple-instance clustering algorithms are proposed within a framework based on the EM algorithm and heuristic iterative optimization, and they are used to cluster images from a real Web environment in order to analyze users' search interests. Since an image contains several regions, each region can be regarded as an instance and the regions belonging to the same image form a bag, so the problem of understanding image semantic content is converted into multiple-instance learning. Comparative experiments on the classic MUSK data sets for multiple-instance learning and on the Web image collection show that the proposed multiple-instance clustering algorithms achieve good clustering performance.

10.
Multi-instance multi-label learning (MIML) is a new machine-learning framework in which a sample is a bag consisting of multiple instances and is labeled with multiple labels. Previous MIML research usually assumes that the instances in a bag are independent and identically distributed, an assumption that is hard to guarantee in practice. To exploit the correlations among the instances in a bag, a MIML classification algorithm based on non-i.i.d. instances is proposed. The algorithm first expresses the relations among the instances in a bag by building a correlation matrix, so that each multi-instance bag is represented by one correlation matrix; it then builds kernel functions based on correlation matrices at different scales; finally, since predicting different labels corresponds to different kernel functions, multiple-kernel learning is introduced to construct and train a multi-kernel SVM classifier for each label prediction. Experimental results on image and text data sets show that the algorithm greatly improves the accuracy of multi-label classification.
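A rough sketch of the non-i.i.d. bag representation, assuming the bag is summarized by the correlation matrix of its instances and that bags are compared with an RBF kernel on the Frobenius distance between these matrices; the paper's multi-scale kernels and multi-kernel SVM training are not reproduced here:

```python
import numpy as np

def bag_correlation_matrix(bag):
    """Represent a bag (n_instances x d) by the correlation matrix of its instances,
    capturing how the instances co-vary instead of treating them as i.i.d."""
    return np.corrcoef(bag, rowvar=False)              # d x d

def correlation_kernel(bag_a, bag_b, gamma=0.5):
    """RBF kernel on the Frobenius distance between the two correlation matrices
    (one plausible way to compare non-i.i.d. bag representations)."""
    diff = bag_correlation_matrix(bag_a) - bag_correlation_matrix(bag_b)
    return float(np.exp(-gamma * np.linalg.norm(diff, "fro") ** 2))

rng = np.random.default_rng(4)
bag_a, bag_b = rng.normal(size=(6, 4)), rng.normal(size=(7, 4))
print(correlation_kernel(bag_a, bag_b))
```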

11.
In active learning, the learner must measure the importance of unlabeled samples in a large dataset and iteratively select the best one. This sample selection process can be treated as a decision-making problem, which evaluates, ranks, and makes choices from a finite set of alternatives. Many decision-making problems apply multiple criteria, since this usually performs better than using a single criterion. Motivated by these facts, an active learning model based on multi-criteria decision making (MCMD) is proposed in this paper. After comparing every pair of unlabeled samples, a preference preorder is determined for each criterion. The dominated index and the dominating index are then defined and calculated to evaluate the informativeness of unlabeled samples, which provides an effective metric for sample selection. On the other hand, in the multiple-instance learning (MIL) setting, instances are grouped into bags; a bag is negative only if all of its instances are negative, and positive otherwise. Multiple-instance active learning (MIAL) aims to select and label the most informative bags from numerous unlabeled ones and to learn a MIL classifier that accurately predicts unseen bags while requesting as few labels as possible. It adopts a MIL algorithm as the base classifier and follows an active learning procedure. To achieve a balance between learning efficiency and generalization capability, the proposed active learning model is restricted to a specific algorithm in the MIL environment. Experimental results demonstrate the effectiveness of the proposed method.
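A sketch of how dominating/dominated indices might be computed, assuming Pareto-style dominance across the criteria scores (the paper's exact preorder construction may differ):

```python
import numpy as np

def dominance_indices(scores):
    """scores: (n_samples, n_criteria), larger = more informative under that criterion.
    Pareto-style sketch: x dominates y if x >= y on every criterion and > on at least one.
    Returns (dominating_index, dominated_index) for each sample."""
    n = scores.shape[0]
    dominating = np.zeros(n, dtype=int)
    dominated = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(scores[i] >= scores[j]) and np.any(scores[i] > scores[j]):
                dominating[i] += 1
                dominated[j] += 1
    return dominating, dominated

# toy usage: two criteria, e.g. classifier uncertainty and distance to labeled data
rng = np.random.default_rng(5)
scores = rng.random(size=(10, 2))
dominating, dominated = dominance_indices(scores)
best = np.argmax(dominating - dominated)               # query the sample that dominates most
print("query sample:", best)
```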

12.
Multiple instance learning (MIL) is a binary classification problem with loosely supervised data, where a class label is assigned only to a bag of instances, indicating the presence or absence of positive instances. In this paper we introduce a novel MIL algorithm using Gaussian processes (GP). The bag labeling protocol of MIL can be effectively modeled by a sigmoid likelihood through the max function over GP latent variables. As the non-continuous max function makes exact GP inference and learning infeasible, we propose two approximations: the soft-max approximation and the introduction of witness indicator variables. Compared to the state-of-the-art MIL approaches, especially those based on the Support Vector Machine, our model enjoys two crucial benefits: (i) the kernel parameters can be learned in a principled manner, thus avoiding grid search and being able to exploit a variety of kernel families with complex forms, and (ii) the efficient gradient search for kernel parameter learning effectively leads to feature selection, extracting the most relevant features while discarding noise. We demonstrate that our approaches attain superior or comparable performance to existing methods on several real-world MIL datasets, including large-scale content-based image retrieval problems.
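A tiny sketch of the soft-max (log-sum-exp) approximation and the sigmoid bag likelihood it feeds; the GP prior, inference, and kernel learning are omitted, and the sharpness parameter alpha is an assumption:

```python
import numpy as np

def softmax_approx(latent, alpha=10.0):
    """Smooth (log-sum-exp) approximation of max_i f_i over the GP latent values of a
    bag's instances; it approaches the exact max as alpha grows and, unlike max,
    is differentiable, which is what makes approximate GP inference tractable."""
    return np.log(np.sum(np.exp(alpha * latent))) / alpha

def bag_likelihood(latent, bag_label, alpha=10.0):
    """Sigmoid likelihood of the bag label through the (soft) max of its latent values."""
    p_pos = 1.0 / (1.0 + np.exp(-softmax_approx(latent, alpha)))
    return p_pos if bag_label == 1 else 1.0 - p_pos

latent = np.array([-2.1, -0.3, 1.4])                   # latent GP values for one bag's instances
print(softmax_approx(latent), max(latent))             # close for moderate alpha
print(bag_likelihood(latent, bag_label=1))
```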

13.
Moving-object tracking is an important issue in computer vision. In this paper, a robust tracking algorithm based on multiple instance learning (MIL) is proposed. First, a coarse-to-fine search method is designed to reduce the computational load of cropping candidate samples for a newly arriving frame. Then, a bag-level similarity metric is proposed to select the most correct positive instances to form the positive bag; each instance's importance to the bag probability is determined by its Mahalanobis distance. Furthermore, an online discriminative classifier selection method, which exploits an average-gradient and average-weak-classifier strategy to optimize the margin function between positive and negative bags, is presented to address the suboptimality of the weak-classifier selection process. Experimental results on challenging sequences show that the proposed method is superior to the compared methods in terms of both qualitative and quantitative assessments.

14.
In multi-instance learning, the training examples are bags composed of unlabeled instances, and the task is to predict the labels of unseen bags by analyzing the training bags with known labels. A bag is positive if it contains at least one positive instance and negative if it contains none. In this paper, a neural-network-based multi-instance learning algorithm named RBF-MIP is presented, which is derived from the popular radial basis function (RBF) methods. Briefly, the first layer of an RBF-MIP neural network is composed of clusters of bags formed by merging training bags agglomeratively, where the Hausdorff metric is used to measure distances between bags and between clusters. The weights of the second layer of the RBF-MIP neural network are optimized by minimizing a sum-of-squares error function and are worked out through singular value decomposition (SVD). Experiments on real-world multi-instance benchmark data, artificial multi-instance benchmark data, and natural scene image database retrieval are carried out. The experimental results show that RBF-MIP is among the best learning algorithms on multi-instance problems.
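A short sketch of the Hausdorff bag distance used when merging bags into the first-layer clusters; the maximal-Hausdorff form below is the textbook definition, and the agglomerative clustering and SVD steps are not shown:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(A, B):
    """Maximal Hausdorff distance between two bags of instances: the largest
    nearest-neighbour distance from either bag to the other."""
    D = cdist(A, B)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

rng = np.random.default_rng(6)
bag_a, bag_b = rng.normal(size=(5, 3)), rng.normal(size=(7, 3)) + 1.0
print(hausdorff(bag_a, bag_b))
```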

15.
邓波, 陆颖隽, 王如志. Computer Science (《计算机科学》), 2017, 44(3): 264-267, 287.
In multiple-instance learning (MIL), a bag is a set containing several instances, and the training samples provide only the bag labels, not the labels of individual instances. A MIL method based on instance label intensity (ILI-MIL) is proposed, which allows the instance label intensity to be any real number. Considering the computational complexity of gradient-based neural-network training and the complexity of the ILI-MIL objective function, a higher-order neural network trained by chemical reaction optimization is used to implement ILI-MIL, giving the learning method strong nonlinear expressive power and high computational efficiency. Experimental results show that the algorithm classifies more effectively than existing algorithms and is applicable to a wider range of problems.

16.

Dictionary plays an important role in multi-instance data representation: it maps bags of instances to histograms. Earth mover's distance (EMD) is the most effective histogram distance metric for multi-instance retrieval. However, no existing multi-instance dictionary learning method is designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method using stochastic optimization. In the stochastic learning framework, we have one triplet of bags, including a basic bag, a positive bag, and a negative bag. These bags are mapped to histograms using a multi-instance dictionary. We argue that the EMD between the basic histogram and the positive histogram should be smaller than that between the basic histogram and the negative histogram. Based on this condition, we design a hinge loss. By minimizing this hinge loss and some regularization terms of the dictionary, we update the dictionary instances. Experiments on multi-instance retrieval applications show its effectiveness compared to other dictionary learning methods on medical image retrieval and natural language relation classification.
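A sketch of the EMD-based triplet hinge loss under stated assumptions: the EMD is solved as a small transportation LP over a given ground-distance matrix between dictionary atoms, and the stochastic dictionary update itself is not shown:

```python
import numpy as np
from scipy.optimize import linprog

def emd(h1, h2, ground):
    """Earth mover's distance between two normalised histograms over dictionary atoms,
    with `ground` the pairwise ground-distance matrix between atoms
    (solved as a small transportation LP)."""
    n, m = len(h1), len(h2)
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1                 # row sums = h1
    for j in range(m):
        A_eq[n + j, j::m] = 1                          # column sums = h2
    res = linprog(ground.ravel(), A_eq=A_eq, b_eq=np.concatenate([h1, h2]),
                  bounds=(0, None), method="highs")
    return res.fun

def triplet_hinge(h_basic, h_pos, h_neg, ground, margin=1.0):
    """Hinge loss encouraging EMD(basic, positive) + margin <= EMD(basic, negative)."""
    return max(0.0, emd(h_basic, h_pos, ground) - emd(h_basic, h_neg, ground) + margin)

# toy usage with a 4-atom dictionary whose ground distances are 1-D Euclidean distances
atoms = np.array([[0.0], [1.0], [2.0], [3.0]])
ground = np.abs(atoms - atoms.T)
h_basic = np.array([0.7, 0.3, 0.0, 0.0])
h_pos   = np.array([0.6, 0.4, 0.0, 0.0])
h_neg   = np.array([0.0, 0.0, 0.5, 0.5])
print(triplet_hinge(h_basic, h_pos, h_neg, ground))
```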


17.
Most multi-label learning methods express ambiguity by associating a single instance with multiple class labels in the output space; some recent work instead transforms a single instance into a bag of instances in the input space and links the multiple instances in the bag to the multiple labels. When generating the bags, existing algorithms compute the mean of the examples corresponding to each label with equal weights. Since the data have local distribution characteristics, taking the local distribution into account when computing this mean makes the generated bags more accurate. This paper fully considers the data distribution and proposes a new classification algorithm. Experiments show that the improved algorithm outperforms other commonly used multi-label learning algorithms.
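A hedged sketch of bag generation with a locally weighted mean instead of the equal-weight average, assuming k-NN RBF weights around the query sample (the weighting scheme is illustrative, not the paper's exact formula):

```python
import numpy as np
from scipy.spatial.distance import cdist

def label_prototype(X_label, x_query, k=5, sigma=1.0):
    """Locally weighted mean of the examples carrying one label: instead of the
    equal-weight average used when generating bags, weight each example by its
    proximity to the query sample (k-NN with RBF weights; this weighting is an
    illustrative assumption)."""
    d = cdist(x_query[None, :], X_label)[0]
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))
    return (w[:, None] * X_label[idx]).sum(axis=0) / w.sum()

def sample_to_bag(x, examples_per_label):
    """Turn a single multi-label sample into a bag with one prototype instance per label."""
    return np.stack([label_prototype(X_l, x) for X_l in examples_per_label])

rng = np.random.default_rng(7)
examples_per_label = [rng.normal(loc=c, size=(20, 3)) for c in (0.0, 2.0, 4.0)]
x = rng.normal(size=3)
bag = sample_to_bag(x, examples_per_label)
print(bag.shape)      # (3 labels, 3 features)
```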

18.
Hyperspectral target representation is the core problem in hyperspectral target detection. Among the many representation approaches, multiple-instance learning (MIL) has become an effective way to study hyperspectral target representation because it does not require precise pixel-level semantic labels. However, in MIL for hyperspectral target representation, the target instances in a positive bag are far fewer than the background instances, and this instance-level imbalance degrades the learned target representation. A MIL method for imbalanced data is therefore proposed: the instances most likely to be positive are extracted from each positive bag to form a positive-instance set, new positive samples are synthesized on this basis, and the proportion of positive samples in the positive bags is increased, improving the hyperspectral target representation. The method is validated on real hyperspectral data, and the results show that it makes the composition of positive bags more balanced, so a more accurate target representation is learned and target-detection performance improves.
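A rough sketch of the rebalancing idea, assuming the "most likely positive" instance in each bag is the one farthest from the mean background spectrum and that new positives are synthesized by SMOTE-style interpolation (both are assumptions standing in for the paper's procedure):

```python
import numpy as np

def likely_positive_set(pos_bags, neg_instances):
    """Pick, from each positive bag, the instance farthest from the mean background
    spectrum (a simple stand-in for 'most likely positive')."""
    bg = neg_instances.mean(axis=0)
    return np.stack([bag[np.argmax(np.linalg.norm(bag - bg, axis=1))] for bag in pos_bags])

def synthesize_positives(likely_pos, n_new, rng):
    """SMOTE-style interpolation between pairs of likely-positive instances to
    rebalance the positive bags."""
    i = rng.integers(0, len(likely_pos), size=n_new)
    j = rng.integers(0, len(likely_pos), size=n_new)
    t = rng.random(size=(n_new, 1))
    return likely_pos[i] + t * (likely_pos[j] - likely_pos[i])

rng = np.random.default_rng(8)
neg = rng.normal(size=(200, 10))                       # background spectra
pos_bags = [rng.normal(size=(30, 10)) + np.eye(1, 10, 0) * 3 for _ in range(5)]
likely_pos = likely_positive_set(pos_bags, neg)
new_pos = synthesize_positives(likely_pos, n_new=20, rng=rng)
print(new_pos.shape)                                   # (20, 10)
```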

19.
甘睿, 印鉴. Computer Science (《计算机科学》), 2012, 39(7): 144-147.
In multiple-instance learning, each labeled sample in the training data set is a bag composed of several instances, and the ultimate goal is to train a classifier on this data set that can predict the labels of unlabeled bags. Previous research on multiple-instance learning either modifies existing single-instance learning algorithms to fit the multi-instance setting, or proposes new methods that mine the relationship between instances and bags and use the mined results to solve the problem. Starting from changing the representation of bags, this paper proposes an algorithm for multiple-instance learning called the concept evaluation algorithm. The algorithm first uses a clustering algorithm to cluster all instances into d clusters, each of which can be regarded as a concept contained in the instances; it then uses the TF-IDF (term frequency-inverse document frequency) scheme, originally designed for text retrieval, to evaluate the importance of each concept in each bag; finally, each bag is represented as a d-dimensional vector, the concept evaluation vector, whose i-th component indicates how important the concept represented by the i-th cluster is in the bag. After this re-representation, the original multi-instance data set is no longer "multi-instance", so existing single-instance learning algorithms can be used to solve the multiple-instance learning problem efficiently.
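A compact sketch of the concept evaluation pipeline: cluster all instances into d concepts, count concept occurrences per bag, and apply TF-IDF so each bag becomes a d-dimensional vector; k-means and scikit-learn's TfidfTransformer are used here as convenient stand-ins for the paper's clustering and weighting steps:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer

def concept_evaluation_vectors(bags, d=8, random_state=0):
    """Cluster all instances into d 'concepts', count how often each concept occurs
    in each bag, and apply TF-IDF so each bag becomes a d-dimensional
    concept-evaluation vector usable by any single-instance learner."""
    km = KMeans(n_clusters=d, n_init=10, random_state=random_state).fit(np.vstack(bags))
    counts = np.stack([np.bincount(km.predict(b), minlength=d) for b in bags])
    return TfidfTransformer().fit_transform(counts).toarray()

rng = np.random.default_rng(9)
bags = [rng.normal(size=(rng.integers(3, 10), 5)) for _ in range(15)]
X = concept_evaluation_vectors(bags, d=8)
print(X.shape)                                         # (15, 8): one vector per bag
```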

20.
Multiple-instance learning takes bags composed of instances as training samples, and the goal is to predict the class of new bags. From a classification viewpoint, this strategy is similar to object-oriented image classification, which takes homogeneous objects as the basic processing unit. Exploiting the theoretical and methodological similarity between the two, the diverse density multiple-instance learning algorithm is combined with the object-oriented approach for classifying high-resolution remote-sensing images. Homogeneous objects obtained by image segmentation are taken as instances, the diverse density algorithm learns from the sample bags to obtain the instance of maximal diverse density, and new bags, either single-instance bags or bags produced by a clustering algorithm, are finally labeled according to the maximum-similarity criterion to obtain the classification result. Compared with an SVM classifier, the diverse density algorithm achieves an average classification accuracy above 70% (up to about 96%) and learns better on small-sample problems, which shows that multiple-instance learning has broad application prospects in remote-sensing image classification.
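A minimal sketch of the noisy-or diverse density score and a crude search for the maximal-DD instance over the positive-bag instances; the scaling factor and the exhaustive candidate search are simplifications of the usual gradient-based maximization:

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags, scale=1.0):
    """Noisy-or diverse density of a candidate target point t: high when t is close
    to at least one instance in every positive bag and far from all instances in
    every negative bag (a standard DD formulation; the scale is an assumption)."""
    def p_inst(bag):
        return np.exp(-scale * np.sum((bag - t) ** 2, axis=1))
    dd = 1.0
    for bag in pos_bags:
        dd *= 1.0 - np.prod(1.0 - p_inst(bag))
    for bag in neg_bags:
        dd *= np.prod(1.0 - p_inst(bag))
    return dd

rng = np.random.default_rng(10)
target = np.array([2.0, 2.0])
pos_bags = [np.vstack([rng.normal(size=(4, 2)), target + 0.1 * rng.normal(size=2)])
            for _ in range(5)]
neg_bags = [rng.normal(size=(5, 2)) for _ in range(5)]

# crude search over all positive-bag instances for the maximal-DD instance
candidates = np.vstack(pos_bags)
best = candidates[np.argmax([diverse_density(c, pos_bags, neg_bags) for c in candidates])]
print(best)                                            # should land near the planted target
```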
