Similar Documents
20 similar documents found.
1.
To address the difficulty of obtaining large numbers of class-labeled samples and the high cost of labeling in supervised learning, an SVM active learning algorithm based on Tri-training semi-supervised learning and convex hull vectors is proposed, combining active learning with semi-supervised learning. The hull vectors of the sample set are computed, and those most likely to become support vectors are selected for labeling. To address the problem that previous active learning algorithms make no further use of unlabeled samples once the most informative ones have been labeled, the Tri-training semi-supervised method is introduced into the SVM active learning process: unlabeled samples whose predicted class labels have high confidence are added to the training set, so that the useful information in the unlabeled set is exploited. Experiments on UCI data sets show that the proposed algorithm obtains an SVM classifier with higher classification accuracy and better generalization when few labeled samples are available, reducing the labeling cost of SVM training.
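A minimal sketch of the two ingredients just described, under simplifying assumptions: the active query picks the unlabeled candidate closest to the current SVM decision boundary, and unlabeled samples whose predicted labels exceed a confidence threshold are returned for inclusion in the training set. The hull-vector pre-selection and the full Tri-training ensemble are omitted; `query_and_augment` and `conf_threshold` are illustrative names.

```python
import numpy as np
from sklearn.svm import SVC

def query_and_augment(X_lab, y_lab, X_unlab, conf_threshold=0.9):
    """One round: pick the unlabeled point nearest the SVM boundary for
    manual labeling, and pseudo-label high-confidence unlabeled points."""
    clf = SVC(kernel="rbf", probability=True).fit(X_lab, y_lab)

    # Active query: smallest absolute decision value = closest to the margin.
    margins = np.abs(clf.decision_function(X_unlab))
    if margins.ndim > 1:                     # one-vs-rest scores for multi-class
        margins = margins.min(axis=1)
    query_idx = int(np.argmin(margins))

    # Semi-supervised step: keep unlabeled points the classifier is sure about.
    proba = clf.predict_proba(X_unlab)
    confident = np.where(proba.max(axis=1) >= conf_threshold)[0]
    pseudo_labels = clf.classes_[proba[confident].argmax(axis=1)]
    return query_idx, confident, pseudo_labels
```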

2.
In many information processing tasks, large numbers of unlabeled samples are easy to obtain, but annotating them is time-consuming and labor-intensive. As an important learning paradigm in machine learning, active learning reduces the cost of manual annotation by selecting the most informative samples for labeling. However, most existing active learning algorithms are classifier-based supervised methods, which are not applicable to sample selection when no label information is available at all. To address this problem, drawing on ideas from optimal experimental design together with the theory of adaptive sparse neighborhood reconstruction, an active learning algorithm based on adaptive sparse neighborhood reconstruction is proposed. The algorithm adapts the neighborhood size to the varying distribution in different regions of the data set and carries out the neighbor search and the computation of the reconstruction coefficients simultaneously, so that the samples that best represent the distribution structure of the data set can be selected without any label information. Experiments on synthetic and real data sets show that, under the same labeling cost, the proposed algorithm achieves higher classification accuracy and better robustness.

3.
As a recently proposed machine learning method, active learning of Gaussian processes can effectively use a small number of labeled examples to train a classifier, which in turn is used to select the most informative examples from unlabeled data for manual labeling. However, in the process of example selection, active learning usually needs to consider all the unlabeled data without exploiting the structural space connectivity among them. This can decrease the classification accuracy to some extent, since the selected points may not be the most informative. To overcome this shortcoming, in this paper we present a method which applies the manifold-preserving graph reduction (MPGR) algorithm to the traditional active learning method of Gaussian processes. MPGR is a simple and efficient example sparsification algorithm which can construct a subset to represent the global structure and simultaneously eliminate the influence of noisy points and outliers. Thus, when actively selecting examples to label, we choose only from the subset constructed by MPGR instead of the whole unlabeled data. We report experimental results on multiple data sets which demonstrate that our method obtains better classification performance compared with the original active learning method of Gaussian processes.

4.
Support vector machines (SVMs) are a popular class of supervised learning algorithms, and are particularly applicable to large and high-dimensional classification problems. Like most machine learning methods for data classification and information retrieval, they require manually labeled data samples in the training stage. However, manual labeling is a time-consuming and error-prone task. One possible solution to this issue is to exploit the large number of unlabeled samples that are easily accessible via the internet. This paper presents a novel active learning method for text categorization. The main objective of active learning is to reduce the labeling effort, without compromising the accuracy of classification, by intelligently selecting which samples should be labeled. The proposed method selects a batch of informative samples using the posterior probabilities provided by a set of multi-class SVM classifiers, and these samples are then manually labeled by an expert. Experimental results indicate that the proposed active learning method significantly reduces the labeling effort, while simultaneously enhancing the classification accuracy.
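A hedged sketch of the batch selection step described above, assuming scikit-learn SVMs with Platt-scaled posteriors: the `batch_size` unlabeled documents with the lowest top-class posterior are returned for manual labeling. The original method's handling of redundancy within a batch is not shown; `select_batch` and `batch_size` are illustrative names.

```python
import numpy as np
from sklearn.svm import SVC

def select_batch(X_lab, y_lab, X_unlab, batch_size=10):
    """Select a batch of unlabeled documents whose predicted class posterior
    is least confident, as a simple uncertainty proxy."""
    clf = SVC(kernel="linear", probability=True).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)          # multi-class posteriors (Platt scaling)
    confidence = proba.max(axis=1)              # probability of the top class
    return np.argsort(confidence)[:batch_size]  # the batch_size least-confident samples
```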

5.
龚彦鹭  吕佳 《计算机应用》2019,39(8):2297-2301
Co-training tends to mislabel highly ambiguous samples, which lowers classifier accuracy, and the unlabeled samples it selects during iteration carry too little implicit useful information. To address these two problems, a co-training algorithm combining active learning and density peak clustering is proposed. Before each iteration, the highly ambiguous unlabeled samples are actively labeled and added to the labeled set; density peak clustering is then applied to the unlabeled samples to obtain the density and relative distance of each unlabeled sample. During iteration, unlabeled samples with high density and large relative distance are selected and classified by naive Bayes (NB), and the process is repeated until the termination condition is met. Actively labeling the highly ambiguous samples alleviates the mislabeling problem, and density peak clustering selects samples that better reflect the structure of the data space. Experiments on eight UCI data sets and the Kaggle pima data set show that, compared with the SSLNBCA algorithm, the accuracy of the proposed algorithm improves by up to 6.7 percentage points, and by 1.46 percentage points on average.
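A minimal sketch of the two density-peak quantities the selection step relies on, in the Rodriguez-and-Laio style: the local density rho under a Gaussian kernel with cutoff distance `dc`, and the relative distance delta to the nearest point of higher density. The co-training and naive Bayes parts are not shown; `density_peaks` is an illustrative name.

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_peaks(X, dc):
    """Local density rho and relative distance delta used by density peak
    clustering; dc is the cutoff distance of the Gaussian kernel."""
    D = cdist(X, X)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0   # exclude each point's own contribution
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        # Distance to the nearest point of higher density; the global
        # density maximum gets the largest distance overall.
        delta[i] = D[i, higher].min() if higher.size else D[i].max()
    return rho, delta
```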

6.
Image classification is an important task in computer vision and machine learning. However, it is known that manually labeling images is time-consuming and expensive, while unlabeled images are easily available. Active learning is a mechanism which tries to determine which unlabeled data points would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples. In this paper, we introduce the idea of column subset selection, which aims to select the most representative columns from a data matrix, into active learning and propose a novel active learning algorithm, column subset selection for active learning (CSSactive). CSSactive selects the most representative images to label, and the other images are then reconstructed from these labeled images; the goal of CSSactive is to minimize the reconstruction error. Besides, most previous active learning approaches are based on linear models and hence only consider linear functions, so they fail to discover the intrinsic geometry of images when the image space is highly nonlinear. To address this problem, we also provide a kernel-based column subset selection for active learning (KCSSactive) algorithm which performs active learning in a Reproducing Kernel Hilbert Space (RKHS) instead of the original image space. Experimental results on Yale, AT&T and COIL20 data sets demonstrate the effectiveness of our proposed approaches.
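A minimal sketch of the column-subset idea, under the assumption that samples are stored as columns of `X`: columns are greedily added so that least-squares reconstruction of the full matrix from the chosen columns has the smallest Frobenius error. The kernelized (KCSSactive) variant is not shown; `greedy_css` is an illustrative name.

```python
import numpy as np

def greedy_css(X, m):
    """Greedily pick m columns of X (samples as columns) so that projecting X
    onto their span gives the smallest Frobenius reconstruction error."""
    selected = []
    for _ in range(m):
        best_j, best_err = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            C = X[:, selected + [j]]
            # Least-squares reconstruction of all columns from the chosen ones.
            coef, *_ = np.linalg.lstsq(C, X, rcond=None)
            err = np.linalg.norm(X - C @ coef)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected
```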

7.
A semi-supervised clustering algorithm based on an active learning strategy is proposed, which selects the most informative data for labeling. First, the data set is coarsely clustered with the traditional K-means algorithm. Second, from the coarse clustering result, the membership degree of each data point with respect to each cluster is computed; candidate points whose difference between the largest and second-largest membership degrees is below a threshold are screened out, and among them the points with the smallest differences are selected as the most informative data and labeled. Finally, the remaining unlabeled points in the candidate set are assigned to the cluster whose labeled data they are closest to on average. Experiments show that the proposed active learning strategy identifies the most informative data well, and the semi-supervised clustering algorithm based on it achieves high accuracy on different test data sets.
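A minimal sketch of the selection steps just described, with one simplifying assumption: cluster memberships are derived from inverse distances to the K-means centroids rather than the paper's membership definition. `informative_candidates` and `threshold` are illustrative names.

```python
import numpy as np
from sklearn.cluster import KMeans

def informative_candidates(X, n_clusters, threshold):
    """Coarse K-means, soft memberships from inverse distances, and the
    small-margin filter: ambiguous points are the most informative."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    dist = km.transform(X)                         # distance to each centroid
    member = 1.0 / (dist + 1e-12)
    member /= member.sum(axis=1, keepdims=True)    # normalize to memberships

    top2 = np.sort(member, axis=1)[:, -2:]         # second-largest, largest
    margin = top2[:, 1] - top2[:, 0]
    candidates = np.where(margin < threshold)[0]
    # Smallest margins first: these are the points to label.
    return candidates[np.argsort(margin[candidates])]
```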

8.
In practical applications, because of the diversity, fuzziness, and complexity of the physical world, data mining problems in which the class information of a large number of samples is unknown arise frequently. Traditional methods usually rely on known class information to mine data effectively, and there is still no effective way to handle multi-class data whose pattern class information is unknown. For the problem of mining multi-class samples with unknown class information, a pattern class mining model based on active learning (PM_AL) is proposed. By measuring the relationship between the pattern classes already obtained and the unlabeled samples, the model introduces a sample difference degree to extract the most valuable samples, and uses active learning to quickly discover, at a small labeling cost, the possible pattern classes implied by the unlabeled samples, which helps turn a multi-class problem without class labels into one with class labels. Experimental results show that PM_AL can handle the pattern class mining problem without class information at a small labeling cost.

9.
Learning from labeled and unlabeled data using a minimal number of queries
The considerable time and expense required for labeling data has prompted the development of algorithms which maximize the classification accuracy for a given amount of labeling effort. On the one hand, the effort has been to develop the so-called "active learning" algorithms which sequentially choose the patterns to be explicitly labeled so as to realize the maximum information gain from each labeling. On the other hand, the effort has been to develop algorithms that can learn from labeled as well as the more abundant unlabeled data. Proposed in this paper is an algorithm that integrates the benefits of active learning with the benefits of learning from labeled and unlabeled data. Our approach is based on reversing the roles of the labeled and unlabeled data. Specifically, we use a Genetic Algorithm (GA) to iteratively refine the class membership of the unlabeled patterns so that the maximum a posteriori (MAP) based predicted labels of the patterns in the labeled dataset are in agreement with the known labels. This reversal of the role of labeled and unlabeled patterns leads to an implicit class assignment of the unlabeled patterns. For active learning, we use a subset of the GA population to construct multiple MAP classifiers. Points in the input space where there is maximal disagreement amongst these classifiers are then selected for explicit labeling. The learning from labeled and unlabeled data and active learning phases are interlaced and together provide accurate classification while minimizing the labeling effort.
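The GA-based refinement of the unlabeled points' class memberships is hard to compress, so the sketch below covers only the active learning step the abstract describes: measuring disagreement among a committee of classifiers (here via vote entropy) and querying the point where disagreement is maximal. `committee_disagreement` is an illustrative name, and the committee is assumed to be any list of fitted scikit-learn-style classifiers.

```python
import numpy as np

def committee_disagreement(committee, X_unlab):
    """Vote-entropy disagreement across a committee of fitted classifiers;
    the point with the largest entropy is queried for an explicit label."""
    votes = np.stack([clf.predict(X_unlab) for clf in committee])   # (n_models, n_samples)
    entropies = np.empty(votes.shape[1])
    for i in range(votes.shape[1]):
        _, counts = np.unique(votes[:, i], return_counts=True)
        p = counts / counts.sum()
        entropies[i] = -(p * np.log(p)).sum()
    return int(np.argmax(entropies))
```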

10.
In content-based image retrieval (CBIR), relevance feedback has been proven to be a powerful tool for bridging the gap between low level visual features and high level semantic concepts. Traditionally, relevance feedback driven CBIR is often considered as a supervised learning problem where the user provided feedbacks are used to learn a distance metric or classification function. However, CBIR is intrinsically a semi-supervised learning problem in which the testing samples (images in the database) are present during the learning process. Moreover, when there are no sufficient feedbacks, these methods may suffer from the overfitting problem. In this paper, we propose a novel neighborhood preserving regression algorithm which makes efficient use of both labeled and unlabeled images. By using the unlabeled images, the geometrical structure of the image space can be incorporated into the learning system through a regularizer. Specifically, from all the functions which minimize the empirical loss on the labeled images, we select the one which best preserves the local neighborhood structure of the image space. In this way, our method can obtain a regression function which respects both semantic and geometrical structures of the image database. We present experimental evidence suggesting that our algorithm is able to use unlabeled data effectively for image retrieval.
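A hedged sketch of the general idea, not the paper's exact formulation: among functions fitting the labeled images, prefer one that is smooth over a neighborhood graph built from all images. A graph-Laplacian penalty stands in for the neighborhood-preserving regularizer; `laplacian_regularized_regression` and `lam` are illustrative names.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_regularized_regression(X_all, X_lab, y_lab, lam=0.1, k=5):
    """Least-squares fit on the labeled images plus a graph smoothness penalty
    over all images (labeled and unlabeled)."""
    W = kneighbors_graph(X_all, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                       # symmetric adjacency
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian

    d = X_all.shape[1]
    A = X_lab.T @ X_lab + lam * X_all.T @ L @ X_all
    w = np.linalg.solve(A + 1e-8 * np.eye(d), X_lab.T @ y_lab)
    return w                                     # linear scoring function f(x) = x @ w
```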

11.
The traditional setting of supervised learning requires a large amount of labeled training examples in order to achieve good generalization. However, in many practical applications, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semisupervised learning has attracted much attention. Previous research on semisupervised learning mainly focuses on semisupervised classification. Although regression is almost as important as classification, semisupervised regression is largely understudied. In particular, although cotraining is a main paradigm in semisupervised learning, few works have been devoted to cotraining-style semisupervised regression algorithms. In this paper, a cotraining-style semisupervised regression algorithm, COREG, is proposed. This algorithm uses two regressors, each of which labels the unlabeled data for the other, where the confidence in labeling an unlabeled example is estimated through the amount of reduction in mean squared error over the labeled neighborhood of that example. Analysis and experiments show that COREG can effectively exploit unlabeled data to improve regression estimates.
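A minimal sketch of COREG's confidence estimate for a single candidate, assuming kNN regressors: the reduction in squared error over the candidate's labeled neighborhood after retraining with the pseudo-labeled pair. The alternating two-regressor loop is omitted; `coreg_confidence` is an illustrative name.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def coreg_confidence(X_lab, y_lab, x_u, k=3):
    """Confidence of pseudo-labeling x_u: reduction in squared error over
    x_u's labeled neighborhood after adding the pseudo-labeled pair."""
    reg = KNeighborsRegressor(n_neighbors=k).fit(X_lab, y_lab)
    y_u = reg.predict(x_u.reshape(1, -1))[0]                 # pseudo-label for x_u

    # Labeled neighborhood of x_u under the current regressor.
    neigh = reg.kneighbors(x_u.reshape(1, -1), return_distance=False)[0]
    before = np.sum((y_lab[neigh] - reg.predict(X_lab[neigh])) ** 2)

    reg2 = KNeighborsRegressor(n_neighbors=k).fit(
        np.vstack([X_lab, x_u]), np.append(y_lab, y_u))
    after = np.sum((y_lab[neigh] - reg2.predict(X_lab[neigh])) ** 2)
    return before - after                                    # larger = more confident
```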

12.
In real applications of inductive learning for classification, labeled instances are often deficient, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying to improve the classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures for selecting queries are commonly based on the initial hypotheses sampled from only a few labeled instances. In such a circumstance, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task. Consequently, the informative measures will possibly select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. However, a significant challenge in transfer learning is how to measure the similarity between the source and the target tasks. One needs to be aware of different distributions or label assignments from unrelated source tasks; otherwise, they will lead to degenerated performance while transferring. Also, how to design an effective strategy to avoid selecting irrelevant samples to query is still an open question. To tackle these issues, we propose a hybrid algorithm for active learning with the help of transfer learning by adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy which could eliminate unnecessary instances in the input space and models in the model space. Extensive experiments on both the synthetic and the real data sets show that the proposed algorithm is able to query fewer instances with a higher accuracy and that it converges faster than the state-of-the-art methods.

13.
Most machine learning tasks in data classification and information retrieval require manually labeled data examples in the training stage. The goal of active learning is to select the most informative examples for manual labeling in these learning tasks. Most of the previous studies in active learning have focused on selecting a single unlabeled example in each iteration. This could be inefficient, since the classification model has to be retrained for every acquired labeled example. It is also inappropriate for the setup of information retrieval tasks where the user's relevance feedback is often provided for the top K retrieved items. In this paper, we present a framework for batch mode active learning, which selects a number of informative examples for manual labeling in each iteration. The key feature of batch mode active learning is to reduce the redundancy among the selected examples such that each example provides unique information for model updating. To this end, we employ the Fisher information matrix as the measurement of model uncertainty, and choose the set of unlabeled examples that can efficiently reduce the Fisher information of the classification model. We apply our batch mode active learning framework to both text categorization and image retrieval. Promising results show that our algorithms are significantly more effective than the active learning approaches that select unlabeled examples based only on their informativeness for the classification model.
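A hedged sketch of Fisher-information-driven batch selection under a binary logistic model, not the paper's exact criterion: each candidate x with predicted probability p contributes p(1-p)xx^T to the model's Fisher information, and the batch is grown greedily to maximize the log-determinant of the accumulated information, a D-optimal-style proxy for reducing model uncertainty. `fisher_batch` and `ridge` are illustrative names.

```python
import numpy as np

def fisher_batch(X_unlab, probs, batch_size, ridge=1e-3):
    """Greedy D-optimal-style batch selection for a binary logistic model.
    probs holds the current predicted positive-class probabilities."""
    d = X_unlab.shape[1]
    F = ridge * np.eye(d)                       # regularized accumulated information
    chosen = []
    for _ in range(batch_size):
        best_j, best_gain = None, -np.inf
        for j in range(len(X_unlab)):
            if j in chosen:
                continue
            x = X_unlab[j]
            Fj = F + probs[j] * (1 - probs[j]) * np.outer(x, x)
            gain = np.linalg.slogdet(Fj)[1]     # log-determinant after adding x
            if gain > best_gain:
                best_j, best_gain = j, gain
        chosen.append(best_j)
        x = X_unlab[best_j]
        F += probs[best_j] * (1 - probs[best_j]) * np.outer(x, x)
    return chosen
```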

14.
In image retrieval, relevance feedback often provides too few labeled samples, so supervised learning methods suffer from overfitting. A novel semi-supervised algorithm that makes effective use of unlabeled data, neighborhood preserving regression, is proposed: by minimizing the empirical error on the labeled data, it selects the regression function with the best overall performance, so that both the semantic features of the images and the geometric structure of the image space are taken into account and the overfitting problem is alleviated. Experimental results show that the algorithm effectively improves the performance of image retrieval systems.

15.
Label Propagation through Linear Neighborhoods
In many practical data mining applications such as text classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms have aroused considerable interest in the data mining and machine learning fields. In recent years, graph-based semi-supervised learning has become one of the most active research areas in the semi-supervised learning community. In this paper, a novel graph-based semi-supervised learning approach is proposed based on a linear neighborhood model, which assumes that each data point can be linearly reconstructed from its neighborhood. Our algorithm, named linear neighborhood propagation (LNP), can propagate the labels from the labeled points to the whole data set using these linear neighborhoods with sufficient smoothness. A theoretical analysis of the properties of LNP is presented in this paper. Furthermore, we also derive an easy way to extend LNP to out-of-sample data. Promising experimental results are presented for synthetic data, digit, and text classification tasks.
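A simplified sketch of LNP under two assumptions: the reconstruction weights are obtained by regularized least squares and then normalized (the original algorithm solves a small quadratic program with nonnegative weights), and `y_init` is a one-hot label matrix with zero rows for unlabeled points while `mask_labeled` marks the labeled rows. Parameter names are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lnp(X, y_init, mask_labeled, k=8, alpha=0.99, n_iter=200, reg=1e-3):
    """Reconstruct each point from its k neighbors, then propagate labels
    through the resulting linear-neighborhood graph."""
    n = len(X)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)

    W = np.zeros((n, n))
    for i in range(n):
        neigh = idx[i, 1:]                          # drop the point itself
        Z = X[neigh] - X[i]                         # local coordinates
        G = Z @ Z.T + reg * np.eye(k)               # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, neigh] = w / w.sum()                   # weights sum to one

    F = y_init.astype(float)                        # label scores per class
    for _ in range(n_iter):
        F = alpha * (W @ F) + (1 - alpha) * y_init
        F[mask_labeled] = y_init[mask_labeled]      # clamp the known labels
    return F.argmax(axis=1)
```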

16.
In active learning, neighborhood entropy (Neighborhood Entropy) can be used as the criterion for selecting examples: the example with the maximum entropy is the one whose class label is the most uncertain under the nearest-neighbor classification rule, and labeling highly uncertain examples yields high classification performance with as few labeled examples as possible. This paper proposes an active learning algorithm based on neighborhood entropy. The algorithm first computes, for each unlabeled example, the class entropy of its neighboring examples, and then selects the example with the maximum entropy for labeling. Experiments show that selecting examples by neighborhood entropy achieves higher classification performance than selection by maximal distance (Maximal Distance) or random selection.
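A minimal sketch of the neighborhood-entropy criterion: for each unlabeled example, compute the entropy of the class labels of its k nearest labeled neighbors and query the example with the largest entropy. `neighborhood_entropy_query` is an illustrative name.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_entropy_query(X_lab, y_lab, X_unlab, k=5):
    """Pick the unlabeled example whose k nearest labeled neighbors have the
    most mixed class labels (maximum entropy)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_lab)
    _, idx = nn.kneighbors(X_unlab)
    classes = np.unique(y_lab)

    entropies = np.empty(len(X_unlab))
    for i, neigh in enumerate(idx):
        counts = np.array([(y_lab[neigh] == c).sum() for c in classes])
        p = counts / counts.sum()
        p = p[p > 0]
        entropies[i] = -(p * np.log(p)).sum()
    return int(np.argmax(entropies))       # index of the most ambiguous example
```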

17.
In many classification tasks there are large numbers of unlabeled samples, and obtaining their labels is time-consuming and expensive. Using active learning to identify the key samples that most deserve labeling makes it possible to build a high-accuracy classifier while minimizing the labeling cost. This paper proposes a PageRank-based active learning algorithm (PAL) that makes full use of the data distribution for effective sample selection. Using PageRank, the neighborhoods, the score matrix, and the ranking vector are computed in turn from the similarity relations between samples; representative samples are selected, a binary tree is built from their similarity relations, and the tree is used to cluster, label, and predict the representative samples; the representative samples are then used as the training set to classify the remaining samples. Experiments on eight public data sets, in comparison with five traditional classification algorithms and three popular active learning algorithms, show that PAL achieves better classification results.
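A hedged sketch of only the ranking stage: PageRank is run on a symmetrized kNN similarity graph and samples are ordered by score, so that high-ranking samples can serve as representatives to label first. The binary-tree clustering and prediction stages of PAL are not shown; `pagerank_ranking` is an illustrative name.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def pagerank_ranking(X, k=10, damping=0.85, n_iter=100):
    """Rank samples by PageRank on a symmetrized kNN similarity graph."""
    W = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    W[W > 0] = np.exp(-W[W > 0])            # turn distances into similarities
    W = np.maximum(W, W.T)                  # symmetrize the graph

    P = W / W.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    n = len(X)
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):                 # power iteration
        r = (1 - damping) / n + damping * (P.T @ r)
    return np.argsort(-r)                   # indices, most representative first
```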

18.
Existing threat-awareness algorithms incur a high sample labeling cost and use only the labeled threat samples when training the classifier. To address this, an active learning algorithm based on graph constraints and pre-clustering is proposed; it aims to train a better classifier and perceive threat scenarios effectively by reducing the cost of labeling threat samples and making full use of the unlabeled threat samples as an aid to classifier training. The algorithm first trains a classifier on the set of labeled threat samples, then selects the most valuable threat sample from the unlabeled set and labels it, adds the newly labeled sample to the labeled set and removes it from the unlabeled set, and finally retrains the classifier on the new labeled set, repeating until the stopping condition is met. Simulation experiments show that the proposed algorithm reaches the target accuracy while reducing the labeling cost and can perceive threat scenarios effectively.
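A minimal sketch of the query-label-retrain loop described above, with uncertainty sampling standing in for the paper's graph-constraint and pre-clustering criterion; `oracle` is a hypothetical callable that returns the human-provided label, and `budget` is the number of queries.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pool_based_loop(X_lab, y_lab, X_pool, oracle, budget):
    """Generic pool-based active learning loop with least-confidence queries."""
    X_lab, y_lab, X_pool = X_lab.copy(), y_lab.copy(), X_pool.copy()
    for _ in range(budget):
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_pool)
        q = int(np.argmin(proba.max(axis=1)))          # least-confident sample
        X_lab = np.vstack([X_lab, X_pool[q]])
        y_lab = np.append(y_lab, oracle(X_pool[q]))    # ask the annotator
        X_pool = np.delete(X_pool, q, axis=0)
    return LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
```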

19.
《Journal of Process Control》2014,24(9):1454-1461
This contribution proposes a new active learning strategy for smart soft sensor development. The main objective of the smart soft sensor is to opportunely collect labeled data samples in such a way as to minimize the error of the regression process while minimizing the number of labeled samples used, and thus to reduce the costs related to labeling training samples. Instead of randomly labeling data samples, the smart soft sensor only labels those data samples which can provide the most significant information for construction of the soft sensor. In this paper, without loss of generality, the smart soft sensor is built based on the widely used principal component regression model. For performance evaluation, an industrial case study is provided. Compared to the random sample labeling strategy, both accuracy and stability have been improved by the active learning strategy based smart soft sensor.
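A hedged sketch, not the paper's strategy: a principal component regression soft sensor plus one active query chosen by leverage in the score space, on the assumption that high-leverage samples are poorly covered by the current labeled data. `pcr_active_query` is an illustrative name.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pcr_active_query(X_lab, y_lab, X_unlab, n_components=3):
    """PCR soft sensor plus one active query by leverage in score space."""
    pca = PCA(n_components=n_components).fit(np.vstack([X_lab, X_unlab]))
    T_lab, T_unlab = pca.transform(X_lab), pca.transform(X_unlab)

    model = LinearRegression().fit(T_lab, y_lab)        # the PCR soft sensor

    # Leverage h = t^T (T_lab^T T_lab)^(-1) t: high-leverage samples are the
    # least covered by the current labeled data and are labeled next.
    S_inv = np.linalg.inv(T_lab.T @ T_lab + 1e-8 * np.eye(n_components))
    leverage = np.einsum("ij,jk,ik->i", T_unlab, S_inv, T_unlab)
    return model, int(np.argmax(leverage))
```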

20.
Semi-supervised dimensionality reduction methods play an important role in pattern recognition and are likely to be particularly suitable for plant leaf and palmprint classification, since labeling plant leaf and palmprint images often requires expensive human labor, whereas unlabeled images are far easier to obtain at very low cost. In this paper, we attempt to utilize the unlabeled data to aid the plant leaf and palmprint classification task when only a limited number of labeled plant leaf or palmprint samples are available, and propose a semi-supervised locally discriminant projection (SSLDP) algorithm for plant leaf and palmprint classification. By making use of both labeled and unlabeled data in learning a transformation for dimensionality reduction, the proposed method can overcome the small-sample-size (SSS) problem under the situation where labeled data are scant. In SSLDP, the labeled data points, combined with the unlabeled ones, are used to construct the within-class and between-class weight matrices incorporating the neighborhood information of the data set. The experiments on plant leaf and palmprint databases demonstrate that SSLDP is effective and feasible for plant leaf and palmprint classification.
