Similar articles
1.
Web image annotation based on enhanced sparsity feature selection
史彩娟  阮秋琦 《软件学报》2015,26(7):1800-1811
Facing the explosive growth of web images, web image annotation has become a hot research topic in recent years, and sparse feature selection plays an important role in improving the efficiency and performance of web image annotation. This paper proposes an enhanced-sparsity feature selection algorithm for web image annotation, namely semi-supervised sparse feature selection based on the l2,1/2 matrix norm with shared subspace learning (SFSLS). SFSLS applies the l2,1/2 matrix norm to select the sparsest and most discriminative features, and exploits the correlation information among different features through shared subspace learning. In addition, graph-Laplacian-based semi-supervised learning allows SFSLS to use both labeled and unlabeled data. An effective iterative algorithm is designed to optimize the objective function. SFSLS is compared with other sparse feature selection algorithms on two large-scale web image databases, and the results show that SFSLS is better suited to large-scale web image annotation.
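For orientation, a hedged reconstruction of the general form of such a semi-supervised sparse feature selection objective (the exact weighting and the shared-subspace term used by SFSLS may differ from this sketch):

\[
\min_{W}\;\operatorname{tr}\!\left(W^{\top} X L X^{\top} W\right)
\;+\;\mu\,\bigl\lVert X^{\top} W - Y \bigr\rVert_{F}^{2}
\;+\;\lambda\,\lVert W \rVert_{2,1/2}^{1/2},
\qquad
\lVert W \rVert_{2,1/2} \;=\; \Bigl(\sum_{i=1}^{d} \lVert w^{i} \rVert_{2}^{1/2}\Bigr)^{2},
\]

where X is the d-by-n data matrix, Y the (partially observed) label matrix, L the graph Laplacian built over labeled and unlabeled samples, W the feature-selection matrix whose i-th row w^i scores the i-th feature, and \mu, \lambda trade off the fitting and sparsity terms; rows of W driven to zero correspond to discarded features.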

2.
As a recently proposed machine learning method, active learning of Gaussian processes can effectively use a small number of labeled examples to train a classifier, which in turn is used to select the most informative examples from unlabeled data for manual labeling. However, in the process of example selection, active learning usually needs to consider all the unlabeled data without exploiting the structural space connectivity among them. This decreases the classification accuracy to some extent, since the selected points may not be the most informative. To overcome this shortcoming, in this paper we present a method which applies the manifold-preserving graph reduction (MPGR) algorithm to the traditional active learning method of Gaussian processes. MPGR is a simple and efficient example sparsification algorithm which can construct a subset to represent the global structure and simultaneously eliminate the influence of noisy points and outliers. Therefore, when actively selecting examples to label, we choose only from the subset constructed by MPGR instead of the whole unlabeled data. We report experimental results on multiple data sets which demonstrate that our method obtains better classification performance compared with the original active learning method of Gaussian processes.
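A minimal sketch of the idea, assuming a greedy degree-based reduction as a stand-in for MPGR and least-confidence uncertainty for the Gaussian process (both are simplifications of the paper's method; function names and parameters are illustrative):

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics.pairwise import rbf_kernel

def mpgr_subset(X, subset_size, n_neighbors=10, gamma=1.0):
    """Greedy manifold-preserving sparsification (simplified): keep points with the
    largest total edge weight to the rest of the pool, so the retained subset covers
    the dense regions of the manifold and downweights outliers."""
    W = rbf_kernel(X, gamma=gamma)
    np.fill_diagonal(W, 0.0)
    # keep only the n_neighbors strongest edges per node
    thresh = np.sort(W, axis=1)[:, -n_neighbors][:, None]
    W = np.where(W >= thresh, W, 0.0)
    chosen, remaining = [], list(range(len(X)))
    for _ in range(subset_size):
        degrees = W[remaining][:, remaining].sum(axis=1)
        best = remaining[int(np.argmax(degrees))]
        chosen.append(best)
        remaining.remove(best)
    return np.array(chosen)

def active_gp_with_mpgr(X_lab, y_lab, X_unlab, n_queries=10, subset_size=100):
    candidates = mpgr_subset(X_unlab, min(subset_size, len(X_unlab)))
    gp = GaussianProcessClassifier().fit(X_lab, y_lab)
    proba = gp.predict_proba(X_unlab[candidates])
    uncertainty = 1.0 - proba.max(axis=1)        # least-confidence score
    order = np.argsort(-uncertainty)[:n_queries]
    return candidates[order]                     # indices into X_unlab to label manually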

3.
Confronted with the explosive growth of web images, web image annotation has become a critical research issue for image search and indexing. Sparse feature selection plays an important role in improving the efficiency and performance of web image annotation. Meanwhile, it is beneficial to develop an effective mechanism that leverages unlabeled training data for large-scale web image annotation. In this paper we propose a novel sparse feature selection framework for web image annotation, namely sparse Feature Selection based on Graph Laplacian (FSLG). FSLG applies the l2,1/2 matrix norm in the sparse feature selection algorithm to select the sparsest and most discriminative features. Additionally, graph-Laplacian-based semi-supervised learning is used to exploit both labeled and unlabeled data for enhancing the annotation performance. An efficient iterative algorithm is designed to optimize the objective function. Extensive experiments on two web image datasets are performed and the results illustrate that our method is promising for large-scale web image annotation.

4.
In semi-supervised learning, the random selection of unlabeled samples often degrades classifier performance and causes instability; at the same time, traditional semi-supervised learning algorithms do not perform well on high-dimensional data that contain only a few labeled samples. To address these problems, this paper explores both the sample space and the feature space and proposes a safe semi-supervised learning algorithm combining stochastic subspace technology and ensemble technology (S3LSE) for classifying high-dimensional data with very few labeled samples. First, S3LSE uses the random subspace technique to decompose the high-dimensional dataset into B feature subsets and optimizes each subset according to the implicit information among samples, producing B optimal feature subsets. Next, each optimal feature subset is sampled to form G sample subsets; within each sample subset a safe labeling procedure is used to enlarge the labeled set, G classifiers are trained, and these G classifiers are combined into an ensemble. Then the B ensemble classifiers built from the B optimal feature subsets are combined again to classify the high-dimensional data. Finally, experiments that simulate the semi-supervised learning process on high-dimensional datasets show that S3LSE achieves good performance.

5.
李延超  肖甫  陈志  李博 《软件学报》2020,31(12):3808-3822
Active learning selects samples from a large pool of unlabeled data and submits them to an expert for labeling. Existing batch-mode active learning algorithms suffer from three main limitations: (1) some methods rely on a single selection criterion or on assumptions about the data or model, which makes it difficult to find unlabeled samples that are both uncertain and representative; (2) the performance of existing batch-mode methods depends heavily on the accuracy of the similarity measure between samples, such as a predefined function or a divergence measure; (3) noisy labels continue to degrade the performance of batch-mode active learning. This paper proposes a deep-learning-based batch-mode active learning method. A deep neural network generates learned representations of labeled and unlabeled samples, and a label-cycling scheme links unlabeled samples to labeled samples and back to labeled samples of the same class, so that uncertainty and representativeness are considered simultaneously and the algorithm is robust to noisy labels. A submodular function ensures that the selected batch is diverse, and adaptive parameter optimization lets the algorithm automatically balance uncertainty and representativeness. The proposed method is applied to semi-supervised classification and semi-supervised clustering, and experimental results show that it outperforms several state-of-the-art methods.

6.
Learning from labeled and unlabeled data using a minimal number of queries
The considerable time and expense required for labeling data have prompted the development of algorithms which maximize the classification accuracy for a given amount of labeling effort. On the one hand, the effort has been to develop so-called "active learning" algorithms which sequentially choose the patterns to be explicitly labeled so as to realize the maximum information gain from each labeling. On the other hand, the effort has been to develop algorithms that can learn from labeled as well as the more abundant unlabeled data. Proposed in this paper is an algorithm that integrates the benefits of active learning with the benefits of learning from labeled and unlabeled data. Our approach is based on reversing the roles of the labeled and unlabeled data. Specifically, we use a Genetic Algorithm (GA) to iteratively refine the class membership of the unlabeled patterns so that the maximum a posteriori (MAP) based predicted labels of the patterns in the labeled dataset are in agreement with the known labels. This reversal of the roles of labeled and unlabeled patterns leads to an implicit class assignment of the unlabeled patterns. For active learning, we use a subset of the GA population to construct multiple MAP classifiers. Points in the input space where there is maximal disagreement amongst these classifiers are then selected for explicit labeling. The learning from labeled and unlabeled data and active learning phases are interlaced and together provide accurate classification while minimizing the labeling effort.
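A compact, hedged sketch of this role reversal: a toy GA over candidate labelings of the unlabeled pool, with fitness measured by agreement on the known labels, followed by a disagreement-based query step. A Gaussian naive Bayes stands in for the MAP classifier, the best labelings themselves vote rather than classifiers trained from them, and all hyperparameters are illustrative:

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def fitness(candidate_labels, X_unlab, X_lab, y_lab):
    """Train on the unlabeled pool with candidate labels; score agreement on the labeled set."""
    if len(np.unique(candidate_labels)) < 2:
        return 0.0
    clf = GaussianNB().fit(X_unlab, candidate_labels)
    return (clf.predict(X_lab) == y_lab).mean()

def ga_refine_labels(X_lab, y_lab, X_unlab, n_classes, pop_size=30, generations=50, mut_rate=0.02):
    pop = rng.integers(0, n_classes, size=(pop_size, len(X_unlab)))
    for _ in range(generations):
        scores = np.array([fitness(ind, X_unlab, X_lab, y_lab) for ind in pop])
        parents = pop[np.argsort(-scores)[: pop_size // 2]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(len(a)) < 0.5, a, b)       # uniform crossover
            mut = rng.random(len(child)) < mut_rate                # random label mutation
            child[mut] = rng.integers(0, n_classes, size=mut.sum())
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X_unlab, X_lab, y_lab) for ind in pop])
    return pop[np.argsort(-scores)]

def disagreement_query(pop_sorted, n_committee=5, n_queries=5):
    """Active step: take the best labelings as a committee and query the unlabeled
    points on which their assigned labels disagree most (vote entropy)."""
    committee = pop_sorted[:n_committee]
    n_classes = int(committee.max()) + 1
    votes = np.stack([np.bincount(committee[:, j], minlength=n_classes)
                      for j in range(committee.shape[1])])
    p = votes / n_committee
    vote_entropy = -(np.where(p > 0, p * np.log(p), 0.0)).sum(axis=1)
    return np.argsort(-vote_entropy)[:n_queries]   # indices into X_unlab to label explicitly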

7.
Cost-Sensitive Active Visual Category Learning
We present an active learning framework that predicts the tradeoff between the effort and information gain associated with a candidate image annotation, thereby ranking unlabeled and partially labeled images according to their expected "net worth" to an object recognition system. We develop a multi-label multiple-instance approach that accommodates realistic images containing multiple objects and allows the category-learner to strategically choose what annotations it receives from a mixture of strong and weak labels. Since the annotation cost can vary depending on an image's complexity, we show how to improve the active selection by directly predicting the time required to segment an unlabeled image. Our approach accounts for the fact that the optimal use of manual effort may call for a combination of labels at multiple levels of granularity, as well as accurate prediction of manual effort. As a result, it is possible to learn more accurate category models with a lower total expenditure of annotation effort. Given a small initial pool of labeled data, the proposed method actively improves the category models with minimal manual intervention.

8.
Current time series classification under the PU setting usually applies semi-supervised learning to automatically label the unlabeled set U and then build a classifier; however, the automatically assigned labels of boundary samples are hard to guarantee, so the resulting classifier performs poorly. To address this, a method called OAL (Only Active Learning) is proposed, which uses active learning to have samples in the unlabeled set U labeled manually. Based on a query-by-committee (QBC) scheme, multiple classifiers built on the labeled set vote to measure the label disagreement of each unlabeled sample; this disagreement is combined with the sample's distribution density to compute its informativeness, which serves as the selection strategy for active learning. Since the amount of manual labeling is limited, OAL is further combined with semi-supervised learning: during the active learning iterations, samples whose votes are highly consistent are labeled automatically, increasing the amount of labeled training data and ensuring enough training data for the classifier. Experiments show that, with a modest amount of manual labeling, this method builds classifiers of higher accuracy on PU datasets than semi-supervised learning.
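A hedged sketch of the selection criterion described above: vote disagreement from a small committee, weighted by local density. The committee members (bootstrapped decision trees), the density estimate, and the product combination are illustrative choices, not necessarily the paper's exact ones:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import NearestNeighbors

def qbc_density_informativeness(X_lab, y_lab, X_unlab, n_committee=5, k_density=10, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y_lab)
    # committee built from bootstrap resamples of the labeled set
    votes = np.zeros((len(X_unlab), len(classes)))
    for _ in range(n_committee):
        idx = rng.integers(0, len(X_lab), size=len(X_lab))
        clf = DecisionTreeClassifier().fit(X_lab[idx], y_lab[idx])
        pred = clf.predict(X_unlab)
        votes[np.arange(len(X_unlab)), np.searchsorted(classes, pred)] += 1
    p = votes / n_committee
    vote_entropy = -(np.where(p > 0, p * np.log(p), 0.0)).sum(axis=1)
    # density: inverse of the mean distance to the k nearest unlabeled neighbors
    nn = NearestNeighbors(n_neighbors=k_density + 1).fit(X_unlab)
    dist, _ = nn.kneighbors(X_unlab)
    density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)
    return vote_entropy * density   # higher = more worth sending to a human annotator

# usage: query_idx = np.argsort(-qbc_density_informativeness(X_lab, y_lab, X_unlab))[:budget]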

9.
Chen  Ailin  Yang  Pin  Cheng  Pengsen 《The Journal of supercomputing》2022,78(2):2744-2771

Rumors, advertisements, and malicious links are spread in social networks by social spammers, which affects users' normal access to social networks and causes security problems. Most methods aim to detect social spammers using various features, such as content features, behavior features, and relationship graph features, which rely on large-scale labeled data. However, labeled data for training are scarce in the real world, and manual annotation is time-consuming and labor-intensive. To solve this problem, we propose a novel method which combines an active learning algorithm with a co-training algorithm to make full use of unlabeled data. In co-training, user features are divided into two non-overlapping views, and classifiers are trained iteratively with labeled instances and the most confident pseudo-labeled unlabeled instances. In active learning, the most representative and uncertain instances are selected and annotated with real labels to extend the labeled dataset. Experimental results on the Twitter and Apontador datasets show that our method can effectively detect social spammers in the case of limited labeled data.
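A hedged sketch of how co-training and uncertainty-based querying can be interleaved on two feature views. The view split, the base learners, the confidence thresholds, and the oracle callback are illustrative assumptions, not the paper's exact design:

import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain_active(Xv1, Xv2, y, labeled_mask, rounds=10, add_per_round=20,
                   query_per_round=5, oracle=None):
    """Xv1, Xv2: two non-overlapping feature views; y holds known labels (ignored where unlabeled);
    oracle(indices) is a hypothetical callback returning human-provided labels."""
    labeled, y = labeled_mask.copy(), y.copy()
    for _ in range(rounds):
        c1 = LogisticRegression(max_iter=1000).fit(Xv1[labeled], y[labeled])
        c2 = LogisticRegression(max_iter=1000).fit(Xv2[labeled], y[labeled])
        unl = np.where(~labeled)[0]
        if len(unl) == 0:
            break
        p1, p2 = c1.predict_proba(Xv1[unl]), c2.predict_proba(Xv2[unl])
        # co-training step: add the most confident pseudo-labels from either view
        conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
        order = np.argsort(-conf)[:add_per_round]
        pseudo = np.where(p1.max(axis=1) >= p2.max(axis=1),
                          c1.classes_[p1.argmax(axis=1)],
                          c2.classes_[p2.argmax(axis=1)])
        y[unl[order]] = pseudo[order]
        labeled[unl[order]] = True
        # active step: send the least confident remaining samples to a human oracle
        unl = np.where(~labeled)[0]
        if len(unl) and oracle is not None:
            p1, p2 = c1.predict_proba(Xv1[unl]), c2.predict_proba(Xv2[unl])
            uncertainty = 1.0 - np.minimum(p1.max(axis=1), p2.max(axis=1))
            ask = unl[np.argsort(-uncertainty)[:query_per_round]]
            y[ask] = oracle(ask)
            labeled[ask] = True
    return c1, c2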


10.
Multi-class image classification based on active learning and semi-supervised learning
陈荣  曹永锋  孙洪 《自动化学报》2011,37(8):954-962
Most image classification algorithms require a large number of training samples to train the classifier model. In practice, labeling a large number of samples is tedious and time-consuming, and for some special images, such as synthetic aperture radar (SAR) images, interpreting the content is very difficult, so only a very limited number of labeled samples can be obtained. This paper introduces best-vs-second-best (BvSB) active learning and constrained self-training (CST) into an image classification algorithm based on a support vector machine (SVM) classifier, yielding a new image classification method. BvSB active learning mines the samples that are most valuable to the current classifier model for manual labeling, while CST semi-supervised learning further exploits the large number of unlabeled samples in the dataset, so that good classification performance is achieved at a small labeling cost. The new method is compared with random sample selection, entropy-based uncertainty sampling, and BvSB active learning alone. Experimental results on three optical image datasets and one SAR image dataset show that the new method effectively reduces the number of manually labeled samples needed for classifier training while achieving high accuracy and good robustness.
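A hedged sketch of BvSB sample selection with an SVM (probability estimates via the Platt scaling provided by scikit-learn; the subsequent CST self-training step is omitted):

import numpy as np
from sklearn.svm import SVC

def bvsb_query(X_lab, y_lab, X_unlab, n_queries=10):
    """Best-vs-second-best sampling: a small gap between the two highest class
    probabilities means the classifier is torn between two classes, so the
    sample is valuable to label."""
    clf = SVC(probability=True).fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    top2 = np.sort(proba, axis=1)[:, -2:]
    bvsb_gap = top2[:, 1] - top2[:, 0]          # best minus second-best probability
    return np.argsort(bvsb_gap)[:n_queries]     # smallest gaps first

# usage: idx = bvsb_query(X_lab, y_lab, X_unlab); ask an annotator to label X_unlab[idx]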

11.
Hyperspectral image classification based on deep Bayesian active learning
Labeled samples for hyperspectral image classification are time-consuming and labor-intensive to obtain, unlabeled data are hard to exploit effectively, and combining active learning with deep learning is difficult. Drawing on recent advances in Bayesian deep learning and active learning, this paper proposes a deep Bayesian active learning algorithm for hyperspectral image classification. A convolutional neural network is trained with a small number of labeled samples; an active learning sampling strategy combined with the Bayesian approach then selects, from the unlabeled samples, those about which the model is most uncertain. The selected samples are labeled manually and added to the training set, and the model is retrained, reducing its uncertainty and improving its classification accuracy. Experiments on the PaviaU hyperspectral image show that, with few labeled samples, the proposed method classifies better than traditional methods.
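A hedged sketch of the uncertainty-based selection step, assuming you already have T stochastic forward passes (e.g., Monte Carlo dropout samples) of shape (T, N, C) for the unlabeled pixels. Predictive entropy and BALD-style mutual information are shown as common acquisition choices, not necessarily the exact one used in the paper:

import numpy as np

def acquisition_scores(mc_probs, kind="bald"):
    """mc_probs: array (T, N, C) of class probabilities from T stochastic forward passes."""
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                                   # (N, C)
    predictive_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=1)
    expected_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(axis=2).mean(axis=0)
    if kind == "entropy":
        return predictive_entropy
    return predictive_entropy - expected_entropy                     # BALD mutual information

# usage: query_idx = np.argsort(-acquisition_scores(mc_probs))[:budget]
# label X_unlab[query_idx] manually, add to the training set, retrain the CNN, repeat.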

12.
Label propagation consists of annotating an unlabeled dataset starting from a set of labeled items. However, most current methods exploit only image similarity between labeled and unlabeled images in order to find propagation candidates, which may result, especially in very large datasets, in retrieving mostly near-duplicate images. While such approaches are technically correct, as they maximize the propagation precision, the resulting annotated dataset may not be as useful, since it lacks intra-class variability within the set of images sharing the same label. In this paper, we propose an approach for label propagation which favors the propagation of an object's label to a set of images representing as many different views of that object as possible, while at the same time preserving the relevance of the retrieved items to the query. Our method is based on a diversity-based clustering technique using a random forest framework, together with a label propagation approach that effectively and efficiently propagates annotations through a similarity-based approach operating on clusters. The method was tested on a very large dataset of fish images, achieving good performance in automated label propagation and ensuring diversification of the annotated items while preserving precision.

13.
Semi-supervised model-based document clustering: A comparative study
Semi-supervised learning has become an attractive methodology for improving classification models and is often viewed as using unlabeled data to aid supervised learning. However, it can also be viewed as using labeled data to help clustering, namely, semi-supervised clustering. Viewing semi-supervised learning from a clustering angle is useful in practical situations when the set of labels available in labeled data is not complete, i.e., unlabeled data contain new classes that are not present in labeled data. This paper analyzes several multinomial model-based semi-supervised document clustering methods under a principled model-based clustering framework. The framework naturally leads to a deterministic annealing extension of existing semi-supervised clustering approaches. We compare three (slightly) different semi-supervised approaches for clustering documents: Seeded damnl, Constrained damnl, and Feedback-based damnl, where damnl stands for multinomial model-based deterministic annealing algorithm. The first two are extensions of the seeded k-means and constrained k-means algorithms studied by Basu et al. (2002); the last one is motivated by Cohn et al. (2003). Through empirical experiments on text datasets, we show that: (a) deterministic annealing can often significantly improve the performance of semi-supervised clustering; (b) the constrained approach is the best when available labels are complete, whereas the feedback-based approach excels when available labels are incomplete.

14.
Many data mining applications have a large amount of data, but labeling data is usually difficult, expensive, or time-consuming, as it requires human experts for annotation. Semi-supervised learning addresses this problem by using unlabeled data together with labeled data in the training process. Co-Training is a popular semi-supervised learning algorithm that assumes each example is represented by multiple sets of features (views) and that these views are sufficient for learning and independent given the class. However, these assumptions are strong and are not satisfied in many real-world domains. In this paper, a single-view variant of Co-Training, called Co-Training by Committee (CoBC), is proposed, in which an ensemble of diverse classifiers is used instead of redundant and independent views. We introduce a new labeling confidence measure for unlabeled examples based on estimating the local accuracy of the committee members in the example's neighborhood. We then introduce two new learning algorithms, QBC-then-CoBC and QBC-with-CoBC, which combine the merits of committee-based semi-supervised learning and active learning. The random subspace method is applied to both C4.5 decision trees and 1-nearest-neighbor classifiers to construct the diverse ensembles used for semi-supervised learning and active learning. Experiments show that these two combinations can outperform other non-committee-based ones.

15.
Most machine learning tasks in data classification and information retrieval require manually labeled data examples in the training stage. The goal of active learning is to select the most informative examples for manual labeling in these learning tasks. Most of the previous studies in active learning have focused on selecting a single unlabeled example in each iteration. This could be inefficient, since the classification model has to be retrained for every acquired labeled example. It is also inappropriate for the setup of information retrieval tasks where the user's relevance feedback is often provided for the top K retrieved items. In this paper, we present a framework for batch mode active learning, which selects a number of informative examples for manual labeling in each iteration. The key feature of batch mode active learning is to reduce the redundancy among the selected examples such that each example provides unique information for model updating. To this end, we employ the Fisher information matrix as the measurement of model uncertainty, and choose the set of unlabeled examples that can efficiently reduce the Fisher information of the classification model. We apply our batch mode active learning framework to both text categorization and image retrieval. Promising results show that our algorithms are significantly more effective than the active learning approaches that select unlabeled examples based only on their informativeness for the classification model.
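A hedged sketch of one common way to realize Fisher-information-based batch selection for a binary logistic model (greedy D-optimal-style selection over the pool; the paper's exact criterion, which compares the Fisher information of the pool and of the candidate batch, is more involved):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fisher_batch_select(X_lab, y_lab, X_unlab, batch_size=10, ridge=1e-3):
    """Greedily grow a batch whose accumulated Fisher information has maximal log-determinant,
    so the selected examples are informative yet mutually non-redundant.
    Unoptimized sketch: O(pool * d^3) per pick, suitable only for small pools/dimensions."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    p = clf.predict_proba(X_unlab)[:, 1]
    w = p * (1.0 - p)                       # per-example Fisher weight under the logistic model
    d = X_unlab.shape[1]
    A = ridge * np.eye(d)                   # accumulated Fisher information of the batch
    chosen, remaining = [], list(range(len(X_unlab)))
    for _ in range(batch_size):
        _, base_logdet = np.linalg.slogdet(A)
        gains = []
        for i in remaining:
            xi = X_unlab[i:i + 1].T         # column vector (d, 1)
            _, logdet = np.linalg.slogdet(A + w[i] * (xi @ xi.T))
            gains.append(logdet - base_logdet)
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        xi = X_unlab[best:best + 1].T
        A += w[best] * (xi @ xi.T)
        remaining.remove(best)
    return np.array(chosen)                 # a diverse, informative batch to label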

16.
In this paper, we address the challenges of an insufficient training set and limited feedback information in each relevance feedback (RF) round during content-based image retrieval (CBIR). We propose a novel active learning scheme that uses labeled and unlabeled images to build the initial Support Vector Machine (SVM) classifier for image retrieval. Our framework includes two main components: a pseudo-label strategy and an improved active learning selection method. Moreover, a feature subspace partition algorithm is proposed to model the user's retrieval target by analyzing the relevance-labeled images. Experimental results demonstrate the superiority of the proposed method on a range of databases in terms of retrieval accuracy.

17.
It is very expensive and time-consuming to annotate huge amounts of data, and active learning is a suitable approach to minimize the annotation effort. A novel active learning approach, coupled k-nearest-neighbor pseudo pruning (CKNNPP), is proposed in this paper, based on querying examples by the KNNPP method. KNNPP applies the k-nearest-neighbor technique to find, for each unlabeled sample, its k nearest neighbors among the labeled samples. When these k labeled neighbors do not all belong to the same class, the corresponding unlabeled sample is queried, given its correct label by the supervisor, and added to the labeled training set. Otherwise, the unlabeled sample is not selected but pruned; this is the pseudo pruning, a notion inspired by k-nearest-neighbor pruning preprocessing. The samples selected by KNNPP are considered to lie near or on the optimal classification hyperplane, which is crucial for active learning. In particular, to avoid drift of the optimal classification hyperplane after adding a queried sample, CKNNPP queries two samples with different class labels (a "couple", annotated by the supervisor) via KNNPP and adds them to the training set simultaneously in each iteration. CKNNPP is simple, effective, and robust, and handles classification with unbalanced datasets better than existing methods. The computational complexity of CKNNPP is then analyzed. Additionally, a new stopping criterion is applied, and the classifier used in the active learning iterations is implemented with Lagrangian Support Vector Machines. Finally, twelve UCI datasets, aircraft image datasets, and a radar high-resolution range profile dataset are used to validate the feasibility and effectiveness of the proposed method. The results show that CKNNPP outperforms seven other state-of-the-art active learning approaches.
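A hedged sketch of the KNNPP query rule (an unlabeled sample is queried when its k nearest labeled neighbors disagree); the coupling step of CKNNPP and the LSVM classifier are omitted, and function names are illustrative:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knnpp_query(X_lab, y_lab, X_unlab, k=5):
    """Return indices of unlabeled samples whose k nearest labeled neighbors carry
    conflicting labels; these lie near the decision boundary and are worth querying.
    All other samples are 'pseudo-pruned', i.e., simply skipped."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_lab)
    _, neighbor_idx = nn.kneighbors(X_unlab)
    neighbor_labels = y_lab[neighbor_idx]              # shape (n_unlab, k)
    disagree = np.array([len(np.unique(row)) > 1 for row in neighbor_labels])
    return np.where(disagree)[0]

# usage: ask the supervisor to label X_unlab[knnpp_query(X_lab, y_lab, X_unlab)],
# pairing queried samples from different classes when adding them (the "coupled" step of CKNNPP).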

18.
To address the difficulty of obtaining many labeled samples for supervised learning and the high cost of labeling, this paper combines active learning and semi-supervised learning and proposes an SVM active learning algorithm based on Tri-training semi-supervised learning and convex hull vectors. The hull vectors of the sample set are computed, and the hull vectors most likely to become support vectors are selected for labeling. To overcome the problem that previous active learning algorithms stop exploiting unlabeled samples after the most informative ones have been labeled, Tri-training semi-supervised learning is introduced into the SVM active learning process: unlabeled samples whose predicted labels have high confidence are added to the training set, so that the useful information in the unlabeled set is exploited. Experiments on UCI datasets show that the proposed algorithm obtains an SVM classifier with higher accuracy and better generalization when labeled samples are scarce, reducing the labeling cost of SVM training.

19.
Multimedia understanding for high-dimensional data remains challenging due to the redundant features, noise, and insufficient label information such data contain. Graph-based semi-supervised feature learning is an effective approach to this problem. Nevertheless, existing graph-based semi-supervised methods usually depend on a pre-constructed Laplacian matrix and rarely modify it in the subsequent classification tasks. In this paper, an adaptive local manifold learning based semi-supervised feature selection method is proposed. Compared to the state of the art, the proposed algorithm has two advantages: 1) adaptive local manifold learning and feature selection are integrated jointly into a single framework, where both labeled and unlabeled data are utilized, and the correlations between different components are also considered; 2) a group sparsity constraint, i.e., the l2,1-norm, is imposed to select the most relevant features. We also apply the proposed algorithm to several kinds of multimedia understanding applications. Experimental results demonstrate the effectiveness of the proposed algorithm.

20.
In recent years, hashing-based methods for large-scale similarity search have sparked considerable research interest in the data mining and machine learning communities. While unsupervised hashing-based methods have achieved promising success for metric similarity, they cannot handle semantic similarity, which is usually given in the form of labeled point pairs. To overcome this limitation, some attempts have recently been made at semi-supervised hashing, which aims at learning hash functions from both metric and semantic similarity simultaneously. Existing semi-supervised hashing methods can be regarded as passive hashing, since they assume that the labeled pairs are provided in advance. In this paper, we propose a novel framework, called active hashing, which can actively select the most informative labeled pairs for hash function learning. Specifically, it identifies the most informative points to label and constructs labeled pairs accordingly. Under this framework, we use data uncertainty as a measure of informativeness and develop a batch mode algorithm to speed up active selection. We empirically compare our method with a state-of-the-art passive hashing method on two benchmark data sets, showing that the proposed method can reduce labeling cost as well as overcome the limitations of passive hashing.
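A hedged sketch of the batch selection idea, assuming linear hash functions h_k(x) = sign(w_k^T x): points whose projections fall close to the hashing hyperplanes have uncertain bits and make good candidates for labeling. The uncertainty measure and the pair construction here are illustrative and may differ from the paper's:

import numpy as np

def hashing_uncertainty(X, W):
    """X: (n, d) data; W: (d, b) projection matrix defining b linear hash bits.
    A small |x . w_k| means bit k is unreliable for x, so average closeness over bits."""
    proj = np.abs(X @ W)                    # per-bit margin to the hashing hyperplanes
    return -proj.mean(axis=1)               # higher score = bits closer to the hyperplanes = more uncertain

def select_pairs_for_labeling(X, W, n_points=20):
    """Batch mode: pick the most uncertain points, then pair them up for human
    similar/dissimilar judgments that feed back into hash function learning."""
    idx = np.argsort(-hashing_uncertainty(X, W))[:n_points]
    return [(int(idx[i]), int(idx[j])) for i in range(len(idx)) for j in range(i + 1, len(idx))]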

