Similar Literature
20 similar articles retrieved.
1.
In this paper, we propose a novel semi-supervised learning approach based on the nearest neighbor rule and cut edges. In the first step, a relative neighborhood graph over all training samples is constructed for each unlabeled sample, and the unlabeled samples whose edges all connect to training samples of the same class are labeled. These newly labeled samples are then added to the training set. In the second step, the standard self-training algorithm with the nearest neighbor rule is applied for classification until a predetermined stopping criterion is met. In the third step, a statistical test is applied for label modification, and in the last step the remaining unlabeled samples are classified with the standard nearest neighbor rule. The main advantages of the proposed method are: (1) it reduces error reinforcement by using the relative neighborhood graph for classification in the initial stages of semi-supervised learning; (2) it introduces a label modification mechanism for better classification performance. Experimental results show the effectiveness of the proposed approach.
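As a hedged illustration of steps 2 and 4 of this pipeline, the Python sketch below performs nearest-neighbor self-training and then hands the leftover samples to the plain NN rule; the relative neighborhood graph construction and the statistical test for label modification are not reproduced, and `conf_threshold` and `max_rounds` are hypothetical parameters, not from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_train_nn(X_lab, y_lab, X_unlab, conf_threshold=0.9, max_rounds=20):
    """Self-training with the nearest neighbor rule: repeatedly add the
    unlabeled samples the current classifier is most confident about.
    conf_threshold and max_rounds are illustrative, not from the paper."""
    X_lab, y_lab, X_unlab = map(np.asarray, (X_lab, y_lab, X_unlab))
    for _ in range(max_rounds):
        if len(X_unlab) == 0:
            break
        clf = KNeighborsClassifier(n_neighbors=3).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= conf_threshold
        if not confident.any():
            break  # stopping criterion: no sufficiently confident samples left
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
        X_unlab = X_unlab[~confident]
    # last step: remaining unlabeled samples go to the standard NN rule
    return KNeighborsClassifier(n_neighbors=1).fit(X_lab, y_lab), X_unlab
```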

2.
Nearest neighbor editing aided by unlabeled data
This paper proposes a novel method for nearest neighbor editing. Nearest neighbor editing aims to increase the classifier's generalization ability by removing noisy instances from the training set. Traditionally, nearest neighbor editing edits (removes or retains) each instance by the voting of the instances in the training set (labeled instances). Motivated by semi-supervised learning, however, we propose a novel editing methodology that edits each training instance by the voting of all available instances (both labeled and unlabeled). We expect that editing performance can be boosted by appropriately using unlabeled data. Our idea relies on the fact that in many applications, in addition to the training instances, many unlabeled instances are available, since they require no human annotation effort. Three popular data editing methods, edited nearest neighbor, repeated edited nearest neighbor and All k-NN, are adopted to verify our idea and are tested on a set of UCI data sets. Experimental results indicate that all three editing methods achieve improved performance with the aid of unlabeled data. Moreover, the improvement is more remarkable when the ratio of training data to unlabeled data is small.
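A minimal sketch of the idea, assuming unlabeled points first receive predicted labels so they can vote: Wilson-style edited nearest neighbor over the pooled set is shown, and the paper's exact voting scheme may differ. Integer class labels are assumed.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def enn_with_unlabeled(X_train, y_train, X_unlab, k=3):
    """Edited nearest neighbor: drop a training instance when the majority
    vote of its k neighbors (drawn from labeled AND unlabeled data)
    disagrees with its label. Unlabeled votes use predicted labels."""
    y_unlab = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).predict(X_unlab)
    X_pool = np.vstack([X_train, X_unlab])
    y_pool = np.concatenate([y_train, y_unlab])
    # neighbors of each training point within the pooled set (first hit is itself)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_pool)
    idx = nn.kneighbors(X_train, return_distance=False)[:, 1:]
    majority = np.array([np.bincount(v).argmax() for v in y_pool[idx]])
    keep = majority == y_train
    return X_train[keep], y_train[keep]
```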

3.
To address the low classification accuracy of the NN (nearest neighbor) and kNN (k-nearest neighbor) methods when labeled samples are scarce, a classification method inspired by the semi-supervised learning mechanism of the human brain, which spontaneously exploits unlabeled samples when classifying, is proposed. The method uses the nearest neighbor relationships among unlabeled samples to reduce the dependence of classification accuracy on the number of labeled samples. Classification experiments on the MNIST handwritten digit database and the ORL face database show that, with few labeled samples, the method achieves much higher classification accuracy than the NN and kNN methods.

4.
Liang Xitao, Gu Lei. Computer Science (计算机科学), 2015, 42(6): 228-232, 261.
Word segmentation is a key foundational technique in Chinese natural language processing. To address the shortage of training samples and the time and labor cost of obtaining large numbers of annotated samples, an active learning segmentation method based on the nearest neighbor rule is proposed. A newly proposed selection strategy picks the most valuable samples from a large pool of unannotated samples for annotation; the annotated samples are then added to the training set, which is used to train the segmenter. The method is tested on the PKU, MSR, and Shanxi University datasets and compared with the traditional uncertainty-based selection strategy. Experimental results show that the proposed nearest neighbor active learning method selects more valuable samples, effectively reduces the cost of manual annotation, and also improves segmentation accuracy.
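The abstract does not spell out the selection strategy, so the sketch below is only a hedged stand-in: assuming samples are already feature vectors, it scores unlabeled samples by the distance to their nearest labeled neighbor and returns the farthest ones for annotation, a common nearest-neighbor-based criterion.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_by_nn_distance(X_labeled, X_unlabeled, batch_size=10):
    """Pick the unlabeled samples farthest from their nearest labeled
    neighbor -- a simple distance-based stand-in for the paper's
    nearest-neighbor selection strategy (assumed, not from the paper)."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_labeled)
    dist, _ = nn.kneighbors(X_unlabeled)
    return np.argsort(dist.ravel())[-batch_size:]  # indices to annotate
```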

5.
k nearest neighbor (kNN) is an effective and powerful lazy learning algorithm, and it is easy to implement. However, its performance relies heavily on the quality of the training data. In many complex real-world applications, noise from various sources is often prevalent in large-scale databases; how to eliminate anomalies and improve data quality remains a challenge. To alleviate this problem, in this paper we propose a new anomaly removal and learning algorithm under the kNN framework. The primary characteristic of our method is that the evidence for removing anomalies and predicting the class labels of unseen instances comes from mutual nearest neighbors rather than k nearest neighbors. The advantage is that pseudo nearest neighbors can be identified and excluded from the prediction process. Consequently, the final learning result is more credible. An extensive comparative experimental analysis on UCI datasets provides empirical evidence of the effectiveness of the proposed method in enhancing the performance of the kNN rule.
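The core device here is the mutual nearest neighbor relation; a minimal sketch computing it with scikit-learn follows (Euclidean distance assumed). A point j that appears in i's k-NN list without i appearing in j's would be a "pseudo" neighbor and is excluded by the symmetric mask.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_nearest_neighbors(X, k=5):
    """Return a boolean matrix M where M[i, j] is True iff i and j are
    mutual k-nearest neighbors (each lies in the other's k-NN list)."""
    n = len(X)
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X) \
        .kneighbors(X, return_distance=False)[:, 1:]  # skip self
    knn = np.zeros((n, n), dtype=bool)
    knn[np.repeat(np.arange(n), k), idx.ravel()] = True
    return knn & knn.T  # keep only the symmetric (mutual) edges
```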

6.
The k nearest neighbor is a lazy learning algorithm that is inefficient in the classification phase because it must compare the query sample with all training samples. A recently proposed template reduction method uses only samples near the decision boundary for classification and removes those far from it. However, when class distributions overlap, more border samples are retained, which leads to inefficient classification. Because the number of samples that can be removed is limited, an appropriate feature reduction method is a logical complement for improving classification time. This paper proposes a new prototype reduction method for the k nearest neighbor algorithm based on template reduction and ViSOM. A useful property of ViSOM is that it displays the topology of data on a two-dimensional feature map, providing an intuitive way for users to observe and analyze data. An efficient classification framework is then presented that combines the feature reduction method and the prototype selection algorithm; it needs a very small data size for classification while preserving the recognition rate. In the experiments, both synthetic and real datasets are used to evaluate performance. Experimental results demonstrate that the proposed method obtains above a 70% speedup ratio and a 90% compression ratio while maintaining performance similar to kNN.
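As a loose sketch of the template reduction half only (the ViSOM feature map is not reproduced here), the code below retains border samples, i.e. those whose k nearest neighbors do not all share their class, and drops interior ones; this rule is an assumption in the spirit of the abstract, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def keep_border_samples(X, y, k=5):
    """Template reduction sketch: retain samples near the decision boundary,
    i.e. those whose k nearest neighbors are not all of their own class."""
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X) \
        .kneighbors(X, return_distance=False)[:, 1:]  # skip self
    border = np.array([not np.all(y[nb] == y[i]) for i, nb in enumerate(idx)])
    return X[border], y[border]
```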

7.
For classification data containing both labeled and unlabeled samples, a data distribution consistency principle is proposed and incorporated into the minimax probability machine. Both labeled and unlabeled samples are mapped into the space containing the decision hyperplane (the hyperspace, for short); by minimizing the difference between the probability distributions of labeled and unlabeled samples in this hyperspace, the unlabeled samples are fully exploited to correct the error of the minimax probability machine, so that the corrected decision hyperplane lies closer to the true classification hyperplane. Experiments show that the distribution-consistency minimax probability machine (DCMPM) achieves better classification performance than the minimax probability machine (MPM).

8.
Hyperspectral image classification based on deep Bayesian active learning
In hyperspectral image classification, labeled samples are time-consuming and laborious to obtain, unlabeled data are hard to exploit effectively, and active learning is difficult to combine with deep learning. To address these problems, a deep Bayesian active learning algorithm for hyperspectral image classification is proposed, drawing on recent advances in Bayesian deep learning and active learning. A convolutional neural network model is trained with a small number of labeled samples; an active learning sampling strategy combined with the Bayesian method then selects, from the unlabeled samples, those about which the model's classification is most uncertain. The selected samples are manually labeled and added to the training set to retrain the model, reducing model uncertainty and improving classification accuracy. Experimental results on PaviaU hyperspectral image classification show that, with few labeled samples, the proposed method classifies better than traditional methods.
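The abstract names an uncertainty-based Bayesian sampling strategy without specifying it; one common concrete choice is the BALD mutual-information score computed from Monte Carlo dropout passes, sketched below under the assumption of a hypothetical `stochastic_predict` function that returns softmax outputs with dropout kept active.

```python
import numpy as np

def bald_scores(stochastic_predict, X_unlab, T=20):
    """Monte Carlo dropout uncertainty (BALD-style) for active learning.
    stochastic_predict(X) is assumed to return softmax probabilities with
    dropout active, so repeated calls differ (hypothetical interface)."""
    probs = np.stack([stochastic_predict(X_unlab) for _ in range(T)])  # (T, N, C)
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)              # predictive entropy
    expected = -(probs * np.log(probs + 1e-12)).sum(axis=2).mean(axis=0)
    return entropy - expected  # mutual information: highest = most uncertain
```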

9.
A semi-supervised SVM classification algorithm based on two-stage learning is proposed. First, a graph-based label propagation algorithm assigns initial pseudo-labels to unlabeled samples, and a k-nearest-neighbor graph is used to identify and remove likely noisy samples. The denoised sample set is then treated as a labeled set and fed into a support vector machine (SVM), so that the SVM can take the information of the whole sample set into account during training, improving its classification accuracy. Experimental results show that, compared with other semi-supervised learning algorithms, the proposed algorithm improves classification performance and offers higher reliability when labeled training samples are scarce.
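A compact sketch of the two stages using scikit-learn, with unlabeled samples marked by `y == -1`; the 50% agreement threshold used for denoising is an assumption, not the paper's exact criterion.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def two_stage_semi_svm(X, y, k=7):
    """Stage 1: graph-based propagation pseudo-labels the unlabeled points
    (y == -1) and a k-NN vote drops likely-noisy ones. Stage 2: an SVM is
    trained on the cleaned, fully labeled set."""
    pseudo = LabelSpreading(kernel='knn', n_neighbors=k).fit(X, y).transduction_
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X) \
        .kneighbors(X, return_distance=False)[:, 1:]
    agree = (pseudo[idx] == pseudo[:, None]).mean(axis=1)
    keep = agree >= 0.5  # assumed denoising threshold
    return SVC().fit(X[keep], pseudo[keep])
```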

10.
To address the enormous annotation cost of person re-identification, a video person re-identification method based on one-shot labeled samples, combining multi-loss learning and a joint metric, is proposed. To counter the lack of robustness of models trained with few labeled samples, a multi-loss learning (MLL) strategy is proposed: during each training pass, different loss functions are used to optimize different data, improving the model's discriminative power. Second, for label estimation, a joint distance metric (JDM) is proposed, which combines the sam…

11.
Most manifold learning algorithms adopt the k nearest neighbors function to construct the adjacency graph. However, severe bias may be introduced if the samples are not uniformly distributed in the ambient space. In this paper, a semi-supervised dimensionality reduction method is proposed to alleviate this problem. Based on the notion of local margin, we simultaneously maximize the separability between different classes and estimate the intrinsic geometric structure of the data from both the labeled and unlabeled samples. For high-dimensional data, a discriminant subspace is derived by maximizing the cumulative local margins. Experimental results on high-dimensional classification tasks demonstrate the efficacy of our algorithm.

12.
Zhang Liang, Luo Yimin, Ma Hongchao, Zhang Fan, Hu Chuan. Journal of Computer Applications (计算机应用), 2017, 37(6): 1768-1771.
In hyperspectral remote sensing image classification, traditional active learning algorithms use only labeled training samples, leaving the large amount of unlabeled data ignored. To address this, an active learning algorithm that incorporates unlabeled information is proposed. First, a triple screening, based on the K-nearest-neighbor consistency principle, the consistency between successive predictions, and the informativeness evaluation of the active learning algorithm, selects unlabeled samples whose predicted labels are highly reliable and that carry a certain amount of information. Their predicted labels are then treated as true labels and added to the labeled sample set. Finally, a higher-quality classification model is trained. Experimental results show that, compared with passive learning and traditional active learning algorithms, the proposed algorithm achieves higher classification accuracy at the same labeling cost and is more robust to its parameters.

13.
A graph-based semi-supervised learning algorithm for web page classification is proposed. A weighted graph is built with the k-nearest-neighbor algorithm, where nodes are labeled or unlabeled web pages and edge weights represent class propagation probabilities, formalizing web page classification as probabilistic class propagation over the graph. To make effective use of unlabeled nodes for classification, edge weights between pages are computed from both page content and link information, and class information is propagated with certain probabilities from labeled nodes to unlabeled nodes. Experiments show that the proposed algorithm effectively improves web page classification results.
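A minimal sketch of the propagation step: given a weight matrix `W` (which in the paper would combine content and link information; here it is assumed precomputed), class probabilities flow from labeled to unlabeled nodes, with labeled nodes clamped each iteration.

```python
import numpy as np

def propagate_labels(W, y_init, labeled_mask, n_iter=50):
    """Iterative class propagation over a weighted graph. W[i, j] is the
    edge weight between pages i and j; rows are normalized into
    propagation probabilities. y_init is a one-hot (n_nodes, n_classes)
    matrix, zero for unlabeled nodes; labeled nodes are clamped."""
    P = W / W.sum(axis=1, keepdims=True)        # transition probabilities
    F = y_init.astype(float).copy()
    for _ in range(n_iter):
        F = P @ F                               # push class mass along edges
        F[labeled_mask] = y_init[labeled_mask]  # clamp known labels
    return F.argmax(axis=1)
```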

14.
K nearest neighbor and Bayesian methods are effective machine learning methods, and expectation maximization is an effective Bayesian classifier. In this work, a data elimination approach is proposed to improve data clustering. The proposed method is a hybrid of the k nearest neighbor and expectation maximization algorithms: k nearest neighbor acts as a preprocessor for expectation maximization, eliminating training data that make learning difficult. The suggested method is tested on the well-known machine learning data sets iris, wine, breast cancer, glass and yeast. Simulations are carried out in the MATLAB environment and performance results are reported.
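A hedged sketch of the hybrid: a k-NN consistency check stands in for the paper's elimination step, and scikit-learn's Gaussian mixture provides the EM stage; the specific filtering rule is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.mixture import GaussianMixture

def knn_filtered_em(X, y, n_components, k=5):
    """Hybrid sketch: drop training points that a k-NN classifier
    misclassifies (assumed stand-in for 'data making it difficult to
    learn'), then fit a Gaussian mixture by EM on the reduced set."""
    pred = KNeighborsClassifier(n_neighbors=k).fit(X, y).predict(X)
    keep = pred == y
    return GaussianMixture(n_components=n_components, max_iter=200).fit(X[keep])
```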

15.
The k-nearest neighbor (KNN) rule is a classical yet very effective nonparametric technique in pattern classification, but its performance is highly sensitive to outliers. The local mean-based k-nearest neighbor classifier (LMKNN) was first introduced to achieve robustness against outliers by computing the local mean vector of the k nearest neighbors of each class. However, its performance suffers from relying on a single value of k per class, uniform across classes. In this paper, we propose a new KNN-based classifier, the multi-local means-based k-harmonic nearest neighbor (MLM-KHNN) rule. In our method, the k nearest neighbors in each class are first found and used to compute k different local mean vectors, whose harmonic mean distance to the query sample is then computed. Finally, MLM-KHNN classifies the query sample to the class with the minimum harmonic mean distance. Experimental results on twenty real-world datasets from the UCI and KEEL repositories demonstrate that the proposed MLM-KHNN classifier achieves a lower classification error rate and is less sensitive to the parameter k than nine competitive KNN-based classifiers, especially with small training sample sizes.
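The abstract specifies MLM-KHNN precisely enough for a direct sketch: per class, form k local mean vectors from the 1..i closest neighbors and score the class by the harmonic mean of its local-mean distances to the query. Euclidean distance is assumed.

```python
import numpy as np

def mlm_khnn_predict(X_train, y_train, x_query, k=5):
    """MLM-KHNN: per class, take the k nearest neighbors of the query,
    form k local mean vectors (mean of the 1..i closest neighbors), and
    classify to the class whose local means have the smallest harmonic
    mean distance to the query."""
    best_class, best_dist = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x_query, axis=1)
        kk = min(k, len(Xc))                       # guard for small classes
        nearest = Xc[np.argsort(d)[:kk]]
        local_means = np.cumsum(nearest, axis=0) / np.arange(1, kk + 1)[:, None]
        dists = np.linalg.norm(local_means - x_query, axis=1)
        hmean = kk / np.sum(1.0 / (dists + 1e-12))  # harmonic mean distance
        if hmean < best_dist:
            best_class, best_dist = c, hmean
    return best_class
```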

16.
The weighted KNN (k-nearest neighbor) method uses only the class information provided by the k nearest training samples and ignores the contribution of test samples, which often leads to misclassifications. To remedy this, a semi-supervised KNN classification method is proposed. The method classifies both sequential and non-sequential samples well. During classification, the contribution of the c nearest test samples is also considered, improving classification accuracy. On the Cohn-Kanade face database, the recognition rate of sequential images improves by 5.95%; on the CMU-AMP face database, the recognition rate of non-sequential images improves by 7.98%. Experimental results show that the method is efficient and classifies well.

17.
In pattern recognition, instance-based learning (also known as the nearest neighbor rule) has become increasingly popular and can yield excellent performance. In instance-based learning, however, the storage required for the training set grows with the number of training instances. Moreover, a new, unseen instance takes a long time to classify, because all training instances must be considered when determining the 'nearness' or 'similarity' among instances. This study presents a novel reduced classification method for instance-based learning based on the gray relational structure, in which only some training instances from the original training set are used for pattern classification. The relationships among instances are first determined according to the gray relational structure. From this structure, the inward edges of each training instance can be obtained, indicating how many times the instance is considered a nearest neighbor when determining the class labels of other instances. The method excludes training instances with no or few inward edges from the pattern classification tasks. With the proposed instance pruning approach, new instances can be classified with only a few training instances. Nine data sets are adopted to demonstrate the performance of the proposed learning approach. Experimental results indicate that classification accuracy is maintained even when most training instances are pruned before learning. Additionally, the number of retained training instances is comparable to that of other existing instance pruning techniques.
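A simplified sketch of pruning by inward edges, with ordinary Euclidean k-NN standing in for the gray relational structure (an assumption): instances that few other instances name as a nearest neighbor are dropped.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def prune_by_inward_edges(X, y, k=1, min_in_degree=1):
    """Instance pruning sketch: count how often each training instance is
    another instance's nearest neighbor (its inward edges) and drop those
    referenced fewer than min_in_degree times. Euclidean k-NN stands in
    for the paper's gray relational structure."""
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X) \
        .kneighbors(X, return_distance=False)[:, 1:]  # skip self
    in_degree = np.bincount(idx.ravel(), minlength=len(X))
    keep = in_degree >= min_in_degree
    return X[keep], y[keep]
```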

18.
Support vector machines (SVMs) are a popular class of supervised learning algorithms, particularly applicable to large and high-dimensional classification problems. Like most machine learning methods for data classification and information retrieval, they require manually labeled data samples in the training stage. However, manual labeling is a time-consuming and error-prone task. One possible solution is to exploit the large number of unlabeled samples that are easily accessible via the internet. This paper presents a novel active learning method for text categorization. The main objective of active learning is to reduce the labeling effort, without compromising classification accuracy, by intelligently selecting which samples should be labeled. The proposed method selects a batch of informative samples using the posterior probabilities provided by a set of multi-class SVM classifiers; these samples are then manually labeled by an expert. Experimental results indicate that the proposed active learning method significantly reduces the labeling effort while simultaneously enhancing classification accuracy.
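A hedged sketch of batch selection from SVM posteriors: scikit-learn's Platt-scaled probabilities score pool samples by predictive entropy, one common informativeness measure; the paper's exact criterion and its use of a classifier ensemble may differ.

```python
import numpy as np
from sklearn.svm import SVC

def select_informative_batch(X_lab, y_lab, X_pool, batch_size=10):
    """Batch active learning sketch: train a probabilistic multi-class SVM
    and pick the pool samples with the highest posterior entropy for
    manual labeling (assumed informativeness measure)."""
    clf = SVC(probability=True).fit(X_lab, y_lab)   # Platt-scaled posteriors
    proba = clf.predict_proba(X_pool)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-batch_size:]        # indices to label
```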

19.
The recognition of pen-based visual patterns such as sketched symbols is amenable to supervised machine learning models such as neural networks. However, a sizable labeled training corpus is often required to learn the high variation of freehand sketches. To circumvent the cost of creating a large training corpus, improve recognition accuracy with only a limited amount of training samples, and accelerate the development of sketch recognition systems for novel sketch domains, we present a neural network training protocol consisting of three steps. First, a large pool of unlabeled synthetic samples is generated from a small set of existing labeled training samples. Then, a Deep Belief Network (DBN) is pre-trained with those synthetic, unlabeled samples. Finally, the pre-trained DBN is fine-tuned with the limited labeled samples for classification. The training protocol is evaluated against supervised baseline approaches such as the nearest neighbor classifier and the neural network classifier. The benchmark data sets are partitioned such that only a few labeled samples are available for training, yet a large number of labeled test cases feature rich variations. Results suggest that our training protocol leads to a significant error reduction compared to the baseline approaches.

20.
To address the difficulty of obtaining many class-labeled samples in supervised learning and the high cost of labeling, an SVM active learning algorithm based on Tri-training semi-supervised learning and convex hull vectors is proposed, combining active and semi-supervised learning. The hull vectors of the sample set are computed, and the hull vectors most likely to become support vectors are selected for labeling. To address the problem that previous active learning algorithms, after selecting and labeling the most informative samples, make no further use of the unlabeled samples, the Tri-training semi-supervised learning method is introduced into the SVM active learning process: unlabeled samples with high class-label confidence are added to the training set, exploiting the information in the unlabeled set that benefits the learner. Experiments on UCI datasets show that, with few labeled samples, the algorithm yields SVM classifiers with higher classification accuracy and better generalization, reducing the sample labeling cost of SVM training.

