Similar Articles
20 similar articles found (search time: 31 ms)
1.
The weighted KNN (k-nearest neighbor) method uses only the class information provided by the k nearest training samples and ignores the contribution of the test samples, which often leads to misclassifications. To address this shortcoming, a semi-supervised KNN classification method is proposed. The method classifies both sequential and non-sequential samples well: in the classification decision it also takes the contribution of the c nearest test samples into account, which improves classification accuracy. On the Cohn-Kanade face database, the recognition rate on image sequences improves by 5.95%; on the CMU-AMP face database, the recognition rate on non-sequential images improves by 7.98%. Experimental results show that the method is efficient and classifies well.
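A minimal sketch of the two-stage idea under one plausible reading, where the c nearest test samples contribute their provisionally predicted labels to the final vote (the function names and the exact voting scheme below are illustrative assumptions, not the authors' formulation):

```python
import numpy as np
from collections import Counter

def semi_supervised_knn(X_train, y_train, X_test, k=5, c=3):
    """Classify X_test with votes from k labeled neighbors plus the
    provisional labels of the c nearest test samples (illustrative sketch)."""
    def nearest(X, x, n):
        d = np.linalg.norm(X - x, axis=1)
        return np.argsort(d)[:n]

    # Stage 1: provisional labels from the labeled training set only.
    provisional = np.array([
        Counter(y_train[nearest(X_train, x, k)]).most_common(1)[0][0]
        for x in X_test
    ])

    # Stage 2: re-vote, adding the c nearest *test* samples' labels.
    final = []
    for i, x in enumerate(X_test):
        votes = list(y_train[nearest(X_train, x, k)])
        idx = nearest(X_test, x, c + 1)          # +1 skips the sample itself
        votes += [provisional[j] for j in idx if j != i][:c]
        final.append(Counter(votes).most_common(1)[0][0])
    return np.array(final)
```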

2.
It is very expensive and time-consuming to annotate huge amounts of data, and active learning is a suitable approach for minimizing the annotation effort. A novel active learning approach, coupled K nearest neighbor pseudo pruning (CKNNPP), is proposed in this paper; it queries examples with the KNNPP method. The KNNPP method applies the k nearest neighbor technique to find the k nearest labeled samples of each unlabeled sample. When these k labeled samples do not all belong to the same class, the corresponding unlabeled sample is queried, given its correct label by the supervisor, and added to the labeled training set. Otherwise, the unlabeled sample is simply not selected rather than actually discarded; this is the "pseudo pruning", a notion inspired by k nearest neighbor pruning preprocessing. The samples selected by KNNPP are considered to lie near or on the optimal classification hyperplane, which is crucial for active learning. In particular, to avoid drift of the optimal classification hyperplane after a queried sample is added, CKNNPP queries two samples with different class labels via KNNPP (like a couple, annotated by the supervisor) and adds them to the training set simultaneously in each iteration. CKNNPP performs well; it is simple, effective, and robust, and, unlike existing methods, it can handle classification problems with unbalanced datasets. The computational complexity of CKNNPP is then analyzed. Additionally, a new stopping criterion is applied, and the classifier in each active-learning iteration is implemented with Lagrangian Support Vector Machines. Finally, twelve UCI datasets, image datasets of aircraft, and a radar high-resolution range profile dataset are used to validate the feasibility and effectiveness of the proposed method. The results show that CKNNPP outperforms seven other state-of-the-art active learning approaches.
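A minimal sketch of the KNNPP query rule as described, where an unlabeled sample becomes a query candidate when its k nearest labeled neighbors disagree (an illustrative sketch, not the paper's implementation; the coupled-pair selection and LSVM training are omitted):

```python
import numpy as np

def knnpp_query_indices(X_labeled, y_labeled, X_unlabeled, k=5):
    """Return indices of unlabeled samples whose k nearest labeled
    neighbors carry more than one class label (candidates to query)."""
    query = []
    for i, x in enumerate(X_unlabeled):
        d = np.linalg.norm(X_labeled - x, axis=1)
        neighbor_labels = y_labeled[np.argsort(d)[:k]]
        if len(set(neighbor_labels)) > 1:   # disagreement => near the boundary
            query.append(i)
    return query
```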

3.
Research on an incremental learning vector quantization algorithm based on sample density and classification error rate
李娟  王宇平 《自动化学报》2015,41(6):1187-1200
As a simple and mature classification method, the K nearest neighbor (KNN) algorithm has been widely applied in data mining, pattern recognition, and other fields, but it still suffers from heavy computation, high memory consumption, and long running time. To address these problems, building on the single-layer competitive learning of incremental learning vector quantization (ILVQ) and incorporating neighborhood notions of sample density and classification error rate, this paper proposes a new incremental learning vector quantization method. Through a competitive learning strategy, it adaptively inserts, deletes, merges, and splits representative points and their neighborhoods, quickly extracting a prototype set from the original dataset and thereby achieving high compression of large-scale data while preserving classification accuracy. In addition, the traditional nearest neighbor classification rule is improved by incorporating the sample density and classification error rate of the prototype neighbor set into the decision criterion. The proposed algorithm generates an effective set of representative prototypes in a single scan of the training set and generalizes well. Experimental results show that, compared with other algorithms, the method maintains or even improves classification accuracy and compression ratio, and it classifies quickly.

4.

Big data has recently attracted wide attention in many fields, such as machine learning, pattern recognition, medicine, finance, and transportation. Data analysis is crucial for converting data into more specific information that can be fed to decision-making systems, and with diverse and complex types of datasets, knowledge discovery becomes more difficult. One solution is feature subset selection, a preprocessing step that reduces this complexity so that computation and analysis become convenient. Preprocessing produces a reliable and suitable source for any data-mining algorithm, and effective feature selection can improve a model's performance and help us understand the characteristics and underlying structure of complex data. This study introduces a novel hybrid cloud-based feature selection model for imbalanced data based on the k nearest neighbor algorithm; the model combines the firefly distance metric with the Euclidean distance used in the k nearest neighbor algorithm. The experimental results showed good behavior in both time usage and feature weights compared with the weighted nearest neighbor, as well as a 12% improvement in classification accuracy over the weighted nearest neighbor algorithm. Using the cloud-distributed model reduced processing time by up to 30%, which is substantial compared with recent state-of-the-art methods.


5.
Hao Du. Pattern Recognition, 2007, 40(5): 1486-1497
This paper points out and analyzes the advantages and drawbacks of the nearest feature line (NFL) classifier. To overcome the shortcomings, a new feature subspace with two simple and effective improvements is built to represent each class. The proposed method, termed rectified nearest feature line segment (RNFLS), is shown to possess a novel concentration property as a result of the added line segments (features), which significantly enhances classification ability. Another remarkable merit is that RNFLS is applicable to complex tasks such as the two-spiral distribution, which the original NFL cannot handle properly. Finally, experimental comparisons with NFL, NN (nearest neighbor), k-NN, and NNL (nearest neighbor line) on both artificial and real-world datasets demonstrate that RNFLS offers the best performance.
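A minimal sketch of the underlying geometric step, classification by distance to the nearest feature line segment between same-class prototypes (the rectification steps that distinguish RNFLS from plain NFL segments are omitted here):

```python
import numpy as np
from itertools import combinations

def point_to_segment(x, a, b):
    """Euclidean distance from query x to the segment between prototypes a, b."""
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(x - (a + t * ab))

def nfls_classify(x, X_train, y_train):
    """Assign x to the class owning the nearest feature line segment."""
    best_label, best_d = None, np.inf
    for label in np.unique(y_train):
        pts = X_train[y_train == label]
        for a, b in combinations(pts, 2):
            d = point_to_segment(x, a, b)
            if d < best_d:
                best_d, best_label = d, label
    return best_label
```

The segments interpolate between prototypes, which is what lets a feature-line method cover regions of the class manifold where no training sample actually lies.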

6.
k-nearest neighbor (k-NN) classification is a well-known decision rule that is widely used in pattern classification, but its traditional implementation is computationally expensive. In this paper we develop two effective techniques, template condensing and preprocessing, to significantly speed up k-NN classification while maintaining accuracy. Our template condensing technique aims at "sparsifying" dense homogeneous clusters of prototypes of any single class; it is implemented by iteratively eliminating patterns that exhibit high attractive capacities. Our preprocessing technique filters out a large portion of prototypes that are unlikely to match the unknown pattern, which again accelerates classification considerably, especially when the dimensionality of the feature space is high. One of our case studies shows that incorporating these two techniques into the k-NN rule achieves a seven-fold speed-up without sacrificing accuracy.
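The paper's "attractive capacity" criterion is not spelled out in this abstract; the sketch below is a hedged stand-in for the condensing idea, dropping prototypes whose k-neighborhoods are entirely homogeneous (an assumed rule, not the authors'):

```python
import numpy as np

def condense_templates(X, y, k=5):
    """Keep prototypes whose k nearest neighbors include another class;
    prototypes deep inside homogeneous clusters are redundant for k-NN."""
    keep = []
    for i, x in enumerate(X):
        d = np.linalg.norm(X - x, axis=1)
        d[i] = np.inf                       # exclude the point itself
        neighbors = y[np.argsort(d)[:k]]
        if np.any(neighbors != y[i]):       # near a class boundary -> keep
            keep.append(i)
    return X[keep], y[keep]
```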

7.
K nearest neighbor and Bayesian methods are effective machine learning methods, and expectation maximization is an effective Bayesian classifier. In this work a data elimination approach is proposed to improve data clustering. The proposed method is based on hybridizing the k nearest neighbor and expectation maximization algorithms: the k nearest neighbor algorithm serves as a preprocessor for the expectation maximization algorithm, eliminating training data that make learning difficult. The suggested method is tested on the well-known machine learning datasets iris, wine, breast cancer, glass, and yeast. Simulations are carried out in the MATLAB environment and performance results are reported.
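A hedged sketch of the hybrid pipeline under one plausible reading: eliminate training samples whose k nearest neighbors mostly disagree with their own label, then run expectation maximization (here scikit-learn's Gaussian mixture) on the cleaned data; the elimination rule is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def knn_filter_then_em(X, y, k=5, n_components=3):
    """Eliminate samples contradicted by their k nearest neighbors,
    then fit a Gaussian mixture (EM) on the cleaned data."""
    keep = []
    for i, x in enumerate(X):
        d = np.linalg.norm(X - x, axis=1)
        d[i] = np.inf
        neighbors = y[np.argsort(d)[:k]]
        # Keep the sample if the neighborhood majority agrees with it.
        if (neighbors == y[i]).sum() >= k / 2:
            keep.append(i)
    gm = GaussianMixture(n_components=n_components, random_state=0)
    return gm.fit(X[keep])
```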

8.
The nearest neighbor classification method assigns an unclassified point to the class of the nearest case in a set of previously classified points. This rule is independent of the underlying joint distribution of the sample points and their classifications. An extension of this approach is the k-NN method, in which the unclassified point is classified by a voting criterion over the k nearest points. The method we present here extends the k-NN idea: it searches each class for the k points nearest to the unclassified point and assigns the point to the class that minimizes the mean distance between the unclassified point and those k nearest points. As all classes take part in the final selection process, we have called the new approach k Nearest Neighbor Equality (k-NNE). Experimental results obtained empirically show the suitability of the k-NNE algorithm, and its effectiveness suggests that it could be added to the current list of distance-based classifiers.
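A minimal sketch of the k-NNE decision rule as described, assuming Euclidean distance:

```python
import numpy as np

def knne_classify(x, X_train, y_train, k=3):
    """k-NNE: per class, take the k nearest points to x and assign x to
    the class with the smallest mean distance over those k points."""
    best_label, best_mean = None, np.inf
    for label in np.unique(y_train):
        d = np.linalg.norm(X_train[y_train == label] - x, axis=1)
        mean_d = np.sort(d)[:k].mean()
        if mean_d < best_mean:
            best_mean, best_label = mean_d, label
    return best_label
```

Because every class contributes its own k candidates, minority classes are never crowded out of the vote, which is the "equality" in the name.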

9.
This paper describes a fully automatic chromosome classification algorithm for Multiplex Fluorescence In Situ Hybridization (M-FISH) images using supervised parametric and non-parametric techniques. M-FISH is a recently developed chromosome imaging method in which each chromosome is labelled with 5 fluors (dyes) and a DNA stain. The classification problem is modelled as a 25-class, 6-feature, pixel-by-pixel classification task: the 25 classes are the 24 types of human chromosomes plus the background, and the six features correspond to the brightness of the dyes at each pixel. Maximum likelihood estimation, nearest neighbor, and k-nearest neighbor methods are implemented for the classification. The highest classification accuracy is achieved with the k-nearest neighbor method, with k=7 being an optimal value for this task.

10.
A novel method based on multi-modal discriminant analysis is proposed to reduce feature dimensionality. First, each class is divided into several clusters by the k-means algorithm, and the optimal discriminant analysis is implemented by multi-modal mapping. Our method uses only those training samples on and near the effective decision boundary to generate the between-class scatter matrix, which requires less CPU time than other nonparametric discriminant analysis (NDA) approaches [Fukunaga and Mantock in IEEE Trans PAMI 5(6):671–677, 1983; Bressan and Vitria in Pattern Recognit Lett 24(5):2743–2749, 2003]. In addition, no prior assumptions about class and cluster densities are needed. To achieve high verification performance on confusing handwritten numeral pairs, a hybrid feature extraction scheme is developed, consisting of a set of gradient-based wavelet features and a set of geometric features. Our dimensionality reduction algorithm is used to aggregate features, and it outperforms principal component analysis (PCA) and other NDA approaches. Experiments showed that the proposed method achieves high feature compression without sacrificing discriminant ability for classification. As a result, it can reduce artificial neural network (ANN) training complexity and make the ANN classifier more reliable.

11.
徐政  邓安生  曲衍鹏 《计算机应用研究》2021,38(5):1355-1359,1364
To address the problem that the traditional K nearest neighbor algorithm treats every attribute as equally important when computing similarity between samples, a method based on the earth mover's distance is proposed to compute a weight for each conditional attribute. First, two distributions to be compared for consistency are built from the nearest-neighbor relation. Then an inconsistency evaluation function based on the earth mover's distance is designed to measure, for each attribute, the degree of inconsistency between each sample's nearest-neighbor set and the equivalence partition of that set refined by the decision attribute. Finally, the neighborhood inconsistency is converted into the importance of the corresponding attribute, which is used to build an attribute-weighted K nearest neighbor classifier. Experiments on multiple datasets show that the method has low sensitivity to its parameters, significantly improves the classification accuracy of K nearest neighbor under multiple parameter settings, and outperforms several existing classification methods on multiple metrics. The results indicate that the method selects more accurate neighbor samples through attribute weighting and can be widely applied in neighbor-based machine learning methods.
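A hedged sketch of the end result, an attribute-weighted KNN. The weight construction below, the earth mover's distance between class-conditional distributions of each attribute, is a simplification for illustration; the paper derives its weights from a neighborhood-versus-decision-partition inconsistency measure instead:

```python
import numpy as np
from itertools import combinations
from scipy.stats import wasserstein_distance

def emd_attribute_weights(X, y):
    """Illustrative weights: attributes whose class-conditional 1-D
    distributions are far apart (large EMD) receive larger weights."""
    classes = np.unique(y)
    w = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        w[j] = np.mean([wasserstein_distance(X[y == a, j], X[y == b, j])
                        for a, b in combinations(classes, 2)])
    return w / w.sum()

def weighted_knn(x, X_train, y_train, w, k=5):
    """Attribute-weighted KNN with per-feature weights w."""
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    idx = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[idx], return_counts=True)
    return labels[np.argmax(counts)]
```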

12.
The k-nearest-neighbor rule is one of the most attractive pattern classification algorithms. In practice, the choice of k is determined by the cross-validation method. In this work, we propose a new method for neighborhood size selection that is based on the concept of statistical confidence. We define the confidence associated with a decision that is made by the majority rule from a finite number of observations and use it as a criterion to determine the number of nearest neighbors needed. The new algorithm is tested on several real-world datasets and yields results comparable to the k-nearest-neighbor rule. However, in contrast to the k-nearest-neighbor rule that uses a fixed number of nearest neighbors throughout the feature space, our method locally adjusts the number of nearest neighbors until a satisfactory level of confidence is reached. In addition, the statistical confidence provides a natural way to balance the trade-off between the reject rate and the error rate by excluding patterns that have low confidence levels. We believe that this property of our method can be of great importance in applications where the confidence with which a decision is made is equally or more important than the overall error rate.
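A hedged sketch of the adaptive rule: grow the neighborhood until the majority vote is statistically convincing, here modeled with a one-sided binomial test (this particular confidence definition is an assumption, as is the rejection behavior when k_max is reached):

```python
import numpy as np
from scipy.stats import binomtest

def confident_knn(x, X_train, y_train, conf=0.9, k_max=49):
    """Increase k until the majority label is statistically convincing;
    return (label, confidence), or (None, conf) to reject the pattern."""
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    for k in range(3, k_max + 1, 2):                 # odd k avoids ties
        labels = y_train[order[:k]]
        values, counts = np.unique(labels, return_counts=True)
        top = counts.argmax()
        # Confidence that the majority is not a 50/50 coin-flip artifact.
        p = binomtest(counts[top], k, 0.5, alternative='greater').pvalue
        if 1 - p >= conf:
            return values[top], 1 - p
    return None, conf                                # low confidence: reject
```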

13.
The k-nearest neighbor (KNN) rule is a classical and yet very effective nonparametric technique in pattern classification, but its classification performance is severely affected by outliers. The local mean-based k-nearest neighbor classifier (LMKNN) was first introduced to achieve robustness against outliers by computing the local mean vector of the k nearest neighbors in each class. However, its performance suffers from the choice of a single value of k for each class and the use of a uniform k across different classes. In this paper, we propose a new KNN-based classifier, called the multi-local means-based k-harmonic nearest neighbor (MLM-KHNN) rule. In our method, the k nearest neighbors in each class are first found and then used to compute k different local mean vectors, whose harmonic mean distance to the query sample is computed. Finally, MLM-KHNN classifies the query sample into the class with the minimum harmonic mean distance. Experimental results on twenty real-world datasets from the UCI and KEEL repositories demonstrate that the proposed MLM-KHNN classifier achieves a lower classification error rate and is less sensitive to the parameter k than nine related competitive KNN-based classifiers, especially in small training sample size situations.
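A minimal sketch of the MLM-KHNN decision rule as described (Euclidean distance assumed):

```python
import numpy as np

def mlm_khnn_classify(x, X_train, y_train, k=5):
    """MLM-KHNN: k local means per class, harmonic mean distance decision."""
    best_label, best_hmd = None, np.inf
    for label in np.unique(y_train):
        pts = X_train[y_train == label]
        order = np.argsort(np.linalg.norm(pts - x, axis=1))
        kk = min(k, len(pts))
        # r-th local mean = centroid of the r nearest same-class points.
        local_means = np.cumsum(pts[order[:kk]], axis=0) / \
                      np.arange(1, kk + 1)[:, None]
        d = np.linalg.norm(local_means - x, axis=1)
        hmd = len(d) / np.sum(1.0 / np.maximum(d, 1e-12))  # harmonic mean
        if hmd < best_hmd:
            best_hmd, best_label = hmd, label
    return best_label
```

The harmonic mean is dominated by the smallest distances, so one contaminated local mean (e.g., dragged away by an outlier) cannot easily veto a class that is otherwise close.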

14.
When classes are nonseparable or overlapping, training samples in a local neighborhood may come from different classes. In this situation, samples with different class labels may be comparable in the neighborhood of the query, and a conventional nearest neighbor classifier such as the k-nearest neighbor scheme may produce a wrong prediction. To address this issue, we propose a new classification method that performs the classification task based on the local probabilistic centers of each class. The method works by reducing the number of negative contributing points, i.e., known samples falling on the wrong side of the ideal decision boundary, in the training set, and by restricting their regions of influence. In classification, the method classifies the query sample using two measures: one is the distance between the query and the local categorical probability centers, and the other is the computed posterior probability of the query. Although both measures are effective, experiments show that the second achieves the smaller classification error. The theoretical properties of the suggested methods are also investigated, and experiments are conducted on both constructed and real datasets. The results show that this method substantially improves the classification performance of the nearest neighbor algorithm.

15.
Representative point selection is an important part of data preprocessing for data mining and pattern recognition, and an important way to improve the classification accuracy and efficiency of classifiers. A voting-based representative point selection algorithm is proposed. It makes the selected representative points lie as much as possible on class boundaries, and the voting mechanism easily excludes outliers and reduces the data volume, which helps improve the accuracy and efficiency of the nearest neighbor classifier. Experimental comparisons with several classical representative point selection algorithms show that the proposed voting-based algorithm has clear advantages in both the classification accuracy of the nearest neighbor classifier and the data reduction rate.

16.
An improved KNNFP algorithm based on feature weighting and its application to fault diagnosis
赵俊杰 《电子技术应用》2011,37(4):113-116,121
To address the performance degradation caused by the traditional K nearest neighbor feature projection (KNNFP) algorithm's assumption that every feature dimension contributes equally to classification, an improved feature-weighted KNNFP algorithm (WKNNFP) is proposed. The improved algorithm uses the ReliefF algorithm to determine feature weights, which yields better classification and also makes it possible to analyze each feature's contribution to classification. The improved algorithm is applied to bearing fault diagnosis. The results show that the improved algorithm's diagnosis...
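A hedged sketch of the weighting step using the basic Relief update (nearest hit and nearest miss only); the ReliefF variant used in the paper averages over k neighbors per class and weights misses by class priors:

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Basic Relief: reward features that differ on the nearest miss
    and penalize features that differ on the nearest hit."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        i = rng.integers(n)
        d = np.abs(X - X[i]).sum(axis=1)   # L1 distance to all samples
        d[i] = np.inf
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, d, np.inf))
        miss = np.argmin(np.where(diff, d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return np.clip(w, 0, None) / max(w.max(), 1e-12)
```

The resulting weights can then scale each feature dimension in the KNNFP distance, as in the attribute-weighted KNN sketch at item 11.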

17.
The k nearest neighbor method is widely used in text classification, and optimizing its performance is of practical importance. An improved clustering algorithm is used to prune samples so as to improve the class-representation ability of the training samples; sample weighting based on within-class and then between-class distributions is implemented according to the samples' spatial positions. This alleviates the dominance of large-class, high-density training samples in the k nearest neighbor algorithm. Experimental results show that the proposed improved text weighting method raises the classifier's classification efficiency.

18.
Vector quantization (VQ) can perform efficient feature extraction from electrocardiogram (ECG) signals, with the advantages of dimensionality reduction and increased accuracy. However, existing dictionary learning algorithms for vector quantization are sensitive to dirty data, which compromises classification accuracy. To tackle this problem, we propose a novel dictionary learning algorithm that employs k-medoids clustering optimized by k-means++ and builds dictionaries by searching for and using representative samples, which avoids the interference of dirty data and thus boosts the classification performance of ECG systems based on vector quantization features. We apply our algorithm to vector quantization feature extraction for ECG beat classification and compare it with popular features such as the sampling point feature, the fast Fourier transform feature, the discrete wavelet transform feature, and our previous beat vector quantization feature. The results show that the proposed method yields the highest accuracy and reduces the computational complexity of the ECG beat classification system. The proposed dictionary learning algorithm provides more efficient encoding of ECG beats and can improve ECG classification systems based on encoded features.
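A hedged sketch of codebook construction with k-medoids seeded k-means++-style; the paper's representative-sample search is not detailed in the abstract, so this is an illustrative reading. Medoids are actual samples, which is what makes the codebook less sensitive to dirty data than k-means centroids:

```python
import numpy as np

def kmedoids_codebook(X, k, n_iter=20, seed=0):
    """k-medoids dictionary: codewords are representative samples."""
    rng = np.random.default_rng(seed)
    # k-means++-style seeding: spread the initial medoids out.
    medoids = [rng.integers(len(X))]
    for _ in range(k - 1):
        d2 = np.min([((X - X[m]) ** 2).sum(axis=1) for m in medoids], axis=0)
        medoids.append(rng.choice(len(X), p=d2 / d2.sum()))
    medoids = np.array(medoids)

    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None] - X[medoids], axis=2)  # n x k
        assign = dist.argmin(axis=1)
        for j in range(k):
            members = np.where(assign == j)[0]
            if len(members) == 0:
                continue
            # New medoid: member minimizing total distance to the cluster.
            inner = np.linalg.norm(X[members][:, None] - X[members], axis=2)
            medoids[j] = members[inner.sum(axis=1).argmin()]
    return X[medoids]
```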

19.
Choosing appropriate classification algorithms for a given dataset is very important and useful in practice, but it is also challenging. In this paper, a method for recommending classification algorithms is proposed. First, feature vectors of datasets are extracted using a novel method and the performance of classification algorithms on those datasets is evaluated. Then the feature vector of a new dataset is extracted and its k nearest datasets are identified. Finally, the classification algorithms of the nearest datasets are recommended for the new dataset. The proposed dataset feature extraction method uses structural and statistical information to characterize datasets, which is quite different from existing methods. To evaluate the proposed recommendation method and the feature extraction method, extensive experiments with 17 different types of classification algorithms, three different types of dataset characterization methods, and all possible numbers of nearest datasets are conducted on 84 publicly available UCI datasets. The results indicate that the proposed method is effective and can be used in practice.
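A minimal sketch of the recommendation step, assuming meta-feature vectors and per-algorithm performance scores have already been collected for the known datasets (all names below are illustrative):

```python
import numpy as np

def recommend_algorithms(meta_new, meta_known, perf_known, algo_names, k=3):
    """Recommend algorithms for a new dataset: find the k known datasets
    with the most similar meta-feature vectors and rank algorithms by
    their mean recorded performance over those neighbors."""
    d = np.linalg.norm(meta_known - meta_new, axis=1)
    nearest = np.argsort(d)[:k]
    avg_perf = perf_known[nearest].mean(axis=0)     # per-algorithm mean
    order = np.argsort(avg_perf)[::-1]
    return [(algo_names[i], avg_perf[i]) for i in order]
```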

20.
To address the problem that the properties of feature points used for image classification are vaguely defined, which causes large errors in similarity measurement, an image classification method using a class-separability criterion for feature points is proposed. Information entropy theory is used to extract the separability characteristics of image feature points. Based on the different properties of the decision attributes identified by the image feature vectors, separability distances between feature vectors are computed to obtain the nearest-neighbor feature vector set. A new image class decision criterion is then defined from two aspects: the average distance between each feature vector of the image to be classified and the classes identified by the nearest-neighbor feature vector set, and the average separability measure. Theoretical analysis and simulation experiments on the Caltech256 image database show that the class-separability criterion effectively improves image classification accuracy.
