Similar Documents
20 similar documents found, search time 31 ms
1.
Prototype classifiers have been studied for many years, yet few methods support incremental learning. Moreover, most prototype classifiers require the user to predetermine the number of prototypes, and an improper choice can undermine classification performance. To address these issues, this paper proposes an online supervised algorithm named Incremental Learning Vector Quantization (ILVQ) for classification tasks. The proposed method makes three contributions. (1) Through an insertion policy, ILVQ learns new prototypes incrementally, covering both between-class and within-class incremental learning. (2) Through an adaptive threshold scheme, ILVQ dynamically learns the number of prototypes needed for each class according to the distribution of the training data; unlike most existing prototype classifiers, it therefore needs no prior knowledge of the number of prototypes or their initial values. (3) A pruning technique removes useless prototypes, eliminating noise introduced into the input data. Experimental results show that ILVQ accommodates incremental data environments and provides good recognition performance and storage efficiency.
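A minimal sketch of the insertion policy such an incremental scheme implies (the fixed distance threshold below is a simple stand-in for ILVQ's adaptive per-prototype threshold, and all parameter values are assumptions):

import numpy as np

class IncrementalPrototypes:
    """Grow a prototype set online: insert on novelty, adapt otherwise."""

    def __init__(self, threshold=1.0, lr=0.05):
        self.W, self.c = [], []            # prototype vectors and labels
        self.threshold, self.lr = threshold, lr

    def partial_fit(self, x, y):
        same = [i for i, ci in enumerate(self.c) if ci == y]
        if not same:                       # between-class insertion
            self.W.append(x.copy()); self.c.append(y); return
        d = [np.linalg.norm(x - self.W[i]) for i in same]
        i = same[int(np.argmin(d))]
        if min(d) > self.threshold:        # within-class insertion
            self.W.append(x.copy()); self.c.append(y)
        else:                              # adapt nearest same-class prototype
            self.W[i] += self.lr * (x - self.W[i])

    def predict(self, x):
        d = [np.linalg.norm(x - w) for w in self.W]
        return self.c[int(np.argmin(d))]

Feeding partial_fit one sample at a time exercises both cases named above: the first sample of a new class triggers between-class insertion, and a sample far from all prototypes of its own class triggers within-class insertion.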

2.
Soft nearest prototype classification
We propose a new method for constructing nearest prototype classifiers that is based on a Gaussian mixture ansatz and can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs gradient descent on a cost function that minimizes the classification error on the training set. We investigate the properties of the algorithm and assess its performance on several toy data sets and on an optical letter classification task. Results show (1) that annealing the dispersion parameter of the Gaussian kernels improves classification accuracy; (2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and (3) that annealing the width parameter improves classification capability. Additionally, the principled approach explains a number of features of the (heuristic) LVQ methods.
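A small sketch of the soft-assignment core together with a geometric annealing schedule for the kernel width (the schedule constants are assumptions, not the paper's values):

import numpy as np

def soft_assignments(X, W, sigma):
    # P(j | x) proportional to exp(-||x - w_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    a = np.exp(-d2 / (2.0 * sigma ** 2))
    return a / a.sum(axis=1, keepdims=True)

sigma, sigma_min, decay = 2.0, 0.05, 0.95   # assumed schedule
while sigma > sigma_min:
    # ...one epoch of gradient updates on the prototypes at this width...
    sigma *= decay                          # anneal toward hard assignment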

3.
A novel neural-net-based method for constructing optimized prototypes for nearest-neighbor classifiers is proposed. Based on an effective classification-oriented error function containing class-classification and class-separation components, the corresponding prototype and feature-weight update rules are derived. The proposed method has several distinctive properties. First, not only prototypes but also feature weights are constructed during the optimization process. Second, when an input sample x is classified incorrectly, several prototypes not belonging to the genuine class of x are updated, rather than just one. Third, the method intrinsically distinguishes the different learning contributions of training samples, enabling substantial learning from constructive samples and limited learning from outliers. Experiments have shown the superiority of this method over LVQ2 and other previous works.
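A hedged sketch of jointly updating prototypes and feature weights under a weighted distance; this is GRLVQ-flavoured rather than the paper's exact error function, repelling the k nearest wrong-class prototypes mirrors the second property above, and the learning rates are assumptions:

import numpy as np

def update(x, y, W, c, lam, lr_w=0.05, lr_l=0.005, k_wrong=2):
    """One step: attract the nearest correct prototype, repel the k nearest
    wrong-class prototypes, then adapt the feature weights lam."""
    d = (lam * (W - x) ** 2).sum(1)                 # feature-weighted distances
    j = np.argmin(np.where(c == y, d, np.inf))      # nearest correct prototype
    wrong = np.where(c != y, d, np.inf).argsort()[:k_wrong]
    W[j] += lr_w * lam * (x - W[j])                 # attract
    for m in wrong:
        W[m] -= lr_w * lam * (x - W[m])             # repel several wrong ones
    # grow weights on dimensions that separate the classes, shrink the rest
    lam -= lr_l * ((x - W[j]) ** 2 - (x - W[wrong[0]]) ** 2)
    lam = np.clip(lam, 1e-6, None)
    return W, lam / lam.sum()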

4.
Soft learning vector quantization
Seo S, Obermayer K. Neural Computation, 2003, 15(7): 1589-1604.
Learning vector quantization (LVQ) is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms in this family have so far been proposed on heuristic grounds. Here, we take a more principled approach and derive two variants of LVQ using a Gaussian mixture ansatz. We propose an objective function based on a likelihood ratio and derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of "soft" LVQ algorithms. Benchmark results show that the new methods lead to better classification performance than LVQ 2.1. An additional benefit is that the model assumptions are made explicit, so the method can be adapted more easily to different kinds of problems.
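A sketch of one stochastic step in the robust soft LVQ (RSLVQ) style that this likelihood-ratio formulation leads to, assuming equal priors and a shared kernel width; this is a plausible reading, not a verified transcription of the paper's equations:

import numpy as np

def rslvq_step(x, y, W, c, sigma=1.0, lr=0.05):
    g = np.exp(-((W - x) ** 2).sum(1) / (2 * sigma ** 2))
    P = g / g.sum()                       # posterior over all prototypes
    same = (c == y)
    Py = np.where(same, g, 0.0)
    Py = Py / Py.sum()                    # posterior within the correct class
    for j in range(len(W)):
        if same[j]:                       # attract, weighted by the gap
            W[j] += lr * (Py[j] - P[j]) * (x - W[j]) / sigma ** 2
        else:                             # repel, weighted by its posterior
            W[j] -= lr * P[j] * (x - W[j]) / sigma ** 2
    return W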

5.
A variant of nearest-neighbor (NN) pattern classification and supervised learning by learning vector quantization (LVQ) is described. The decision surface mapping (DSM) method is a fast supervised learning algorithm and a member of the LVQ family. A relatively small number of prototypes is selected from a training set of correctly classified samples, and the training set is then used to adapt these prototypes so that they map the decision surface separating the classes. The algorithm is compared with NN pattern classification, learning vector quantization, and a two-layer perceptron trained by error backpropagation. When the class boundaries are sharply defined (i.e., there is no classification error in the training set), DSM outperforms these methods with respect to error rates, learning rates, and the number of prototypes required to describe the class boundaries.
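A sketch of the error-driven DSM step: in contrast to LVQ1, prototypes move only when the nearest prototype misclassifies a sample (the learning rate is an assumption):

import numpy as np

def dsm_epoch(X, y, W, c, lr=0.05):
    """One epoch of decision surface mapping over prototypes W with labels c."""
    for x, t in zip(X, y):
        d = ((W - x) ** 2).sum(1)
        j = d.argmin()
        if c[j] != t:                          # adapt only on errors
            W[j] -= lr * (x - W[j])            # push the wrong winner away
            k = np.argmin(np.where(c == t, d, np.inf))
            W[k] += lr * (x - W[k])            # pull the nearest correct one in
    return W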

6.
This paper presents a new approach to particle swarm optimization, called Michigan Approach PSO (MPSO), and its application to continuous classification problems as a nearest prototype (NP) classifier. In nearest prototype classifiers, a collection of prototypes must be found that accurately represents the input patterns; the classifier then assigns classes based on the nearest prototype in this collection. The MPSO algorithm processes training data to find those prototypes. In MPSO, each particle in the swarm represents a single prototype in the solution, and modified movement rules with particle competition and cooperation ensure particle diversity. The proposed method is tested on both artificial and real benchmark problems and compared with several algorithms of the same family. Results show that the particles are able to recognize clusters, find decision boundaries, and reach stable situations that retain adaptation potential. MPSO improves the accuracy of 1-NN classifiers, obtains results comparable to the best among the other classifiers, and improves on the accuracy reported in the literature for one of the problems.
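A compact sketch of the Michigan idea, in which every particle is a single prototype with a local fitness; taking the best same-class personal best as the social attractor is one plausible reading of the competition/cooperation rules, not the paper's exact movement equations, and all constants are assumptions:

import numpy as np

rng = np.random.default_rng(0)

def local_fitness(P, pc, X, y):
    # a particle's fitness is its accuracy on the training points it wins
    owner = ((X[:, None, :] - P[None]) ** 2).sum(-1).argmin(1)
    return np.array([(y[owner == j] == pc[j]).mean() if (owner == j).any()
                     else 0.0 for j in range(len(P))])

def mpso(X, y, n_particles=12, iters=100, w=0.7, c1=1.5, c2=1.5):
    idx = rng.choice(len(X), n_particles, replace=False)
    P, pc = X[idx].astype(float), y[idx]         # positions and fixed classes
    V = np.zeros_like(P)
    best_P, best_f = P.copy(), local_fitness(P, pc, X, y)
    for _ in range(iters):
        for j in range(n_particles):
            same = np.where(pc == pc[j])[0]      # cooperate within a class
            g = same[best_f[same].argmax()]      # best same-class particle
            r1, r2 = rng.random(2)
            V[j] = (w * V[j] + c1 * r1 * (best_P[j] - P[j])
                             + c2 * r2 * (best_P[g] - P[j]))
        P += V
        f = local_fitness(P, pc, X, y)
        better = f > best_f
        best_P[better], best_f[better] = P[better], f[better]
    return best_P, pc                            # prototype set and labels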

7.
A new fast prototype selection method based on clustering
In supervised classification, a training set T is given to a classifier for classifying new instances. In practice, not all information in T is useful to the classifier; it is therefore convenient to discard irrelevant prototypes from T. This process, known as prototype selection, is an important task because it can reduce classification or training time. In this work, we propose a new fast prototype selection method for large datasets, based on clustering, which selects border prototypes and some interior prototypes. Experimental results are reported showing the performance of our method and comparing its accuracy and runtime against other prototype selection methods.
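One way to realize the border/interior idea with off-the-shelf k-means (a sketch, not the paper's exact rules; the cluster count is an assumption):

import numpy as np
from sklearn.cluster import KMeans

def select_prototypes(X, y, n_clusters=20):
    """Keep one interior point per homogeneous cluster and the border
    points of mixed clusters; returns indices into X."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    keep = []
    for cid in range(n_clusters):
        idx = np.where(km.labels_ == cid)[0]
        if len(idx) == 0:
            continue
        if len(set(y[idx])) == 1:              # homogeneous: keep the point
            centre = km.cluster_centers_[cid]  # closest to the centre
            keep.append(idx[((X[idx] - centre) ** 2).sum(1).argmin()])
        else:                                  # mixed: keep border points,
            for i in idx:                      # i.e. points whose nearest
                d = ((X[idx] - X[i]) ** 2).sum(1)
                d[idx == i] = np.inf           # cluster-mate has a
                if y[idx[d.argmin()]] != y[i]: # different label
                    keep.append(i)
    return np.array(keep)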

8.
We present a new classifier fusion method for combining soft-level classifiers that can be considered a generalized decision-templates method. Previous combining methods based on decision templates employ a single prototype per class, but this global point of view often fails to represent the decision space properly. The drawback severely affects the classification rate when training samples are scarce, when the decision-space distribution is island-shaped, or when classes have highly overlapping decision spaces. To represent the decision space better, we use a prototype selection method to obtain a set of local decision prototypes for each class. To determine the class of a test pattern, its decision profile is computed and compared to all decision prototypes; in other words, for each class, the more decision prototypes lie near the decision profile of a given pattern, the higher the chance for that class. The efficiency of the proposed method is evaluated on several well-known classification datasets, and the results suggest its superiority over other proposed techniques.
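A sketch of local decision prototypes: each sample's decision profile (here, the concatenated soft outputs of the base classifiers) is clustered per class, and a test profile takes the class of its nearest prototype; the per-class prototype count is an assumption:

import numpy as np
from sklearn.cluster import KMeans

def decision_prototypes(profiles, y, per_class=3):
    """profiles: n_samples x (n_classifiers * n_classes) decision profiles."""
    protos, labels = [], []
    for cls in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10,
                    random_state=0).fit(profiles[y == cls])
        protos.append(km.cluster_centers_)
        labels += [cls] * per_class
    return np.vstack(protos), np.array(labels)

def classify(profile, protos, labels):
    # the nearest local decision prototype decides the class
    return labels[((protos - profile) ** 2).sum(1).argmin()]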

9.
Local Averaging of Ensembles of LVQ-Based Nearest Neighbor Classifiers
Ensemble learning is a well-established method for improving the generalization performance of learning machines: a number of learning systems trained on the same task are combined. However, since all members of the ensemble operate at the same time, large amounts of memory and long execution times are needed, limiting practical application. This paper presents a new method, called local averaging, in the context of nearest neighbor (NN) classifiers that generates from the ensemble a single classifier with the same complexity as an individual member. Once a collection of prototypes has been generated from different learning sessions using Kohonen's LVQ algorithm, a single set of prototypes is computed by applying a clustering algorithm (such as k-means) to this collection. Local averaging can be viewed either as a technique for reducing the variance of the prototypes or as the result of averaging a series of particular bootstrap replicates. Experimental results on several classification problems confirm the utility of the method and show that local averaging can compute a single classifier that achieves similar (or even better) accuracy than ensembles combined by voting.
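A sketch of the pooling-and-clustering step; clustering within each class is an assumption about how prototype labels are preserved:

import numpy as np
from sklearn.cluster import KMeans

def local_average(sets_W, sets_c, per_class=5):
    """Pool the prototypes of all LVQ sessions and reduce them, class by
    class, to a single set via k-means."""
    W, c = np.vstack(sets_W), np.concatenate(sets_c)
    out_W, out_c = [], []
    for cls in np.unique(c):
        km = KMeans(n_clusters=per_class, n_init=10,
                    random_state=0).fit(W[c == cls])
        out_W.append(km.cluster_centers_)
        out_c += [cls] * per_class
    return np.vstack(out_W), np.array(out_c)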

10.
A conventional way to discriminate between objects represented by dissimilarities is the nearest neighbor method. A more efficient, and sometimes more accurate, solution is offered by other dissimilarity-based classifiers: they construct a decision rule based on the entire training set but need only a small set of prototypes, the so-called representation set, as a reference for classifying new objects. Such alternative approaches may be especially advantageous for non-Euclidean or even non-metric dissimilarities. The choice of a proper representation set for dissimilarity-based classifiers has not yet been fully investigated; it appears that a random selection may work well. In this paper, a number of experiments have been conducted on various metric and non-metric dissimilarity representations and prototype selection methods. Several procedures, such as traditional feature selection methods (here effectively searching for prototypes), mode seeking, and linear programming, are compared to random selection. In general, we find that systematic approaches lead to better results than random selection, especially for a small number of prototypes. Although there is no single winner, as the outcome depends on data characteristics, k-centres generally works well. For two-class problems, an important observation is that our dissimilarity-based discrimination functions, relying on significantly reduced prototype sets (3-10% of the training objects), offer similar or much better classification accuracy than the best k-NN rule on the entire training set. This may be achieved for multi-class data as well; however, such problems are more difficult.
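A sketch of k-centres selection on a precomputed dissimilarity matrix, using the standard greedy farthest-first heuristic (the paper's exact k-centres procedure may differ; D need not be Euclidean or even metric):

import numpy as np

def k_centres(D, k, start=0):
    """Greedily pick k prototypes from an n x n dissimilarity matrix D."""
    chosen = [start]
    d_to_set = D[start].copy()            # distance of each object to the set
    while len(chosen) < k:
        nxt = int(d_to_set.argmax())      # the object worst covered so far
        chosen.append(nxt)
        d_to_set = np.minimum(d_to_set, D[nxt])
    return chosen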

11.
Prototype generation addresses the problem of producing a small set of instances from a large data set, to be used by KNN for classification. The two key aspects when developing a prototype generation method are (1) the generalization performance of a KNN classifier using the prototypes and (2) the amount of data set reduction, given by the number of prototypes. The two factors conflict because, in general, maximizing data set reduction implies decreasing accuracy and vice versa; the problem is therefore naturally approached with multi-objective optimization techniques. This paper introduces a novel multi-objective evolutionary algorithm for prototype generation in which the objectives are precisely the amount of reduction and an estimate of the generalization performance achieved by the selected prototypes. Through a comprehensive experimental study, we show that the proposed approach outperforms most prototype generation methods proposed so far; specifically, it obtains prototypes that offer a better tradeoff between accuracy and reduction than alternative methodologies.
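A sketch of the two objectives evaluated for one candidate, encoded here as a selection mask for brevity; the paper generates prototypes rather than selecting training instances, and the evolutionary search itself is omitted:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def objectives(mask, X_train, y_train, X_val, y_val):
    """Return (reduction, accuracy estimate) for one candidate subset;
    both objectives are to be maximised by the evolutionary search."""
    P, yP = X_train[mask], y_train[mask]
    reduction = 1.0 - mask.mean()          # fraction of the data discarded
    acc = KNeighborsClassifier(n_neighbors=1).fit(P, yP).score(X_val, y_val)
    return reduction, acc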

12.
We introduce a batch learning algorithm for designing the prototype set of 1-nearest-neighbour classifiers. Like Kohonen's LVQ algorithms, the procedure tends to perform vector quantization over a probability density function that has zero points at Bayes borders. It differs significantly from its online counterparts, however, in that (1) its statistical goal is clearer and better defined, and (2) it converges superlinearly owing to its use of the very fast Newton optimization method. Experimental results on artificial data confirm faster training times and better classification performance than Kohonen's LVQ algorithms.

13.
Nearest prototype classification of noisy data
Nearest prototype approaches offer a common way to design classifiers. When data are noisy, however, the success of such classifiers depends on parameters that the designer needs to tune, such as the number of prototypes. In this work, we study the ENPC technique, based on the nearest prototype approach, on noisy datasets. Previous experimentation with this algorithm had shown that it requires no parameter tuning to obtain good solutions in problems where class limits are well defined and the data are not noisy. Here, we show that the algorithm obtains solutions with high classification success even when the data are noisy. A comparison with optimal (hand-made) solutions and with other classification algorithms demonstrates the good performance of ENPC in both accuracy and number of prototypes as the noise level increases. Experiments were performed on four datasets, each with different characteristics.

14.
Various prototype reduction schemes have been reported in the literature. Foremost among these are the prototypes for nearest neighbor (PNN), vector quantization (VQ), and support vector machine (SVM) methods. In this paper, we show that these schemes can be enhanced by introducing a post-processing phase that is related, but not identical, to the LVQ3 process. Although post-processing with LVQ3 has been reported for the SOM and the basic VQ methods, we show here that an analogous philosophy can be used in conjunction with the SVM and PNN rules. Our essential modification to LVQ3 entails partitioning the respective training sets into two sets, called the Placement set and the Optimizing set, which are instrumental in determining the LVQ3 parameters; such a partitioning is novel in the literature. Our experimental results demonstrate that the proposed enhancement yields the best prototype condensation scheme reported to date for both artificial and real-life data sets.
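For reference, a sketch of one LVQ3 update as used in such a post-processing phase; the window width and learning rates below are conventional defaults, and the Placement/Optimizing partition that tunes them is not shown:

import numpy as np

def lvq3_step(x, t, W, c, lr=0.05, eps=0.1, win=0.3):
    d = np.sqrt(((W - x) ** 2).sum(1))
    i, j = np.argsort(d)[:2]                        # two nearest prototypes
    in_window = min(d[i] / d[j], d[j] / d[i]) > (1 - win) / (1 + win)
    if c[i] == t and c[j] == t:                     # both correct: slow pull
        W[i] += eps * lr * (x - W[i])
        W[j] += eps * lr * (x - W[j])
    elif in_window and (c[i] == t) != (c[j] == t):  # one right, one wrong
        k, m = (i, j) if c[i] == t else (j, i)
        W[k] += lr * (x - W[k])                     # attract the correct one
        W[m] -= lr * (x - W[m])                     # repel the wrong one
    return W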

15.
In this paper, we propose a prototype classification method that employs a learning process to determine both the number and the location of prototypes. The learning process decides whether to stop adding prototypes according to a termination condition, and adjusts the location of prototypes using either the k-means (KM) or the fuzzy c-means (FCM) clustering algorithm. When the prototype classification method is applied, the support vector machine (SVM) method can be used to post-process the top-ranked candidates obtained during the prototype learning or matching process. We apply this hybrid solution to handwriting recognition, address the convergence behavior and runtime consumption of the prototype construction process, and discuss how to combine our prototype classifier with SVM classifiers to form an effective hybrid classifier.
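A sketch of one plausible hybrid decision rule: prototype matching answers confident cases, and a pairwise SVM re-ranks the two top candidate classes otherwise (the svms dictionary of pre-trained binary classifiers and the margin are assumptions, not the paper's design):

import numpy as np

def hybrid_predict(x, W, c, svms, margin=0.1):
    """W, c: prototypes and labels; svms maps a sorted class pair to a
    fitted binary classifier (e.g. sklearn.svm.SVC)."""
    d = ((W - x) ** 2).sum(1)
    classes = np.unique(c)
    best_d = np.array([d[c == k].min() for k in classes])
    i, j = best_d.argsort()[:2]
    if best_d[j] - best_d[i] > margin:        # confident prototype match
        return classes[i]
    a, b = sorted((classes[i], classes[j]))   # ambiguous: let the SVM decide
    return svms[(a, b)].predict(x[None])[0]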

16.
A number of approaches to pattern recognition employ variants of nearest neighbor recall. This procedure uses a number of prototypes of known class and identifies an unknown pattern vector according to the prototype it is nearest to. A recall criterion of this type, depending on the relation of the unknown to a single prototype, is a non-smooth function and leads to a decision boundary that is a jagged, piecewise-linear hypersurface. Collective recall, a pattern recognition method based on a smooth nearness measure of the unknown to all the prototypes, is developed. The prototypes are represented as cells in a brain-state-in-a-box (BSB) network; cells that represent the same pattern class are linked by positive weights, and cells representing different pattern classes are linked by negative weights. Computer simulations of collective recall used in conjunction with learning vector quantization (LVQ) show significant improvement in performance relative to nearest neighbor recall for pattern classes defined by non-spherically symmetric Gaussians.
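A sketch of the underlying idea, smooth nearness pooled over all prototypes of each class instead of a winner-take-all rule; the Gaussian pooling below stands in for the paper's BSB network dynamics, and tau is an assumption:

import numpy as np

def class_scores(x, W, c, tau=1.0):
    sim = np.exp(-((W - x) ** 2).sum(1) / tau)   # smooth, not winner-take-all
    return {cls: sim[c == cls].sum() for cls in np.unique(c)}

The predicted class is the key with the largest pooled score; as tau shrinks, the rule approaches ordinary nearest neighbor recall.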

17.
Traditional k-nearest-neighbor classifiers suffer from high time and space complexity on large data sets. Prototype selection addresses this by choosing representative prototypes (instances) from the original data set for k-NN classification without lowering classification accuracy. Building on the CURE clustering algorithm, and addressing CURE's difficulty in identifying noise points and the poor dispersion of its representative points, this paper introduces a denoising method based on a shared-neighbor density measure and improves representative-point selection using the maximum-minimum distance criterion, yielding a new prototype selection algorithm, PSCURE (improved prototype selection algorithm based on the CURE algorithm). Experiments on UCI data sets show that, compared with related prototype algorithms, PSCURE not only selects fewer prototypes but also achieves higher classification accuracy.

18.
This paper presents new approaches for computing graph prototypes in the context of designing a structural nearest prototype classifier. Four kinds of prototypes are investigated and compared: set median graphs, generalized median graphs, set discriminative graphs, and generalized discriminative graphs. They differ according to (i) the graph space in which they are searched for and (ii) the objective function used to compute them. The first criterion distinguishes set prototypes, which are selected from the initial graph training set, from generalized prototypes, which are generated in an infinite set of graphs. The second criterion distinguishes median graphs, which minimize the sum of distances to all input graphs of a given class, from discriminative graphs, which are computed using classification performance as the criterion, taking the inter-class distribution into account. For each kind of prototype, the proposed approach can identify one or many prototypes per class, in order to manage the trade-off between classification accuracy and classification time. Each prototype generation/selection is performed with a genetic algorithm that can be specialized to each case by setting the appropriate encoding scheme, fitness function, and genetic operators. An experimental study performed on several graph databases shows the superiority of the generation approach over the selection approach; discriminative prototypes, in turn, outperform median ones. Moreover, we show that classification rates improve as the number of prototypes increases. Finally, discriminative prototypes give better results than the median-graph-based classifier.

19.
Prototype-based classification relies on the distances between the examples to be classified and carefully chosen prototypes. A small set of prototypes is of interest to keep the computational complexity low while maintaining high classification accuracy. An experimental study of several old and new prototype optimisation techniques is presented, in which the prototypes are either selected or generated from the given data. These condensing techniques are evaluated on real data, represented in vector spaces, by comparing their reduction rates and classification performance. The determination of prototypes is usually studied in relation to the nearest neighbour rule; we show that the use of more general dissimilarity-based classifiers can be more beneficial. An important point in our study is that the adaptive condensing schemes discussed here allow the user to choose the number of prototypes freely according to the application's needs. When such techniques are combined with linear dissimilarity-based classifiers, they provide the best trade-off between small condensed sets and high classification accuracy.
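A sketch of a linear classifier built in the dissimilarity space, where each object is represented by its dissimilarities to the condensed prototype set; logistic regression stands in here for "linear dissimilarity-based classifier":

from sklearn.linear_model import LogisticRegression

def fit_dissimilarity_classifier(D_train, y_train):
    # rows of D_train: dissimilarities of each training object to the
    # condensed prototype set; embed test objects the same way before predict()
    return LogisticRegression(max_iter=1000).fit(D_train, y_train)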

20.
Brain tumor segmentation in magnetic resonance imaging (MRI) is difficult because of the many imaging modalities, scarce training data, class imbalance, and large differences between private databases. To address these problems, a few-shot segmentation approach is introduced and a U-net-based prototype network (PU-net) is proposed for segmenting brain tumor magnetic resonance (MR) images. First, the U-net structure is adjusted to extract features of each tumor class, from which prototypes are computed. Then, building on prototype networks, each spatial position is classified pixel by pixel using the prototypes, yielding probability maps and segmentation results for each tumor region. To counter the class imbalance among tumor pixels, an adaptively weighted cross-entropy loss is adopted to reduce the influence of the background class on the loss. Finally, a prototype verification mechanism is added, which fuses the predicted probability maps with the query image to verify the prototypes. Experiments on the public BraTS2018 dataset yield an average Dice coefficient of 0.654, a positive predictive value of 0.662, a sensitivity of 0.687, a Hausdorff distance of 3.858, and a mean intersection-over-union (mIOU) of 61.4%, improving on the recent few-shot segmentation methods PANet (prototype alignment network) and A-MCG (attention-based multi-context guiding network) on all metrics. The results show that the few-shot approach segments brain tumor MR images well and that the adaptively weighted cross-entropy loss also helps, providing effective support for brain tumor diagnosis and treatment.
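A sketch of the prototype step such few-shot segmenters share: masked average pooling over support features to obtain one prototype per tumor class, then pixel-wise nearest-prototype labelling (array shapes and the squared-Euclidean distance are assumptions; PU-net's prototype verification and weighted loss are not shown):

import numpy as np

def class_prototypes(features, mask, n_classes):
    """features: H x W x C support feature map; mask: H x W class labels."""
    C = features.shape[-1]
    protos = np.zeros((n_classes, C))
    for k in range(n_classes):
        pix = features[mask == k]                  # (n_pixels_of_class_k, C)
        if len(pix):
            protos[k] = pix.mean(0)                # masked average pooling
    return protos

def segment(features, protos):
    # label every query pixel with the class of its nearest prototype
    d = ((features[..., None, :] - protos) ** 2).sum(-1)   # H x W x K
    return d.argmin(-1)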
