Similar Documents
20 similar documents found
1.
This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on a weighted norm to measure the distance between the feature vectors and their prototypes. The development of the LVQ and clustering algorithms is based on the minimization of a reformulation function under the constraint that the generalized mean of the norm weights be constant. According to the proposed formulation, the norm weights can be computed from the data in an iterative fashion together with the prototypes. An error analysis provides some guidelines for selecting the parameter involved in the definition of the generalized mean in terms of the feature variances. The algorithms produced by this formulation are easy to implement and almost as fast as clustering algorithms relying on the Euclidean norm. An experimental evaluation on four data sets indicates that the proposed algorithms consistently outperform clustering algorithms relying on the Euclidean norm and are strong competitors to computationally more demanding non-Euclidean algorithms.
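The abstract does not spell out the update equations, so the following is only a minimal NumPy sketch of the general idea: fuzzy-c-means-style clustering with a shared diagonal weighted norm whose feature weights are renormalized so that their geometric mean stays constant. The specific weight update (inverse of the membership-weighted per-feature variance) is an illustrative assumption, not the paper's exact rule.

import numpy as np

def weighted_fcm(X, c, m=2.0, iters=50, eps=1e-9, seed=0):
    """Fuzzy c-means with a shared diagonal weighted norm (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = X[rng.choice(n, c, replace=False)]           # prototypes
    w = np.ones(d)                                    # norm weights, geometric mean kept at 1
    for _ in range(iters):
        # squared weighted distances: sum_j w_j (x_j - v_j)^2
        D = np.array([((X - v) ** 2 * w).sum(axis=1) for v in V]).T + eps
        U = D ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]      # prototype update
        # feature weights ~ inverse membership-weighted variance per feature (assumption)
        var = sum(Um[:, i][:, None] * (X - V[i]) ** 2 for i in range(c)).sum(axis=0) + eps
        w = 1.0 / var
        w /= np.exp(np.log(w).mean())                 # constrain geometric mean of weights to 1
    return V, U, w

A call such as weighted_fcm(data, c=3) returns the prototypes, the fuzzy partition and the learned norm weights.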

2.
This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on multiple weighted norms to measure the distance between the feature vectors and their prototypes. Clustering and LVQ are formulated in this paper as the minimization of a reformulation function that employs distinct weighted norms to measure the distance between each of the prototypes and the feature vectors under a set of equality constraints imposed on the weight matrices. Fuzzy LVQ and clustering algorithms are obtained as special cases of the proposed formulation. The resulting clustering algorithm is evaluated and benchmarked on three data sets that differ in terms of the data structure and the dimensionality of the feature vectors. This experimental evaluation indicates that the proposed multinorm algorithm outperforms algorithms employing the Euclidean norm as well as existing clustering algorithms employing weighted norms.

3.
Soft learning vector quantization   (total citations: 3; self-citations: 0; citations by others: 3)
Seo S, Obermayer K. Neural Computation, 2003, 15(7): 1589-1604
Learning vector quantization (LVQ) is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here, we take a more principled approach and derive two variants of LVQ using a Gaussian mixture ansatz. We propose an objective function based on a likelihood ratio and derive a learning rule using gradient descent. The new approach provides a way to extend the algorithms of the LVQ family to different distance measures and allows for the design of "soft" LVQ algorithms. Benchmark results show that the new methods lead to better classification performance than LVQ 2.1. An additional benefit of the new method is that model assumptions are made explicit, so that the method can be adapted more easily to different kinds of problems.
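A minimal sketch of a soft LVQ update in the spirit described here: Gaussian components of equal width sigma and a step derived from soft assignment probabilities, with correct-class prototypes attracted and the others repelled. The exact assignment probabilities and step rule in the paper may differ, so treat the code below as an illustration rather than the published algorithm.

import numpy as np

def soft_lvq_step(W, c_W, x, y, sigma=1.0, lr=0.05):
    """One soft LVQ update for prototypes W with labels c_W on labelled sample (x, y)."""
    c_W = np.asarray(c_W)
    f = -((W - x) ** 2).sum(axis=1) / (2 * sigma ** 2)    # Gaussian log-scores
    f -= f.max()                                           # numerical stability
    p_all = np.exp(f) / np.exp(f).sum()                    # P(j | x) over all prototypes
    same = (c_W == y)                                      # assumes at least one prototype carries label y
    p_correct = np.where(same, np.exp(f), 0.0)
    p_correct /= p_correct.sum()                           # P(j | x, y) over correct-class prototypes
    # correct-class prototypes are attracted, the rest are repelled (soft assignments)
    coeff = np.where(same, p_correct - p_all, -p_all)
    return W + lr * coeff[:, None] * (x - W)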

4.
Information Fusion, 2008, 9(2): 310-316
Xu and Da [Z.S. Xu, Q.L. Da, The uncertain OWA operator, International Journal of Intelligent Systems, 17 (2002) 569–575] introduced the uncertain ordered weighted averaging (UOWA) operator to aggregate input arguments that take the form of intervals rather than exact numbers. In this paper, we develop some dependent uncertain ordered weighted aggregation operators, including dependent uncertain ordered weighted averaging (DUOWA) operators and dependent uncertain ordered weighted geometric (DUOWG) operators, in which the associated weights depend only on the aggregated interval arguments and can relieve the influence of unfair interval arguments on the aggregated results by assigning low weights to the “false” and “biased” ones.
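The abstract gives the idea (weights that depend on the aggregated interval arguments and penalize outlying intervals) but not the formulas. The sketch below is a hedged illustration in which each interval's weight is proportional to its similarity to the mean interval; the actual DUOWA/DUOWG definitions in the paper may differ.

import numpy as np

def duowa_like(intervals):
    """Aggregate intervals [a, b] with data-dependent weights (illustrative, not the paper's exact operator)."""
    A = np.asarray(intervals, dtype=float)       # shape (n, 2): lower and upper bounds
    mean = A.mean(axis=0)                         # mean interval
    d = np.abs(A - mean).sum(axis=1)              # distance of each interval to the mean interval
    sim = 1.0 - d / (d.sum() + 1e-12)             # similarity: outlying intervals get lower values
    w = sim / sim.sum()                           # normalized dependent weights
    return tuple(w @ A)                           # aggregated interval (lower, upper)

print(duowa_like([(2, 4), (3, 5), (2.5, 4.5), (9, 12)]))   # the outlier (9, 12) is down-weighted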

5.
Fuzzy algorithms for learning vector quantization   (total citations: 14; self-citations: 0; citations by others: 14)
This paper presents the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms are derived by minimizing the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of a competitive learning vector quantization (LVQ) network, which represent the prototypes. This formulation leads to competitive algorithms, which allow each input vector to attract all prototypes. The strength of attraction between each input and the prototypes is determined by a set of membership functions, which can be selected on the basis of specific criteria. A gradient-descent-based learning rule is derived for a general class of admissible membership functions which satisfy certain properties. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible membership functions with different properties. The proposed algorithms are tested and evaluated using the IRIS data set. The efficiency of the proposed algorithms is also illustrated by their use in codebook design required for image compression based on vector quantization.
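As a hedged sketch of the kind of competitive update the abstract describes — every prototype is attracted to the input, with a strength given by a membership function of the squared Euclidean distances — the code below uses a simple inverse-distance membership. The actual FALVQ 1/2/3 membership functions and learning rules differ in detail.

import numpy as np

def falvq_like_step(V, x, lr=0.05, eps=1e-9):
    """Update all prototypes V toward input x with membership-weighted strengths (sketch)."""
    d2 = ((V - x) ** 2).sum(axis=1) + eps          # squared Euclidean distances
    u = 1.0 / d2
    u /= u.sum()                                    # memberships: closer prototypes attract more strongly
    return V + lr * u[:, None] * (x - V)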

6.
This paper describes a new soft clustering algorithm in which each cluster is modelled by a one-class support vector machine (OC-SVM). The proposed algorithm extends a previously proposed hard clustering algorithm, also based on an OC-SVM representation of clusters. The key building block of our method is the weighted OC-SVM (WOC-SVM), a novel tool introduced in this paper, based on which an expectation-maximization-type soft clustering algorithm is defined. A deterministic annealing version of the algorithm is also introduced and shown to improve robustness with respect to initialization. Experimental results show that the proposed soft clustering algorithm outperforms its hard clustering counterpart, particularly in terms of robustness with respect to initialization, as well as several other state-of-the-art methods.
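The WOC-SVM itself is defined in the paper; as a rough, hedged approximation of the EM-style loop one can use scikit-learn's OneClassSVM, which accepts per-sample weights in fit. The membership update below (a softmax over decision-function scores) is an assumption made for illustration, not the paper's rule, and deterministic annealing is omitted.

import numpy as np
from sklearn.svm import OneClassSVM

def soft_ocsvm_clustering(X, k=2, iters=10, nu=0.2, gamma="scale", seed=0):
    """EM-style soft clustering where each cluster is a weighted one-class SVM (illustrative)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))            # soft memberships, rows sum to 1
    models = [OneClassSVM(nu=nu, gamma=gamma) for _ in range(k)]
    for _ in range(iters):
        # M-step: fit one weighted OC-SVM per cluster using the memberships as sample weights
        for j in range(k):
            models[j].fit(X, sample_weight=U[:, j] + 1e-6)
        # E-step: refresh memberships from the decision scores (softmax; an assumption)
        S = np.column_stack([m.decision_function(X) for m in models])
        S -= S.max(axis=1, keepdims=True)
        U = np.exp(S)
        U /= U.sum(axis=1, keepdims=True)
    return U, models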

7.
Induced ordered weighted averaging operators   (total citations: 23; self-citations: 0; citations by others: 23)
We briefly describe the Ordered Weighted Averaging (OWA) operator and discuss a methodology for learning the associated weighting vector from observational data. We then introduce a more general type of OWA operator called the Induced Ordered Weighted Averaging (IOWA) operator. These operators take as their arguments pairs, called OWA pairs, in which one component is used to induce an ordering over the second components, which are then aggregated. A number of different aggregation situations have been shown to be representable in this framework. We then show how this tool can be used to represent different types of aggregation models.
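A minimal sketch of the IOWA aggregation the abstract describes: each argument is a pair (order-inducing value, value to aggregate); the pairs are sorted by the inducing component, and the OWA weights are applied to the reordered second components. The reliability example is purely illustrative.

def iowa(pairs, weights):
    """Induced OWA: sort by the inducing component, aggregate the second components with the OWA weights."""
    assert len(pairs) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)   # order induced by the first component
    return sum(w * v for w, (_, v) in zip(weights, ordered))

# Example: inducing variable = source reliability, aggregated variable = reported value
print(iowa([(0.9, 7.0), (0.4, 3.0), (0.7, 5.0)], weights=[0.5, 0.3, 0.2]))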

8.
This paper derives an interpretation for a family of competitive learning algorithms and investigates their relationship to fuzzy c-means and fuzzy learning vector quantization. These algorithms map a set of feature vectors into a set of prototypes associated with a competitive network that performs unsupervised learning. Derivation of the new algorithms is accomplished by minimizing an average generalized distance between the feature vectors and prototypes using gradient descent. A close relationship between the resulting algorithms and fuzzy c-means is revealed by investigating the functionals involved. It is also shown that the fuzzy c-means and fuzzy learning vector quantization algorithms are related to the proposed algorithms if the learning rate at each iteration is selected to satisfy a certain condition.

9.
In this paper, we discuss the influence of the contribution of each feature vector at learning time t on a sequential-type competitive learning algorithm. We then give a learning rate annealing schedule to improve the unsupervised learning vector quantization (ULVQ) algorithm, which uses the winner-take-all competitive learning principle of the self-organizing map (SOM). We also discuss the sensitivity of sequential competitive learning to noise and outliers and propose an alternative learning formula that makes it robust to both. Combining the proposed learning rate annealing schedule and the alternative learning formula, we propose an alternative learning vector quantization (ALVQ) algorithm. Discussion and experimental results comparing ALVQ with ULVQ show the superiority of the proposed method.
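The abstract names a learning-rate annealing schedule and a noise-robust learning formula without giving them, so the sketch below is only indicative: a winner-take-all update with a 1/t-style annealed rate and an extra factor that shrinks the step for points far from the winner. Both the schedule and the damping factor are generic assumptions, not necessarily the ALVQ formulas.

import numpy as np

def sequential_wta_vq(X, k, passes=5, a0=0.5, tau=2.0, seed=0):
    """Sequential winner-take-all VQ with an annealed learning rate and a distance-damped (robust) step."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), k, replace=False)].astype(float)
    t = 0
    for _ in range(passes):
        for x in X[rng.permutation(len(X))]:
            t += 1
            lr = a0 / (1.0 + 0.01 * t)                 # annealed learning rate (illustrative schedule)
            d2 = ((V - x) ** 2).sum(axis=1)
            j = int(np.argmin(d2))                      # winner
            damp = np.exp(-d2[j] / tau)                 # outliers (large distance) get tiny updates
            V[j] += lr * damp * (x - V[j])
    return V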

10.
In this article we extend the similarity classifier to also cover ordered weighted averaging (OWA) operators. Earlier, the similarity classifier was mainly used with the generalized mean operator; here we extend the aggregation process to more general OWA operators. With OWA operators we concentrate on linguistic-quantifier-guided aggregation, studying several different quantifiers and how well they suit the similarity classifier. The proposed method is applied to real-world medical data sets: the new thyroid, hypothyroid, lymphography and hepatitis data sets. Results are very promising and show improvement over the previously used generalized mean operator. We show that by using OWA operators instead of the generalized mean, we can improve classification accuracy on the chosen data sets.
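Linguistic-quantifier-guided OWA weighting is standard: for a regular increasing quantifier Q, the i-th weight is Q(i/n) - Q((i-1)/n). The sketch below uses the common power quantifier Q(r) = r**a; the specific quantifiers examined in the article may differ, and the similarity values in the example are made up.

def quantifier_owa_weights(n, a=2.0):
    """Weights w_i = Q(i/n) - Q((i-1)/n) for the power quantifier Q(r) = r**a."""
    Q = lambda r: r ** a
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    """Standard OWA: sort values in decreasing order and take the weighted sum."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

sims = [0.9, 0.6, 0.8, 0.4]                    # e.g. per-feature similarities in a similarity classifier
print(owa(sims, quantifier_owa_weights(len(sims))))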

11.
Characterization of the ordered weighted averaging operators   (total citations: 5; self-citations: 0; citations by others: 5)
This paper deals with the characterization of two classes of monotonic and neutral (MN) aggregation operators. The first class corresponds to (MN) aggregators that are stable under the same positive linear transformations and present the ordered linkage property. The second class deals with (MN)-idempotent aggregators that are stable under positive linear transformations with the same unit, independent zeroes and ordered values. These two classes correspond to the ordered weighted averaging (OWA) operator introduced by Yager in 1988. It is also shown that the OWA aggregator can be expressed as a Choquet integral.
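The final claim — that an OWA operator is a Choquet integral with respect to a symmetric capacity mu(A) equal to the sum of the first |A| OWA weights — can be checked numerically. The small script below is such a check written for this listing, not material from the paper.

def owa(x, w):
    return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

def choquet_symmetric(x, w):
    """Choquet integral w.r.t. the symmetric capacity mu(A) = sum of the first |A| OWA weights."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i], reverse=True)   # indices by decreasing value
    mu = lambda size: sum(w[:size])
    total = 0.0
    for rank, i in enumerate(order, start=1):
        nxt = x[order[rank]] if rank < n else 0.0                # next smaller value (0 after the last)
        total += (x[i] - nxt) * mu(rank)
    return total

x, w = [0.2, 0.9, 0.5], [0.5, 0.3, 0.2]
print(owa(x, w), choquet_symmetric(x, w))                        # both print 0.64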

12.
In this paper, we extend conventional vector quantization by incorporating a vigilance parameter, which steers the tradeoff between plasticity and stability during incremental online learning. This is motivated by the adaptive resonance theory (ART) network approach and is exploited here to form a one-pass incremental and evolving variant of vector quantization. This variant can be applied to online clustering, classification and approximation tasks with an unknown number of clusters. Additionally, two novel extensions are described: one concerns the incorporation of the sphere of influence of clusters in the vector quantization learning process by selecting the ‘winning cluster’ based on the distances of a data point to the surface of all clusters. The other introduces a deletion of cluster satellites and an online split-and-merge strategy: clusters are dynamically split and merged after each incremental learning step. Both strategies prevent the algorithm from generating a wrong cluster partition due to a bad a priori setting of the most essential parameter(s). The extensions are applied to clustering of two- and high-dimensional data, within an image classification framework and for model-based fault detection based on data-driven evolving fuzzy models.
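A hedged, minimal sketch of the vigilance idea described here: a data point updates its winning cluster only if it is close enough relative to a vigilance threshold, otherwise a new cluster is created. The surface-based winner selection, satellite deletion, and split-and-merge logic of the paper are omitted, and the center-distance test below is a simplification.

import numpy as np

def vigilance_vq(stream, rho=1.0, lr=0.1):
    """One-pass incremental VQ with a vigilance parameter rho (illustrative sketch)."""
    centers, counts = [], []
    for x in stream:
        x = np.asarray(x, dtype=float)
        if not centers:
            centers.append(x.copy()); counts.append(1); continue
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] <= rho:                       # resonance: adapt the winning cluster
            counts[j] += 1
            centers[j] += lr * (x - centers[j])
        else:                                 # novelty: open a new cluster
            centers.append(x.copy()); counts.append(1)
    return centers, counts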

13.
The fusion of transitive fuzzy relations preserving the transitivity is linked to the domination of the involved aggregation operator. The aim of this contribution is to investigate the domination of OWA operators over t-norms, with the main emphasis on domination over the Łukasiewicz t-norm. The domination of OWA operators and related operators over continuous Archimedean t-norms is also discussed. This work was partly supported by network CEEPUS SK-42, COST Action 274 TARSKI and project APVT 20-023402.

14.
We discuss the use of divergences in dissimilarity-based classification. Divergences can be employed whenever vectorial data consist of non-negative, potentially normalized features, as is the case, for instance, for spectral data or histograms. In particular, we introduce and study divergence-based learning vector quantization (DLVQ). We derive cost-function-based DLVQ schemes for the family of γ-divergences, which includes the well-known Kullback-Leibler divergence and the so-called Cauchy-Schwarz divergence as special cases. The corresponding training schemes are applied to two different real-world data sets. The first one, a benchmark data set (Wisconsin Breast Cancer), is available in the public domain. In the second problem, color histograms of leaf images are used to detect the presence of cassava mosaic disease in cassava plants. We compare the use of standard Euclidean distances with DLVQ for different parameter settings. We show that DLVQ can yield superior classification accuracies and receiver operating characteristics.
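As a hedged sketch of divergence-based LVQ on normalized, non-negative data (histograms): the winner is chosen by Kullback-Leibler divergence and pulled toward or pushed away from the sample depending on its label, LVQ1-style, with a renormalization step. This is a simplification; the γ-divergence family and the cost-function-based gradient updates derived in the paper are not reproduced here.

import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for normalized histograms."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def dlvq_like_step(W, c_W, x, y, lr=0.05):
    """LVQ1-style step with a KL-divergence winner (simplified; not the paper's gradient rule)."""
    j = int(np.argmin([kl(x, w) for w in W]))          # closest prototype under D(x || w)
    sign = 1.0 if c_W[j] == y else -1.0                # attract if labels match, repel otherwise
    W[j] = np.clip(W[j] + sign * lr * (x - W[j]), 1e-12, None)
    W[j] /= W[j].sum()                                  # keep the prototype a valid histogram
    return W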

15.
Multistage vector quantization (MSVQ) and its variants have recently been proposed. Before an MSVQ is designed, the user must manually determine the number of codewords in each VQ stage. However, users usually have no idea how many codewords each stage should have, and thus cannot be sure the resulting MSVQ is optimal. This paper proposes the genetic design (GD) algorithm to design the MSVQ. The GD algorithm can automatically find the number of codewords that optimizes each VQ stage according to the rate–distortion performance. Thus, the MSVQ based on the GD algorithm, namely MSVQ(GD), is proposed here. Furthermore, using a sharing codebook (SC) can further reduce the storage size of the MSVQ. Combining numerous similar codewords in the VQ stages of the MSVQ produces the codewords of the sharing codebook. This paper proposes the genetic merge (GM) algorithm to design the SC of the MSVQ. Therefore, the constrained-storage MSVQ using an SC, namely CSMSVQ, is proposed and is shown to outperform other MSVQs in the experiments presented here.

16.
In this work, we present a review of the state of the art of learning vector quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.

17.
Traditional trajectory clustering methods suffer from the difficulty of defining trajectory similarity and tend to ignore trajectory details during clustering. Vector-field-based trajectory clustering (VFC) preserves the original motion characteristics of the trajectories and uses the geometric structure of the vector field to measure trajectory similarity well. A weighted fitting method is introduced to reduce the influence of noise on clustering and address the poor robustness of VFC. Hierarchical clustering is adopted to determine the number of clusters dynamically, solving the problem that the number of clusters cannot be chosen adaptively and improving clustering validity. Atlanta hurricane data are used as the original trajectory data, and experiments are conducted with classical vector-field trajectory clustering, k-means clustering, k-medoids clustering, and the proposed method; the results demonstrate the effectiveness of the weighted-fitting vector-field hierarchical clustering algorithm.

18.
In this paper, we propose new aggregation operators for multi-criteria decision making under linguistic settings. The proposed operators are based on two sets of criteria weights. Besides the primary conventional criteria weights, we introduce a method to deduce secondary criteria weights from the criteria evaluations; these reflect the role of the different criteria in discriminating among the alternatives. The properties of the proposed operators are investigated. An approach for applying the operators in a group multi-criteria decision making problem is presented. The proposed operators are then applied in a case study on supplier selection. The empirical validation of the proposed operators is performed on a set of 12 real datasets. Note: all usages of he, him and his in the paper also refer to she and her.

19.
The self-organizing map (SOM) introduced by Kohonen implements two important operations: vector quantization (VQ) and a topology-preserving mapping. In this paper, an online self-organizing topological tree (SOTT) with faster learning is proposed. A new learning rule delivers efficiency and topology preservation superior to other SOM structures. The computational complexity of the proposed SOTT is O(log N) rather than O(N) as for the basic SOM. The experimental results demonstrate that the reconstruction performance of SOTT is comparable to the full-search SOM and that its computation time is much shorter than the full-search SOM and other vector quantizers. In addition, SOTT delivers a hierarchical mapping of codevectors and the progressive transmission and decoding property, which are rarely supported by other vector quantizers at the same time. To circumvent the shortcomings in clustering performance of classical partitional clustering algorithms, a hybrid clustering algorithm that fully exploits the online learning and multiresolution characteristics of SOTT is devised. A new linkage metric is proposed, which can be updated online to accelerate the time-consuming agglomerative hierarchical clustering stage. Besides the enhanced clustering performance, the online learning capability makes the memory requirement of the proposed SOTT hybrid clustering algorithm independent of the size of the data set, making it attractive for large databases.

20.