Similar Documents
20 similar documents found (search time: 31 ms).
1.
Unlike conventional unsupervised classification methods, such as K-means and ISODATA, which are based on partitional clustering techniques, the methodology proposed in this work attempts to take advantage of the properties of Kohonen's self-organizing map (SOM) together with agglomerative hierarchical clustering methods to perform the automatic classification of remotely sensed images. The key point of the proposed method is to execute the cluster analysis process by means of a set of SOM prototypes, instead of working directly with the original patterns of the image. This strategy significantly reduces the complexity of the data analysis, making it possible to use techniques that have not normally been considered viable in the processing of remotely sensed images, such as hierarchical clustering methods and cluster validation indices. Through the use of the SOM, the proposed method maps the original patterns of the image to a two-dimensional neural grid, attempting to preserve the probability distribution and topology of the input space. Afterwards, an agglomerative hierarchical clustering method with restricted connectivity is applied to the trained neural grid, generating a simplified dendrogram for the image data. Utilizing the statistical properties of the SOM, the method employs modified versions of cluster validation indices to automatically determine the ideal number of clusters for the image. The experimental results show examples of the application of the proposed methodology and compare its performance to the K-means algorithm.
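A minimal sketch of this SOM-then-agglomerative pipeline, assuming the third-party minisom package and scikit-learn rather than the authors' implementation; the grid size, the average linkage, and the use of the silhouette index as a stand-in for the paper's modified validity indices (and the omission of the connectivity restriction on the grid) are all illustrative assumptions.

```python
# Sketch: cluster SOM prototypes instead of raw pixels, then pick the
# cluster count with a validity index (silhouette here, as a stand-in).
import numpy as np
from minisom import MiniSom                      # pip install minisom
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def som_agglomerative(pixels, grid=(10, 10), k_range=range(2, 11)):
    # 1) Train a SOM so its prototypes summarize the image patterns.
    som = MiniSom(grid[0], grid[1], pixels.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(pixels, 5000)
    prototypes = som.get_weights().reshape(-1, pixels.shape[1])

    # 2) Agglomerative clustering on the (few) prototypes, choosing k
    #    by a cluster-validity index instead of fixing it in advance.
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = AgglomerativeClustering(n_clusters=k, linkage="average").fit_predict(prototypes)
        score = silhouette_score(prototypes, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels

    # 3) Map each pixel to the cluster of its best-matching prototype.
    bmu = np.argmin(((pixels[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1), axis=1)
    return best_k, best_labels[bmu]
```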

2.
In this paper, an efficient K-medians clustering (unsupervised) algorithm for prototype selection and a Supervised K-medians (SKM) classification technique for protein sequences are presented. For sequence data sets, a median string/sequence can be used as the cluster/group representative. In the K-medians clustering technique, a desired number of clusters, K, each represented by a median string/sequence, is generated, and these median sequences are used as prototypes for classifying a new/test sequence, whereas in the SKM classification technique, the median sequence in each group/class of labelled protein sequences is determined and the set of median sequences is used as prototypes for classification. It is found that the K-medians clustering technique outperforms the leader-based technique, and that the SKM classification technique performs better than the motif-based approach on the data sets used. We further use a simple technique to reduce the time and space requirements of protein sequence clustering and classification. During the training and testing phases, the similarity score between a pair of sequences is determined by selecting a portion of the sequence instead of the entire sequence. This is akin to selecting a subset of features for sequence data sets. The experimental results of the proposed method with the K-medians, SKM and Nearest Neighbour Classifier (NNC) techniques show that the Classification Accuracy (CA) using the prototypes generated does not degrade much, while the training and testing times are reduced significantly. Thus the experimental results indicate that the similarity score does not need to be calculated over the entire length of the sequence to achieve a good CA. Space requirements are also reduced during both training and classification.
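A rough sketch of the set-median idea behind SKM: within each labelled class, pick the member sequence that minimizes the summed distance to its classmates, then classify a test sequence by its nearest class median. The plain Levenshtein distance and the prefix_len truncation are illustrative stand-ins for the paper's alignment-based similarity score and its partial-sequence speed-up.

```python
# Sketch: Supervised K-medians-style prototypes for sequence classification.
from collections import defaultdict

def edit_distance(a, b):
    # Standard Levenshtein distance (stand-in for an alignment score).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def class_medians(sequences, labels, prefix_len=None):
    # Set median per class: the member with minimal total distance to its class.
    # prefix_len mimics the paper's idea of scoring only part of each sequence.
    cut = (lambda s: s[:prefix_len]) if prefix_len else (lambda s: s)
    groups = defaultdict(list)
    for seq, lab in zip(sequences, labels):
        groups[lab].append(cut(seq))
    medians = {}
    for lab, seqs in groups.items():
        medians[lab] = min(seqs, key=lambda s: sum(edit_distance(s, t) for t in seqs))
    return medians

def classify(seq, medians, prefix_len=None):
    s = seq[:prefix_len] if prefix_len else seq
    return min(medians, key=lambda lab: edit_distance(s, medians[lab]))
```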

3.
Hierarchical Clustering Algorithms for Document Datasets
Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, clustering algorithms that build meaningful hierarchies out of large document collections are ideal tools for their interactive visualization and exploration, as they provide data views that are consistent, predictable, and at different levels of granularity. This paper focuses on document clustering algorithms that build such hierarchical solutions and (i) presents a comprehensive study of partitional and agglomerative algorithms that use different criterion functions and merging schemes, and (ii) presents a new class of clustering algorithms called constrained agglomerative algorithms, which combine features from both partitional and agglomerative approaches, allowing them to reduce the early-stage errors made by agglomerative methods and hence improve the quality of clustering solutions. The experimental evaluation shows that, contrary to common belief, partitional algorithms always lead to better solutions than agglomerative algorithms, making them ideal for clustering large document collections due to not only their relatively low computational requirements but also their higher clustering quality. Furthermore, the constrained agglomerative methods consistently lead to better solutions than agglomerative methods alone, and in many cases they outperform partitional methods as well.
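A hedged sketch of the constrained-agglomerative idea on documents: a partitional pass (k-means on TF-IDF vectors) defines constraint groups, agglomeration is run within each group, and the resulting sub-clusters are then agglomerated together. The vectorizer, the linkage, and the group and cluster counts are illustrative choices, not the paper's criterion functions.

```python
# Sketch: constrained agglomerative clustering of documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, AgglomerativeClustering

def constrained_agglomerative(docs, n_groups=10, per_group=5, final_k=8):
    X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()

    # Partitional stage: constraint groups that agglomeration may not cross early on.
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)

    # Agglomerative stage, restricted to run inside each constraint group.
    sub_labels = np.empty(len(docs), dtype=int)
    centroids, offset = [], 0
    for g in range(n_groups):
        idx = np.where(groups == g)[0]
        k = min(per_group, len(idx))
        labs = (AgglomerativeClustering(n_clusters=k).fit_predict(X[idx])
                if k > 1 else np.zeros(len(idx), dtype=int))
        sub_labels[idx] = labs + offset
        centroids.extend(X[idx][labs == c].mean(axis=0) for c in range(k))
        offset += k

    # Final agglomeration of the sub-cluster centroids into final_k clusters.
    top = AgglomerativeClustering(n_clusters=final_k).fit_predict(np.array(centroids))
    return top[sub_labels]
```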

4.
R. A., F. J., E. Pattern Recognition, 2002, 35(12): 2771-2782
A generalized prototype-based classification scheme founded on hierarchical clustering is proposed. The basic idea is to obtain a condensed 1-NN classification rule by merging the two same-class nearest clusters, provided that the set of cluster representatives correctly classifies all the original points. Apart from the quality of the obtained sets and the flexibility that comes from the fact that different intercluster measures and criteria can be used, the proposed scheme includes a very efficient four-stage procedure which conveniently exploits geometric cluster properties to decide about each possible merge. Empirical results demonstrate the merits of the proposed algorithm, taking into account the size of the condensed sets of prototypes, the accuracy of the corresponding condensed 1-NN classification rule and the computing time.
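A simplified sketch of the merge-based condensing idea: repeatedly try to merge the two closest same-class prototypes into their weighted mean and keep the merge only if the condensed set still classifies every training point correctly with the 1-NN rule. This brute-force version ignores the paper's efficient four-stage geometric filtering and is only meant to illustrate the consistency-preserving merge loop.

```python
# Sketch: prototype condensation for the 1-NN rule via same-class merging.
import numpy as np

def one_nn(protos, proto_labels, X):
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return proto_labels[np.argmin(d, axis=1)]

def condense(X, y):
    protos, labels, sizes = X.copy(), np.asarray(y).copy(), np.ones(len(X))
    merged = True
    while merged:
        merged = False
        # Candidate merges: same-class prototype pairs, closest first.
        pairs = [(np.linalg.norm(protos[i] - protos[j]), i, j)
                 for i in range(len(protos)) for j in range(i + 1, len(protos))
                 if labels[i] == labels[j]]
        for _, i, j in sorted(pairs):
            new = (sizes[i] * protos[i] + sizes[j] * protos[j]) / (sizes[i] + sizes[j])
            trial = np.vstack([np.delete(protos, [i, j], axis=0), new])
            trial_lab = np.append(np.delete(labels, [i, j]), labels[i])
            # Accept the merge only if 1-NN consistency on the training set is kept.
            if np.all(one_nn(trial, trial_lab, X) == np.asarray(y)):
                sizes = np.append(np.delete(sizes, [i, j]), sizes[i] + sizes[j])
                protos, labels, merged = trial, trial_lab, True
                break
    return protos, labels
```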

5.
Clustering algorithms are routinely used in biomedical disciplines and are a basic tool in bioinformatics. Depending on the task at hand, there are two popular options: central partitional techniques and agglomerative hierarchical clustering techniques and their derivatives. These methods are well studied and well established. However, both categories have some drawbacks related to data dimensionality (for partitional algorithms) and to the bottom-up structure (for hierarchical agglomerative algorithms). To overcome these limitations, motivated by the problem of gene expression analysis with DNA microarrays, we present a hierarchical clustering algorithm based on a completely different principle: the analysis of shared farthest neighbors. We present a framework for clustering using ranks and indexes, and introduce the shared farthest neighbors (SFN) clustering criterion. We illustrate the properties of the method and present experimental results on different data sets, using the strategy of evaluating data clustering by extrinsic knowledge given by class labels.

6.
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to get the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are its short run time and small storage requirement. It is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
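A minimal sketch of the multilevel idea, assuming Ward linkage and sub-cluster centroids as the objects merged at the higher level; both are illustrative choices rather than details taken from the paper.

```python
# Sketch: multilevel agglomerative clustering via random partitions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def multilevel_agglomerative(X, n_parts=4, sub_k=10, final_k=3, seed=0):
    rng = np.random.default_rng(seed)
    part = rng.integers(0, n_parts, size=len(X))          # random split of the data

    # Level 1: agglomerative clustering inside each partition -> sub-cluster centroids.
    centroids, members = [], []
    for p in range(n_parts):
        idx = np.where(part == p)[0]
        k = min(sub_k, len(idx))
        if len(idx) < 2:
            labs = np.zeros(len(idx), dtype=int)
        else:
            labs = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X[idx])
        for c in range(k):
            centroids.append(X[idx][labs == c].mean(axis=0))
            members.append(idx[labs == c])

    # Level 2: merge the sub-clusters (via their centroids) into the final clusters.
    top = AgglomerativeClustering(n_clusters=final_k, linkage="ward").fit_predict(np.array(centroids))
    labels = np.empty(len(X), dtype=int)
    for sub, lab in zip(members, top):
        labels[sub] = lab
    return labels
```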

7.
Automatic document clustering is widely regarded as an effective approach to managing online information resources. Two main types of clustering methods currently exist, partitional and agglomerative, each with its own strengths in computational complexity and clustering quality. This paper proposes a clustering method, PAC, that combines these two clustering techniques. Experimental results show that PAC has low computational complexity and produces clustering results superior to traditional partitional and agglomerative methods.

8.
CURE is an agglomerative hierarchical clustering algorithm that first introduced the idea of describing a cluster with multiple representative points. Based on an analysis of the characteristics of existing multi-representative-point hierarchical clustering algorithms, this paper proposes a new multi-representative-point hierarchical clustering algorithm, WRPC. It uses an influence-factor-based mechanism for selecting cluster representative points and a k-nearest-neighbour-based mechanism for merging small clusters, and can therefore discover clusters of more complex shapes and sizes. Experimental results show that the algorithm achieves better clustering quality while maintaining execution efficiency.
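A hedged sketch of the underlying CURE-style idea that WRPC builds on: each cluster is summarized by several well-scattered representative points shrunk toward the centroid, and the pair of clusters with the closest representatives is merged. WRPC's influence-factor weighting and its k-nearest-neighbour small-cluster merging are not reproduced here; the number of representatives and the shrink factor are illustrative parameters.

```python
# Sketch: agglomerative clustering with multiple (shrunk) representative points per cluster.
import numpy as np

def representatives(points, n_rep=5, shrink=0.3):
    centroid = points.mean(axis=0)
    reps = [points[np.argmax(np.linalg.norm(points - centroid, axis=1))]]
    while len(reps) < min(n_rep, len(points)):
        d = np.min([np.linalg.norm(points - r, axis=1) for r in reps], axis=0)
        reps.append(points[np.argmax(d)])            # farthest from current reps = well scattered
    return np.array(reps) + shrink * (centroid - np.array(reps))   # shrink toward centroid

def cure_like(X, final_k=3, n_rep=5, shrink=0.3):
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > final_k:
        reps = [representatives(X[c], n_rep, shrink) for c in clusters]
        # Merge the two clusters whose representative sets are closest.
        best, bi, bj = np.inf, 0, 1
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = np.min(((reps[i][:, None, :] - reps[j][None, :, :]) ** 2).sum(-1))
                if d < best:
                    best, bi, bj = d, i, j
        clusters[bi] = clusters[bi] + clusters[bj]
        del clusters[bj]
    labels = np.empty(len(X), dtype=int)
    for lab, c in enumerate(clusters):
        labels[c] = lab
    return labels
```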

9.
Automatic Clustering Using an Improved Differential Evolution Algorithm
Differential evolution (DE) has emerged as one of the fast, robust, and efficient global search heuristics of current interest. This paper describes an application of DE to the automatic clustering of large unlabeled data sets. In contrast to most of the existing clustering techniques, the proposed algorithm requires no prior knowledge of the data to be classified. Rather, it determines the optimal number of partitions of the data "on the run." Superiority of the new method is demonstrated by comparing it with two recently developed partitional clustering techniques and one popular hierarchical clustering algorithm. The partitional clustering algorithms are based on two powerful, well-known optimization algorithms, namely the genetic algorithm and particle swarm optimization. An interesting real-world application of the proposed method to automatic segmentation of images is also reported.
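A simplified sketch of the idea, assuming SciPy's differential_evolution as the optimizer: each candidate solution encodes a pool of cluster centres plus activation thresholds, and a validity index (Davies-Bouldin here, an illustrative stand-in for the paper's measure) scores the implied partition, so the number of active clusters is found "on the run".

```python
# Sketch: automatic clustering by differential evolution over "activatable" centres.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.metrics import davies_bouldin_score

def de_auto_cluster(X, k_max=10, seed=0):
    n, d = X.shape
    lo, hi = X.min(axis=0), X.max(axis=0)

    def decode(vec):
        thresholds = vec[:k_max]
        centres = vec[k_max:].reshape(k_max, d)
        active = centres[thresholds > 0.5]
        if len(active) < 2:                          # force at least two clusters
            active = centres[np.argsort(thresholds)[-2:]]
        labels = np.argmin(((X[:, None, :] - active[None, :, :]) ** 2).sum(-1), axis=1)
        return active, labels

    def cost(vec):
        _, labels = decode(vec)
        if len(np.unique(labels)) < 2:               # degenerate partition -> penalty
            return 1e6
        return davies_bouldin_score(X, labels)       # lower is better

    bounds = [(0, 1)] * k_max + [(lo[j], hi[j]) for _ in range(k_max) for j in range(d)]
    result = differential_evolution(cost, bounds, maxiter=50, seed=seed, polish=False)
    _, labels = decode(result.x)
    return labels
```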

10.
Clustering is an important unsupervised learning technique widely used to discover the inherent structure of a given data set. Some existing clustering algorithms use a single prototype to represent each cluster, which may not adequately model clusters of arbitrary shape and size and hence limits clustering performance on complex data structures. This paper proposes a clustering algorithm that represents one cluster by multiple prototypes. Squared-error clustering is used to produce a number of prototypes that locate regions of high density, because of its low computational cost and yet good performance. A separation measure is proposed to evaluate how well two prototypes are separated. Multiple prototypes with small separations are grouped into a given number of clusters by an agglomerative method. New prototypes are iteratively added to improve poor cluster separations. As a result, the proposed algorithm can discover clusters of complex structure with robustness to initial settings. Experimental results on both synthetic and real data sets demonstrate the effectiveness of the proposed clustering algorithm.
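A rough sketch of the multi-prototype idea: squared-error (k-means) clustering produces many prototypes, a simple separation measure compares prototype pairs, and prototypes with small separation are merged agglomeratively into the requested number of clusters. The gap-over-spread separation measure used here is only a stand-in for the paper's measure, and the iterative prototype-addition step is omitted.

```python
# Sketch: represent each cluster by multiple k-means prototypes, then merge
# prototypes whose mutual separation is small.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def multi_prototype_cluster(X, n_prototypes=30, final_k=3):
    km = KMeans(n_clusters=n_prototypes, n_init=10).fit(X)
    centres, assign = km.cluster_centers_, km.labels_

    # Spread of each prototype = mean distance of its points to the centre.
    spread = np.array([np.linalg.norm(X[assign == p] - centres[p], axis=1).mean()
                       if np.any(assign == p) else 0.0
                       for p in range(n_prototypes)])

    # Illustrative separation measure: centre gap relative to the two spreads.
    sep = np.zeros((n_prototypes, n_prototypes))
    for i in range(n_prototypes):
        for j in range(i + 1, n_prototypes):
            gap = np.linalg.norm(centres[i] - centres[j])
            sep[i, j] = sep[j, i] = gap / (spread[i] + spread[j] + 1e-12)

    # Agglomerate prototypes with small separation into final_k clusters.
    merge = fcluster(linkage(squareform(sep), method="average"), final_k, criterion="maxclust")
    return merge[assign] - 1            # fcluster labels start at 1
```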

11.
12.
A hybrid clustering procedure for concentric and chain-like clusters
The K-means algorithm is a well-known nonhierarchical method for clustering data. The most important limitations of this algorithm are that: (1) it gives final clusters on the basis of the cluster centroids or the seed points chosen initially, and (2) it is appropriate for data sets having fairly isotropic clusters. But this algorithm has the advantage of low computation and storage requirements. On the other hand, the hierarchical agglomerative clustering algorithm, which can cluster nonisotropic (chain-like and concentric) clusters, has high storage and computation requirements. This paper suggests a new method for selecting the initial seed points, so that the K-means algorithm gives the same results for any input data order. This paper also describes a hybrid clustering algorithm, based on the concepts of multilevel theory, which is nonhierarchical at the first level and hierarchical from the second level onwards, to cluster data sets having (i) chain-like clusters and (ii) concentric clusters. It is observed that this hybrid clustering algorithm gives the same results as the hierarchical clustering algorithm, with lower computation and storage requirements.
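A small illustration of the motivation stated above: on two concentric rings, k-means (which assumes fairly isotropic clusters) fails, while single-linkage agglomerative clustering recovers the rings. This only demonstrates the problem setting, not the paper's seed-selection or multilevel procedure; the data set and parameters are illustrative.

```python
# Illustration: k-means vs. single-linkage agglomerative clustering on concentric rings.
from sklearn.datasets import make_circles
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

X, y = make_circles(n_samples=500, factor=0.4, noise=0.03, random_state=0)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sl_labels = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)

print("k-means ARI:       ", adjusted_rand_score(y, km_labels))   # ~0: splits both rings
print("single-linkage ARI:", adjusted_rand_score(y, sl_labels))   # ~1: recovers the rings
```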

13.
Pattern Recognition, 1986, 19(1): 95-99
There is mounting evidence to suggest that the complete linkage method does the best clustering job among all hierarchical agglomerative techniques, particularly with respect to misclassification in samples from known multivariate normal distributions. However, clustering methods are notorious for discovering clusters even in random data sets. We compare six agglomerative hierarchical methods on univariate random data from uniform and standard normal distributions and find that the complete linkage method is generally best at not discovering false clusters. The criterion is the ratio of the number of within-cluster distances to the number of all distances at most equal to the maximum within-cluster distance.
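A hedged sketch of the criterion described above: for a clustering of univariate random data, count the within-cluster distances and divide by the number of all pairwise distances that do not exceed the largest within-cluster distance (values near 1 indicate compact, non-spurious clusters). The sample size, the cut level, and the restriction to three linkage methods are illustrative choices.

```python
# Sketch: compare hierarchical linkage methods on random data with the
# "within-cluster distances / all distances <= max within-cluster distance" ratio.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def cluster_ratio(x, labels):
    d = squareform(pdist(x.reshape(-1, 1)))
    within = np.array([[labels[i] == labels[j] for j in range(len(x))] for i in range(len(x))])
    iu = np.triu_indices(len(x), k=1)
    within_d = d[iu][within[iu]]
    max_within = within_d.max()
    return len(within_d) / np.sum(d[iu] <= max_within)

rng = np.random.default_rng(0)
x = rng.uniform(size=100)                        # random data: no real clusters
for method in ["single", "complete", "average"]:
    labels = fcluster(linkage(x.reshape(-1, 1), method=method), 3, criterion="maxclust")
    print(method, round(cluster_ratio(x, labels), 3))
```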

14.
Parallel clustering algorithms
Clustering techniques play an important role in exploratory pattern analysis, unsupervised learning and image segmentation applications. Many clustering algorithms, both partitional and hierarchical, require intensive computation even for a modest number of patterns. This paper presents two parallel clustering algorithms. For a clustering problem with N = 2^n patterns and M = 2^m features, the time complexity of the traditional partitional clustering algorithm on a single-processor computer is O(MNK), where K is the number of clusters. The proposed algorithm on an SIMD computer with MN processors has a time complexity of O(K(n + m)). The time complexity of the proposed single-link hierarchical clustering algorithm is reduced from O(MN^2) for the uniprocessor algorithm to O(nN) with MN processors.

15.
Clustering algorithms tend to generate clusters even when applied to random data. This paper provides a semi-tutorial review of the state of the art in cluster validity, that is, the verification of results from clustering algorithms. The paper covers ways of measuring clustering tendency, the fit of hierarchical and partitional structures, and indices of compactness and isolation for individual clusters. Included are structural criteria for validating clusters and the factors involved in choosing criteria, according to which the literature of cluster validity is classified. An application to speaker identification demonstrates several indices. The development of new clustering techniques and the wide availability of clustering programs necessitate vigorous research in cluster validity.

16.
To achieve better clustering, and drawing on the characteristics of traditional partitional and hierarchical algorithms, a new hybrid clustering algorithm based on partitioning and hierarchy (MPH) is proposed. The algorithm divides the clustering process into a splitting phase and a merging phase: in the splitting phase, the k-means algorithm is applied repeatedly to partition the data set into a number of homogeneous sub-clusters; in the merging phase, an agglomerative hierarchical clustering algorithm is applied. Experiments show that the algorithm can discover clusters of arbitrary shape and size and is insensitive to noise points.

17.
This paper describes a topic discovery system that can uncover the implicit knowledge in a data stream and express it as a hierarchical tree of topics and subtopics, where each topic contains its related document set and document summaries, so that users can browse the hierarchy and select the topics they need. An incremental hierarchical clustering algorithm is also proposed, which combines the main advantages of partitional clustering and agglomerative clustering. Experimental results show that the algorithm is effective both as a topic detection system and as a classification and summarization tool.

18.
Speed-density relationships are used by mesoscopic traffic simulators to represent traffic dynamics. While classical speed-density relationships provide useful insights into the traffic dynamics problem, they may be restrictive for such applications. This paper addresses the problem of calibrating speed-density relationship parameters using data mining techniques, and proposes a novel hierarchical clustering algorithm based on K-means clustering. By combining K-means with agglomerative hierarchical clustering, the proposed algorithm is able to reduce the early-stage errors inherent in agglomerative hierarchical clustering, resulting in improved clustering performance. Moreover, in order to improve the precision of the parametric calibration, densities and flows are used as variables. The proposed approach is tested against sensor data captured from the 3rd Ring Road of Beijing. The results show that the performance of the proposed algorithm is better than that of existing solutions.
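A hedged sketch of such a calibration pipeline: cluster flow-density observations with k-means, merge the k-means centroids agglomeratively into traffic regimes, and fit a speed-density model per regime. The Greenshields form, the regime count, and the function and parameter names are assumptions for illustration, not the relationship or data used in the paper.

```python
# Sketch: hybrid K-means + agglomerative clustering of (density, flow) data,
# then per-regime calibration of a speed-density relationship.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans, AgglomerativeClustering

def greenshields(k, v_free, k_jam):
    # Assumed model: v = v_free * (1 - k / k_jam).
    return v_free * (1.0 - k / k_jam)

def calibrate(density, flow, n_seeds=20, n_regimes=3):
    X = np.column_stack([density, flow])
    seeds = KMeans(n_clusters=n_seeds, n_init=10).fit(X)
    # Agglomerative merge of the K-means centroids reduces early-merge errors.
    regime_of_seed = AgglomerativeClustering(n_clusters=n_regimes).fit_predict(seeds.cluster_centers_)
    regimes = regime_of_seed[seeds.labels_]

    params = {}
    for r in range(n_regimes):
        k, q = density[regimes == r], flow[regimes == r]
        v = q / np.maximum(k, 1e-9)                  # observed speed from flow and density
        (v_free, k_jam), _ = curve_fit(greenshields, k, v, p0=[80.0, 150.0], maxfev=10000)
        params[r] = {"v_free": v_free, "k_jam": k_jam}
    return regimes, params
```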

19.
In this paper, we extend the work of Kraft et al. to present a new method for fuzzy information retrieval based on fuzzy hierarchical clustering and fuzzy inference techniques. First, we present a fuzzy agglomerative hierarchical clustering algorithm for clustering documents and obtaining the cluster centers of the document clusters. Then, we present a method to construct fuzzy logic rules based on the document clusters and their cluster centers. Finally, we apply the constructed fuzzy logic rules to modify the user's query for query expansion and to guide the information retrieval system in retrieving documents relevant to the user's request. The fuzzy logic rules can represent three kinds of fuzzy relationships between index terms (i.e., fuzzy positive association, fuzzy specialization and fuzzy generalization). The proposed fuzzy information retrieval method is more flexible and more intelligent than existing methods because it can expand users' queries for fuzzy information retrieval more effectively.

20.
In this paper, we consider cluster analysis based on T-transitive interval-valued fuzzy relations. A fuzzy relation and its partition tree for obtaining an agglomerative hierarchical clustering have been studied and applied. In general, these fuzzy-relation-based clustering approaches are based on real-valued memberships of fuzzy relations. Since interval-valued memberships may be better than real-valued memberships at representing higher-order imprecision and vagueness in human perception, in this paper we first extend fuzzy relations to interval-valued fuzzy relations and then construct a clustering algorithm based on the proposed T-transitive interval-valued fuzzy relations. We use two examples to demonstrate the efficiency and usefulness of the proposed method. As a practical application, we apply the proposed clustering method to the performance evaluation of academic departments of higher education, using actual engineering school data from Taiwan.
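A minimal sketch of the real-valued base case that the paper extends: compute the max-min transitive closure of a fuzzy similarity relation and read clusters off a lambda-cut. The interval-valued, T-transitive generalization and the performance-evaluation data are not reproduced here; the toy relation and the cut level are illustrative.

```python
# Sketch: clustering from a fuzzy similarity relation via max-min transitive closure.
import numpy as np

def maxmin_transitive_closure(R):
    # Repeatedly compose R with itself (max-min composition) until it stabilizes.
    closure = R.copy()
    while True:
        composed = np.max(np.minimum(closure[:, :, None], closure[None, :, :]), axis=1)
        composed = np.maximum(closure, composed)
        if np.allclose(composed, closure):
            return composed
        closure = composed

def lambda_cut_clusters(R, lam):
    # A lambda-cut of a max-min transitive similarity relation is an equivalence
    # relation, so its rows directly yield a partition.
    T = maxmin_transitive_closure(R) >= lam
    labels, current = -np.ones(len(R), dtype=int), 0
    for i in range(len(R)):
        if labels[i] == -1:
            labels[T[i]] = current
            current += 1
    return labels

# Toy usage: a reflexive, symmetric fuzzy relation over four items.
R = np.array([[1.0, 0.8, 0.4, 0.2],
              [0.8, 1.0, 0.4, 0.2],
              [0.4, 0.4, 1.0, 0.7],
              [0.2, 0.2, 0.7, 1.0]])
print(lambda_cut_clusters(R, lam=0.6))   # -> [0 0 1 1]
```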
