Similar Articles
1.
“Best K”: critical clustering structures in categorical datasets
The demand for cluster analysis of categorical data has continued to grow over the last decade. A well-known problem in categorical clustering is determining the best number of clusters, K. Although several categorical clustering algorithms have been developed, surprisingly, none has satisfactorily addressed the problem of the best K for categorical clustering. Since categorical data lacks an inherent distance function to serve as a similarity measure, traditional cluster validation techniques based on geometric shapes and density distributions are not appropriate. In this paper, we study how entropy varies between clustering results of categorical data with different numbers of clusters K, and propose the BKPlot method to address three important cluster validation problems: (1) How can we determine whether there is significant clustering structure in a categorical dataset? (2) If there is significant clustering structure, what is the set of candidate “best Ks”? (3) If the dataset is large, how can we efficiently and reliably determine the best Ks?
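For a concrete sense of the quantity involved, the following is a minimal sketch of the expected-entropy computation that a BKPlot-style analysis builds on; the function name and the exact normalisation are illustrative, not the authors' implementation.

```python
# Minimal sketch of the expected entropy of a categorical clustering.
# Names and normalisation are illustrative, not the authors' code.
import numpy as np

def clustering_entropy(clusters):
    """clusters: list of 2-D integer arrays (objects x categorical attributes)."""
    n_total = sum(len(c) for c in clusters)
    expected = 0.0
    for c in clusters:
        h = 0.0
        for j in range(c.shape[1]):
            _, counts = np.unique(c[:, j], return_counts=True)
            p = counts / len(c)
            h -= (p * np.log2(p)).sum()
        expected += (len(c) / n_total) * h
    return expected

# BKPlot idea: compute this for K = 1, 2, ..., plot the second-order
# difference against K, and read sharp peaks as candidate "best K"s.
```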

2.
In evaluating the results of cluster analysis, it is common practice to rely on a number of fixed heuristics rather than to compare a data clustering directly against an empirically derived standard, such as a clustering obtained from human informants. Given the dearth of research into techniques to express the similarity between clusterings, there is broad scope for fundamental research in this area. In defining the comparative problem, we identify two types of worst-case matches between pairs of clusterings, characterised as independently codistributed clustering pairs and conjugate partition pairs. Desirable behaviour for a similarity measure in either of the two worst cases is discussed, giving rise to five test scenarios in which characteristics of one of a pair of clusterings were manipulated in order to compare and contrast the behaviour of different clustering similarity measures. This comparison is carried out for previously proposed clustering similarity measures, as well as a number of established similarity measures that have not previously been applied to clustering comparison. We introduce a paradigm apparatus for the evaluation of clustering comparison techniques and distinguish between the goodness of clusterings and the similarity of clusterings by clarifying the degree to which different measures confuse the two. Accompanying this is the proposal of a novel clustering similarity measure, the Measure of Concordance (MoC). We show that only MoC, Powers's measure, Lopez and Rajski's measure, and various forms of Normalised Mutual Information exhibit the desired behaviour under each of the test scenarios.
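As a concrete point of reference, one of the measures found to behave well, Normalised Mutual Information, is straightforward to compute; the toy example below uses scikit-learn's implementation (MoC itself is not reproduced here).

```python
# Toy comparison of two clusterings of the same six objects using
# Normalised Mutual Information (scikit-learn implementation).
from sklearn.metrics import normalized_mutual_info_score

labels_a = [0, 0, 1, 1, 2, 2]
labels_b = [0, 0, 1, 2, 2, 2]
print(normalized_mutual_info_score(labels_a, labels_b))  # 1.0 = identical
```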

3.
We propose a new approach, the forward functional testing (FFT) procedure, for cluster number selection in functional data clustering. We present a framework of subspace-projected functional data clustering based on the functional multiplicative random-effects model, and propose performing functional hypothesis tests on the equivalence of cluster structures to identify the number of clusters. The aim is to find the maximum number of distinctive clusters while retaining significant differences between cluster structures. The null hypotheses comprise equalities between the cluster mean functions and between the sets of cluster eigenfunctions of the covariance kernels. Bootstrap resampling methods are developed to construct reference distributions of the derived test statistics. We compare several other cluster number selection criteria, extended from methods for multivariate data, with the proposed FFT procedure. The performance of the proposed approaches is examined in simulation studies, with applications to clustering gene expression profiles.
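The resampling step can be illustrated generically. The sketch below uses a permutation-style resampling of pooled curves and a squared L2 gap between cluster mean functions as the test statistic; the paper's actual bootstrap construction and statistics differ, so treat this as a schematic only.

```python
# Schematic resampling test: is the gap between two clusters' mean
# curves larger than expected under the null of a common structure?
# Permutation resampling stands in for the paper's bootstrap scheme.
import numpy as np

rng = np.random.default_rng(0)

def mean_curve_gap(a, b):
    """Squared L2 distance between mean functions of two sets of curves."""
    gap = a.mean(axis=0) - b.mean(axis=0)
    return float(np.sum(gap ** 2))

def resampling_pvalue(curves_a, curves_b, n_resamples=999):
    observed = mean_curve_gap(curves_a, curves_b)
    pooled = np.vstack([curves_a, curves_b])
    n_a = len(curves_a)
    count = 0
    for _ in range(n_resamples):
        idx = rng.permutation(len(pooled))       # reassign curves at random
        if mean_curve_gap(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed:
            count += 1
    return (count + 1) / (n_resamples + 1)
```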

4.
Clustering algorithms have the annoying habit of finding clusters even when the data are generated randomly. Verifying that potential clusterings are real in some objective sense is receiving more attention as the number of new clustering algorithms and their applications grows. We consider one aspect of this question and study the stability of a hierarchical structure using a variation on a measure of stability proposed in the literature.(1,2) Our measure of stability is appropriate for proximity matrices whose entries are on an ordinal scale. We randomly split the data set, cluster the two halves, and compare the two hierarchical clusterings with the clustering achieved on the entire data set. Two stability statistics, based on the Goodman-Kruskal rank correlation coefficient, are defined. The distributions of these statistics are estimated with Monte Carlo techniques for two clustering methods (single-link and complete-link) and under two conditions (randomly selected proximity matrices and proximity matrices with good hierarchical structure). The stability measures are applied to some real data sets.
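A rough sketch of the split-half protocol follows, with Kendall's tau standing in for the Goodman-Kruskal coefficient (which SciPy does not provide directly); the pairing logic is the point, not the particular statistic.

```python
# Split-half stability sketch: correlate cophenetic distances from a
# half-sample hierarchy with those from the full-data hierarchy.
# Kendall's tau stands in for the Goodman-Kruskal coefficient.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
n = len(X)

half = rng.choice(n, size=n // 2, replace=False)
full_coph = cophenet(linkage(pdist(X), method='complete'))
half_coph = cophenet(linkage(pdist(X[half]), method='complete'))

def cidx(i, j):
    """Condensed-distance index of pair (i, j) in an n-point matrix."""
    i, j = (i, j) if i < j else (j, i)
    return n * i - i * (i + 1) // 2 + (j - i - 1)

# Restrict the full-data cophenetic distances to the half-sample pairs,
# in the same order as pdist enumerates them.
full_restricted = [full_coph[cidx(i, j)]
                   for k, i in enumerate(half) for j in half[k + 1:]]
tau, _ = kendalltau(full_restricted, half_coph)
print(tau)   # high rank correlation suggests a stable hierarchy
```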

5.
Due to data sparseness and attribute redundancy in high-dimensional data, clusters of objects often exist in subspaces rather than in the entire space. To effectively address this issue, this paper presents a new optimization algorithm for clustering high-dimensional categorical data, an extension of the k-modes clustering algorithm. In the proposed algorithm, a novel weighting technique for categorical data is developed to calculate two weights for each attribute (or dimension) in each cluster, and the weight values are used to identify the subsets of important attributes that categorize different clusters. The convergence of the algorithm under an optimization framework is proved. The performance and scalability of the algorithm are evaluated experimentally on both synthetic and real data sets. The experimental studies show that the proposed algorithm is effective in clustering categorical data sets and scalable to large data sets owing to its linear time complexity with respect to the number of data objects, attributes or clusters.
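The weighting idea can be illustrated with a toy rule: attributes whose values concentrate within a cluster receive a higher weight there. The formula below is a hypothetical stand-in, not the paper's weight update.

```python
# Hypothetical per-cluster attribute weighting for categorical data:
# attributes whose mode dominates the cluster are treated as more
# discriminative. Not the paper's actual weight update.
import numpy as np

def attribute_weights(cluster, beta=2.0):
    """cluster: 2-D integer array (objects x categorical attributes)."""
    w = np.empty(cluster.shape[1])
    for j in range(cluster.shape[1]):
        _, counts = np.unique(cluster[:, j], return_counts=True)
        w[j] = (counts.max() / len(cluster)) ** beta   # mode frequency
    return w / w.sum()   # normalise so the weights sum to one
```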

6.
How many clusters are best? - An experiment
This paper reports the results of a Monte Carlo study on the relative effectiveness of two internal indices in estimating the true number of clusters in multivariate data. The two indices are the Davies and Bouldin(1) index and a new modification of the Hubert Γ statistic.(2) Data in d dimensions are clustered to create sequences of partitions. Estimates are based on plots of the indices as functions of the number of recovered clusters. Neither index uses a priori information. The effects of sample size, dimensionality, cluster spread, number of true clusters, and sampling window are examined. Clustered data are generated so as to assure a given number of distinct clusters. The degree of clustering in the data is verified by a separate Monte Carlo study, based on the Jaccard and corrected Rand indices, which demonstrates the importance of correcting external indices for chance.

The modified Hubert index, proposed here for the first time, is shown to perform better than the Davies-Bouldin index under all experimental conditions. Recovery of the true number of clusters improves as the number of true clusters decreases and as the number of dimensions increases. No effect was observed for the sampling window. The complete-link clustering method and a square-error clustering method recognize the true number of clusters consistently better than the single-link method. This study demonstrates the difficulty inherent in estimating the number of clusters.
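The index-versus-K protocol the study follows is easy to reproduce for the Davies-Bouldin index, which scikit-learn implements (the modified Hubert statistic is not shown here).

```python
# Index-versus-K protocol with the Davies-Bouldin index from
# scikit-learn; the paper's modified Hubert statistic is not shown.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in (0.0, 5.0, 10.0)])

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, davies_bouldin_score(X, labels))   # minimum suggests best K
```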


7.
To cluster web documents, all of which mention the same named entity, we first attempted to use existing clustering algorithms such as K-means and spectral clustering. Unexpectedly, it turned out that these algorithms are not effective for clustering such web documents. According to our intensive investigation, clustering such web pages is more complicated because (1) the number of clusters (the ground truth) is larger than the two or three clusters typical of general clustering problems, and (2) the clusters in the data set have extremely skewed distributions of cluster sizes. To overcome these problems, in this paper we propose an effective clustering algorithm to boost the accuracy of K-means and spectral clustering. In particular, to deal with skewed distributions of cluster sizes, our algorithm performs both bisection and merge steps based on normalized cuts of the similarity graph G to correctly cluster web documents. Our experimental results show that our algorithm improves performance by approximately 56% over spectral bisection and 36% over K-means.
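A condensed sketch of the bisect step: repeatedly split the largest cluster with a two-way spectral (normalized-cut-style) partition. The stopping threshold and the omitted merge step are simplified placeholders, not the authors' algorithm.

```python
# Simplified bisect step: split the largest cluster with a two-way
# spectral partition. Stopping rule and merge step are placeholders.
from sklearn.cluster import SpectralClustering

def bisect_largest(X, clusters, min_size=10):
    """clusters: list of index lists; X: 2-D feature array."""
    biggest = max(clusters, key=len)
    if len(biggest) < 2 * min_size:           # nothing worth splitting
        return clusters
    sub = SpectralClustering(n_clusters=2, affinity='rbf',
                             random_state=0).fit_predict(X[biggest])
    clusters.remove(biggest)
    clusters.append([i for i, s in zip(biggest, sub) if s == 0])
    clusters.append([i for i, s in zip(biggest, sub) if s == 1])
    return clusters
```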

8.
A fast density-based clustering algorithm for location big data
For clustering location big data, this paper proposes a simple but efficient fast density-based clustering algorithm, CBSCAN, to quickly discover cluster patterns of arbitrary shape, as well as noise, in location big data. First, we define the concept of a Cell grid and propose a Cell-based distance analysis theory; with this analysis, the core points and density-connectivity relations of high-density regions can be determined quickly, without distance computations. Second, we define grid clusters, mapping density clusters of location points onto grid-based density clusters; using the density relations between a grid and its neighboring grids, the grids belonging to a grid cluster can be determined quickly. Third, combining the Cell-based distance analysis and the grid-cluster concept, we implement a fast density-based clustering algorithm that converts DBSCAN's point-based density-expansion clustering into Cell-based density expansion, greatly reducing the distance computations in high-density regions and exploiting the intrinsic characteristics of location data to improve clustering efficiency. Finally, the clustering quality of the proposed algorithm is validated on benchmark data sets; experimental statistics on location big data show that, compared with plain DBSCAN and with DBSCAN optimized by PR-Tree and Grid indexes, CBSCAN achieves average speedups of 525×, 30×, and 11×, respectively.
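To illustrate the Cell idea, the sketch below shows only the gridding step: hashing points into cells of side eps so that dense cells can be found without per-point distance computations. Everything beyond that (core points, grid-cluster expansion) is omitted.

```python
# Gridding step only: hash points into cells of side eps, then keep
# cells that meet a minPts-style density threshold. The core-point and
# grid-cluster expansion logic of CBSCAN is omitted.
from collections import defaultdict
import numpy as np

def grid_cells(points, eps):
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        cells[tuple((p // eps).astype(int))].append(idx)
    return cells

points = np.random.default_rng(0).uniform(0, 100, size=(10_000, 2))
dense = {cell: ids for cell, ids in grid_cells(points, eps=2.0).items()
         if len(ids) >= 5}
```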

9.
Assessment of clustering tendency is an important first step in cluster analysis. One tool for assessing cluster tendency is the Visual Assessment of Tendency (VAT) algorithm. VAT produces an image matrix that can be used for visual assessment of cluster tendency in either relational or object data. However, VAT becomes intractable for large data sets. The revised VAT (reVAT) algorithm reduces the number of computations done by VAT and replaces the image matrix with a set of profile graphs used for the visual assessment step. Thus, reVAT overcomes the large-data-set problem that encumbers VAT, but presents a new problem: interpretation of the set of reVAT profile graphs becomes very difficult when the number of clusters is large or there is significant overlap between groups of objects in the data. In this paper, we propose a new algorithm called bigVAT which (i) solves the large-data problem suffered by VAT, and (ii) solves the interpretation problem suffered by reVAT. bigVAT combines the quasi-ordering technique used by reVAT with an image display of the set of profile graphs, presenting the clustering-tendency information in a VAT-like image. Several numerical examples are given to illustrate and support the new technique.
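The core VAT reordering itself is short: a Prim-like nearest-neighbour sweep over the dissimilarity matrix. A compact sketch, assuming a symmetric matrix D:

```python
# Core VAT reordering: a Prim-like nearest-neighbour sweep over a
# symmetric dissimilarity matrix D; displaying D reordered this way
# shows clusters as dark diagonal blocks.
import numpy as np

def vat_order(D):
    n = len(D)
    # start from one endpoint of the largest dissimilarity
    order = [int(np.unravel_index(np.argmax(D), D.shape)[0])]
    remaining = set(range(n)) - set(order)
    while remaining:
        # pick the unselected object closest to the selected set
        nxt = min(remaining, key=lambda j: min(D[i, j] for i in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Usage: I = D[np.ix_(order, order)]; show I with e.g. plt.imshow(I).
```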

10.
A study of diversity measures in clustering ensembles
Ensemble diversity is considered a key factor in ensemble learning. Many diversity measures have been proposed for classifier ensembles, but little research has addressed how to measure the diversity of a clustering ensemble. We study seven diversity measures for clustering ensembles and experimentally examine their relationship to the performance of various clustering-ensemble algorithms under different average member-clustering accuracies, different ensemble sizes, and different data distributions. The experiments show that these diversity measures have no monotonic relationship with ensemble performance; however, when the average member accuracy is high, the ensemble size is moderate, and the data contain evenly distributed clusters, their correlation with ensemble performance is fairly high. Finally, we give some practical suggestions for using diversity measures to guide the generation of clustering ensembles.
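For illustration, one plausible diversity measure of this kind is the average pairwise disagreement between member clusterings, shown below as 1 minus the adjusted Rand index; the seven measures examined in the paper are not reproduced individually.

```python
# One possible ensemble-diversity measure: average pairwise
# disagreement between member clusterings, as 1 - adjusted Rand index.
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def ensemble_diversity(labelings):
    pairs = list(combinations(labelings, 2))
    return sum(1 - adjusted_rand_score(a, b) for a, b in pairs) / len(pairs)

members = [[0, 0, 1, 1], [0, 1, 1, 0], [0, 0, 0, 1]]
print(ensemble_diversity(members))
```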

11.
Clustering is an important data mining problem. However, most earlier work on clustering focused on numeric attributes, which have a natural ordering to their values. Recently, clustering data with categorical attributes, whose values do not have a natural ordering, has received more attention. A common issue in cluster analysis is that there is no single correct answer to the number of clusters, since cluster analysis involves human subjective judgement. Interactive visualization is one method by which users can decide on proper clustering parameters. In this paper, a new clustering approach called CDCS (Categorical Data Clustering with Subjective factors) is introduced, in which a visualization tool for clustered categorical data is developed so that the result of adjusting parameters is instantly reflected. Experiments show that CDCS generates higher-quality clusters than other typical algorithms.

12.
We recently introduced the negentropy increment, a validity index for crisp clustering that quantifies the average normality of the clustering partitions using the negentropy. This index can satisfactorily deal with clusters of heterogeneous orientations, scales and densities. One of the main advantages of the index is the simplicity of its calculation, which requires only the log-determinants of the covariance matrices and the prior probabilities of each cluster. The negentropy increment provides validation results which are in general better than those from other classic cluster validity indices. However, when the number of data points in a partition region is small, the estimate of the log-determinant of the covariance matrix can be very poor. This affects the proper quantification of the index and therefore the quality of the clustering, so additional requirements such as limits on the minimum number of points in each region are needed. Although this kind of constraint can provide good results, it needs to be adjusted to parameters such as the dimension of the data space. In this article we investigate how the estimation of the negentropy increment of a clustering partition is affected by the presence of regions with a small number of points. We find that the error in this estimation depends on the number of points in each region, but not on the scale or orientation of their distribution, and show how to correct this error in order to obtain an unbiased estimator of the negentropy increment. We also quantify the amount of uncertainty in the estimation. As we show, both for 2D synthetic problems and for multidimensional real benchmark problems, these results can be used to validate clustering partitions with a substantial improvement.
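From the quantities the abstract lists (cluster priors and covariance log-determinants), the index can be sketched as follows; the sign and normalisation conventions here follow the general form only and should be checked against the paper.

```python
# Sketch of the negentropy increment from cluster priors and covariance
# log-determinants. Assumes d >= 2 dimensions and enough points per
# cluster for the covariance estimates to be non-singular.
import numpy as np

def negentropy_increment(X, labels):
    _, logdet_all = np.linalg.slogdet(np.cov(X.T))
    val = -0.5 * logdet_all
    for k in np.unique(labels):
        Xk = X[labels == k]
        p = len(Xk) / len(X)
        _, logdet_k = np.linalg.slogdet(np.cov(Xk.T))
        val += 0.5 * p * logdet_k - p * np.log(p)
    return val   # lower values indicate more Gaussian-like partitions
```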

13.
In this paper the problem of automatically clustering a data set is posed as a multiobjective optimization (MOO) problem in which a set of cluster validity indices is optimized simultaneously. The proposed multiobjective clustering technique uses a recently developed simulated-annealing-based multiobjective optimization method as the underlying optimization strategy. A variable number of cluster centers is encoded in each string, so the number of clusters present in different strings varies over a range. Points are assigned to clusters based on a newly developed point-symmetry-based distance rather than the usual Euclidean distance. Two cluster validity indices, the Euclidean-distance-based XB-index and the recently developed point-symmetry-based Sym-index, are optimized simultaneously in order to determine the appropriate number of clusters present in a data set. The proposed clustering technique is thus able to detect both the proper number of clusters and the appropriate partitioning for data sets with either hyperspherical or point-symmetric clusters. A new semi-supervised method is also proposed to select a single solution from the final Pareto-optimal front. The efficacy of the proposed algorithm is shown on seven artificial and six real-life data sets of varying complexity. Results are also compared with those obtained by another multiobjective clustering technique, MOCK, and by two single-objective genetic-algorithm-based automatic clustering techniques, VGAPS clustering and GCUK clustering.
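Of the two indices, the XB (Xie-Beni) index has a particularly compact crisp form: within-cluster compactness over n times the minimum squared separation between centres, smaller being better. The sketch below is this crisp simplification (the fuzzy version weights by memberships, and the Sym-index is omitted).

```python
# Crisp form of the XB (Xie-Beni) index. Assumes labels run 0..K-1 in
# the same order as `centers`.
import numpy as np

def xb_index(X, labels, centers):
    compact = sum(np.sum((X[labels == k] - c) ** 2)
                  for k, c in enumerate(centers))
    sep = min(np.sum((a - b) ** 2)
              for i, a in enumerate(centers) for b in centers[i + 1:])
    return compact / (len(X) * sep)
```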

14.
15.
A non-parametric unsupervised program was developed to identify clusters in multidimensional data by mode analysis using histograms. An implicit assumption in the histogram approach is that a relatively large number of samples is required to ensure an accurate classification. Tests with randomly generated data show that the assumption is not true; i.e., a small number of samples does not necessarily result in a poor classification, nor does a relatively large number of samples guarantee the best classification. The histogram classifier was compared to two parametric classifiers, maximum likelihood and K-means clustering. Timing results show that, although the parametric classifiers are more efficient for a small number of samples, the histogram approach uses less CPU time for a large number of samples.

16.
Context-specific independence representations, such as tree-structured conditional probability distributions, capture local independence relationships among the random variables in a Bayesian network (BN). Local independence relationships among the random variables can also be captured by using attribute-value hierarchies to find an appropriate abstraction level for the values used to describe the conditional probability distributions. Capturing this local structure is important because it reduces the number of parameters required to represent the distribution. This can lead to more robust parameter estimation and structure selection, more efficient inference algorithms, and more interpretable models. In this paper, we introduce Tree-Abstraction-Based Search (TABS), an approach for learning a data distribution by inducing the graph structure and parameters of a BN from training data. TABS combines tree structure and attribute-value hierarchies to compactly represent conditional probability tables. To construct the attribute-value hierarchies, we investigate two data-driven techniques: a global clustering method, which uses all of the training data to build the attribute-value hierarchies, and can be performed as a preprocessing step; and a local clustering method, which uses only the local network structure to learn attribute-value hierarchies. We present empirical results for three real-world domains, finding that (1) combining tree structure and attribute-value hierarchies improves the accuracy of generalization, while providing a significant reduction in the number of parameters in the learned networks, and (2) data-derived hierarchies perform as well or better than expert-provided hierarchies.

17.
A clustering procedure called HICAP (HIstogram Cluster Analysis Procedure) was developed to perform an unsupervised classification of multidimensional image data. The clustering approach used in HICAP is based upon an algorithm described by Narendra and Goldberg to classify four-dimensional Landsat Multispectral Scanner data. HICAP incorporates two major modifications to their scheme. The first is that HICAP is generalized to process up to 32-bit data with an arbitrary number of dimensions. The second is that HICAP uses more efficient algorithms to implement the clustering approach described by Narendra and Goldberg.(1) This means that the HICAP classification requires less computation, although it is otherwise identical to the original classification. The computational savings afforded by HICAP increase with the number of dimensions in the data.

18.
In this paper, a fuzzy clustering method based on evolutionary programming (EPFCM) is proposed. The algorithm benefits from the global search strategy of evolutionary programming to improve the fuzzy c-means (FCM) algorithm. Cluster validity is measured by several cluster validity indices. To increase the convergence speed of the algorithm, the modified algorithm changes the number of cluster centers dynamically. Experiments demonstrate that EPFCM can find the proper number of clusters and that the clustering result does not depend critically on the choice of the initial cluster centers. The probability of becoming trapped in local optima is much lower than for FCM.
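For context, the FCM building block that the evolutionary layer wraps is the standard membership update for fixed centres; a minimal sketch with fuzzifier m > 1 (the evolutionary-programming layer of EPFCM is not reproduced):

```python
# Standard FCM membership update for fixed centres (fuzzifier m > 1).
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """X: (n, d) array; centers: (c, d) array."""
    # distances of every point to every centre, guarded against zeros
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    power = 2.0 / (m - 1.0)
    # u[i, k] = 1 / sum_j (d[i, k] / d[i, j]) ** power
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
    return u   # shape (n_points, n_clusters); rows sum to 1
```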

19.
As information retrieval technology continues to deepen, cluster analysis within information retrieval has also developed steadily. In particular, with the emergence of diverse data sources such as image data, text data, DNA data, time-series data and Web data, cluster analysis has attracted growing attention, and research on clustering has become a very active topic in the information retrieval field. Taking cluster analysis methods as its theoretical basis, this paper implements a clustering software package using object-oriented programming techniques; with this software, information can be retrieved quickly, which gives it practical value.

20.
The aim of this paper is to investigate the problem of finding an efficient number of clusters in fuzzy time series. The clustering process has been discussed in the existing literature, and a number of methods have been suggested, but these methods have several drawbacks, especially the lack of cluster-shape and quantity optimization. There are two critical dimensions in fuzzy time series clustering: the selection of a proper interval for fuzzy clusters and the optimization of the membership degrees among the fuzzy cluster set. The existing methods for interval selection assume that the data in question has a short-tailed distribution, and the cluster intervals are established with identical lengths (e.g. Song and Chissom, 1994; Chen, 1996; Yolcu et al., 2009). However, time series data (particularly in economic research) is rarely short-tailed and mostly follows a long-tailed distribution because of boom-bust market behavior. This paper proposes a novel clustering method named histogram damping partition (HDP), which defines sub-clusters on standard-deviation intervals and truncates the histogram of the data with a constraint based on the coefficient of variation. The HDP approach can be used at the clustering stage of many different kinds of fuzzy time series models.
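The interval construction can be illustrated schematically: intervals laid out on standard-deviation boundaries, which HDP then truncates using a coefficient-of-variation constraint. The helper below is a hypothetical simplification, not the paper's HDP.

```python
# Hypothetical simplification of the interval step: intervals on
# standard-deviation boundaries. HDP additionally truncates them with a
# coefficient-of-variation constraint, which is not modelled here.
import numpy as np

def sd_intervals(series, n_sd=3):
    mu, sd = float(np.mean(series)), float(np.std(series))
    edges = [mu + k * sd for k in range(-n_sd, n_sd + 1)]
    return list(zip(edges[:-1], edges[1:]))   # list of (lo, hi) intervals
```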
