Similar Documents
20 similar documents retrieved.
1.
We recently introduced the negentropy increment, a validity index for crisp clustering that quantifies the average normality of the clustering partitions using the negentropy. This index deals satisfactorily with clusters of heterogeneous orientations, scales and densities. One of its main advantages is the simplicity of its calculation, which only requires the log-determinants of the covariance matrices and the prior probabilities of each cluster. The negentropy increment provides validation results that are in general better than those of other classic cluster validity indices. However, when the number of data points in a partition region is small, the estimate of the log-determinant of the covariance matrix can be very poor. This affects the proper quantification of the index and therefore the quality of the clustering, so additional requirements, such as limits on the minimum number of points in each region, are needed. Although this kind of constraint can provide good results, it needs to be adjusted to parameters such as the dimension of the data space. In this article we investigate how the estimation of the negentropy increment of a clustering partition is affected by regions with a small number of points. We find that the error in this estimation depends on the number of points in each region, but not on the scale or orientation of their distribution, and we show how to correct this error to obtain an unbiased estimator of the negentropy increment. We also quantify the uncertainty in the estimation. As we show, both for 2D synthetic problems and for multidimensional real benchmark problems, these results can be used to validate clustering partitions with a substantial improvement.
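As an illustrative sketch, not taken from the paper: the computation described above can be prototyped in a few lines of Python from per-cluster log-determinants and priors. The exact weighting used below (half the prior-weighted log-determinants, minus half the log-determinant of the whole data set, minus the prior entropy) is an assumption for illustration, not necessarily the paper's definitive formula.

    import numpy as np

    def negentropy_increment(X, labels):
        # Assumed form:
        #   0.5 * sum_k p_k * log|Sigma_k| - 0.5 * log|Sigma_0| - sum_k p_k * log(p_k)
        # where Sigma_0 is the covariance of the whole data set and
        # Sigma_k, p_k are the covariance and prior of cluster k.
        X = np.asarray(X, dtype=float)
        labels = np.asarray(labels)
        n = X.shape[0]
        value = -0.5 * np.linalg.slogdet(np.cov(X, rowvar=False))[1]
        for k in np.unique(labels):
            Xk = X[labels == k]
            p_k = len(Xk) / n
            # Clusters with very few points give poorly conditioned covariance
            # estimates -- exactly the small-sample bias the abstract discusses.
            logdet_k = np.linalg.slogdet(np.cov(Xk, rowvar=False))[1]
            value += 0.5 * p_k * logdet_k - p_k * np.log(p_k)
        return value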

2.
Clustering analysis is the process of separating data according to some similarity measure. A cluster consists of data that are more similar to each other than to data in other clusters. The similarity of a datum to a certain cluster can be defined as the distance of that datum to the prototype of that cluster. Typically, the prototype of a cluster is a real vector called the center of that cluster. In this paper, the prototype of a cluster is generalized to a complex vector (complex center). A new distance measure is introduced, along with new formulas for the fuzzy membership and the fuzzy covariance matrix. Cluster validity measures are used to assess the goodness of the partitions obtained with the complex centers compared with those obtained with the real centers. The validity measures used in this paper are the partition coefficient, classification entropy, partition index, separation index, Xie and Beni's index, and Dunn's index. It is shown that clustering with complex prototypes gives better partitions of the data than clustering with real prototypes.

3.
In cluster analysis, one of the most challenging problems is determining the number of clusters in a data set, which is a basic input parameter for most clustering algorithms. To solve this problem, many algorithms have been proposed for either numerical or categorical data sets. However, these algorithms are not very effective for a mixed data set containing both numerical and categorical attributes. To overcome this deficiency, a generalized mechanism is presented in this paper by integrating Rényi entropy and complement entropy. The mechanism can uniformly characterize within-cluster entropy and between-cluster entropy and identify the worst cluster in a mixed data set. To evaluate the clustering results for mixed data, an effective cluster validity index is also defined. Furthermore, by introducing a new dissimilarity measure into the k-prototypes algorithm, we develop an algorithm to determine the number of clusters in a mixed data set. The performance of the algorithm has been studied on several synthetic and real-world data sets. Comparisons with other clustering algorithms show that the proposed algorithm is more effective in detecting the optimal number of clusters and generates better clustering results.
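As an illustrative sketch, not taken from the paper: the abstract does not give the new dissimilarity measure, but the standard k-prototypes dissimilarity it modifies combines a squared Euclidean term over the numerical attributes with a weighted mismatch count over the categorical attributes. A minimal Python sketch of that baseline, with a hypothetical balancing weight gamma, is:

    import numpy as np

    def kprototypes_dissimilarity(x_num, x_cat, proto_num, proto_cat, gamma=1.0):
        # Squared Euclidean distance over the numerical attributes.
        numeric_part = float(np.sum((np.asarray(x_num, dtype=float)
                                     - np.asarray(proto_num, dtype=float)) ** 2))
        # Simple matching (mismatch count) over the categorical attributes.
        categorical_part = sum(a != b for a, b in zip(x_cat, proto_cat))
        # gamma balances the two terms; it is a tuning parameter, not a value
        # prescribed by the paper.
        return numeric_part + gamma * categorical_part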

4.
张妨妨  钱雪忠 《计算机应用》2012,32(9):2476-2479
To address the shortcomings of the traditional GK clustering algorithm, which cannot automatically determine the number of clusters and is sensitive to the initial cluster centers, an improved GK clustering algorithm is proposed. The algorithm first determines the optimal number of clusters using a new validity index based on the weighted sum of between-cluster separation and within-cluster compactness; it then determines the initial cluster centers using an improved entropy-based clustering idea; finally, it performs clustering with the determined number of clusters and the new cluster centers. Experimental results show that the new index accurately identifies the optimal number of clusters for data sets with overlapping clusters, and that the improved algorithm achieves higher clustering accuracy.

5.
Many clustering algorithms, including cluster ensembles, rely on a random component. Stability of the results across different runs is considered an asset of the algorithm. The cluster ensembles considered here are based on k-means clusterers. Each clusterer is assigned a random target number of clusters, k, and is started from a random initialization. Here, we use 10 artificial and 10 real data sets to study ensemble stability with respect to random k and random initialization. The data sets were chosen to have a small number of clusters (two to seven) and a moderate number of data points (up to a few hundred). Pairwise stability is defined as the adjusted Rand index between pairs of clusterers in the ensemble, averaged across all pairs. Nonpairwise stability is defined as the entropy of the consensus matrix of the ensemble. An experimental comparison with the stability of the standard k-means algorithm was carried out for k from 2 to 20. The results revealed that ensembles are generally more stable, markedly so for larger k. To establish whether stability can serve as a cluster validity index, we first looked at the relationship between stability and accuracy with respect to the number of clusters, k. We found that this relationship strongly depends on the data set, varying from almost perfect positive correlation (0.97, for the glass data) to almost perfect negative correlation (-0.93, for the crabs data). We propose a new combined stability index, the sum of the pairwise individual and ensemble stabilities. This index was found to correlate better with the ensemble accuracy. Following the hypothesis that a point of stability of a clustering algorithm corresponds to a structure found in the data, we used the stability measures to pick the number of clusters. The combined stability index gave the best results.
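As an illustrative sketch, not taken from the paper: the two stability measures described above can be written compactly in Python with scikit-learn's adjusted Rand index. The consensus-entropy computation below (binary entropy of each pairwise co-assignment frequency, averaged) is one plausible reading of "entropy of the consensus matrix" and should be treated as an assumption rather than the authors' exact definition.

    import numpy as np
    from itertools import combinations
    from sklearn.metrics import adjusted_rand_score

    def pairwise_stability(labelings):
        # Average adjusted Rand index over all pairs of clusterers in the ensemble.
        scores = [adjusted_rand_score(a, b) for a, b in combinations(labelings, 2)]
        return float(np.mean(scores))

    def consensus_entropy(labelings):
        # consensus[i, j] = fraction of clusterers that put points i and j together.
        labelings = [np.asarray(l) for l in labelings]
        n = len(labelings[0])
        consensus = np.zeros((n, n))
        for l in labelings:
            consensus += (l[:, None] == l[None, :]).astype(float)
        consensus /= len(labelings)
        # Binary entropy of each off-diagonal entry, averaged; 0 means the
        # ensemble is perfectly stable (all entries are 0 or 1).
        p = np.clip(consensus[np.triu_indices(n, k=1)], 1e-12, 1 - 1e-12)
        h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        return float(np.mean(h))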

6.
A least biased fuzzy clustering method
A new operational definition of a cluster is proposed, and a fuzzy clustering algorithm with minimal biases is formulated by using the maximum entropy principle to maximize the entropy of the centroids with respect to the data points (clustering entropy). The authors make no assumptions about the number of clusters or their initial positions. For each value of an adimensional scale parameter β', the clustering algorithm makes each data point iterate towards one of the cluster centroids, so that both hard and fuzzy partitions are obtained. Since the clustering algorithm can perform a multiscale analysis of the given data set, both hierarchical and partitioning-type clusterings can be obtained. The relative stability of each cluster structure with respect to β' is defined as the measure of cluster validity. The authors determine the specific value of β' that corresponds to the optimal positions of the cluster centroids by minimizing the entropy of the data points with respect to the centroids (clustered entropy). Examples are given to show how this least biased method succeeds in producing perceptually correct clustering results.
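As an illustrative sketch, not taken from the paper: the maximum-entropy membership rule that this family of methods builds on assigns each point to each centroid with weight proportional to exp(-β'·d²). The sketch below shows only that membership step, not the authors' full multiscale iteration of the centroids.

    import numpy as np

    def max_entropy_memberships(X, centroids, beta):
        # u[i, k] is proportional to exp(-beta * ||x_i - c_k||^2), rows normalized.
        # Small beta -> nearly uniform (very fuzzy) memberships; large beta ->
        # nearly hard assignments, which is how the scale parameter produces a
        # hierarchy of partitions.
        X = np.asarray(X, dtype=float)
        centroids = np.asarray(centroids, dtype=float)
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        # Subtracting the per-row minimum only rescales each row and leaves the
        # normalized memberships unchanged, while avoiding numerical underflow.
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        return w / w.sum(axis=1, keepdims=True)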

7.
A measure of cluster quality is often needed for DNA microarray data analysis. In this paper, we introduce a new cluster validity index that measures geometrical features of the data. The essential idea is to evaluate the ratio of the squared total length of the data eigen-axes to the between-cluster separation. We show that this cluster validity index works well for data containing clusters that are closely distributed or of different sizes. We verify the method using three simulated data sets, two real-world data sets and two microarray data sets. The experimental results show that the proposed index is superior to five other cluster validity indices, including the partition coefficient (PC), the General silhouette index (GS), Dunn's index (DI), the CH index and the I-index. We also give a theorem showing for which situations the proposed index works well.
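As an illustrative sketch, not taken from the paper: the code below is one possible reading of the geometric ratio described above, taking the squared eigen-axis lengths of each cluster as the eigenvalues of its covariance matrix and the between-cluster separation as the minimum squared distance between centroids. Both choices are assumptions made here for illustration; the paper's exact definitions may differ.

    import numpy as np

    def eigen_axis_ratio(X, labels):
        # Numerator: sum over clusters of the covariance eigenvalues
        # (squared eigen-axis lengths, i.e. the total within-cluster variance).
        # Denominator: minimum squared distance between cluster centroids.
        # Smaller values then favour compact, well-separated clusters.
        X = np.asarray(X, dtype=float)
        labels = np.asarray(labels)
        centroids, scatter = [], 0.0
        for k in np.unique(labels):
            Xk = X[labels == k]
            centroids.append(Xk.mean(axis=0))
            scatter += float(np.linalg.eigvalsh(np.cov(Xk, rowvar=False)).sum())
        centroids = np.asarray(centroids)
        d2 = ((centroids[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        separation = d2[d2 > 0].min()
        return scatter / separation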

8.
By incorporating a priori information, available in some applications of independent component analysis (ICA), as a reference into the negentropy contrast function of FastICA, ICA with reference (ICA-R), also known as constrained ICA (cICA), is obtained as a constrained optimization problem. ICA-R achieves some advantages over earlier methods, but its computational load is somewhat high and its performance depends strongly on the threshold parameter. By alternately optimizing the negentropy contrast function of FastICA and the closeness measure of ICA-R, this paper proposes an improved method for ICA-R that avoids these inherent drawbacks. The validity of the proposed method is demonstrated by simulation experiments.

9.
In this paper a fuzzy point-symmetry-based genetic clustering technique (Fuzzy-VGAPS) is proposed which can automatically determine both the number of clusters present in a data set and a good fuzzy partitioning of the data. The clusters can be of any size, shape or convexity as long as they possess the property of symmetry. The membership values of points to different clusters are computed using the newly proposed point-symmetry-based distance. A variable number of cluster centers are encoded in the chromosomes. A new fuzzy symmetry-based cluster validity index, the FSym-index, is first proposed and then used to measure the fitness of the chromosomes. The proposed index can detect non-convex as well as convex, non-hyperspherical partitionings with a variable number of clusters. It is mathematically justified via its relationship to a well-defined hard cluster validity function, Dunn's index, for which the condition of uniqueness has already been established. The results of Fuzzy-VGAPS are compared with those obtained by seven other algorithms, including both fuzzy and crisp methods, on four artificial and four real-life data sets. Some real-life applications of Fuzzy-VGAPS are also demonstrated: automatically clustering gene expression data and segmenting magnetic resonance brain images with multiple sclerosis lesions.

10.
A new cluster validity index is proposed that determines the optimal partition and the optimal number of clusters for fuzzy partitions obtained from the fuzzy c-means algorithm. The proposed validity index exploits an overlap measure and a separation measure between clusters. The overlap measure, which indicates the degree of overlap between fuzzy clusters, is obtained by computing an inter-cluster overlap. The separation measure, which indicates the isolation distance between fuzzy clusters, is obtained by computing a distance between fuzzy clusters. A good fuzzy partition is expected to have a low degree of overlap and a large separation distance. Testing of the proposed index and nine previously formulated indices on well-known data sets showed the superior effectiveness and reliability of the proposed index in comparison with the other indices.

11.
Identifying the correct number of clusters and the appropriate partitioning technique are important considerations in clustering, for which several cluster validity indices, primarily based on the Euclidean distance, have been used in the literature. In this paper a new measure of connectivity is incorporated into the definitions of seven cluster validity indices, namely the DB-index, Dunn-index, Generalized Dunn-index, PS-index, I-index, XB-index and SV-index, yielding seven new cluster validity indices that can automatically detect clusters of any shape, size or convexity as long as they are well separated. Connectivity is measured using a novel approach based on the concept of the relative neighborhood graph. It is empirically established that incorporating the property of connectivity significantly improves the ability of these indices to identify the appropriate number of clusters. The well-known single linkage and K-means clustering techniques are used as the underlying partitioning algorithms. Results on eight artificially generated and three real-life data sets show that the connectivity-based Dunn-index performs best among all seven indices. Comparisons are made with the original versions of these seven cluster validity indices.
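As an illustrative sketch, not taken from the paper: the connectivity measure itself is not given in the abstract, but the relative neighborhood graph it builds on is standard: two points are joined if and only if no third point is closer to both of them than they are to each other. A brute-force sketch of that graph construction:

    import numpy as np

    def relative_neighborhood_graph(X):
        # adj[p, q] is True iff there is no point r with
        # max(d(p, r), d(q, r)) < d(p, q).  O(n^3) brute force, for illustration.
        X = np.asarray(X, dtype=float)
        n = X.shape[0]
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
        adj = np.zeros((n, n), dtype=bool)
        for p in range(n):
            for q in range(p + 1, n):
                blocked = any(max(d[p, r], d[q, r]) < d[p, q]
                              for r in range(n) if r != p and r != q)
                if not blocked:
                    adj[p, q] = adj[q, p] = True
        return adj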

12.
We propose an internal cluster validity index for the fuzzy c-means algorithm which combines a mathematical model of the fuzzy c-partition with a heuristic search for the number of clusters in the data. Our index draws on information-theoretic principles and aims to assess the congruence between such a model and the observed data. The optimal cluster solution represents a trade-off between discrepancy and the complexity of the underlying fuzzy c-partition. We begin by testing the effectiveness of the proposed index on two sets of synthetic data, one comprising a well-defined cluster structure and the other containing only noise. We then use data sets arising from real-life problems. Our results are compared with those provided by several available indices, and their goodness is judged by an external measure of similarity. We find substantial evidence supporting our index as a credible alternative for the cluster validation problem, especially for structureless data.

13.
Classical clustering methods, such as partitioning and hierarchical clustering algorithms, often fail to deliver satisfactory results when given clusters of arbitrary shapes. Motivated by a clustering validity index based on inter-cluster and intra-cluster density, we propose that the clustering validity index be used not only globally, to find optimal partitions of the input data, but also locally, to determine which two neighboring clusters are to be merged in a hierarchical clustering of a Self-Organizing Map (SOM). A new two-level SOM-based clustering algorithm using the clustering validity index is also proposed. Experimental results on synthetic and real data sets demonstrate that the proposed algorithm clusters data on an SOM better than classical clustering algorithms.

14.
The leading partitional clustering technique, k-modes, is one of the most computationally efficient clustering methods for categorical data. However, in k-modes-type algorithms, clustering performance depends on the initial cluster centers, and the number of clusters needs to be known or given in advance. This paper proposes a novel initialization method for categorical data that is applicable to k-modes-type algorithms. The proposed method not only obtains good initial cluster centers but also provides a criterion for finding candidates for the number of clusters. The performance and scalability of the proposed method have been studied on real data sets. The experimental results illustrate that the proposed method is effective and can be applied to large data sets owing to its linear time complexity with respect to the number of data points.

15.
A cluster validity index for fuzzy clustering
A new cluster validity index is proposed for the validation of partitions of object data produced by the fuzzy c-means algorithm. The proposed validity index uses a variation measure and a separation measure between two fuzzy clusters. A good fuzzy partition is expected to have a low degree of variation and a large separation distance. Testing of the proposed index and nine previously formulated indices on well-known data sets shows the superior effectiveness and reliability of the proposed index in comparison with other indices, as well as its robustness in noisy environments.

16.
Many validity indices have been proposed for quantitatively assessing the performance of clustering algorithms. One limitation of existing indices is their lack of generalizability, owing to their dependence on specific algorithms and on the structure of the data space. To handle large-scale data sets with arbitrary structures, this study proposes a new cluster separation measure for improving the effectiveness of existing validity indices. This is achieved by partitioning the original data space into a grid-based structure, which allows the introduction of a new measurement that assesses the true data distribution between any two clusters instead of the distance between the two cluster prototypes. To validate the effectiveness of the proposed separation measure, we adopt two commonly used validity indices, the Davies-Bouldin function (DB) and Tibshirani's Gap statistic (GS). These indices are denoted R-DB-1 and R-GS-1 for clusters with sphere-shaped structures and R-DB-2 and R-GS-2 for irregular-shaped structures. This integration enables the indices to evaluate both partitional and hierarchical algorithms. Partitional algorithms, including C-Means (CM) and Fuzzy C-Means (FCM), and hierarchical algorithms, including DBSCAN and CLIQUE, are used to test the performance of the new indices. Two synthetic data sets with spherical structures and four synthetic data sets with irregular shapes are first compared. Five real data sets from the UCI machine learning repository are then used to further test the measure's performance. The experimental results provide evidence that the new indices outperform the original indices.

17.
A cluster validity index based on a fuzzy partition measure
Cluster validity indices are used to evaluate the validity of clustering results. Based on the basic characteristics of clustering, a new cluster validity index for finding the optimal fuzzy partition is proposed. The index evaluates the validity of a fuzzy clustering using two important factors: a fuzzy partition measure and information entropy. The fuzzy partition measure evaluates the within-cluster compactness and between-cluster separation of the clustering, while the information entropy reflects the degree of uncertainty of the fuzzy partition. Experimental results show that the index correctly evaluates the validity of fuzzy clustering results, especially for spatial data. Compared with other validity indices, it not only finds the optimal fuzzy partition but is also insensitive to the weighting coefficient.
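As an illustrative sketch, not taken from the paper: the entropy factor that this kind of index builds on is the classical partition (classification) entropy of the membership matrix; a minimal version is given below. The fuzzy partition measure combining compactness and separation is not reproduced here.

    import numpy as np

    def partition_entropy(U):
        # H(U) = -(1/n) * sum_i sum_k u_ik * log(u_ik) for an n x c membership
        # matrix U; lower values mean a less ambiguous (more certain) partition.
        U = np.clip(np.asarray(U, dtype=float), 1e-12, 1.0)
        return float(-(U * np.log(U)).sum() / U.shape[0])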

18.
Fuzzy clustering is an important research topic in pattern recognition, machine learning and image processing. The fuzzy C-means algorithm is the most commonly used fuzzy clustering algorithm, but it requires the number of clusters to be given in advance. A new cluster validity index is proposed to evaluate the validity of clustering results. From the perspectives of partition entropy, membership degree and geometric structure, the index defines three important feature measurements: compactness, separation and overlap. On this basis, a method for determining the optimal number of clusters is proposed. The new validity index and several traditional indices are evaluated on six artificial data sets and three real data sets. Experimental results show that the proposed index and method effectively evaluate clustering results and are suitable for determining the optimal number of clusters.

19.
Cluster validity indices are used to validate clustering results and to find the set of clusters that best fits the natural partitions of a given data set. Most previous validity indices depend considerably on the number of data objects in the clusters, on the cluster centroids and on average values, and they tend to ignore small clusters and clusters with low density. Two cluster validity indices are proposed for the efficient validation of partitions containing clusters that differ widely in size and density. The first proposed index exploits a compactness measure and a separation measure, and the second index is based on an overlap measure and a separation measure. The compactness and overlap measures are calculated from a few data objects of a cluster, while the separation measure uses all data objects. The compactness measure is calculated only from data objects of a cluster that are far enough away from the cluster centroid, while the overlap measure is calculated from data objects that are close enough to one or more other clusters. A good partition is expected to have a low degree of overlap, a large separation distance and high compactness. The maximum value of the ratio of compactness to separation and the minimum value of the ratio of overlap to separation indicate the optimal partition. Testing of both proposed indices on several artificial and three well-known real data sets showed their effectiveness and reliability.

20.
In this paper, we focus on using the uncertainty associated with the level of fuzziness to determine the number of clusters in FCM for any data set. We propose a MiniMax ε-stable cluster validity index based on the uncertainty associated with the level of fuzziness, within the framework of interval-valued Type-2 fuzziness. If the data have a clustered structure, the optimal number of clusters may be assumed to have minimum uncertainty under the upper and lower levels of fuzziness. In our previous studies, the upper and lower values of the level of fuzziness for the Fuzzy C-Means (FCM) clustering methodology were found to be m = 2.6 and m = 1.4, respectively. Our investigation shows that the stability of the cluster centers with respect to the level of fuzziness is sufficient for determining the number of clusters.
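As an illustrative sketch, not taken from the paper: the criterion described above can be prototyped by running FCM at the two reported fuzziness levels (m = 1.4 and m = 2.6) and checking how far the cluster centers move; a number of clusters whose centers barely move is a stable candidate. The minimal FCM loop and the greedy center matching below are illustrative simplifications, not the paper's MiniMax ε-stability formulation.

    import numpy as np

    def fcm_centers(X, c, m, n_iter=100, seed=0):
        # Minimal fuzzy c-means: returns the cluster centers for fuzzifier m.
        rng = np.random.default_rng(seed)
        X = np.asarray(X, dtype=float)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.sqrt(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
            d = np.clip(d, 1e-12, None)
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)
        return centers

    def center_shift(X, c, m_low=1.4, m_high=2.6):
        # Total displacement of (greedily matched) centers between the lower and
        # upper levels of fuzziness; a small shift suggests c is a stable choice.
        low, high = fcm_centers(X, c, m_low), fcm_centers(X, c, m_high)
        used, total = set(), 0.0
        for ca in low:
            dist, j = min((np.linalg.norm(ca - cb), j)
                          for j, cb in enumerate(high) if j not in used)
            used.add(j)
            total += dist
        return total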
