Similar Documents
20 similar documents were found (search time: 203 ms).
1.
The quality of the clusters ultimately produced by the fuzzy C-means (FCM) algorithm is affected by many factors, including the initialization, the chosen number of clusters, and the parameter settings. This paper compares and analyzes how five recently published, representative cluster validity indices evaluate FCM clustering results under different data dimensionalities, numbers of clusters, and parameter settings. The experimental results show that validity indices based on the ratio of intra-cluster compactness to inter-cluster separation are relatively robust to data dimensionality and noise, whereas membership-based validity indices are unsuitable for high-dimensional data. These findings can help researchers choose an appropriate fuzzy cluster validity function for different application settings.
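
As a concrete illustration of the ratio-type indices discussed here (intra-cluster compactness divided by inter-cluster separation), the following is a minimal sketch, not code from any of the cited papers: a plain FCM loop plus the Xie-Beni index evaluated over candidate cluster counts. The data set, fuzzifier m=2, and iteration budget are assumptions for demonstration.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means; returns centers V (c, d) and memberships U (c, n)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                         # columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)           # update centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                     # update memberships
        U /= U.sum(axis=0)
    return V, U

def xie_beni(X, V, U, m=2.0):
    """Fuzzy intra-cluster compactness over inter-cluster separation (smaller is better)."""
    d2 = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) ** 2
    compactness = np.sum((U ** m) * d2)
    center_d2 = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2) ** 2
    separation = center_d2[~np.eye(len(V), dtype=bool)].min()
    return compactness / (X.shape[0] * separation)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc, 0.3, (100, 2)) for loc in ((0, 0), (3, 3), (0, 3))])
    for c in (2, 3, 4, 5):
        V, U = fcm(X, c)
        print(c, round(xie_beni(X, V, U), 4))   # the true c=3 should typically score lowest
```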

2.
王玲  孟建瑶 《控制与决策》2018,33(3):471-478
To address the problems that traditional Bayesian incremental clustering algorithms require manually set parameters and perform poorly on data with imbalanced distributions, an incremental clustering algorithm based on Bayesian adaptive resonance theory with local distributions is proposed. First, data are read in snapshots; then, without requiring any parameter setting, the local distribution of each cluster is taken into account to adaptively determine the category of new data and update the winning cluster; finally, the evolutionary relationships between clusters in adjacent snapshots are determined. Simulation results on different data sets show that the proposed algorithm significantly improves both accuracy and adaptivity.

3.
The density peaks clustering (DPC) algorithm requires a manually specified cutoff distance dc, and its simple definition of local density and one-step assignment strategy lead to poor performance on complex data sets. To address these problems, a density peaks clustering algorithm based on natural nearest neighbors (Density Peaks Clustering based on Natural Nearest Neighbor, NNN-DPC) is proposed. The algorithm requires no parameters and is therefore a non-parametric clustering method. Based on the definition of natural nearest neighbors, it first introduces a new way of computing local density that describes the data distribution and reveals its intrinsic structure; it then designs a two-step assignment strategy to partition the sample points. Finally, an inter-cluster similarity is defined and a new cluster-merging rule is proposed to merge clusters and obtain the final clustering result. Experimental results show that, without any parameters, NNN-DPC generalizes well across data sets and can more accurately identify the number of clusters and assign sample points on manifold data or data whose clusters differ greatly in density. Compared with DPC, FKNN-DPC (Fuzzy Weighted K-nearest Density Peak Clustering), and three other classical clustering algorithms, NNN-DPC achieves better performance metrics.
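
The parameter-free behaviour rests on the natural nearest neighbor search, in which the neighborhood size grows until every point (or a stable set of outliers) has been chosen as someone else's neighbor. The sketch below is a hedged illustration of one common formulation of that search, not the paper's exact procedure; the cap max_k and the stopping rule are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def natural_neighbor_search(X, max_k=50):
    """Return the natural neighbor eigenvalue k and reverse-neighbor counts nb[i]."""
    n = len(X)
    nn = NearestNeighbors().fit(X)
    nb = np.zeros(n, dtype=int)          # how often point i is picked as a neighbor
    prev_orphans = n
    for k in range(1, max_k + 1):
        # column k is the k-th neighbor (column 0 is the query point itself)
        idx = nn.kneighbors(X, n_neighbors=k + 1, return_distance=False)[:, k]
        np.add.at(nb, idx, 1)
        orphans = int((nb == 0).sum())   # points nobody has chosen yet
        if orphans == 0 or orphans == prev_orphans:
            return k, nb                  # the search has stabilised
        prev_orphans = orphans
    return max_k, nb

# usage: k, nb = natural_neighbor_search(X); nb can then feed a local-density definition
```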

4.
Density peaks clustering (DPC) is a density-based clustering algorithm whose advantages include not requiring the number of clusters to be specified in advance and being able to discover non-spherical clusters. However, computing the cutoff distance dc empirically cannot cope with every scenario, and selecting cluster centers manually makes it difficult to obtain the true cluster centers accurately. To overcome these drawbacks, a method is proposed that adapts the cutoff distance using the Gini index and obtains cluster centers automatically, which effectively addresses the inability of the traditional DPC algorithm to handle complex data sets. The algorithm first adapts the cutoff distance dc via the Gini index, then computes a cluster-center weight for each point and uses changes in slope to find the critical point; this strategy effectively avoids the error introduced by selecting cluster centers manually from the decision graph. Experiments show that the new algorithm not only determines cluster centers automatically but also achieves higher accuracy than the original algorithm.
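
One simple, hedged way to turn "find the critical point from slope changes" into code (an illustration, not the paper's exact rule) is to sort the center weights gamma = rho * delta in descending order and take every point before the steepest drop as a cluster center:

```python
import numpy as np

def auto_centers(rho, delta):
    """Pick cluster-center indices from the sorted center weights by the largest slope change."""
    gamma = rho * delta                       # center weight of every point
    order = np.argsort(gamma)[::-1]           # sort descending
    g = gamma[order]
    slopes = np.diff(g)                       # negative steps between consecutive weights
    knee = int(np.argmin(slopes))             # position of the steepest drop
    return order[: knee + 1]                  # indices chosen as cluster centers

# usage: centers = auto_centers(rho, delta), with rho and delta from any DPC variant
```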

5.
The fuzzy joint points (FJP) clustering algorithm determines the number of clusters with a maximum-gap descent method, which is subjective and hinders wider application of the algorithm. To address this problem, an adaptive FJP clustering algorithm based on a valid nearest-neighbor cluster index is proposed: the Kernels-VCN index is used to evaluate clustering validity so that the optimal number of clusters can be determined adaptively. Finally, the feasibility of the proposed algorithm is verified on UCI data sets and synthetic data sets.

6.
The CFSFDP (clustering by fast search and find of density peaks) algorithm relies on a subjectively chosen cutoff distance when computing density and on manually selected cluster centers. To overcome these defects, a density peaks clustering algorithm based on non-parametric kernel density estimation is proposed. First, the local density of each data point is computed with a non-parametric kernel density estimator; next, potential cluster centers are determined from the ranking plot with an automatic center-selection strategy, and the remaining data points are merged into the corresponding cluster centers; finally, neighboring similar sub-clusters are merged according to an inter-cluster merging criterion, and noise points are identified from the boundary density to obtain the clustering result. Experiments on synthetic test data sets and real UCI data sets show that, compared with the original CFSFDP algorithm, the new algorithm not only avoids the subjective choices of the cutoff distance and the cluster centers but also achieves higher accuracy.
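
A minimal sketch of the first step under stated assumptions (Gaussian KDE with Scott's-rule bandwidth standing in for whichever estimator the paper uses), together with the usual delta quantity of density peaks clustering, i.e. the distance to the nearest point of higher density:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.spatial.distance import cdist

def kde_density_and_delta(X):
    """Local density from a Gaussian KDE, plus the DPC delta for the decision graph."""
    rho = gaussian_kde(X.T)(X.T)                 # bandwidth chosen automatically (Scott's rule)
    D = cdist(X, X)
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        # the globally densest point gets the largest distance, as in standard DPC
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta
```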

7.
周欢欢  郑伯川  张征  张琦 《计算机应用》2022,42(5):1464-1471
The neighbor parameter of the shared-nearest-neighbor-based density peak clustering algorithm has to be set manually. To address this problem, a density peak clustering algorithm with an adaptive neighbor parameter is proposed. First, the proposed neighbor-parameter search algorithm obtains the neighbor parameter automatically; then, cluster centers are selected from the decision graph; finally, according to the proposed representative-point assignment strategy, representative points are assigned first and non-representative points afterwards, so that all sample points are clustered. The proposed algorithm is compared with six algorithms, namely shared-nearest-neighbor-based fast density peak search clustering (SNN-DPC), density peaks clustering (DPC), affinity propagation (AP), ordering points to identify the clustering structure (OPTICS), density-based spatial clustering of applications with noise (DBSCAN), and K-means, on synthetic and UCI data sets. The experimental results show that the proposed algorithm outperforms the other six algorithms overall on evaluation metrics such as adjusted mutual information (AMI), the adjusted Rand index (ARI), and the Fowlkes-Mallows index (FMI). The proposed algorithm obtains an effective neighbor parameter automatically and assigns sample points in cluster boundary regions more accurately.
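
For reference, the external evaluation used in comparisons like this one can be reproduced directly with scikit-learn; the sketch below scores a stand-in clustering (K-means on Iris, an assumption for illustration) against ground-truth labels with AMI, ARI and FMI.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_mutual_info_score,
                             adjusted_rand_score, fowlkes_mallows_score)

X, y_true = load_iris(return_X_y=True)
y_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("AMI:", adjusted_mutual_info_score(y_true, y_pred))
print("ARI:", adjusted_rand_score(y_true, y_pred))
print("FMI:", fowlkes_mallows_score(y_true, y_pred))
```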

8.
Existing clustering algorithms commonly suffer from low clustering quality, heavy dependence on parameters, and difficulty in identifying outliers. To address these problems, a clustering algorithm based on data fields is proposed. The algorithm computes a potential value for each data object; cluster centers are identified by the property that their potential is larger than that of their surrounding neighbors and that they are relatively far from other cluster centers, while outliers are identified by the property that their potential equals zero. The remaining data objects are then assigned to the nearest neighbor with a larger potential, thereby completing the clustering. Simulation experiments show that, without manual parameter tuning, the algorithm accurately finds cluster centers and outliers, produces good clustering results, and is insensitive to the shape of the data set.
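
A hedged sketch of the data-field potential this family of methods builds on: each point's potential is the summed Gaussian influence of all other points. The impact factor sigma here is an assumption; data-field methods usually optimize it (for example by minimizing potential entropy) rather than hand-tuning it, and the center/outlier rules from the abstract are left as comments.

```python
import numpy as np
from scipy.spatial.distance import cdist

def potential(X, sigma=1.0):
    """Data-field potential of every point under a Gaussian influence function."""
    D = cdist(X, X)
    return np.exp(-(D / sigma) ** 2).sum(axis=1)

# Cluster centers would be points whose potential exceeds that of all their neighbors
# and that lie far from other centers; points with (near-)zero potential would be
# flagged as outliers, and the rest assigned to the nearest higher-potential point.
```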

9.
The density peaks clustering algorithm is strongly affected by human intervention and is sensitive to its parameter: an incorrect cutoff distance dc leads to wrong initial cluster centers, and in some cases, even with a properly set dc, it is still difficult to select the initial cluster centers manually from the decision graph. To overcome these defects, a new density-peaks-based clustering algorithm is proposed. The algorithm first determines the local density of data points using the idea of K nearest neighbors, and then adopts a new adaptive aggregation strategy: a threshold computed by the algorithm identifies the initial cluster centers, the remaining points are assigned to the nearest initial cluster center, and finally similar clusters are merged based on density reachability between clusters. In the experiments, the algorithm performs better than DPC, DBSCAN, KNN-DPC, and K-means on synthetic and real data sets, effectively improving clustering accuracy and quality.

10.
This paper proposes an automatic clustering method based on algorithm selection and result evaluation. For a given data set, the method first analyzes the potential cluster structure of the data and, based on the discovered structure, selects a suitable set of candidate clustering algorithms; it then evaluates the clustering results of these algorithms with cluster validity indices to ensure a high-quality result. Experimental results show that the method can automatically select clustering algorithms suited to the data set and obtain high-quality clustering results.

11.
A cluster operator takes a set of data points and partitions the points into clusters (subsets). As with any scientific model, the scientific content of a cluster operator lies in its ability to predict results. This ability is measured by its error rate relative to cluster formation. To estimate the error of a cluster operator, a sample of point sets is generated, the algorithm is applied to each point set and the clusters evaluated relative to the known partition according to the distributions, and then the errors are averaged over the point sets composing the sample. Many validity measures have been proposed for evaluating clustering results based on a single realization of the random-point-set process. In this paper we consider a number of proposed validity measures and we examine how well they correlate with error rates across a number of clustering algorithms and random-point-set models. Validity measures fall broadly into three classes: internal validation is based on calculating properties of the resulting clusters; relative validation is based on comparisons of partitions generated by the same algorithm with different parameters or different subsets of the data; and external validation compares the partition generated by the clustering algorithm and a given partition of the data. To quantify the degree of similarity between the validation indices and the clustering errors, we use Kendall's rank correlation between their values. Our results indicate that, overall, the performance of validity indices is highly variable. For complex models or when a clustering algorithm yields complex clusters, both the internal and relative indices fail to predict the error of the algorithm. Some external indices appear to perform well, whereas others do not. We conclude that one should not put much faith in a validity score unless there is evidence, either in terms of sufficient data for model estimation or prior model knowledge, that a validity measure is well-correlated to the error rate of the clustering algorithm.
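
A minimal sketch of the quantification step described above: Kendall's rank correlation between a validity index and the clustering error, computed over a set of simulated realizations. The numeric values are placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import kendalltau

validity_scores = np.array([0.61, 0.55, 0.72, 0.48, 0.66])   # index value per realization
error_rates     = np.array([0.20, 0.25, 0.12, 0.31, 0.18])   # clustering error per realization

tau, p_value = kendalltau(validity_scores, error_rates)
# a strong negative tau would indicate the index is predictive of the error rate
print(f"Kendall tau = {tau:.3f}, p = {p_value:.3f}")
```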

12.
Novel Cluster Validity Index for FCM Algorithm
Determining an appropriate number of clusters is very important when applying a specific clustering algorithm such as c-means or fuzzy c-means (FCM). In the literature, most cluster validity indices originate from the partition or from geometrical properties of the data set. In this paper, the authors develop a novel cluster validity index for FCM based on the optimality test of FCM. Unlike previous cluster validity indices, this index is inherent in FCM itself. Comparison experiments show that the stability index can be used as a cluster validity index for fuzzy c-means.

13.
In this paper the problem of automatically clustering a data set is posed as a multiobjective optimization (MOO) problem, optimizing a set of cluster validity indices simultaneously. The proposed multiobjective clustering technique uses a recently developed simulated annealing based multiobjective optimization method as the underlying optimization strategy. A variable number of cluster centers is encoded in the string, and the number of clusters present in different strings varies over a range. Points are assigned to clusters based on the newly developed point-symmetry-based distance rather than the conventional Euclidean distance. Two cluster validity indices, one based on the Euclidean distance (the XB-index) and a recently developed point-symmetry-distance-based index (the Sym-index), are optimized simultaneously in order to determine the appropriate number of clusters present in a data set. The proposed clustering technique is thus able to detect both the proper number of clusters and the appropriate partitioning for data sets having either hyperspherical or point-symmetric clusters. A new semi-supervised method is also proposed to select a single solution from the final Pareto-optimal front of the multiobjective clustering technique. The efficacy of the proposed algorithm is shown for seven artificial data sets and six real-life data sets of varying complexity. Results are also compared with those obtained by another multiobjective clustering technique, MOCK, and by two single-objective genetic-algorithm-based automatic clustering techniques, VGAPS clustering and GCUK clustering.

14.
Cluster validity indices are used for estimating the quality of partitions produced by clustering algorithms and for determining the number of clusters in data. Cluster validation is a difficult task, because for the same data set multiple partitions exist depending on the level of detail that fits the natural groupings of the data. Even though several cluster validity indices exist, they are inefficient when clusters differ widely in density or size. We propose a clustering validity index that addresses these issues. It is based on compactness and overlap measures. The overlap measure, which indicates the degree of overlap between fuzzy clusters, is obtained by calculating the overlap rate of all data objects that belong strongly enough to two or more clusters. The compactness measure, which indicates the degree of similarity of data objects in a cluster, is calculated from the membership values of data objects that are strongly enough associated with one cluster. We propose ratio and summation types of index using the same compactness and overlap measures. The maximal value of the index denotes the optimal fuzzy partition, which is expected to have high compactness and a low degree of overlap among clusters. Testing many well-known previously formulated indices and the proposed index on well-known data sets showed the superior reliability and effectiveness of the proposed index in comparison to the others, especially when evaluating partitions with clusters that differ widely in size or density.
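
The following is a loose, hedged sketch of the kind of compactness/overlap computation described above, operating on a fuzzy membership matrix U (clusters x points). The thresholds and the ratio form are assumptions for illustration only, not the authors' definitions.

```python
import numpy as np

def compactness_overlap_ratio(U, strong=0.7, both=0.3):
    """Illustrative ratio of compactness to overlap for a fuzzy partition."""
    top2 = np.sort(U, axis=0)[-2:, :]               # two largest memberships per point
    # overlap: fraction of points belonging "strongly enough" to at least two clusters
    overlap = np.mean((top2 >= both).all(axis=0))
    # compactness: average dominant membership of points tied firmly to one cluster
    strong_pts = top2[1] >= strong
    compactness = top2[1, strong_pts].mean() if strong_pts.any() else 0.0
    return compactness / (overlap + 1e-12)          # larger = compact, little overlap
```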

15.
A comparative study of methods for determining the optimal number of clusters based on affinity propagation
In cluster analysis, the key to clustering quality is determining the optimal number of clusters. This paper proposes clustering the samples with the affinity propagation algorithm, which yields good clustering results, and then analyzing the clustering results with six cluster validity indices to determine the optimal number of clusters. These validity indices are analyzed in detail, and the way the IGP index is used to determine the optimal number of clusters is improved. The performance of the indices is compared experimentally on eight data sets. The analysis and experimental results show that affinity-propagation-based clustering...
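
A hedged sketch of this workflow: cluster with affinity propagation under different preference values (which indirectly control the number of exemplars) and score each partition with a validity index. The silhouette coefficient, the Iris data set, and the preference range are stand-ins chosen for illustration; the paper uses six indices including IGP.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

X, _ = load_iris(return_X_y=True)
best = None
for pref in np.linspace(-80, -5, 8):          # lower preference -> fewer exemplars
    labels = AffinityPropagation(preference=pref, random_state=0).fit_predict(X)
    k = len(set(labels))
    if k < 2 or k >= len(X):                  # silhouette needs 2 <= k < n
        continue
    score = silhouette_score(X, labels)
    if best is None or score > best[0]:
        best = (score, k, pref)
print("best silhouette %.3f with %d clusters (preference %.1f)" % best)
```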

16.
Cluster analysis is used to explore structure in unlabeled batch data sets in a wide range of applications. An important part of cluster analysis is validating the quality of computationally obtained clusters. A large number of different internal indices have been developed for validation in the offline setting. However, this concept cannot be directly extended to the online setting, because streaming algorithms retain neither the data nor a partition of it, both of which are needed by batch cluster validity indices. In this paper, we develop two incremental versions (with and without forgetting factors) of the Xie-Beni and Davies-Bouldin validity indices, and use them to monitor and control two streaming clustering algorithms (sk-means and online ellipsoidal clustering). In this context, our new incremental validity indices are more accurately viewed as performance monitoring functions. We also show that incremental cluster validity indices can send a distress signal to online monitors when evolving structure leads an algorithm astray. Our numerical examples indicate that the incremental Xie-Beni index with a forgetting factor is superior to the other three indices tested.

17.
Performance evaluation of some clustering algorithms and validity indices
In this article, we evaluate the performance of three clustering algorithms, hard K-means, single linkage, and a simulated annealing (SA) based technique, in conjunction with four cluster validity indices, namely the Davies-Bouldin index, Dunn's index, the Calinski-Harabasz index, and a recently developed index I. Based on a relation between index I and Dunn's index, a lower bound on the value of the former is theoretically estimated in order to obtain a unique hard K-partition when the data set has distinct substructures. The effectiveness of the different validity indices and clustering methods in automatically evolving the appropriate number of clusters is demonstrated experimentally for both artificial and real-life data sets, with the number of clusters varying from two to ten. Once the appropriate number of clusters is determined, the SA-based clustering technique is used for proper partitioning of the data into the said number of clusters.
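
As a brief sketch of this kind of sweep, the snippet below varies the number of clusters for hard K-means and scores each partition with two of the indices named above (Davies-Bouldin: lower is better; Calinski-Harabasz: higher is better). Dunn's index and index I are not available in scikit-learn and are omitted here; the Iris data set is an assumed stand-in.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score, calinski_harabasz_score

X, _ = load_iris(return_X_y=True)
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(davies_bouldin_score(X, labels), 3),
          round(calinski_harabasz_score(X, labels), 1))
```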

18.
张妨妨  钱雪忠 《计算机应用》2012,32(9):2476-2479
The traditional GK clustering algorithm cannot determine the number of clusters automatically and is sensitive to the initial cluster centers. To overcome these drawbacks, an improved GK clustering algorithm is proposed. The algorithm first determines the optimal number of clusters with a new validity index based on the weighted sum of inter-cluster separation and intra-cluster compactness; it then uses an improved entropy-based clustering idea to determine the initial cluster centers; finally, clustering is performed with the determined number of clusters and the new centers. Experimental results show that the new index accurately determines the optimal number of clusters for data sets with overlapping clusters, and that the improved algorithm achieves higher clustering accuracy.

19.
The self-organizing map (SOM) has been widely used in many industrial applications. Classical clustering methods based on the SOM often fail to deliver satisfactory results, especially when clusters have arbitrary shapes. In this paper, through some preprocessing techniques for filtering out noise and outliers, we propose a new two-level SOM-based clustering algorithm using a clustering validity index based on inter-cluster and intra-cluster density. Experimental results on synthetic and real data sets demonstrate that the proposed clustering algorithm clusters data better than the classical SOM-based clustering algorithms and finds an optimal number of clusters.
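
A hedged sketch of the generic two-level idea only: train a SOM first, then cluster its codebook vectors and let each sample inherit the cluster of its best-matching unit. K-means is used here purely as a stand-in for the paper's density-based second level and validity index; the MiniSom package, the 8x8 grid, and the Iris data are assumptions.

```python
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
som = MiniSom(8, 8, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 2000)                                # first level: fit the map

codebook = som.get_weights().reshape(-1, X.shape[1])     # 64 prototype vectors
proto_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codebook)

# second level: each sample inherits the cluster of its best-matching unit
bmu_index = np.array([np.ravel_multi_index(som.winner(x), (8, 8)) for x in X])
labels = proto_labels[bmu_index]
```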

20.
石文峰  商琳 《计算机科学》2017,44(9):45-48, 66
Fuzzy C-means (FCM) is one of the fuzzy clustering algorithms with the best clustering performance and the widest application, but its sensitivity to the initial number of clusters makes choosing a good value of C very important. Determining the number of clusters is therefore a crucial step when using FCM for cluster analysis. This paper analyzes clustering validity by extending the decision-theoretic rough set model and uses the analysis to determine the number of clusters for FCM, thereby avoiding the impact of a poor initialization. A method for determining the number of FCM clusters based on an extended rough set model is proposed, and the clustering results are verified through image segmentation experiments. By comparing the costs of the classification results under different numbers of clusters, a good segmentation result is obtained. The results are compared with the ant colony fuzzy C-means hybrid algorithm (AFHA) proposed by Z. Yu et al. in 2015 and the improved AFHA (IAFHA); the comparison shows that the proposed method yields better clustering results and clearer image segmentation, with a higher Bezdek partition coefficient than AFHA and IAFHA and a clear advantage on the Xie-Beni index.
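
For reference, Bezdek's partition coefficient used as one of the comparison metrics above is simply the mean of squared memberships: it equals 1/c for a completely fuzzy partition and 1 for a crisp one. A small sketch with a made-up membership matrix (clusters x points):

```python
import numpy as np

def partition_coefficient(U):
    """Bezdek's partition coefficient: mean of squared memberships over all points."""
    return float((U ** 2).sum() / U.shape[1])

# example: a nearly crisp 2-cluster membership matrix over 4 points
U = np.array([[0.9, 0.8, 0.2, 0.1],
              [0.1, 0.2, 0.8, 0.9]])
print(partition_coefficient(U))        # 0.75 -> fairly crisp
```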
